Summaries of the Issue


The subject of this research is optical pulling forces, one of the manifestations of the mechanical action of light on material objects. In particular, we investigated optical forces acting on a dimer composed of nanoparticles with radii small compared to the wavelength. The Lorentz optical forces were calculated by solving a self-consistent system of equations, which made it possible to compute the electromagnetic fields at every point of the structure. We derived an analytic formula expressing the dependence of the optical force on the parameters of the dimer and of structured radiation composed of two crossing plane waves. For the first time we showed that a dimer consisting of two equal dipolar particles can experience an optical pulling force ("negative radiation pressure") in the field of two crossing plane waves. It is shown that an increase of the photon momentum projection on the propagation direction of the structured light after scattering is responsible for this negative radiation pressure. The corresponding scattering diagram shows enhanced forward scattering, confirming the proposed mechanism of pulling-force origination. Our findings may prove useful for extending the capabilities of optical manipulation of nano- and microparticles. 
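The momentum-balance argument above can be illustrated with a minimal sketch (not the authors' calculation; the angle, normalization and function name are assumed for illustration only): if the scattered light carries a larger mean axial momentum projection than the incident photons, the recoil force on the scatterer is negative.

```python
import numpy as np

def axial_force_sign(beta_deg, mean_cos_scatter, scattered_power=1.0, c=3e8):
    """Momentum-balance estimate of the axial (z) radiation force.

    Each incident photon carries axial momentum ~ (E/c) * cos(beta),
    where beta is the half-angle between each of the two crossing plane
    waves and the z axis.  If the scattered light has a larger mean axial
    projection <cos(theta)>, conservation of momentum requires a negative
    (pulling) recoil force on the dimer.  Normalization is schematic.
    """
    beta = np.radians(beta_deg)
    return (scattered_power / c) * (np.cos(beta) - mean_cos_scatter)

# forward-enhanced scattering: <cos theta> = 0.95 > cos(30 deg) ~ 0.866
assert axial_force_sign(30.0, 0.95) < 0   # pulling force
assert axial_force_sign(30.0, 0.50) > 0   # ordinary pushing force
```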


The paper deals with the influence of diffractive optical elements on optical aberrations. The correction of optical aberrations was investigated in simple optical systems with one and two lenses (singlet and doublet). The advantage of diffractive optical elements is their ability to generate arbitrarily complex wavefronts from a piece of optical material that is essentially flat. Optical systems consisting of standard surfaces were designed and optimized from the same starting points. Then diffractive and aspheric surfaces were introduced into the developed systems, and the resulting hybrid systems were optimized. To compare the complexity of developing narrow-field and wide-field optical systems, the optimization was done separately for these two types of instruments. The optical systems were designed using dedicated optical design software, and the characteristics of the designed diffractive surfaces were verified in the DIFSYS 2.30 software. Due to the application of diffractive optical elements, the longitudinal chromatic aberration was reduced fivefold for the narrow-field systems, and the absolute value of the Seidel coefficient related to spherical aberration was reduced to about 0.03. Since diffractive optical elements have known disadvantages, such as parasitic diffraction orders and a probable decrease in transmission, we also developed and analyzed optical systems combining aspheric and diffractive surfaces. A combination of aspheric and diffractive surfaces in a disk-reading lens of an optical disk system reduced the on-axis longitudinal chromatic aberration almost 15-fold compared to a lens consisting of aspheric and standard surfaces. All of the designed diffractive optical elements possess parameters within fabrication limits.
An experimental comparison of four methods of wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two methods do not use a reference wave in the recording scheme, which reduces the stability requirements of the setup. A major role in phase reconstruction by such methods is played by a set of spatial intensity distributions recorded as the sensor matrix is moved along the optical axis. The obtained data are used sequentially for wavefront reconstruction in an iterative procedure. In the course of this procedure the wavefront is numerically propagated between the planes: the phase information is retained in every plane, while the calculated amplitude distributions are replaced with the measured ones. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as a mathematical model. In the second approach, the angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located data-registration planes. Two digital holography methods, which use a reference wave in the recording scheme and differ from each other in the numerical reconstruction algorithm of the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is also shown that, as the object amplitude decreases, the holographic method of reconstructing the object complex amplitude performs best among the considered ones.
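The iterative procedure described above (numerical propagation between measurement planes, with calculated amplitudes replaced by measured ones) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: function names, sampling parameters and equal plane spacing are assumptions.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex field by distance dz (angular spectrum method).
    Evanescent components are clamped to zero spatial phase for simplicity."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multiplane_retrieve(intensities, dz, wavelength, dx, iters=20):
    """Iterate over equally spaced measurement planes: keep the computed
    phase, replace the computed amplitude with the measured one."""
    field = np.sqrt(intensities[0]).astype(complex)
    for _ in range(iters):
        for k in range(1, len(intensities)):            # forward sweep
            field = angular_spectrum(field, dz, wavelength, dx)
            field = np.sqrt(intensities[k]) * np.exp(1j * np.angle(field))
        for k in range(len(intensities) - 2, -1, -1):   # backward sweep
            field = angular_spectrum(field, -dz, wavelength, dx)
            field = np.sqrt(intensities[k]) * np.exp(1j * np.angle(field))
    return field
```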
Scope of research. The paper deals with two approaches to star identification: an algorithm of similar triangles and an algorithm of interstellar angular distances. Method. A comparative analysis of the considered algorithms is performed using experimental data obtained with a zenith telescope prototype, as applied to the problem of coordinate determination by an automated zenith telescope. Main results. The analysis has revealed that the identification method based on interstellar angular distances provides star identification with higher reliability and several times faster than the algorithm of similar triangles. However, the algorithm of interstellar angular distances is sensitive to the lens focal length, so a combined star identification method is proposed. The idea of this method is to integrate the two above algorithms in order to calculate the lens focal length and then identify the stars directly. Practical significance. The combined method makes valid identification of the stars visible in the field of view possible with comparatively short processing time, whether or not the lens focal length is available.
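For illustration, the basic quantity underlying the second algorithm, the angular distance between two catalogue star directions, can be computed as follows (a generic sketch, not the prototype's code):

```python
import numpy as np

def star_vector(ra_deg, dec_deg):
    """Unit direction vector from right ascension and declination."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def angular_distance(ra1, dec1, ra2, dec2):
    """Interstellar angular distance in degrees -- the quantity matched
    against the catalogue in the angular-distance identification method."""
    c = np.clip(np.dot(star_vector(ra1, dec1), star_vector(ra2, dec2)), -1, 1)
    return np.degrees(np.arccos(c))

assert abs(angular_distance(0, 0, 90, 0) - 90.0) < 1e-9
assert abs(angular_distance(10, 20, 10, 20)) < 1e-6
```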
TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS Lazarenko Vasiliy P., Djamiykov Todor S., Korotaev Valery Viktorovich, Yaryshev Sergey Nikolaevich
Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have large distortion, which makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide-angle fish-eye lenses with a viewing angle of at least 180°. This comparison shows that the distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of four calibration methods available in open-source toolkits for omnidirectional optoelectronic systems, and presents the geometrical projection model used for calibration of the omnidirectional optical system. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera; this field of view is characterized by an array of 3D points in object space. At the second step, the array of pixels corresponding to these three-dimensional points is calculated: the projection function expressing the relation between a given 3D point in object space and the corresponding pixel is evaluated. In this paper we use a calibration procedure providing the projection function for the calibrated camera instance. At the last step, the final image is formed pixel by pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system with corrected distortion from the original omnidirectional image. The algorithm is designed to operate with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses. 
Experimental results are presented. 
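The three steps of the transformation algorithm can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the equidistant fisheye model r = f·θ stands in for the calibrated projection function, and nearest-neighbour sampling is used for simplicity.

```python
import numpy as np

def virtual_pinhole_rays(width, height, focal_px, pan=0.0, tilt=0.0):
    """Step 1: 3D ray directions for every pixel of a virtual PTZ pinhole camera."""
    u, v = np.meshgrid(np.arange(width) - width / 2,
                       np.arange(height) - height / 2)
    rays = np.stack([u, v, np.full_like(u, focal_px, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    cp, sp = np.cos(pan), np.sin(pan)      # rotate by pan (about y),
    ct, st = np.cos(tilt), np.sin(tilt)    # then tilt (about x)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    return rays @ (Rx @ Ry).T

def equidistant_projection(rays, f_fisheye, cx, cy):
    """Step 2: map 3D rays to fisheye pixels with the equidistant model
    r = f * theta (an assumed model; a calibrated function is used in practice)."""
    theta = np.arccos(np.clip(rays[..., 2], -1, 1))   # angle from optical axis
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fisheye * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def remap(omni_img, width, height, focal_px, f_fisheye):
    """Step 3: build the undistorted view pixel by pixel (nearest neighbour)."""
    h, w = omni_img.shape[:2]
    rays = virtual_pinhole_rays(width, height, focal_px)
    px, py = equidistant_projection(rays, f_fisheye, w / 2, h / 2)
    xi = np.clip(np.round(px).astype(int), 0, w - 1)
    yi = np.clip(np.round(py).astype(int), 0, h - 1)
    return omni_img[yi, xi]
```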


ADAPTIVE FLUX OBSERVER FOR PERMANENT MAGNET SYNCHRONOUS MOTORS Bobtsov Alexey Alexeevich, Pyrkin Anton Alexandrovich, Ortega Romeo
The paper deals with the observer design problem for the flux in permanent magnet synchronous motors. It is assumed that electrical parameters such as resistance and inductance are known, while the flux, the angle and the speed of the rotor are not measurable. A new robust approach to the design of an adaptive flux observer is proposed that guarantees global boundedness of all signals and, moreover, exponential convergence to zero of the observer error between the true flux value and the estimate obtained from the adaptive observer. The problem of adaptive flux observer design has been solved using trigonometric identities and linear filtering, which ensures cancellation of the unknown terms arising in the derivation. The key idea is a new parameterization of the dynamical model containing the unknown parameters and depending on the measurable current and voltage of the motor. By applying the Pythagorean trigonometric identity, a linear equation is found that does not contain any functions of the angle or angular velocity of the rotor. Using dynamical first-order filters, a standard regression model is obtained that consists of unknown constant parameters and measurable functions of time. A gradient-like estimator is then designed to reconstruct the unknown parameters; it guarantees boundedness of all signals in the system. It is proved that if the regressor satisfies the persistent excitation condition, meaning a "frequency-rich" signal, then all observer errors converge to zero exponentially. It is shown that the observer error for the flux explicitly depends on the estimation errors, so exponential convergence of the parameter estimation errors to zero yields exponential convergence of the flux observer error to zero. A numerical example is considered. 


STUDY OF MECHANISMS RESPONSIBLE FOR THE EFFICIENCY DEGRADATION OF THE III-NITRIDES LIGHT EMITTING DIODES Shmidt Natalia M., Usikov Alexander S., Shabunina Evgeniya I., Chernyakov Anton E., Kurin Sergei Yu., Yuri N. Makarov, Helava Heikki I., Papchenko Boris P.
The results for external quantum efficiency degradation of two types of light emitting diodes based on III-nitrides, blue and ultraviolet, are presented. Existing mechanisms proposed for the degradation are considered briefly. Applying several techniques to study the light emitting diodes at various stages of the aging test makes it possible to reveal a new mechanism of defect formation assisted by multi-phonon recombination of carriers in the extended defect system and in local regions of random alloy fluctuations. These techniques include analysis of the evolution of current-voltage characteristics at V < 2 V, low-frequency noise methods, and infrared microscopy. The multi-phonon recombination of carriers is accompanied by generation of native defects, in particular In or Ga atoms, and their migration. These processes lead to modification of the extended defect system properties and of the local composition of InGaN alloys in several regions, which results in a decrease of the number of carriers participating in radiative recombination and in degradation of the external quantum efficiency. It was demonstrated that this mechanism of defect formation can be responsible for the degradation of both blue and ultraviolet light emitting diodes. The mechanism can explain the non-monotonic course of the degradation process during the aging test, catastrophic failures of the light emitting diodes and the short lifetime of the ultraviolet light emitting diodes. 
FORMATION OF LUMINESCENT OPTICAL WAVEGUIDES IN SILICATE GLASS MATRIX BY THE ION-EXCHANGE TECHNIQUE Dyomichev Ivan Alexeevich, Sidorov Alexander Ivanvich, Nikonorov Nikolay V. , Shakhverdov Teimur Azimoich
We present spectra of alkali-silicate glasses with copper ions introduced into the near-surface area by ion exchange of different temperature and duration. It is shown that the reduction of Cu2+ in the near-surface area leads to the presence of Cu+ ions and neutral atoms in the glass after ion exchange in a divalent salt; the ion exchange itself involves only Cu+ and Na+ ions. The formation of subnanometer Cun clusters is due to neutral copper atoms remaining in the near-surface zone. We have shown that the waveguide layer in the near-surface area, made by ion exchange, exhibits visible luminescence under UV excitation. The contributions to the luminescence are made by Cu+ ions, molecular Cun clusters and Cu+ - Cu+ dimers. During high-temperature ion exchange at 600 °C, a shift of the equilibrium between formation and destruction of molecular Cun clusters can be seen. A one-hour ion exchange leads to destruction of molecular Cun clusters, while exchange times of less than 30 min and of around 18 hours lead to the formation of Cun. The sample turns green after 18.5 hours of ion exchange, showing the formation of a considerable amount of divalent copper ions Cu2+ therein.
INVESTIGATION OF HETEROSTRUCTURES 3C-SIC/15R-SIC Lebedev Sergei P., Lebedev Alexander A, Nikitina Irina P., Shkoldin Vitaliy A., Shustov Denis B.
The subject of study. Investigation results for 3C-SiC layers obtained on single-crystal 15R-SiC substrates by sublimation epitaxy in vacuum are presented. Materials and methods. Lely crystals of the 15R polytype were used as substrates; the growth was carried out on the polar C (000-1) and Si (0001) substrate faces. The growth temperature was 1950-2000 °C, and the growth time was 10 min. Commercial silicon carbide powder with a grain diameter of 10-20 µm was used as the growth source. The following methods were applied for characterization of the grown epitaxial layers: cathodoluminescence, optical microscopy and double-crystal X-ray diffraction. Main results. The possibility of obtaining epitaxial 3C-SiC on a 15R-SiC substrate by sublimation epitaxy in vacuum was demonstrated. It is shown that the C-face is preferable for heteropolytype growth, since more uniform growth of the cubic polytype with a small percentage of spurious substrate polytype inclusions is observed on it; the same situation appears in the case of 6H-SiC substrates. Practical significance. Comparison of the results of heteropolytype growth of 3C-SiC on substrates of other polytypes (6H-SiC, 15R-SiC, 4H-SiC) will make it possible to understand more completely the transformation mechanism of the crystal lattice during epitaxial growth and to develop a theoretical model of the process.
We have investigated the influence of back reflections on the spectrum of a superluminescent-diode optical radiation source and have determined optimal operating conditions of the source. A feature of the research method is the use of a fiber polarization controller and an optical mirror coated on the end of an optical fiber. The studies were conducted with two sources of optical radiation: a ThorLabs S5FC1005SXL superluminescent diode and an ELED-1550-1-E-9-SM1-FA-CW LED module. It was revealed that at a back-reflection level of -13 dB relative to the source output power, a negative impact on the power and spectral characteristics of the source begins to appear at an optical power of 2.3 µW. It was also confirmed that when the radiation power is raised by increasing the source pumping current, the influence of back reflections appears at a lower back-reflection level. The results obtained need to be considered when designing fiber optic sensors in order to eliminate the effect of back reflections on the optical radiation sources studied in this paper.


Subject of research. The paper presents a semi-automatic method of speaker identification based on comparison of prosodic features, namely statistics of phone lengths. Owing to the recent development of speech technologies, there is increased interest in expert methods of speaker voice identification which supplement existing methods to increase identification reliability while keeping labour intensity low. An efficient solution of this problem is necessary for making a reliable decision on whether the voices of the speakers in two audio recordings are identical or different. Method description. We present a novel algorithm for calculating the difference between speakers' voices based on comparison of statistics of phone and allophone lengths. A characteristic feature of the proposed method is the possibility of applying it alongside other semi-automatic methods (acoustic, auditive and linguistic) due to the lack of strong correlation between the analyzed features. The advantage of the method is the possibility of rapid analysis of long recordings thanks to automated preprocessing of the data being analyzed. We describe the operation principles of an automatic speech segmentation module used to calculate sound-length statistics by acoustic-phonetic labeling. The software has been developed as an instrument of speech data preprocessing for expert analysis. Method approbation. The method was tested on a speech database of 130 recordings of Russian speech by male and female speakers, and showed a reliability of 71.7% on the database containing female speech records and 78.4% on the database containing male speech records. It was also established experimentally that the most informative of the used features are the phone-length statistics of vowels and sonorant sounds. Practical relevance. 
Experimental results have shown applicability of the proposed method for the speaker recognition task in the course of phonoscopic examination. 
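One possible form of the phone-length comparison described above can be sketched as follows. The Gaussian summaries and the Bhattacharyya-type distance are assumptions chosen for illustration; the paper's actual difference measure is not specified in the abstract.

```python
import math

def phone_length_distance(stats_a, stats_b):
    """Hypothetical per-phone difference measure: symmetrized distance
    between Gaussian summaries (mean, std of duration in ms) of phone
    lengths, averaged over the phones shared by the two recordings."""
    shared = stats_a.keys() & stats_b.keys()
    if not shared:
        raise ValueError("no common phones to compare")
    total = 0.0
    for ph in shared:
        (m1, s1), (m2, s2) = stats_a[ph], stats_b[ph]
        # 1D Bhattacharyya distance between two normal distributions
        total += 0.25 * (m1 - m2) ** 2 / (s1**2 + s2**2) \
                 + 0.5 * math.log((s1**2 + s2**2) / (2 * s1 * s2))
    return total / len(shared)

same = {"a": (80.0, 12.0), "o": (95.0, 15.0)}
other = {"a": (60.0, 8.0), "o": (120.0, 20.0)}
assert phone_length_distance(same, same) == 0.0
assert phone_length_distance(same, other) > 0.0
```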
We consider the task of describing local posterior inference by means of matrix-vector equations in algebraic Bayesian networks, which represent a class of probabilistic graphical models. Such equations were presented in general form in previous publications; however, they contained normalizing factors whose calculation was given only an algorithmic description instead of the desired matrix-vector interpretation. To eliminate this gap, the normalizing factors were first represented as scalar products. Then it was shown that one of the components in each scalar product can be expressed as a Kronecker power of a constant two-dimensional vector. Further, transposition of the non-normalized posterior inference matrix operator and its transfer within each scalar product yielded a representation of one of the scalar product components as a sequence of tensor products of two-dimensional vectors. The latter vectors have only two possible values in one case and three in the other; the choice among those values is determined by the structure of the input evidence. The second component of each scalar product is the vector of original data. The calculations performed made it possible to construct the corresponding vectors; the paper contains a table with examples for some of them. Representation of local posterior inference by matrix-vector equations simplifies the development of local posterior inference algorithms, their verification and further implementation based on available libraries. These equations also make it possible to apply classical mathematical techniques to the analysis of the obtained results. Finally, the results obtained enable the method of postponed calculations: this method avoids construction of large vectors, since the vector components can be calculated just in time, when they are needed, by means of bitwise operations. 
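The Kronecker-power construction mentioned above can be illustrated directly (a generic NumPy sketch, not the paper's notation): the n-th Kronecker power of a 2-element vector has 2ⁿ entries, one per combination of binary indices.

```python
import numpy as np

def kron_power(v, n):
    """n-th Kronecker power of a vector: v (x) v (x) ... (x) v, n factors."""
    out = np.array([1.0])
    for _ in range(n):
        out = np.kron(out, v)
    return out

v = np.array([1.0, 0.5])
w = kron_power(v, 3)
assert w.shape == (8,)       # 2**3 entries
assert w[0] == 1.0           # v[0] * v[0] * v[0]
assert w[-1] == 0.125        # v[1] * v[1] * v[1]
```

This is exactly the structure that lets individual components be computed on demand from the bit pattern of their index, as in the postponed-calculation method.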
The paper deals with image interpolation methods and their applicability for elimination of artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the encoding steps. The main drawback of existing methods is high computational complexity, unacceptable in video processing. Interpolation of signal samples for blocking-effect elimination at the output of transform coding is proposed as part of the study. It was necessary to develop methods improving the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on the borders of segments through intraframe interpolation of video sequence segments. The essence of the developed methods is the application of an adaptive recursive algorithm with an adaptively sized interpolation kernel, both with and without consideration of the brightness gradient at the boundaries of objects and video sequence blocks. In the theoretical part of the research, methods of information theory (rate-distortion theory and data redundancy elimination), methods of pattern recognition and digital signal processing, as well as methods of probability theory are used. In the experimental part of the research, software implementation of the compression algorithms with subsequent comparison against existing ones was carried out. The proposed methods were compared with the simple averaging algorithm and the adaptive central-sample interpolation algorithm. The algorithm based on interpolation with adaptive kernel size selection increases the compression ratio by 30%, and its modified version by 35%, in comparison with existing interpolation algorithms, while improving the quality of the reconstructed video sequence by 3% compared to a sequence compressed without interpolation. 
The findings can be used in video processing tasks, various video compression codecs and streaming systems.
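As a point of reference, the simple averaging baseline mentioned above can be sketched as follows. The formulation is assumed for illustration (fixed kernel, vertical borders only, linear weights); the paper's adaptive-kernel methods are more elaborate.

```python
import numpy as np

def smooth_block_boundary(img, block=8, k=1):
    """Baseline deblocking sketch: replace the k pixels on each side of
    every vertical block border with a linear interpolation between the
    samples just outside the border.  Assumes the image width is a
    multiple of the block size."""
    out = img.astype(float).copy()
    h, w = out.shape
    for b in range(block, w, block):
        left, right = out[:, b - k - 1], out[:, b + k]   # anchors outside the border
        for j in range(2 * k):                           # pixels b-k .. b+k-1
            t = (j + 1) / (2 * k + 1)
            out[:, b - k + j] = (1 - t) * left + t * right
    return out

# a hard step at a block border is softened into a ramp
img = np.zeros((4, 16)); img[:, 8:] = 100.0
sm = smooth_block_boundary(img, block=8, k=1)
assert 0.0 < sm[0, 7] < sm[0, 8] < 100.0
```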
ALGORITHM OF RATIONAL PROCESSOR ARCHITECTURE Zykov Nikolai A., Shamenkov Nikolai A., Karytko Anatoliy A.
The paper deals with an algorithm that makes it possible to choose a processor architecture for a computational kernel that provides the maximum possible rate of the computational process. The algorithm is based on a sliding-window method applied to bottlenecks: fragments of program code taking the maximum percentage of execution time. The algorithm calculates a rational number of arithmetic-logic computing channels of the processor core depending on the type of supported operations. The calculation of the rational number of arithmetic and logical channels is performed on a code example that implements the algorithm for calculating the tesseral harmonics of the Earth's gravitational field. Arithmetic operations of integer and real addition (subtraction) and real multiplication, as well as operations of calculating the values of logical predicates, were considered in the example. The calculation results revealed that for the considered example a rational variant of the processor architecture should include two arithmetic-logic channels capable of performing these operations. The developed algorithm is applicable to the synthesis of processor architectures and computing systems based on them. The maximum effect of the algorithm is achieved in the synthesis of computing systems that perform tasks based on a consistent mathematical apparatus.
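The sliding-window analysis can be sketched as follows; the trace format, operation names and window size are assumptions for illustration, not the paper's actual data model.

```python
from collections import Counter

def rational_channels(trace, window):
    """Sliding-window sketch: over every window of the bottleneck's
    operation trace, count how many operations of each type appear;
    the per-type maxima suggest how many parallel ALU channels of that
    type the core should provide to sustain the peak issue rate."""
    need = Counter()
    for i in range(len(trace) - window + 1):
        counts = Counter(trace[i:i + window])
        for op, n in counts.items():
            need[op] = max(need[op], n)
    return dict(need)

trace = ["add", "add", "mul", "add", "add", "cmp", "mul"]
need = rational_channels(trace, window=3)
assert need["add"] == 2      # at most two additions in any 3-op window
assert need["mul"] == 1
```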
The paper studies the effect of load distribution between computing nodes on the run time of a distributed computer network model. Two main types of balancing are considered: balancing of the computational load and reduction of the amounts of transmitted data. Simulation was performed on one computer, with distribution carried out between the cores of one processor. Simulation experiments showed that an imbalance in the amounts of data transferred between parts of a distributed model slows the simulation down several times, due to the overhead of data transmission between logical processes via MPI. The change of model run time under uneven distribution of the computational load depends to a large extent on the load created by the applications running on the simulated network nodes. It is shown that a balanced model runs much faster than an unbalanced one even for applications that do not require significant computing resources. Simulation time reduction can be achieved by partitioning the model so as to reduce the amounts of data transferred between its parts and to reduce the variability of the loads generated by applications in different logical processes.
The paper studies methods based on genetic programming for the problem of integer sequence extrapolation. In order to test the hypothesis that the expressiveness of the program representation language influences prediction effectiveness, a genetic programming method based on several limited languages for recurrent sequences has been developed. On an integer sequence sample, the implemented method using the more complete language has shown results significantly better than those of a current method from the literature based on artificial neural networks. Analysis of the experimental comparison of the implemented method with different languages has shown that extending the language makes new sequence classes accessible for prediction, but increases the difficulty of finding regularities that were already predictable in a simpler language. This effect can be reduced, but not eliminated completely, by extending the language with constructions that make solutions more compact. The research leads to the conclusion that the choice of an adequate language for solution representation alone is not enough for a full solution of the integer sequence prediction problem (and, all the more, of the universal prediction problem). However, practically applicable methods can be obtained using genetic programming.


NUCLEAR-MAGNETIC MINI-RELAXOMETER FOR LIQUID AND VISCOUS MEDIA CONTROL Davydov Vadim Vladimirovich, Velichko Elena Nikolaevna, Karseev Anton Yurievich
The paper deals with a new method for registration of the nuclear magnetic resonance (NMR) signal of small-volume (0.5 ml) liquid and viscous media in a weak magnetic field (0.06-0.08 T), and for measuring the longitudinal T1 and transverse T2 relaxation constants. A new construction of the NMR mini-relaxometer magnetic system is developed for registration of the NMR signal. The nonuniformity of the magnetic field in the pole gap where the registration coil is located is 0.4·10⁻³ cm⁻¹ (at induction B0 = 0.079 T). An electrical circuit of an autodyne receiver (weak-oscillation generator) has been developed using a low-noise differential amplifier, together with an NMR signal operating and control scheme (based on an STM32 microcontroller) for measuring the relaxation constants of liquid and viscous media in automatic mode. The new technical solutions made it possible to improve the relaxometer response time and the dynamic range of measurements of the relaxation constants T1 and T2 in comparison with the small-sized nuclear magnetic spectrometer developed by the authors earlier, while preserving the accuracy characteristics. The developed schemes for self-tuning of the registration frequency, of the amplitude of the generated magnetic field H1 in the registration coil, and of the amplitude and frequency of the modulating field provide measurement of T1 and T2 with an error of less than 0.5% and a signal-to-noise ratio of about 1.2 in the temperature range from 3 to 40 °C. The new construction of the mini-relaxometer reduced the weight of the device to 4 kg (with an independent supply unit) and increased its transportability and operating convenience.


The paper presents new results concerning the selection of an optimal information fusion formula for ensembles of C-OTDR channels, where C-OTDR stands for coherent optical time-domain reflectometer. Each of these channels provides data for a corresponding automatic classifier designed to classify elastic vibration sources in the multiclass case. These classifiers form a so-called classifier ensemble; ensembles of Lipschitz classifiers were considered. In this case the goal of information fusion is to create an integral classifier designed for effective classification of seismoacoustic target events. The Matching Pursuit Optimization Ensemble Classifiers (MPOEC), Linear Programming Boosting (LP-Boost, in its LP-β and LP-B variants), Multiple Kernel Learning (MKL), and Weighting Inversely as Lipschitz Constants (WILC) approaches were compared. WILC is a brand new approach to optimal fusion of Lipschitz classifier ensembles. The basics of these methods are briefly described along with their intrinsic features. All of these methods reduce the task of choosing the convex hull parameters to the solution of an optimization problem. All of the mentioned approaches can be successfully used in C-OTDR system data processing. Results of practical usage are presented. 
The paper deals with the development of methods and tools for mathematical and computer modeling of multilevel network-centric control systems of regional security. This research is carried out within the implementation of the development strategy of the Arctic zone of the Russian Federation and national security safeguarding for the period until 2020 in the territory of the Murmansk region. The creation of a unified interdepartmental multilevel computer-aided system is proposed, intended for decision-making information support and socio-economic security monitoring of the Arctic regions of Russia. The distinctive features of the investigated class of systems are openness, self-organization, decentralization of management functions and decision-making, weak hierarchy in the decision-making circuit and the capability of goal generation inside the system. The research techniques include the functional-target approach, the mathematical apparatus of multilevel hierarchical system theory and the principles of network-centric control of distributed systems with pro-active components and variable structure. The work considers the problem of coordinating local network-centric management decisions within multilevel distributed systems intended for information support of regional security. An approach to this coordination problem and its formalization in multilevel network-centric control systems of regional security have been proposed, based on a developed multilevel recurrent hierarchical model of the complex security of a regional socio-economic system. The model provides coordination of regional security indexes optimized by different elements of the multilevel control system, subject to decentralized decision-making. The specificity of the model consists in the application of functional-target technology and the mathematical apparatus of multilevel hierarchical system theory for implementation of the coordination procedures of local network-centric management decisions. 
The developed methods and research results can find further application both in coordination of managerial decision-making in multilevel network-centric control systems for different subject domains, and in the analysis and synthesis of an integral complex security index of a regional socio-economic system represented as a regional security index matrix.
The Riemann problem of the one-dimensional breakdown of an arbitrary discontinuity in the parameters of an unsteady gas flow is considered as applied to the design of Godunov-type numerical methods. The problem is solved in exact and approximate statements (the Osher-Solomon difference scheme used in shock-capturing numerical methods): the intensities (ratios of static pressures) and flow velocities on the sides of the resulting discontinuities and waves are determined, and then the other parameters are calculated in all regions of the flow. A comparison of calculation results for model flows by the exact and approximate solutions is performed. The concept of a velocity function is introduced, and its dependence on the discontinuity intensity is investigated. A special intensity is discovered at which the isentropic wave creates the same flow velocity as the shock wave; in the vicinity of this singular intensity the approximate methods provide the highest accuracy. The domain of applicability of the approximate Osher-Solomon solution is determined by test calculations. The results are presented in a form suitable for use in high-resolution numerical methods.
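For illustration, a velocity function of the kind discussed above can be written down explicitly for an ideal gas using the standard shock and Riemann-invariant relations (a textbook gas-dynamics sketch; the paper's exact formulation may differ). Both branches vanish at intensity π = 1, consistent with the matching of shock and isentropic waves near unit intensity.

```python
import math

def velocity_function(pi, c1, gamma=1.4):
    """Velocity change across a wave of intensity pi = p2/p1 entering gas
    at rest with sound speed c1: a shock for pi > 1 (Rankine-Hugoniot),
    an isentropic wave for pi <= 1 (Riemann invariant)."""
    if pi > 1.0:
        return c1 * (pi - 1.0) * math.sqrt(
            2.0 / (gamma * ((gamma + 1.0) * pi + gamma - 1.0)))
    return 2.0 * c1 / (gamma - 1.0) * (1.0 - pi ** ((gamma - 1.0) / (2.0 * gamma)))

# both branches agree (and vanish) at unit intensity
assert velocity_function(1.0, 340.0) == 0.0
assert abs(velocity_function(1.0001, 340.0)
           - velocity_function(0.9999, 340.0)) < 0.1
```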
The paper deals with a mathematical model of a dynamical system with a single degree of freedom, presented in the form of ordinary differential equations whose nonlinear parts are polynomials with constant and periodic coefficients. A modified method for the study of self-oscillations of nonlinear mechanical systems is presented. A refined method of transformation and integration of the equation, based on the Poincare-Dulac normalization method, has been developed. The refinement consists in taking higher-order nonlinear terms into account by the Chebyshev economization technique, which improves the accuracy of the calculations. The higher-order remainder terms are approximated by homogeneous forms of lower orders, in the present case by cubic forms. As an example, the modified method is applied to the van der Pol equation; expressions for the amplitude and the phase of the oscillations are obtained in analytical form. The solution of the van der Pol equation obtained by the developed method is compared with the exact solution. The error of the solution obtained by the modified method is about 1%, which shows the applicability of the developed method to the analysis of self-oscillations of nonlinear dynamic systems with constant and periodic parameters.
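The analytical result being checked here is the classical one: for small nonlinearity parameter mu, the van der Pol limit cycle has amplitude close to 2. A minimal numerical sketch (not the paper's method; a plain RK4 integration used only to exhibit the reference solution) can confirm this:

```python
def van_der_pol_step(x, v, mu, dt):
    """One RK4 step for x'' - mu*(1 - x**2)*x' + x = 0."""
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def limit_cycle_amplitude(mu=0.1, x0=0.5, steps=40000, dt=0.01):
    """Integrate long enough to pass the transient, then record max |x|."""
    x, v = x0, 0.0
    amp = 0.0
    for i in range(steps):
        x, v = van_der_pol_step(x, v, mu, dt)
        if i > steps // 2:  # discard the transient half of the run
            amp = max(amp, abs(x))
    return amp
```

For mu = 0.1 the computed amplitude is close to the first-order analytical value of 2, the benchmark against which a ~1% method error can be measured.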


Problem statement. We justify the applicability of full-text search services in both universal and specialized (in terms of resource base) digital libraries for the extraction and analysis of contextual knowledge in the humanities. The architecture and services of the virtual information and resource center for extracting knowledge from humanitarian texts created by the «Humanitariana» project are described. Functional integration of resources and services supports full-text search in a distributed decentralized environment, organized in the Internet/Intranet architecture under the control of the client (user) browser accessing a variety of independent servers. An algorithm for executing a distributed full-text query is described. Methods. A method combining frequency-ranked and paragraph-oriented full-text queries is used: the former serve for preliminary analysis of the subject area or of a particular work (explication of the "vertical" context, or macro context), the latter for explication of the "horizontal" context, or micro context, within an author's paragraph. The results of the frequency-ranked queries are used to compile the paragraph-oriented queries. Results. The results of textual research are shown on the topics "The question of fact in Russian philosophy" and "The question of loneliness in Russian philosophy and culture". About 50 pieces of contextual knowledge on a total resource base of about 2,500 full-text resources have been explicated and briefly described for further expert investigation. Practical significance. The proposed technology (advanced full-text search services in a distributed information environment) can be used for information support of humanitarian studies and education in the humanities, for functional integration of resources and services of various organizations, and for carrying out interdisciplinary research. 
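The two-stage query combination can be sketched in miniature: a frequency-ranked pass over the corpus surfaces the words that co-occur with the seed terms (the macro context), and those words then parameterize a paragraph-level conjunctive query (the micro context). This is a toy illustration under assumed data structures (a corpus as a list of paragraph strings), not the project's actual distributed implementation.

```python
import re
from collections import Counter

def frequency_ranked_query(paragraphs, seed_terms, top_n=5):
    """Stage 1: rank words co-occurring with the seed terms across all
    paragraphs ("vertical" context, or macro context)."""
    counter = Counter()
    for p in paragraphs:
        words = re.findall(r"\w+", p.lower())
        if any(t in words for t in seed_terms):
            counter.update(w for w in words if w not in seed_terms)
    return [w for w, _ in counter.most_common(top_n)]

def paragraph_query(paragraphs, terms):
    """Stage 2: return the author paragraphs containing all the terms
    ("horizontal" context, or micro context)."""
    result = []
    for p in paragraphs:
        words = set(re.findall(r"\w+", p.lower()))
        if all(t in words for t in terms):
            result.append(p)
    return result

# Tiny demo corpus (hypothetical).
corpus = [
    "The question of fact is central to Russian philosophy.",
    "A fact is a stubborn thing.",
    "Loneliness pervades Russian culture.",
]
```

In the real system each stage would be fanned out to independent servers and the partial results merged in the client browser.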


Kozyreva Olga Dmitrievna, Pushkareva Alexandra Evgenievna, Shalobaev Evgeniy Vasilievich, Biro Istvan
A key role in measuring the level of blood oxygenation is played by the dependence of the measured signal on the wavelength at which measurements are performed. This paper presents a study of the effect of blood oxygenation on the signal of diffusely scattered radiation in the 590-860 nm wavelength range. On the basis of previous studies, the spectral characteristic of the backscattered signal for different levels of blood oxygenation was obtained by Monte Carlo modeling. In this model a photon is characterized by its coordinates and weight. The size and direction of each photon step from the current point are determined by means of a random number generator. At each step the photon loses some weight due to absorption. Weight reduction due to Fresnel reflection and total internal reflection at the boundary between the two media (air and blood) is also taken into consideration. The optimal wavelength range for oximeters performing sufficiently accurate non-contact measurements of the blood oxygenation level by detecting scattered radiation is 650-750 nm. The adequacy of the suggested model has been tested by comparing the calculated characteristic with experimental results obtained by means of a double integrating sphere. The highest relative backscattered signal (0.17-0.21) is recorded at 700 nm. 
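The weighted-photon scheme can be illustrated with a deliberately simplified sketch: a semi-infinite medium, isotropic scattering, and no Fresnel terms (all simplifications relative to the paper's model). Each interaction multiplies the photon weight by the single-scattering albedo, and photons that re-cross the surface contribute their remaining weight to the backscattered signal. The absorption and scattering coefficients here are arbitrary illustrative values.

```python
import math
import random

def backscatter_fraction(mu_a, mu_s, n_photons=5000, seed=1):
    """Toy weighted Monte Carlo in a semi-infinite medium (z >= 0).
    mu_a, mu_s: absorption and scattering coefficients (1/length)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t          # weight surviving each interaction
    collected = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0   # launched straight into the medium
        while w > 1e-3:                # terminate low-weight photons
            step = -math.log(rng.random()) / mu_t  # free path length
            z += cos_t * step
            if z < 0.0:                # escaped back through the surface
                collected += w
                break
            w *= albedo                # absorption at the interaction
            cos_t = rng.uniform(-1.0, 1.0)  # isotropic scattering (simplification)
    return collected / n_photons

print(backscatter_fraction(0.1, 10.0), backscatter_fraction(2.0, 10.0))
```

Raising mu_a (stronger absorption, as at wavelengths where a given hemoglobin form absorbs heavily) lowers the backscattered fraction, which is the mechanism behind the oxygenation-dependent spectral signal.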
USING PRECEDENTS FOR REDUCTION OF DECISION TREE BY GRAPH SEARCH
Bessmertny Igor Alexandrovich, Koroleva Julia A., Surinov Roman T.
The paper considers the problem of organizing mutual payments between business entities by means of clearing, solved by searching for graph paths. To reduce the size of the decision tree, a method of precedents is proposed that consists in saving intermediate solutions while moving along the decision tree. An algorithm and an example are presented demonstrating that the solution complexity approaches linear. Tests carried out in a civil aviation settlement system demonstrate an approximately 30 percent reduction of real money transfers. The proposed algorithm is also planned for implementation in other clearing organizations of the Russian Federation.
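The core of clearing by graph search is netting out directed cycles of debt. The sketch below shows only that core (depth-first cycle search plus cancellation of the minimum amount along the cycle); the paper's precedent mechanism, which caches intermediate solutions to prune the decision tree, is omitted here. All names and the debt-map representation are illustrative assumptions.

```python
def net_cycles(debts):
    """Reduce mutual debts by cancelling the minimum amount along every
    directed cycle found.  `debts` maps (debtor, creditor) -> amount."""
    graph = {}
    for (a, b), amount in debts.items():
        graph.setdefault(a, {})[b] = amount

    def find_cycle(start):
        # Iterative DFS; returns a node list forming a cycle back to start.
        stack = [(start, [start])]
        while stack:
            node, path = stack.pop()
            for nxt, amount in graph.get(node, {}).items():
                if amount <= 0:
                    continue
                if nxt == start:
                    return path
                if nxt not in path:
                    stack.append((nxt, path + [nxt]))
        return None

    changed = True
    while changed:
        changed = False
        for start in list(graph):
            cycle = find_cycle(start)
            if cycle:
                edges = list(zip(cycle, cycle[1:] + [cycle[0]]))
                m = min(graph[a][b] for a, b in edges)
                for a, b in edges:   # cancel m along the whole cycle
                    graph[a][b] -= m
                changed = True
    return {(a, b): v for a, src in graph.items()
            for b, v in src.items() if v > 0}
```

For example, debts A→B 100, B→C 100, C→A 70 net down to A→B 30 and B→C 30, cutting total transfers from 270 to 60; caching precedents would avoid re-exploring paths already known to close (or not close) a cycle.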
The paper describes an approach to assessing the emotional tonality of natural language texts based on special dictionaries. A method is proposed for automatic assessment of public opinion by means of sentiment analysis of the reviews and discussions that follow published Web documents. The method is based on word statistics in the documents. A pilot version of the software system implementing sentiment analysis of natural language texts in Russian on a linear assessment scale is developed. Syntactic analysis and word lemmatization are used to identify terms more correctly. The tonality dictionaries are provided in an editable format and are open for enhancement. A program system implementing sentiment analysis of Russian texts based on open tonality dictionaries is presented for the first time.
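The dictionary-based scoring on a linear scale can be sketched as follows. The tonality dictionary, the stub lemmatizer, and the English sample words are all hypothetical stand-ins: the described system works on Russian with full morphological lemmatization and open, editable dictionaries.

```python
import re

# Hypothetical open tonality dictionary: lemma -> score on a linear scale.
TONALITY = {"good": 1.0, "excellent": 2.0, "bad": -1.0, "terrible": -2.0}

def lemmatize(word):
    """Stub lemmatizer (strips a plural 's'); a real system performs
    full morphological analysis, which matters greatly for Russian."""
    return word.rstrip("s")

def sentiment(text):
    """Average tonality of the dictionary words found in the text;
    0.0 means neutral on the linear assessment scale."""
    words = [lemmatize(w) for w in re.findall(r"\w+", text.lower())]
    scores = [TONALITY[w] for w in words if w in TONALITY]
    return sum(scores) / len(scores) if scores else 0.0
```

Averaging rather than summing keeps long and short documents comparable on the same scale; because the dictionary is a plain editable mapping, enhancing it requires no code changes.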
Copyright 2001-2022 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.