Summaries of the Issue

OPTICAL ENGINEERING

An approach to photogrammetric processing of indirect optical location data
Andrey N. Grigor’ev, Alexander I. Altuchov, Denis S. Korshunov
311
The paper proposes an approach to obtaining images of the objects under investigation based on indirect optical location data. The goal of the study is to increase the graphic similarity of the images and to assign them measuring properties. To achieve this goal, the concept of photogrammetric processing of frame images obtained by conducting indirect optical location in a certain way is formulated. The graphical similarity of the images is proposed to be improved by extracting photometric data related to the object and the background from the registered optical radiation. Based on the selected data, a statistical evaluation of the sample average of the optical radiation intensity from these sources is carried out. The obtained estimates are used to form a monochrome digital image. Measuring properties are added by converting the coordinates of the digital image to relative coordinates that have a metric expression. The reason for the decrease in the graphical similarity of the images obtained on the basis of indirect optical location data is determined. In particular, the addition of light waves from different sources during the exposure time of the photodetector leads to the merging of the object and the background in the resulting image. The paper presents an approach to the separation of photometric data from different sources that is based on observing the phase difference between the emitted and recorded light waves. The authors define the mathematical apparatus for linking the obtained images to a relative coordinate system adapted for the case of indirect optical location. The concept of conducting indirect optical location using a special optoelectronic complex is proposed. The study describes the requirements for the equipment of an optoelectronic complex that generates and registers optical radiation with the required parameters. The results of an experiment on the formation of images with measuring properties confirm the feasibility of the proposed method. Conducting indirect optical location opens the way to obtaining images of an area that is inaccessible to humans. In particular, the results of the experiment demonstrate that the use of the proposed concept provides images of an object placed behind a light-tight obstacle, which are characterized by the presence of measuring properties and reflect the details of the object under study with high graphical similarity.
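The statistical step described above (a per-pixel sample-average estimate of the registered intensity rendered as a monochrome frame) can be illustrated with a minimal sketch; the frame stack, array shapes and 8-bit normalization below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: per-pixel sample-mean intensity over a stack of
# registered frames, normalized to an 8-bit monochrome image.
import numpy as np

def mean_intensity_image(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (n_frames, height, width) with raw intensities."""
    mean = frames.mean(axis=0)            # sample average per pixel
    mean -= mean.min()                    # shift to zero
    if mean.max() > 0:
        mean /= mean.max()                # scale to [0, 1]
    return (mean * 255).astype(np.uint8)  # 8-bit monochrome image
```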
Sensing element for the formation fluid refractometer on the basis of total internal reflection
Alexandra S. Bobe, Anna O. Voznesenskaya, Alexey V. Bakholdin, Vladimir E. Strigalev, Vladimir N. Vasiliev
320
When developing oil fields, there is an urgent task to quickly determine the type of pumped formation fluid, which includes formation gas, formation oil and formation water. In this paper, we propose a new type of sensor element designed for flow refractometry of formation fluid based on the effect of total internal reflection. The sensor element is a sapphire tip of conical shape, 20 mm in length and 20 mm in diameter. The original shape of the sensor element is determined by a modified ray tracing method, taking into account analytical relations that determine the conditions for providing a larger dynamic range of measurements under specified physical, technological and design constraints. The conversion dependence of the tip is obtained for the wavelengths of 405 nm, 1064 nm and 3300 nm and allows determining the type of formation fluid (gas/water/oil). The proposed method enables the development of conical sensor elements based on total internal reflection for downhole monitoring systems and optical threshold sensors of the refractive index.
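As a reference point for the working principle, the critical angle at the sapphire/fluid boundary follows from Snell's law, so the fraction of totally internally reflected rays differs for gas, water and oil. The sketch below uses rough, illustrative refractive indices rather than values from the paper.

```python
# Illustrative sketch of the total-internal-reflection principle behind the tip:
# the critical angle at the sapphire/fluid boundary depends on the fluid index.
# The refractive indices below are approximate visible-range values for illustration only.
import math

N_SAPPHIRE = 1.76  # assumed ordinary index of sapphire (approximate)
FLUIDS = {"gas": 1.00, "water": 1.33, "oil": 1.45}  # illustrative indices

for fluid, n in FLUIDS.items():
    theta_c = math.degrees(math.asin(n / N_SAPPHIRE))  # Snell's law critical angle
    print(f"{fluid:5s}: critical angle ≈ {theta_c:.1f}°")
```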
326
The paper proposes a method for studying the color rendition of digital cameras. This parameter is conventionally assessed visually by color targets in photography and design. The proposed method compares the chromaticity values of standard test objects with their values measured by a digital camera on a specially designed setup. The setup includes a light source, a collimator for uniform illumination of a reflecting screen with a test object, and a research object, i.e., a digital camera. The camera is positioned at a 45 degree angle to the screen. This setup follows the recommendations of the Commission Internationale de l’Eclairage (CIE) for color measurements. To measure the colorimetric characteristics of the samples, the authors adopted the 0/45 illumination/observation scheme. The test object is illuminated with a light beam, the axis of which makes an angle not exceeding 10° with the normal to the sample surface. The sample is observed at an angle of 45° ± 5° from the normal. The angle between the axis of the illuminating beam and any of its rays does not exceed 5°. A type A source (color temperature 2856 K) acts as an illuminator. Standardized reference sets of colored optical glasses (light filters) characterized by known chromaticity coordinates were chosen as test objects. To evaluate the results, specialized software has been developed that allows one to select individual pixels, calculate their brightness in order to find the chromaticity coordinates, and compare the results with reference values. The method was tested using the Canon EOS 60D digital camera. At the time of measurement, digital filters for correction, anti-aliasing, sharpening, as well as color adjustment were turned off in the camera. The paper presents colorimetric measurements using 58 colored optical glasses. The averaged values fall into seven groups. The selected color space for measurements is sRGB. The measurement results proved the possibility of using the proposed technique for analyzing and choosing appropriate digital recording devices for colorimetric measurements in such areas as medicine, chemistry and the food industry.
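A minimal sketch of the chromaticity computation implied above is given below: linear sRGB pixel values are mapped to CIE XYZ with the standard D65 sRGB matrix and then to xy chromaticity coordinates for comparison with reference values. This is a generic illustration, not the authors' calibration pipeline.

```python
# Sketch: linear sRGB triple -> CIE XYZ (standard D65 sRGB matrix) -> xy chromaticity.
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def xy_chromaticity(rgb_linear: np.ndarray) -> tuple[float, float]:
    """rgb_linear: linear (gamma-decoded) RGB triple in [0, 1]."""
    X, Y, Z = SRGB_TO_XYZ @ rgb_linear
    s = X + Y + Z
    return (X / s, Y / s)

print(xy_chromaticity(np.array([1.0, 1.0, 1.0])))  # ~D65 white point (0.3127, 0.3290)
```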
An analysis of methods for aberrated spot diagram center evaluation
Tatyana V. Ivanova, Elizaveta Yu. Letova, Olga S. Kalinkina, Darya V. Nikiforova, Vladimir E. Strigalev
334
The paper considers spot diagram center evaluation methods and their errors depending on aberration type and value. The authors present a modified center of mass method which provides higher accuracy of center evaluation for spot diagrams with coma. Errors were estimated using simulated spot diagrams with symmetrical and non-symmetrical aberrations of the third and fifth orders and their combinations. The error of center evaluation by the maximum value method and the center of mass method is analyzed. The proposed modified center of mass method gives higher weight to pixels with higher intensity, which leads to better sensitivity of the method; it is compared with the other methods. The center of mass method can evaluate an accurate center position only for a coma-free spot diagram. The maximum value method cannot evaluate an accurate center position for a spot diagram with coma either, and for a coma-free spot diagram it can also produce larger errors than the center of mass method. The modified center of mass method is more robust and evaluates the center of a spot diagram with coma and other aberrations more accurately. The modified center of mass method shows higher accuracy when evaluating the center of a spot diagram with aberration, and hence higher accuracy of modulation transfer function evaluation by the spot diagram. The precise evaluation of the spot diagram center will also increase the convergence of the phase retrieval method with parametric optimization techniques.
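A minimal sketch of the two centroid estimators compared above is shown below. The exact weighting of the modified method is not specified in this summary, so a power-law weight with p > 1 is assumed purely for illustration.

```python
# Sketch of the centroid estimators: power=1 is the classic center of mass,
# power>1 emphasizes bright pixels (an assumed form of the "modified" method).
import numpy as np

def spot_center(intensity: np.ndarray, power: float = 1.0) -> tuple[float, float]:
    """Intensity-weighted centroid (x, y) of a 2D spot image."""
    w = intensity.astype(float) ** power
    ys, xs = np.indices(intensity.shape)   # row (y) and column (x) index grids
    total = w.sum()
    return (float((xs * w).sum() / total), float((ys * w).sum() / total))
```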
342
The paper considers the construction of optoelectronic systems for monitoring near-Earth space and the choice of algorithms for identifying and obtaining the most reliable coordinate and non-coordinate information about space objects of natural and man-made origin. Experimental mock-up studies using the developed installation were performed. The installation allows the calibration of the optoelectronic system and the study of algorithms for obtaining coordinate and detailed data about the observed objects. The authors apply the method of image registration by a telescopic system in two modes: an astrograph with a digital camera and a digital camera with a microlens array. The work uses the methods for analyzing two-dimensional images by algorithms for measuring binary clusters in an image structure, investigating the brightness structure of an image with a circular boundary in a given area, determining the centers and radii of the circles inscribed in clusters, and calculating and estimating the maxima of the curves of the coefficients of continuous wavelet transformation in the image profile lines with real wavelets. The composition and structure of a complex of algorithms and a methodology for their application have been developed. The methodology makes it possible to increase the accuracy and reliability of information obtained about the observed objects in a wide range of changes in the characteristics of the background target environment. The results substantiate the possibility of increasing the accuracy and reliability of coordinate information about the observed objects by analyzing the curves of the coefficients of the continuous wavelet transform or analyzing the brightness gradient, provided that the algorithm for analyzing clusters of binarized images is used. The algorithm makes it possible to determine the areas of localization of objects of interest in the observed space. The developed methodology can be applied to assess the accuracy and reliability of the results of determining the coordinates and detailed features of objects. At the same time, it is possible to scale the algorithms to the means of observation and the tasks being solved, which makes it possible to use them in automated monitoring systems for near-Earth space and increases the efficiency of detection and identification of objects.
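One of the analysis steps mentioned above, the continuous wavelet transform of an image profile line with a real wavelet, can be sketched as follows; the Ricker ("Mexican hat") wavelet and the scale set are illustrative assumptions, not the authors' choices.

```python
# Hedged sketch: CWT of a 1D image profile with a Ricker wavelet; coefficient
# maxima along each scale mark candidate bright objects on the profile.
import numpy as np

def ricker(points: int, a: float) -> np.ndarray:
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_profile(profile: np.ndarray, scales=(2, 4, 8, 16)) -> np.ndarray:
    """Rows of the result are wavelet coefficients of the profile at each scale."""
    out = np.empty((len(scales), len(profile)))
    for i, a in enumerate(scales):
        kernel = ricker(min(10 * int(a), len(profile)), a)
        out[i] = np.convolve(profile, kernel, mode="same")
    return out

# Coefficient maxima along each row then give candidate object positions:
# positions = cwt_profile(line).argmax(axis=1)
```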
352
The authors estimated the permissible positioning errors when writing reflective phase holograms, intended for use in holographic photolithography, on solid media using electron-beam lithography devices. The work deals with projection holographic photolithography based on computer-generated Fresnel holograms. The synthesis of holograms involved mathematical modeling of the physical processes of hologram recording and reconstruction using the following parameters: the characteristic size of the binary object is 20 × 20 nm or 80 × 80 nm, the wavelength of the radiation is 13.5 nm, the pixel size of the hologram is 20 × 20 nm, the distance between the planes of the object and the hologram is from 20.4 to 31.6 microns, and the angle of incidence of the reference wave is 14°42′. For each of the three objects used in the modeling (namely, “Angles”, “Line grid target” and “Enlarged angles”), four computer-generated holograms were synthesized with different values of the standard deviation of the pixel positioning error. The simulation of these errors was carried out by violating the equidistance of the points (pixels) on the hologram aperture. The holograms distorted in this way were subjected to the standard procedure of numerical reconstruction in virtual space. Comparison of the quality of the images obtained at different values of the positioning errors of the hologram pixels made it possible to evaluate their influence on the quality of the reconstructed image. It has been shown that the criterion used for estimating the permissible value of the positioning error in analog holography cannot be applied to synthesized holograms because of the peculiar properties of interference fringes in discrete holograms. The results demonstrated a significant dependence of the permissible (in terms of image quality) pixel positioning errors on the object presentation method. The analysis revealed the impossibility of applying a single tolerance for pixel positioning errors to all possible synthesis conditions of computer-generated holograms and hence indicates the necessity of including a feature for estimating permissible pixel positioning errors in the software package for the synthesis and reconstruction of holograms. Based on the analysis of the technological parameters of modern electron-beam lithography devices, the authors confirmed the possibility of their use for manufacturing computer-generated holograms in modern high-resolution photolithography. Modeling the permissible positioning errors of computer-generated holograms by the proposed method allows evaluating the practical possibility of producing holograms with the required structure and high quality of the reconstructed image with a specific electron-beam lithography device.
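The error model described above, a violation of the equidistance of hologram pixels with a given standard deviation, can be sketched as a random jitter applied to a regular grid before numerical reconstruction. The grid pitch below follows the 20 nm value quoted in the summary; the jitter distribution is an assumption for illustration.

```python
# Minimal sketch: perturb an equidistant pixel grid by a zero-mean positioning
# error with a chosen standard deviation (in nanometers).
import numpy as np

rng = np.random.default_rng(0)
PITCH_NM = 20.0  # hologram pixel pitch quoted in the summary

def jittered_grid(n: int, sigma_nm: float) -> np.ndarray:
    """Returns an (n, n, 2) array of pixel-center coordinates with positioning error."""
    x = np.arange(n) * PITCH_NM
    xx, yy = np.meshgrid(x, x)
    grid = np.stack([xx, yy], axis=-1)
    return grid + rng.normal(0.0, sigma_nm, size=grid.shape)
```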
The study of spontaneous domain nucleation in the interelectrode gap of phase modulator based on titanium indiffused waveguides in lithium niobate crystals
Stanislav M. Aksarin, Alena V. Smirnova, Vladimir A. Shulepov, Peter S. Parfenov, Vladimir E. Strigalev, Igor K. Meshkovsky
361
The paper presents an analysis of the nucleation kinetics and growth of switched domains in the surface layer of a monodomain X-cut lithium niobate crystal in the interelectrode gap of integrated optical phase modulators. The work proposes a morphology model of domains growing along the boundary of surface electrodes in X-cut phase modulators. The mechanism of spontaneous needle-like domain growth as a result of the electric field induced by the pyroelectric effect upon temperature changes of the crystal was theoretically substantiated. The COMSOL Multiphysics platform was used for the numerical estimation of the pyroelectric field in the interelectrode gap. The needle-like domain structures were studied experimentally on industrial samples of integrated optical phase modulators based on Ti:LiNbO3 waveguides. The experimental research of the form and size of the domains was performed with anisotropic etching in an HF solution followed by visual analysis. For non-destructive testing, the authors used scanning electron microscopy and piezoresponse force microscopy. For the first time, the morphology of needle-like domains occurring in the interelectrode gap of phase modulators based on lithium niobate was experimentally studied. The results support the theoretical and numerical model of domain growth driven by the pyroelectric nature of the electric field. It was demonstrated that along the electrode boundary, the needle-like domains grow up to 20 μm long under normal conditions and reach 30 μm after a thermal shock by cooling at ∆T = –125 °C. The discovered switched domains in the interelectrode gap can affect the electro-optical characteristics of integrated optical phase modulators based on lithium niobate and should be taken into account in the future design of electrode topology and modulator usage.
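For orientation only, the scale of an unscreened pyroelectric field can be estimated as E ≈ p·ΔT/(ε0·εr). The sketch below plugs in approximate literature constants for lithium niobate; these are illustrative assumptions, not values taken from the paper or from the authors' COMSOL model.

```python
# Back-of-the-envelope estimate of an unscreened pyroelectric field in LiNbO3.
# Material constants are rough literature values, used here for illustration only.
EPS0 = 8.854e-12   # F/m, vacuum permittivity
P_PYRO = 7e-5      # C/(m^2*K), approximate pyroelectric coefficient of LiNbO3
EPS_R = 30.0       # approximate relative permittivity along the polar axis
DELTA_T = 125.0    # K, thermal shock magnitude quoted in the summary

E = P_PYRO * DELTA_T / (EPS0 * EPS_R)   # V/m
print(f"Estimated pyroelectric field: {E / 1e6:.0f} MV/m")
```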

AUTOMATIC CONTROL AND ROBOTICS

Adaptive observer design for time-varying nonlinear systems with unknown polynomial parameters
Binh Khac Dang, Anton A. Pyrkin, Alexey A. Bobtsov, Alexei A. Vedyakov
374

MATERIAL SCIENCE AND NANOTECHNOLOGIES

Development of a new plasma technology for producing pure white corundum
Viktoriia E. Kison, Alexander S. Mustafaev, Vladimir S. Sukhomlinov
380
The paper presents the results of the development and initial testing of a plasma method for producing pure white corundum. Improving the methods of pure corundum production is an important task for industry in terms of reducing energy consumption and environmental contamination. The purposes of the research at this stage are as follows: the selection of raw materials, a preliminary evaluation of the characteristics of the technology, conducting a melting experiment and assessing the resulting sample. The corundum melting is conducted in a reactor using a high-voltage plasmatron. A mixture of argon and 25–30 percent nitrogen is used as the working fluid. The authors suggest using a four-layered protection of the melting reactor in order to ensure both thermal insulation properties and strength characteristics. This is especially relevant under a temperature difference on the order of 2000 K and for the elimination of defective crystallization of the melt from the walls of the reactor. As a result of an experiment on melting G-00 grade alumina using a high-voltage air-powered plasmatron, a sample with an aluminum oxide content of 99.79 percent and an absolute hardness of 500 was obtained. Further experiments will make it possible to determine the prospects of using the proposed technology to obtain samples with an increased content of aluminum oxide. The paper discusses the application of the described technology for industrial production of pure corundum single crystals. The technology will make it possible to obtain samples to be used as abrasives for optical systems and for the production of sapphire glasses and scalpels.
386
Additive technologies are promising for manufacturing complex-shaped parts of metal navigation devices. Finite element analysis is used when designing such items. The modeling accuracy is determined by the correctness of the specified physical properties of the materials. The properties of materials used in 3D printing differ significantly from those of materials used in traditional manufacturing. Researchers focus on such characteristics as Young’s modulus, Poisson’s ratio, hardness and strength. Meanwhile, some applications require dynamic properties. The paper presents the investigation and comparison of the damping properties of three steel parts produced by different methods. The first part is manufactured by 3D printing with melting in the transverse direction, the second part by melting in the longitudinal direction, while the third one is traditionally manufactured. The parts are shaft shaped with a constant cross-section and have the same geometric dimensions. The TIRA TV 5220/LS-120 stand is used. A piezoelectric accelerometer is installed at the free end of the part. The tests are carried out in the frequency range from 15 to 3500 Hz with an acceleration of 19.6 m/s² (2 g). The accelerometer output is used to calculate the damping coefficient. The results are verified by comparison with the finite element modeling results. The damping coefficients of the transverse and longitudinal 3D-printed parts are 0.022 and 0.006, respectively. The damping coefficient of the traditionally manufactured part is 0.023. The difference between the damping coefficients of the 3D-printed parts can be explained by the denser fusion of powder granules within one printed layer than between layers. In this case, a crystal structure with greater rigidity in the printing plane is formed, which limits the dissipation of vibration energy due to internal friction. Finite element modeling shows a mismatch between the experimental and calculated values of the natural frequencies of the printed parts. Considering that the values of the natural frequencies are largely determined by Young’s modulus, a parametric optimization of its value is carried out. It was found that the value of Young’s modulus does not correspond to the values determined during tensile tests for similar samples. Thus, 3D-printed parts have different vibration and static stiffness. This is not typical for metals and should be taken into account in simulations. The research results can be used in developing simulation models of steel 3D-printed parts and in the design of digital twins for navigation devices. This allows one to estimate the vibration resistance of promising products at the early stages of their design and to optimize the construction for minimum stress.
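The summary does not state how the damping coefficient is extracted from the accelerometer output; one common approach for a frequency-response measurement is the half-power (−3 dB) bandwidth method, ζ = (f2 − f1)/(2·fn). The sketch below illustrates that generic method, not the authors' exact procedure.

```python
# Generic half-power bandwidth estimate of the damping ratio from a measured
# frequency response; assumes a single dominant resonance in the supplied window.
import numpy as np

def half_power_damping(freq: np.ndarray, amp: np.ndarray) -> float:
    """freq, amp: frequency axis and response amplitude around one resonance."""
    i_peak = int(np.argmax(amp))
    fn, half = freq[i_peak], amp[i_peak] / np.sqrt(2.0)
    band = np.where(amp >= half)[0]          # indices inside the half-power band
    f1, f2 = freq[band[0]], freq[band[-1]]
    return (f2 - f1) / (2.0 * fn)
```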

COMPUTER SCIENCE

394
The paper focuses on revealing insider information leaks in financial markets during investment consulting. An original dataset was created, containing the records of conversations between consultants and clients presented in the form of dialogs in text format. The applicability of machine learning methods for automating the detection of leaks arising in a conversation between a consultant and a client has been studied. The authors examined the applicability of the following supervised machine learning methods for constructing and training a classifier: probabilistic (Naïve Bayes classifier), metric (k-nearest neighbors algorithm), logical (random forest), linear (support vector machine), and methods based on artificial neural networks. The paper considers various approaches to the construction of a natural language text model, such as tokenization (bag of words, word n-grams: bigrams and trigrams) and vectorization (one-hot encoding). The proposed algorithm for detecting financial market insider information leaks is based on the use of a support vector machine (SVM) and tokenization by bigrams. The obtained results demonstrate that SVM and bigram tokenization provide the highest leakage detection accuracy. The research results can be used in the development of cybersecurity tools, as well as for the further elaboration of natural language processing methods dealing with information security problems.
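A hedged sketch of the winning configuration described above (bigram tokenization plus a linear SVM) is shown below using scikit-learn; this is not the authors' code, and the example dialogs and labels are placeholders.

```python
# Illustrative pipeline: word-bigram bag-of-words features + linear SVM classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

dialogs = ["consultant: the report is already public ...",
           "consultant: buy before the official announcement ..."]
labels = [0, 1]  # 1 = dialog contains a suspected insider-information leak

model = make_pipeline(
    CountVectorizer(ngram_range=(2, 2)),  # word bigrams
    LinearSVC(),                          # linear support vector classifier
)
model.fit(dialogs, labels)
print(model.predict(["client: tell me what the insiders know ..."]))
```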
401
The work focuses on software-defined network (SDN) security, which has always been one of the foremost critical concerns due to the centralized nature of the SDN architecture: many serious attacks known from traditional networks, such as the ARP spoofing attack, still appear in SDN despite many existing security algorithms, methods and systems. In this work, we propose a new approach to secure SDN from an ARP poisoning attack. The new solution extends the controller with a new module that uses a new algorithm to detect and mitigate ARP spoofing attacks according to three states of each host in the network. The new mechanism involves DHCP and manual assignment of IP addresses, using three classes to classify the hosts according to their situations in the network. The CHT helps to set the host in an intermediate state between verifying and banning and to detect the attack according to the next step of the host. The proposed mechanism was tested successfully in a simulated environment using Mininet and the POX controller. The solution was effectively able to accomplish the objective for which it was built, with a limited overhead on the network. The proposed solution neither adds extra load to the network nor requires any changes in the infrastructure or additional hardware to install. According to the experimental results, the average time to detect the ARP spoofing attack is about 11 ms, with minor overhead on the controller CPU.
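A generic sketch of the three-state host bookkeeping described above is given below: a host stays in an intermediate "verifying" state until its IP-MAC binding is confirmed (e.g., via DHCP or manual assignment) or contradicted. The state names, table layout and banning rule are assumptions for illustration, not the authors' algorithm or the POX module.

```python
# Illustrative three-state tracking of IP-MAC bindings for ARP spoofing detection.
from enum import Enum

class HostState(Enum):
    VERIFYING = "verifying"
    TRUSTED = "trusted"
    BANNED = "banned"

bindings: dict[str, tuple[str, HostState]] = {}   # ip -> (mac, state)

def observe_arp(ip: str, mac: str) -> HostState:
    """Update the table on an observed ARP claim and return the host state."""
    if ip not in bindings:
        bindings[ip] = (mac, HostState.VERIFYING)          # first sighting: verify later
    else:
        known_mac, _ = bindings[ip]
        if mac != known_mac:
            bindings[ip] = (known_mac, HostState.BANNED)   # conflicting claim: ban
    return bindings[ip][1]

def confirm(ip: str) -> None:
    """Called when DHCP or manual assignment confirms the binding."""
    mac, _ = bindings[ip]
    bindings[ip] = (mac, HostState.TRUSTED)
```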

MODELING AND SIMULATION

410
The study considers the operation of an unmanned aerial vehicle in hovering mode over a flat landing platform. Impellers are used as the propulsion system; each is a propeller rotating inside an air ring, i.e., a body of revolution with an aerodynamic profile in cross section. The paper investigates the effect of the unsteady interaction of vortex flows with the aircraft structure by two alternative numerical methods, one of which is vortex-resolving. Numerical calculations are performed using the traditional turbulence modeling approach based on the averaged Navier–Stokes equations (RANS, Reynolds Averaged Navier–Stokes), where the turbulence is assumed to be isotropic, and the eddy-resolving Large Eddy Simulation (LES) method. The main feature of the latter is as follows: a turbulent flow is represented as the superposition of large-scale and small-scale turbulent motions. After the flow is discretized using a filtering operation, the large-scale turbulence, which depends directly on the boundary conditions, is resolved from the full Navier–Stokes equations. Small-scale turbulence has isotropic properties and is modeled similarly to semi-empirical RANS methods. The technique allows one to accurately calculate the vortex structure of any flow directly from the equations of motion using relatively low computing power, in contrast to the RANS models, which simulate the flow using a simplified mathematical model and can provide satisfactory accuracy only for a limited range of problems. The results indicate that eddy-resolving methods for modeling turbulence, in contrast to the methods based on the averaged Navier–Stokes equations, make it possible to estimate the effect on the aircraft structure of aperiodic perturbations arising from the interaction of large eddies with each other and with the underlying surface. Such phenomena are accompanied by side impacts of a shock nature on the impeller rings, which can lead to loss of aircraft stability. Under conditions of a small propeller pitch, the use of an air ring results in a significant increase in the air flow passing through the rotor rotation plane, an increase in thrust due to the creation of flow circulation around the airfoil of the ring, and a decrease in the power required by the propeller. Even though the effect of using an air ring disappears with a large incoming flow, this design is considered very promising for use on aircraft with vertical takeoff and landing. This mode of operation is the most energy-consuming and imposes the greatest requirements on the lifting force of the power plant. The results of this work have demonstrated that, in takeoff and landing modes, numerical methods based on averaging the Navier–Stokes equations and the use of classical turbulence models of the k–ω or k–ε type, which are widely used in numerical modeling of propellers, fail to detect aperiodic unsteady phenomena associated with the interaction of large eddies, in contrast to eddy-resolving methods for modeling turbulence.
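For reference, the filtering decomposition described above corresponds to the standard filtered incompressible momentum equation, in which only the subgrid-scale stress tensor must be modeled. This is the textbook form of the LES equations, not an equation quoted from the paper:

$$
\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j .
$$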
Mathematical modeling and identification of surface vessel model parameters
Khac Tung Nguyen, Sergey M. Vlasov, Aleksandra V. Skobeleva
418
The paper considers the problems of modeling and identification of parameters for models of surface ships. The proposed identification method is applied to a modified second-order Nomoto model for ship steering. The identification algorithm is based on the Dynamic Regressor Extension and Mixing (DREM) method and is performed in two steps. At the first stage, parameterization is used to obtain a regression model in which the regressor and the regressand depend on the measured signals, namely, the longitudinal, lateral and angular velocities and the steering angle. At the second stage, a new regression model is built using linear stable filters and delays. Finally, the parameters are estimated by the standard gradient descent method. The paper proposes a new algorithm which identifies the parameters of surface ship models. The authors analyzed the prospects of the proposed estimation method by computer experiments. The experiments have shown the advantage of the method: when the plain gradient descent method is used, the transient time needed to estimate the parameters is much longer than with the DREM method. At the same time, when the DREM method is used, there is no overshoot. The results of the work can serve as a basis for methods, algorithms and software for designing ship automated navigation systems and control systems for other modes of transport. This is confirmed by the simulation results.
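A minimal sketch of the final estimation step mentioned above is shown below: once DREM has reduced the vector regression to independent scalar regressions y = Δ·θ, each parameter can be updated by a standard gradient law. The gain, step size and test signals are illustrative assumptions, not values from the paper.

```python
# Discrete-time sketch of a scalar gradient estimator for y[k] = delta[k] * theta.
import numpy as np

def gradient_estimate(y: np.ndarray, delta: np.ndarray, gamma: float = 50.0,
                      dt: float = 1e-3) -> np.ndarray:
    """y, delta: sampled scalar regressand and regressor; returns estimate history."""
    theta_hat = 0.0
    history = np.empty_like(y)
    for k in range(len(y)):
        theta_hat += dt * gamma * delta[k] * (y[k] - delta[k] * theta_hat)
        history[k] = theta_hat
    return history

# Example: constant true parameter theta = 2 with a non-vanishing regressor.
t = np.arange(0, 5, 1e-3)
delta = 1.0 + 0.5 * np.sin(t)
print(gradient_estimate(2.0 * delta, delta)[-1])   # converges toward 2.0
```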
Methodological support of the working group in predicting the results of the classification expertise
Alexander T. Burkov, Pavel I. Paderno, Farrukh E. Sattorov, Elena A. Tolkacheva
426
The paper considers the specifics of the working group activity in the preparation of a classification expertise and the features of approaches to the choice of expert assessment and expert selection methods. The analysis focuses on potentially weakly (vaguely) formalized customer requirements, which are quite typical for classification expertise. The methods currently in use suffer from many drawbacks, which make them practically inapplicable in terms of planning, preparing and predicting the possible reliability of a classification expertise. The authors developed a new approach for predicting the reliability of a classification expertise at the stage of its preparation, depending on the reliability of the proposed expert group. The approach involves a probabilistic representation of the possible results of the work (namely, classification) of particular experts. The authors propose a number of probabilistic models (probabilistic matrices) which reflect the reliability (correctness) of the classification of certain objects both at the level of particular experts and at the level of the entire expertise results. The set of procedures developed for an arbitrary group of experts allows obtaining probabilistic characteristics of the correctness of object classification when the group works as part of an expert commission. The proposed approach can be used as a tool for working groups, which not only simplifies the process of expert group selection, but also allows predicting the reliability of possible results and thereby makes it possible to take measures in advance in order to meet customer requirements. This approach can serve as a methodological basis for automating the expert selection problem for a classification expertise at the stage of its preparation, depending on customer requirements (restrictions). The proposed models and procedures will improve the efficiency of classification expertise, as well as save time on its preparation.
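A hedged sketch of the probabilistic idea above: if expert i classifies an object correctly with probability p_i (taken, for example, from an individual correctness matrix) and the experts act independently, the chance that a majority of the group is correct can be enumerated directly. Independence and majority voting are assumptions made here for illustration; the authors' aggregation models may differ.

```python
# Probability that a strict majority of independent experts classifies correctly.
from itertools import product

def majority_correct_probability(p: list[float]) -> float:
    total = 0.0
    for outcome in product([True, False], repeat=len(p)):   # all correct/wrong patterns
        prob = 1.0
        for correct, pi in zip(outcome, p):
            prob *= pi if correct else (1.0 - pi)
        if sum(outcome) > len(p) / 2:                        # strict majority correct
            total += prob
    return total

print(majority_correct_probability([0.8, 0.7, 0.9]))  # ≈ 0.902
```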

BRIEF PAPERS

433
Most of the medical data in hospital information system databases are stored in an unstructured form. Techniques for processing unstructured records are widely presented in scientific papers focused on English-language data. This paper proposes a method for the intellectual analysis of unstructured allergy anamneses in Russian in order to identify the presence and type of a patient's allergy or intolerance. The method is based on machine learning algorithms and uses international standards for the exchange of medical data and terminology standards, such as FHIR and SNOMED CT. As a result of the experiment, about 12 thousand medical records were processed. The F-measure for the developed classification models ranged from 0.93 to 0.96, i.e., the models showed high values of the effectiveness evaluation metrics. In the future, the structured data can be used in models for predicting medical risks. Further development of methods for structuring medical texts will ensure the interoperability of medical data.
437
The paper presents an analysis of the existing methods for assessing information security risks, their features, advantages and disadvantages, and determines the possibility of using such techniques for assessing information security risks in financial institutions. Criteria for comparing information security risk assessment methods have been formed, and the advantages and disadvantages of the methods are described. It is shown that, despite the requirements of regulators for assessing information security risks, most of the regulatory documents deal with operational risks. The evaluation of information security risks of credit and financial institutions does not have sufficient regulation and formalization. The authors substantiate the necessity of developing a method for assessing information security risks for credit and financial organizations, taking into account the features of risk assessment inherent to the mentioned organizations. The paper considers the need to create lists of existing threats to the credit and financial sector and to link them to existing vulnerabilities in order to optimize the process of assessing information security risks. The development of such a methodology will increase the degree of compliance of credit and financial institutions with the requirements of international, state and industry standards through an optimal set of protection measures and models for evaluating information security risks.
