Summaries of the Issue

REVIEW PAPER

1
Several methodological approaches to ensuring the security of industrial control systems are well known at present. Two fundamentally different approaches have been considered over the past few years: implementing additional information security countermeasures without changing the underlying IT infrastructure, and creating a new concept of total isolation (for example, the Zero Trust Architecture). As noted by centers of competence in Russia (Group-IB, Positive Technologies) and worldwide (IBM, Microsoft, Cisco, Check Point), neither approach yields stable, secure industrial control systems. Reports of new critical vulnerabilities never stop, including a significant number related to industrial control systems. The problem of safety assurance dates back to the twentieth century and has passed through several stages of maturity; at present, the approach "from functionality" is the most common. In general, this approach means that problem formulation and solution begin when the manufacturer creates a solution based on a specification consisting of functional safety requirements; a safety assessment based on assurance requirements is then carried out. Unfortunately, the industry has not yet developed a holistic culture of consuming secure IT components whose security evidence can be traced to the required level. Only a few suppliers in Russia and worldwide are ready to offer components with a proven Safety Integrity Level in accordance with the requirements of the IEC 61508 and/or IEC 61511 series.
The present publication considers the issue of ensuring the safety of industrial control systems in such technical aspects as required resources, specified speed, management quality, validation methods, estimation of residual risks, and other computable estimates. A brief overview of existing approaches is presented, and some possible solutions to the problem are given.

OPTICAL ENGINEERING

AN ANALYSIS OF ADDITIONAL ERRORS OF THE OPTICAL-ELECTRONIC SYSTEM FOR MONITORING THE RAILWAY TRACK POSITION
Tuan Pham Ngoc, Alexander N. Timofeev, Korotaev Valery Viktorovich, Victoria A. Ryzhova, Joel Jose Puga Coelho Rodrigues
15
Subject of research. The paper considers the additional error of railway track position control by stereoscopic methods and examines how the components of this error influence the measurement results for linear displacements of the track in profile and plan. Method. The authors propose to use active reference marks located on the supports of the contact network and describe a method for assessing the additional errors of track position control performed by a stereoscopic optical-electronic system. On the basis of computer modeling, the degree of influence of the individual error components on the total additional error is investigated. The conclusions are drawn from analysis of the diagrams obtained with a computer model and from the results of an experimental study of technological samples of the system. Main results. The work relates the linear displacements of the track in profile and plan to the coordinates of the reference marks, and the parameters of the system elements to the informative parameters of the signal. A mathematical description is proposed for the components of the additional errors of the stereoscopic system under changes in ambient temperature, vibration amplitude, vertical gradient of the air path temperature, and movement speed. It is shown that the error components caused by vibrations, inertia of the system, and thermal deformation of the base unit produce the greatest effect on the total additional error, in decreasing order. Theoretical, experimental, and field testing have shown that the estimate of the random component of the displacement control error does not exceed 0.8 mm in the longitudinal profile and 1.8 mm in the plan. Practical relevance. The authors developed a method for studying additional errors in determining track displacement in profile and plan by means of a stereoscopic control system.
The prepared recommendations help reduce the error components with the strongest influence on the additional error. A stand and software for static and dynamic testing of physical models have been created. The proposed solutions are aimed at fully automating control of the actual position of the railway track when servicing continuous straightening technologies with high-performance track machines using stereoscopic optical-electronic systems. The results can be applied by developers of high-precision optical-electronic systems for ensuring railway traffic safety.
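The displacement measurement described above rests on classical stereo triangulation of the reference marks. Below is a minimal sketch of that geometry, not the authors' implementation; the baseline, focal length, and pixel coordinates are illustrative assumptions.

```python
# Toy stereo triangulation sketch: a reference mark's depth follows from its
# disparity, and its lateral position from back-projection at that depth.

def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    """Classical stereo relation: Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

def lateral_shift(x_px, depth_mm, focal_px):
    """Back-project an image coordinate to a lateral offset at depth Z."""
    return x_px * depth_mm / focal_px

# A mark observed before and after a track displacement (hypothetical data).
baseline_mm, focal_px = 400.0, 2000.0
z0 = depth_from_disparity(10.0, baseline_mm, focal_px)  # depth before, mm
z1 = depth_from_disparity(10.0, baseline_mm, focal_px)  # depth after, mm
dx = lateral_shift(51.0, z1, focal_px) - lateral_shift(50.0, z0, focal_px)
print(round(dx, 1))  # lateral track displacement in mm
```

The same relations show why small vibration-induced pixel errors scale with depth, which is consistent with the error analysis in the abstract.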
24
Subject of study. The authors propose an integrated approach to determining the aperture diameter of a probe laser that forms a laser guide star used in ground-based adaptive optoelectronic systems. The study is relevant because modern large-aperture optical systems for tracking natural stars and artificial objects (spacecraft or fragments of space debris) widely implement the technology of forming laser guide stars and use them as reference sources for correcting the phase distortions of a turbulent atmosphere. The choice of the energy and spatiotemporal characteristics of laser guide stars is related both to the parameters of the probe laser (radiation power and aperture diameter) that forms the guide star and to the spatiotemporal characteristics of the atmosphere. Method. The diameter of the probe laser aperture (for the near and far radiation zones) is estimated taking into account the spatial coherence radius of the atmosphere r0, the radiation intensity and angular divergence of the laser beam, and its random root-mean-square angular deviation (jitter) with respect to the calculated direction to the space objects. Main results. The estimation of the angular divergence of the laser beam is based on a comparative analysis and generalization of theoretical results obtained by calculating the optical resolution in systems for acquiring images of natural space objects. It is shown, in particular, that when determining the aperture size, along with the angular divergence of the probe beam, it is necessary to take into account the decrease in radiation intensity as the aperture diameter increases relative to the atmospheric coherence radius. Practical significance.
The results are essential, firstly, for the development of ground-based adaptive optoelectronic systems for tracking artificial space objects and, secondly, for determining the geographic locations of such systems with the astronomical climate taken into account.
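The trade-off described above can be illustrated with a back-of-the-envelope estimate: the diffraction-limited divergence scales as lambda/D, but turbulence caps the useful aperture near the coherence radius r0. The sketch below is an illustration of this scaling only, with assumed values, not the paper's full near/far-zone model.

```python
# Illustrative estimate: effective angular divergence of a probe beam limited
# either by its aperture D or by the atmospheric coherence radius r0.

def divergence_rad(wavelength_m, aperture_m, r0_m):
    """Diffraction angle ~ lambda/D, with the useful aperture capped at r0."""
    effective = min(aperture_m, r0_m)
    return wavelength_m / effective

lam = 589e-9          # sodium guide-star wavelength, m
r0 = 0.10             # coherence radius, m (assumed value)
for D in (0.05, 0.10, 0.30):
    theta = divergence_rad(lam, D, r0)
    print(f"D = {D:.2f} m -> divergence ~ {theta * 1e6:.2f} microrad")
```

The loop shows that enlarging the aperture beyond r0 no longer narrows the beam, which mirrors the paper's point about diminishing intensity returns for D > r0.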
APPROACH TO GETTING IMAGES OF OBJECTS BASED ON INDIRECT LASER LOCATION DATA
Grigor’ev Andrey N., Altukhov Alexander I., Denis S. Korshunov
31
Subject of Research. The paper presents an approach to obtaining images of objects from indirect laser location data. A concept for recording and processing photometric data about an object is developed, which implies the joint use of an optical quantum generator and a camera with a photodetector based on single-photon avalanche diodes. The optical radiation scattered by an object is registered when the object is placed outside the line of sight of the probing equipment, for example, behind a light-tight obstacle. Photometric data processing is aimed at modeling the object's shape. Methods. Indirect laser location involves irradiating an object with a series of light pulses and registering the scattered optical radiation. Since the object is located outside the line of sight of the probing equipment, light pulses propagate from the optical quantum generator to a surface that re-reflects these pulses in the direction of the object under study. Photometric data are formed by registering the optical radiation scattered by the object and propagating in the opposite direction. Vertical and horizontal scanning of the image is provided by moving the laser beam during the scanning of space. The object image is formed by a series of sequential operations: extracting photometric data about the object from the scattered optical radiation, calculating brightness values, and determining the positions of the raster voxels in a rectangular coordinate system. Main Results. A conceptual model of indirect laser location is developed based on a pulsed laser, scanning and focusing optical systems, a camera for recording scattered optical radiation, and a device for time synchronization of the generation and registration of light pulses. An approach to processing indirect laser location data is proposed that provides three-dimensional images of objects.
Experimental results on the formation of an object image based on open laser location materials are presented. Practical Relevance. An indirect laser location system makes it possible to obtain object images in the absence of a direct line of sight. The results of the image formation experiment demonstrate that when an object is placed behind a light-tight obstacle, it is possible to get a reliable idea of its structure and shape. The image obtained with the proposed approach is characterized by high graphic similarity and is a source of information about objects that are difficult to access.
DESIGN STRATEGY AND MANAGEMENT OF ABERRATION CORRECTION PROCESS FOR LENS WITH HIGH COMPLEXITY INDEX
Livshits Irina Leonidovna, Tatiana V. Tochilina, Oliver Faehnle, Svetlana L. Volkova
40
Subject of Research. A strategy for designing optical imaging systems is defined, which combines various design methods and is based on the theory of structural and parametric synthesis, ensuring the presence of the necessary correction parameters in the original optical system. Control of the correction of residual aberrations by using the correction capabilities of the scheme is considered. The composition of the evaluation function, recommendations for automated aberration correction, calculation of tolerances for manufacturing an optical system, and a decision-making procedure for choosing the best optical scheme option are proposed. Method. The strategy is based on analysis of the properties of optical imaging systems and their classification, followed by determination of the system complexity index, as well as on the classification of aberrations and the development of methods for their correction. Main Results. An algorithm is proposed for implementing the strategy based on understanding the properties of the optical system and its correction capabilities. Analysis of the design specification determines the complexity index of the optical imaging system, the choice of the optimal scheme, and the correction capabilities of the scheme. The composition of the evaluation function is considered. The default evaluation function and a user-defined function are compared, taking into account the recommendations proposed in this paper. Practical Relevance. The implementation of the proposed strategy creates conditions for reducing design time, especially for systems of increased complexity with forced technical characteristics. An algorithm flowchart is presented, and its parameters and correction capabilities are determined. Automated correction is carried out based on the received recommendations. Manufacturing tolerances are calculated and the level of manufacturability is determined.
FOURIER SPECTROSCOPY IN BLOOD PLASMA STUDY WITH TYPE TWO DIABETES
Alla P. Nechiporenko, Ulyana Yu. Nechiporenko, Sitnikova Vera E.
52
Subject of Research. The paper presents a study of the possibilities of a spectral technique for evaluating changes in the optical properties of carbohydrates and plasma proteins in humans with the initial stage of type two diabetes. The optical characteristics are compared with those obtained by the conventional method of sugar curves, using sucrose, honey, and milk protein as a provoking load at various stages of treatment with antidiabetic drugs. Method. The study was carried out by infrared spectroscopy of attenuated total internal reflection in the range of 4000–500 cm–1. Testing for glucose tolerance at all stages of treatment was performed by the biochemical glucose oxidase method. Main Results. The use of provoking products of various nature and an extended glucose tolerance test makes it possible to identify the spectrum bands associated with the presence of glucose (1104 cm–1) and fructose (1115 cm–1), which differentiate on the left branch of the complex carbohydrate band (1075 cm–1) of the infrared spectrum of native plasma. It is shown that the change in the intensity of the Amide-I and Amide-II bands of fractionated plasma proteins is associated with the main glucose transporter, the globulin fraction proteins. Practical Relevance. The features revealed in the blood plasma spectra during the studies indicate that the non-destructive Fourier spectroscopy method requires neither a large volume of the studied material nor its preliminary sample preparation. The method is promising as an express tool and can be used to obtain additional information when studying the influence of various provoking factors on changes in the optical characteristics of protein-lipid-carbohydrate complexes and globular proteins of blood plasma, as well as for preliminary diagnosis and treatment supervision of type two diabetes mellitus.
DEFOCUS IMPACT ANALYSIS ON TELESCOPE WAVEFRONT RECONSTRUCTION BY SCATTERING SPOT WITH PARAMETRIC OPTIMIZATION TECHNIQUE
Ivanova Tatyana V., Olga S. Kalinkina, Julia O. Kushtyseva, Dmitriy S. Zavgorodniy
65
Subject of Research. Wavefront reconstruction from the known scattering spot intensity using parametric optimization is presented. The Zernike polynomial coefficients of the wave function expansion are used as optimization parameters. The impact of a known defocus on the method's convergence is examined. Methods. For method verification we used a simulated scattering spot with four known Zernike coefficients (coma c31, s31 and astigmatism c22, s22) as input data. Parametric optimization was then applied to the simulated scattering spot. The cost function was the standard deviation of the reference scattering spot from the one calculated at each optimization step. As a result, we obtained reconstructed Zernike coefficient values that can be compared with the initial ones. If the resulting coefficient values differed from the initial ones by less than 10–5λ, the restoration was considered successful. For better method convergence, various defocus values relative to the best focus position were used. Main Results. The presented parametric optimization method makes it possible to restore the Zernike coefficients describing coma and astigmatism in the wavefront from the known scattering spot intensity. The focused scattering spot intensity alone is not enough to restore the aberration coefficients, but with a known defocus the method becomes more stable. It is shown that for successful restoration a defocus Zernike coefficient in the range of 0.1–0.5λ from the best focus position is enough. Practical Relevance. Wavefront reconstruction from a known defocused scattering spot intensity with the parametric optimization technique can be used for telescope alignment during operation. Using tolerance data calculated for the optical system in optical design software, it is possible to determine the direction of tilt and decenter of optical elements from the Zernike coefficient values. This is an especially important task for telescopes without axial symmetry.

AUTOMATIC CONTROL AND ROBOTICS

73
The paper considers the automation of sea transport ships, including unmanned ones. The study is motivated by the importance of sea traffic safety, especially in view of navigation in the Northern seas of the Russian Federation. Economic efficiency is analyzed in terms of reducing operating costs and freeing up additional space on the ship for cargo placement. An assessment of the prospects for the use of unmanned ships in the seas of the Arctic Basin of the Russian Federation is performed. The research uses the following methods: analogy, to determine some common technical solutions; abstraction, to assess at the theoretical level the prospects of using unmanned vessels for cargo transportation in the seas of the Arctic Basin of the Russian Federation; and a hypothetical method, to determine the criteria and requirements of international and Russian legislation for commissioning unmanned ships. The main results are the following. Both foreign and Russian projects of autonomous ships of various types and remote control systems are analyzed; the elements and entities of the control system of an autonomous ship are defined; the system of automation levels is described; the existing regulatory framework for sea traffic safety is analyzed, and proposals for its adjustment are made.

COMPUTER SCIENCE

85
Subject of Research. The paper proposes a solution for automatically profiling the human psyche based on analysis of a person's speech behavior. It is shown that messages in social networks, instant messengers, and chats can be used to form a training data set, both as text messages and as audio and video calls. The functions of the constituents of the psychological type classifier are revealed from human speech behavior. Multiclass and binary classification are compared on the basis of loss function minimization. Methods. The psychological profile was based on the Myers-Briggs Type Indicator, which assigns a person to one of 16 types. Text Mining technologies for natural language processing and a deep learning model for speech processing were used. The data set for training and testing was formed from recordings of people's speech converted to text format. Class labels were formed from the content of a text parameters vector, which is a dictionary of frequently encountered words. A deep learning algorithm based on recurrent neural networks of the Long Short-Term Memory type was used for automatic psychological profiling. The algorithm was tested for both multiclass and binary classification. The objectivity of the proposed approach was ensured by the variety of content created by a person at various times in accordance with life situations, profession, hobbies, and other circumstances. Main Results. A new approach to automatic psychological profiling is proposed, based on binary classification and a deep learning model. The convergence of the binary classification results on a test set of the speech behavior of various people is demonstrated. Applying a Long Short-Term Memory network to binary classification makes it possible to achieve an accuracy of 83 % in correctly determining the psychological type and to reduce the loss to 25 %. Practical Relevance.
Automatic psychological profiling based on speech behavior enables various specialists (such as psychologists, sociologists, and human resources staff) to make decisions when working with a specific person. The analysis of a person's qualities from speech behavior is implemented in software.
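The abstract mentions class labels formed from a dictionary of frequently encountered words. The sketch below illustrates only that labeling idea with a frequency-dictionary baseline; the paper itself uses an LSTM, and the tiny training texts and the two classes here are invented for illustration.

```python
# Minimal frequency-dictionary baseline for binary classification of speech
# transcripts: each class accumulates word counts, and a new text is assigned
# to the class whose dictionary its words match best.
from collections import Counter

train = [
    ("i plan and organize everything ahead", 0),   # label 0: planning-leaning
    ("schedule list plan deadline order", 0),
    ("let us improvise and see what happens", 1),  # label 1: improvising-leaning
    ("spontaneous open flexible improvise adapt", 1),
]

def class_dictionaries(samples):
    counts = {0: Counter(), 1: Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    return counts

def classify(text, counts):
    score = {c: sum(counts[c][w] for w in text.split()) for c in counts}
    return max(score, key=score.get)

dicts = class_dictionaries(train)
print(classify("we should plan a schedule", dicts))   # -> 0
print(classify("just improvise something", dicts))    # -> 1
```

A recurrent model replaces the bag-of-words scoring with sequence modeling, but the label-from-frequent-words framing stays the same.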
GOODPOINT: UNSUPERVISED LEARNING OF KEY POINT DETECTION AND DESCRIPTION 
Anatoly V. Belikov, Potapov Alexey Sergeevich, Artem V. Yashchenko
92
Subject of Research. The paper studies algorithms for key point detection and description, widely used in computer vision. Typically, a corner detector acts as the key point detector, including in neural key point detectors. For some types of images obtained in medicine, applying such detectors is problematic due to the small number of detected key points. The paper considers the problem of training a neural network key point detector on unlabeled images. Method. We propose a definition of key points that does not depend on specific visual features. A method is presented for training a neural network model for detecting and describing key points on unlabeled data. The method is based on applying homographic image transformations. The neural network model is trained to detect the same key points on pairs of noisy images related by a homographic transformation. Only positive examples are used for detector training, namely points correctly matched with features produced by the neural network model for key point description. Main Results. The unsupervised learning algorithm is used to train the neural network model. For ease of comparison, the proposed model has a similar architecture and the same number of parameters as the supervised model. Model evaluation is performed on three different datasets: natural images, synthetic images, and retinal photographs. The proposed model shows results similar to the supervised model on natural images and better results on retinal photographs. Improved results are demonstrated after additional training of the proposed model on images from the target domain; this is an advantage over a model trained on a labeled dataset. For comparison, the harmonic mean of the following metrics is used: the accuracy and the depth of matching by descriptors, the repeatability of key points, and image coverage. Practical Relevance.
The proposed algorithm makes it possible to train a neural network key point detector together with the feature extraction model on images from the target domain without costly dataset labeling, reducing the labor costs of developing a system that uses the detector.
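The supervision signal in such homography-based training is simple to state: a key point detected in image A should reappear at its homography-mapped location in image B. The sketch below shows that consistency check in isolation; the matrix, point sets, and tolerance are illustrative, not taken from the paper.

```python
# Homography consistency sketch: map candidate key points of image A through
# a known homography H and keep the pairs that land on points of image B.

def apply_homography(H, pt):
    x, y = pt
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xs / w, ys / w)

def positive_pairs(pts_a, pts_b, H, tol=2.0):
    """Pairs (i, j) where point i of image A maps onto point j of image B."""
    pairs = []
    for i, p in enumerate(pts_a):
        mx, my = apply_homography(H, p)
        for j, (qx, qy) in enumerate(pts_b):
            if (mx - qx) ** 2 + (my - qy) ** 2 <= tol ** 2:
                pairs.append((i, j))
    return pairs

H = [[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]]  # pure translation
a = [(10.0, 10.0), (40.0, 25.0)]
b = [(15.0, 7.0), (100.0, 100.0)]
print(positive_pairs(a, b, H))  # -> [(0, 0)]
```

In the actual method these geometric positives are further filtered by descriptor matching, so that only points the description network can also match are kept as training examples.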
A QUANTUM-LIKE SEMANTIC MODEL FOR TEXT RETRIEVAL IN ARABIC
Alaa Shaker, Bessmertny Igor Alexandrovich, Lusiena A. Miroslavskaya, Koroleva Julia A.
102
The subject of study. The paper focuses on the extraction of semantics from texts in Arabic. In particular, the applicability of the Bell test to word pairs is investigated as a measure of the semantic relatedness of words in a context. The study applies the quantum formalism to the task of information retrieval in Arabic texts and presents the results of this work. The authors also examine the influence of the context width on the effectiveness of information retrieval. Method. The research is based on the vector representation of the context. It uses the well-known approach based on the HAL (Hyperspace Analogue to Language) matrix and the Bell test. The HAL matrix takes into account both the frequency of word occurrence in the context and the distance to the target word. Quantum theory operates with probability density matrices and describes probabilities in vector space in a natural way, i.e., words can be represented as vectors. Main results. The results demonstrate that using the Bell test for texts in Arabic provides a better ranking of search results compared to the results of search services. Practical significance. The research results can be used in the development of information retrieval systems, as well as for the further development of methods based on the distributional hypothesis.
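The HAL matrix mentioned above weights co-occurrences within a sliding window so that closer words contribute more. A minimal sketch of that construction follows; the window size and the toy English sentence are illustrative (the paper works with Arabic text), and only the left-context half of HAL is built for brevity.

```python
# Minimal HAL (Hyperspace Analogue to Language) sketch: for each target word,
# accumulate weights for the words preceding it within a window, with weight
# (window - distance + 1) so that nearer words count more.
from collections import defaultdict

def hal_matrix(tokens, window=3):
    hal = defaultdict(lambda: defaultdict(float))
    for i, target in enumerate(tokens):
        for k in range(1, window + 1):
            if i - k >= 0:
                hal[target][tokens[i - k]] += window - k + 1
    return hal

tokens = "the cat sat on the mat".split()
m = hal_matrix(tokens)
print(m["sat"]["cat"])  # distance 1 inside window 3 -> weight 3.0
print(m["mat"]["on"])   # distance 2 -> weight 2.0
```

Each row of this matrix is the context vector of a word; the Bell-test statistic in the paper is then computed over such vector representations of word pairs.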
METHODS OF COUNTERING SPEECH SYNTHESIS ATTACKS ON VOICE BIOMETRIC SYSTEMS IN BANKING
Kouznetsov Alexander Yu., Roman A. Murtazin, Ilnur M. Garipov, Anna V. Kholodenina, Vorobeva Alisa A.
109
The paper considers methods of countering speech synthesis attacks on voice biometric systems in banking. Voice biometrics security is a large-scale problem whose importance has risen significantly over the past few years. Automatic speaker verification (ASV) systems are vulnerable to various types of spoofing attacks: impersonation, replay attacks, voice conversion, and speech synthesis attacks. Speech synthesis attacks are the most dangerous, as speech synthesis technologies are developing rapidly (GAN, unit selection, RNN, etc.). Anti-spoofing approaches can be based on searching for phase and tone frequency anomalies appearing during speech synthesis and on preliminary knowledge of the acoustic differences of specific speech synthesizers. ASV security remains an unsolved problem because there is no universal solution independent of the speech synthesis methods used by the attacker. In this paper, we provide an analysis of existing speech synthesis technologies and the most promising attack detection methods for banking and financial organizations. Identification features should include the emotional state and cepstral characteristics of the voice. The user's voiceprint should be adjusted regularly. The analyzed signal should not be too smooth, nor should it contain unnatural noises or sharp changes in the signal level. Analysis of speech intelligibility and semantics is also important. A dynamic password database should contain words that are difficult to synthesize and pronounce. The proposed approach can be used for the design and development of authentication systems resistant to speech synthesis attacks for banking and financial organizations.

MODELING AND SIMULATION

SIMULATION OF PROPAGATION AND DIFFRACTION OF SHOCK WAVE IN PLANAR CURVILINEAR CHANNEL
Bulat Pavel V., Volkov Konstantin N., Anzhelika I. Melnikova
118
Subject of Research. The propagation of a shock wave in a plane curved channel is studied by numerical simulation. Method. Calculations of an inviscid compressible gas were carried out on the basis of the unsteady two-dimensional Euler equations. Discretization of the basic equations was performed using the finite volume method. Calculations were carried out for channels with different radii of curvature and Mach numbers of the initial wave. To find the angular position of the front at the current time, the absolute value of the derivative of the density with respect to the angular coordinate was used. The calculation results were compared with the data of a physical experiment. Main Results. The features of the emerging shock-wave flow pattern and its development in time are discussed. The shock-wave configurations observed in channels with different radii of curvature are compared. Some differences in the change of curvature of the shock wave fronts formed in channels with different radii of curvature are shown. The size of the Mach stem and its change over time are investigated as functions of the intensity of the initial wave and the size of the annular gap. While the maximum Mach number on the outer wall depends relatively weakly on the initial wave velocity, the Mach number on the bottom wall decreases with increasing Mach number at the channel entrance. The performed numerical studies show that in all variants there are no non-physical oscillations of the solution. Practical Relevance. The study of shock-wave and detonation processes is of interest for using their potential in pulsed installations and power systems for aircraft and rockets.
The calculation results are important for the search for new flow patterns that guarantee the formation of self-sustained detonation combustion in the combustion chambers of promising propulsion systems. Adjusting the size of the annular gap makes it possible to select a geometric configuration that provides the formation of an optimal triple shock wave structure, as well as the required intensity and size of the Mach wave.
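The front-locating idea used in the paper (taking the maximum of the density derivative along a coordinate) can be shown on a one-dimensional toy: advect a density step with a first-order upwind finite-volume scheme and find the front as the cell with the largest density jump. This is a deliberately simplified 1D linear-advection stand-in for the paper's 2D Euler setup; grid, speed, and step count are assumptions.

```python
# 1D upwind finite-volume sketch: propagate a density step, then locate the
# front as the cell with the largest |d(rho)/dx|.

def upwind_step(rho, speed, dx, dt):
    c = speed * dt / dx           # CFL number, must be <= 1 for stability
    new = rho[:]                  # rho[0] held fixed as the inflow boundary
    for i in range(1, len(rho)):
        new[i] = rho[i] - c * (rho[i] - rho[i - 1])
    return new

def front_index(rho):
    """Index of the largest density gradient, i.e. the smeared front."""
    return max(range(1, len(rho)), key=lambda i: abs(rho[i] - rho[i - 1]))

n, dx, dt, speed = 100, 1.0, 0.5, 1.0          # CFL = 0.5
rho = [2.0 if i < 20 else 1.0 for i in range(n)]
for _ in range(80):
    rho = upwind_step(rho, speed, dx, dt)
print(front_index(rho))  # front has advected downstream from cell ~20
```

The exact advection distance here is speed*dt*80/dx = 40 cells, so the detected front sits near cell 60, smeared by the scheme's numerical diffusion; in the paper the same maximum-derivative criterion is applied along the angular coordinate.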
130
Subject of Research. The paper considers the problem of identifying the parameters of various robotic objects, using a DC motor as an example. Existing identification methods either require a large amount of time to determine the required values accurately or give estimates with a large error. We propose to extend the application area of the Dynamic Regressor Extension and Mixing identification algorithm to control problems of robotic objects with a DC motor. Method. The first stage of the Dynamic Regressor Extension and Mixing method generates new regression forms by applying a dynamic operator to the original regression data. Then, the required combination of the new data is selected to obtain the final desired regression form, and standard parameter estimation methods are applied to this procedure. Main Results. A new algorithm is proposed for identifying the parameters of DC motor models. It is shown that, with the new approach, fluctuations in parameter estimates are significantly lower, while the response time is much shorter. When using the gradient method, the transient time for estimating the signal parameters is 350 seconds, while for the Dynamic Regressor Extension and Mixing method this time does not exceed six seconds. In addition, the Dynamic Regressor Extension and Mixing method exhibits no overshoot. Practical Relevance. The results of the work can be applied to the design of automatic control systems for electromechanical objects, including DC motors.
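The two stages named in the abstract (extension by a dynamic operator, then mixing into decoupled scalar regressions) can be sketched on a generic two-parameter linear regression. The delay operator, signals, gains, and time step below are illustrative assumptions, not the paper's DC-motor model.

```python
# Hedged DREM sketch for y = phi^T * theta with two unknown parameters.
# Extension: stack the regression with a delayed copy to get a square system.
# Mixing: multiply by the adjugate so each parameter obeys a scalar regression
# z_i = det * theta_i, then run independent gradient estimators.
import math

theta = (2.0, -1.0)                       # true parameters to identify
def phi(t):                               # regressor (assumed exciting enough)
    return (math.sin(t), math.cos(2.0 * t))
def y(t):
    p = phi(t)
    return p[0] * theta[0] + p[1] * theta[1]

dt, delay, gain = 1e-3, 0.2, 50.0
est = [0.0, 0.0]                          # parameter estimates
t = 0.0
for _ in range(int(10.0 / dt)):
    p1, p2 = phi(t), phi(t - delay)       # extension via a pure delay operator
    y1, y2 = y(t), y(t - delay)
    det = p1[0] * p2[1] - p1[1] * p2[0]
    z1 = p2[1] * y1 - p1[1] * y2          # = det * theta[0]
    z2 = -p2[0] * y1 + p1[0] * y2         # = det * theta[1]
    # decoupled gradient estimators: d(est_i)/dt = gain*det*(z_i - det*est_i)
    est[0] += dt * gain * det * (z1 - det * est[0])
    est[1] += dt * gain * det * (z2 - det * est[1])
    t += dt
print([round(v, 2) for v in est])  # -> [2.0, -1.0]
```

Because each estimator sees the error through det**2 >= 0, the estimates move monotonically toward the true values without overshoot, which is the qualitative behavior the abstract reports for DREM versus the plain gradient method.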
FORECASTING THE SPRING FLOOD OF RIVERS WITH MACHINE LEARNING METHODS
Nikita I. Kulin, Evgeniy A. Kozlov, Zhuk Yulia A.
135
The subject of the research. The paper provides an overview of the flood forecasting problem in the Nenetsky region of Russia. The solution involves collecting, analyzing, and forecasting open source data on water levels during spring floods via machine learning models. Method. The authors describe a new forecasting approach that uses the Holt-Winters model to form a training sample, which is then used to train the following statistical models: XGBoost, Random Forest, and Bagging. The solution is based on a sample of historical gauging station indicators that provide a detailed description of weather conditions in the nearest settlements over several years. A separate sample was created for each location considered in the problem in order to build forecasts for a one-month or one-year time period. Main Results. The forecast was obtained from the results provided by individually trained models. In the future, the findings could be used when taking preventive measures for flood control. Practical relevance. The low maintenance costs of the information system, along with the ability to predict critical water levels, make this forecasting approach an economically viable additional measure against floods in poorer regions of Russia.
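Exponential smoothing of the Holt-Winters family, used above to extend the training sample, can be sketched in its simplest trend-only (Holt) form. The smoothing constants and the toy water-level series are illustrative, not the paper's tuned model, and the seasonal component of full Holt-Winters is omitted for brevity.

```python
# Minimal Holt (double exponential smoothing) forecaster: maintain a level and
# a trend estimate, update both on each observation, then extrapolate linearly.

def holt_forecast(series, alpha=0.5, beta=0.3, steps=3):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (k + 1) * trend for k in range(steps)]

water_level_cm = [120, 128, 137, 145, 152, 161]  # rising spring levels (toy)
print([round(v, 1) for v in holt_forecast(water_level_cm)])
```

Forecasts produced this way can either flag an approaching critical level directly or, as in the paper, serve as synthetic training targets for ensemble models such as XGBoost, Random Forest, and Bagging.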

BRIEF PAPERS

ENSURING THE FLEXIBILITY OF ROBOTIC TECHNOLOGY SYSTEMS FOR THE ASSEMBLY OF SMALL-SIZED PRODUCTS
Medunetskiy Viktor M., Vitaliy Medunetskiy, Anton R. Solyanik, Ekaterina P. Iarysheva
143
The paper presents an analysis of the features of robotic technological assembly lines and methods of their organization. Increased flexibility of robotic assembly lines is currently provided mainly by the block-modular organization of technological lines and by various methods of moving technological equipment and structurally transforming manipulator links. For the assembly of products or their components from parts of complex configuration with various weight and size characteristics, it is recommended to use manipulator gripping devices whose design makes it possible to adapt the gripping forces to the weight and dimensions of the gripped part. Growth in technological flexibility can be achieved through organizational, technical, and design capabilities using a special carousel-type technological assembly module whose main units are two turntables, one of which is designed for assembly operations. An example of a technological assembly module with three robots is given; their interaction is carried out under computer control. In such a module, the robots are arranged in a circle and the parts to be assembled are moved along a circular arc.
Copyright 2001-2021 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.