Summaries of the Issue
PRODUCIBILITY ANALYSIS OF LENS SYSTEM DURING OPTICAL DESIGN STAGE
(in English)
Livshits Irina Leonidovna, Oliver Faehnle
Subject of Research. The paper presents the idea of combining the various stages of optical device production into a single logical sequence: from the design of optical elements, through the mechanical and technological production stages, to the calculation of manufacturing cost. The approach is attractive because it allows the entire process to be controlled, saving time and budget by selecting the most suitable production option already at the design stage. The information used must be objective, tied to the specific type and volume of production, and easy to verify and control at the initial design stage. Method. The method combines all stages of optical device creation on a turnkey basis, including the analysis and visualization of candidate optical schemes for the device, taking mechanical and technological aspects into account, and the calculation of the project-to-product cost as a function of production volume, with recommendations for its optimization. When designing optical elements, several alternative scheme solutions usually exist, especially when the image quality must approach the diffraction resolution limit: lenses containing only spherical surfaces, lenses with a different number of optical elements in the scheme, or lenses with aspherical surfaces. At the design stage the choice is difficult; in this case, the decision is made with the lens production technology in mind. Main Results. The optimal lens optical scheme is chosen. The manufacturability of an optical device is evaluated at the earliest stage, when the candidate variants of its optical scheme, the manufacturing tolerances for the optical elements, and the production volume are known. The manufacturing cost of the optical elements is determined for the various variants of the optical scheme.
Alternative scheme solutions are studied, for example, lens variants that contain only spherical surfaces, have a different number of optical elements in the scheme, or use aspherical surfaces. At the design stage, the right choice is difficult; in the case presented in this paper, the solution is developed taking the technological processes of lens production into account. To this end, a new software tool called PanDao has been applied, which provides a preview of producibility, required fabrication technologies, and expected production cost at the early design stage of optical systems. To illustrate the use of the PanDao software, two pinhole lens schemes with a forward-facing entrance pupil coinciding with the lens aperture have been designed and compared; the first lens consists of three spherical components, while the second combines four aspherical optical components. Practical Relevance. The possibility of manufacturability analysis of a lens system at the optical design stage is shown, and the optimal technological sequence for manufacturing an optical device is determined for a given production volume. Modeling the manufacturing process for various optical components makes it possible to choose the optimal production chain and to estimate the need for, and cost of, manufacturing, assembly, and equipment testing. An additional advantage is the calculation of the device cost at an early design stage, which in some cases helps optimize the optical scheme and sometimes even avoids the prototyping stage. This approach is first implemented in the PanDao software and is now available to a wide range of researchers.
EFFECT OF LASER PROCESSING PARAMETERS ON SPECTRAL CHARACTERISTICS OF SILVER-IMPREGNATED TITANIUM DIOXIDE THIN FILMS
Pavel V. Varlamov, Julia V. Mikhailova, Yaroslava M. Andreeva, Sergeev Maxim M.
Subject of Research. Local and precise control of the optical properties of nanocomposite materials becomes possible due to lasers. Laser irradiation can be used as an instrument for the fabrication and modification of such materials. However, for practical applications, it is necessary to know how laser processing parameters affect the spectral characteristics of composite materials, which, as a rule, are related to the sizes and distribution of nanoparticles. The paper presents the results of research into the impact of laser processing parameters on the reflection spectra of a nanocomposite material based on titanium dioxide. Methods. Sol-gel titanium dioxide thin films impregnated with small (less than 5–7 nm) silver nanoparticles on glass slides were exposed to 405 nm laser radiation. Changes in the sample reflection spectra after laser processing in the continuous wave mode were studied via optical spectrophotometry in the range of 350–760 nm. Main Results. An array of laser tracks was recorded on the sample surface with such processing parameters as the scanning speed and average radiation power. Each track had visually observable central and edge areas. Analysis of the experimental data showed a shift of the reflection peak position between these two areas in the range of 380–440 nm. In order to determine the reasons for these spectral changes, numerical modeling was carried out using the effective medium model in the Bruggeman–Bergman approximation. It was found that the size and distribution of silver nanoparticles at the edges and in the center of the laser-processed area may vary. Increasing the scanning speed and decreasing the average radiation power lead to an increase in nanoparticle size. These size changes occur due to different temperature distributions. Practical Relevance. Control methods for the spectral characteristics of sol-gel silver-impregnated titanium dioxide thin films via local laser resizing of nanoparticles are demonstrated.
The obtained results are promising for a number of applications: integrated optics, photonics devices, biosensors, photocatalytic devices, and security labels.
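The effective-medium modeling mentioned above can be illustrated with the classic two-phase Bruggeman mixing rule for spherical inclusions. This is a minimal real-valued sketch only: modeling the actual Ag/TiO2 films would require complex, wavelength-dependent permittivities, and the paper's Bruggeman–Bergman treatment is more general. The example permittivity values below are assumptions for illustration.

```python
import math

def bruggeman_eps(eps1, eps2, f):
    """Effective permittivity of a two-phase composite (spherical
    inclusions) in the Bruggeman effective-medium approximation.
    eps1: permittivity of the inclusion phase with volume fraction f;
    eps2: permittivity of the host phase (fraction 1 - f)."""
    # The self-consistency condition
    #   f*(eps1 - e)/(eps1 + 2e) + (1 - f)*(eps2 - e)/(eps2 + 2e) = 0
    # reduces to the quadratic 2e^2 - b*e - eps1*eps2 = 0 with:
    b = f * (2 * eps1 - eps2) + (1 - f) * (2 * eps2 - eps1)
    # Physical (positive) root of the quadratic:
    return (b + math.sqrt(b * b + 8 * eps1 * eps2)) / 4
```

At f = 0 or f = 1 the formula reduces to the pure-phase permittivity, which is a quick sanity check on the quadratic root.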
OPTICAL MODULE DESIGN FOR AUGMENTED REALITY GLASSES
Anastasiia A. Ivaniuk
Subject of Research. The paper considers a design method for the optical module of augmented reality glasses. The module contains a translucent beam-splitting element that provides observation of real objects with a superimposed additional virtual image (OST HMD, optical see-through head-mounted display). The central element of the optical module is a prism through which two channels are viewed simultaneously: the real-world picture and the virtual image. As a result, the user sees an augmented reality image. The functional scheme of the optical module with an integrated eye tracking system is considered. Method. Optimization of the prism surfaces, as well as their tilts and relative positions, was performed in Zemax OpticStudio. It is based on the idea of applying free-form surfaces, which enables the sizes to be reduced, the field of view to be increased, and the image quality to be improved. Main Results. The initial parameters of the optical element and an algorithm for the optimization of free-form surfaces are developed, which yields a relatively wide field of view (54° diagonally), compactness, and high image quality. Practical Relevance. The results of this work can be used in the design and development of augmented reality glasses in various fields, such as medicine, online education, the defense industry, sports, and marketing.
SEARCH QUALITY METHODOLOGY AND PARTICULAR FINDINGS FOR KEY POINTS BASED ON MATERIALS OF OPTICAL-ELECTRONIC AERIAL SURVEY
Altukhov Alexander I., Vladimir I. Bilan, Grigor’ev Andrey N., Popovich Vasily V.
Subject of Research. This paper presents findings on the Scale-Invariant Feature Transform (SIFT) method for key point search. The method is used in photogrammetric processing of terrain images obtained from aircraft and satellites. Method. The chosen method is widely used for spatial linking of images, change tracking and object search, and the building of digital models and terrain orthophotomaps. The relevance of analyzing the Scale-Invariant Feature Transform method lies in the fact that it was originally developed as a universal image processing method for the field of technical vision. The existing modifications of this method, specialized for processing terrain images, are applied in practice to a limited extent and have been studied without a complete account of image properties. In particular, the existing studies do not take into account the effect of the depicted plot, which in the general case is characterized by a random combination of terrain objects, on the key point search quality. It is assumed that the plot features of a terrain image can cause significant variations in the distribution of the selected points within a single exposure when the key point search method is applied. To determine the dependence of key point search quality on the depicted plot, it is necessary to develop a methodology based on the analysis of the Scale-Invariant Feature Transform implementation and on the use of a reference image set with varied plot composition. As a result of a content analysis of the Scale-Invariant Feature Transform method, the criteria and rejection parameters for the determined key points are defined. The approach to the analysis of the image plot effect on key point quality is based on a set of images classified by plot characteristics into homogeneous and heterogeneous images.
According to the proposed technique, the analysis is performed on the basis of the statistical and spatial distributions of key points obtained from individual images and their aggregates. Main Results. The research proposes a methodology for estimating the dependence of key point search quality on the plot in the image. The experiment identifies factors that cause a violation of the uniformity of the key point spatial distribution under the standard key point rejection criterion. Practical Relevance. The results obtained substantiate the need to develop a plot-oriented approach to terrain image processing by key point search methods. The reason is that, in order to perform and refine spatial image linking, it is necessary to ensure a uniform location of the key points used as control or tie points. It is revealed that the violation of key point location density can be determined by uneven image quality over the frame field. This phenomenon is associated, in particular, with different image sharpness in the central and peripheral zones.
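One simple way to quantify the spatial uniformity of detected key points, in the spirit of the analysis above, is a chi-squared statistic over grid-cell counts. This is an illustrative check, not the paper's exact criterion; the grid size is an assumed parameter.

```python
from collections import Counter

def uniformity_chi2(points, width, height, nx=4, ny=4):
    """Chi-squared statistic of key point counts over an nx-by-ny grid
    covering a width-by-height frame. Small values indicate a spatially
    uniform key point distribution; large values flag the clustering
    that degrades spatial image linking."""
    counts = Counter()
    for x, y in points:
        i = min(int(x / width * nx), nx - 1)   # grid column of the point
        j = min(int(y / height * ny), ny - 1)  # grid row of the point
        counts[(i, j)] += 1
    expected = len(points) / (nx * ny)         # uniform-hypothesis count
    return sum((counts[(i, j)] - expected) ** 2 / expected
               for i in range(nx) for j in range(ny))
```

Points spread evenly over the frame give a statistic near zero, while points concentrated in one zone (for example, the sharper central zone mentioned above) give a large value.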
MATERIALS SCIENCE AND NANOTECHNOLOGIES
ROUGHNESS STUDY OF PAPER MADE FROM SECONDARY RAW MATERIALS BY ATOMIC FORCE MICROSCOPY
Halima A. Babakhanova, Zulﬁya K. Galimova, Mansur M. Abnunazarov, Ikromjon I. Ismoilov
The paper considers the issues of high-precision parameter control of paper products produced with secondary raw materials as a component. A method of atomic force microscopy is proposed for studying paper roughness. Visualization of the obtained topographic images of each studied paper surface was performed with a Solver HV scanning probe microscope; the average roughness of the height differences for each type of paper was determined. The results were compared with state standard requirements and international recommendations. It is shown that scanning probe microscopy makes it possible to carry out express parameter control of cellulose paper products during their production. Express roughness analysis by atomic force microscopy may make it possible to purposefully control the technological process and create new types of paper products with specified properties that provide graphic printing accuracy without the loss of small image details.
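The "average roughness of height differences" extracted from AFM height maps is typically the arithmetic mean roughness (Sa) or the root-mean-square roughness (Sq). A minimal sketch of both standard definitions, assuming the microscope software exports the height samples as plain numbers:

```python
def roughness(heights):
    """Arithmetic mean roughness Sa and RMS roughness Sq of an AFM
    height map given as a flat sequence of height samples (e.g. nm).
    Both are deviations from the mean surface level."""
    n = len(heights)
    mean = sum(heights) / n
    sa = sum(abs(h - mean) for h in heights) / n            # mean |deviation|
    sq = (sum((h - mean) ** 2 for h in heights) / n) ** 0.5  # RMS deviation
    return sa, sq
```

For a perfectly flat surface both values are zero; rougher recycled-fiber papers yield larger Sa and Sq.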
METHOD FOR HYPERPARAMETER TUNING IN MACHINE LEARNING TASKS FOR STOCHASTIC OBJECTS CLASSIFICATION
Subject of Research. The paper presents a simple and practically effective solution for hyperparameter tuning in classification problems solved by machine learning methods. The proposed method is applicable to any real-valued hyperparameters whose values lie within a known real parametric compact. Method. A random sample (trial grid) of small power is generated within the parametric compact, and the efficiency of hyperparameter tuning is calculated for each element according to a special criterion. The efficiency is estimated by the value of a real scalar that does not depend on the classification threshold. Thus, a regression sample is formed whose regressors are the random sets of hyperparameters from the parametric compact and whose regression values are the corresponding classification efficiency indicator values. A nonparametric approximation of this regression is constructed from the formed data set. At the next stage, the minimum of the constructed approximation of the regression function on the parametric compact is determined by the Nelder-Mead optimization method. The arguments of the minimum regression value serve as an approximate solution to the problem. Main Results. Unlike traditional approaches, the proposed approach is based on a nonparametric approximation of the regression function linking a set of hyperparameters with the classification efficiency index value. Particular attention is paid to the choice of the classification quality criterion. The use of this type of approximation makes it possible to study the behavior of the performance indicator outside the trial grid values (“between” its nodes). As follows from experiments carried out on various databases, the proposed approach provides a significant increase in the efficiency of hyperparameter tuning in comparison with the baseline variants and at the same time maintains nearly acceptable performance even for small values of the trial grid power.
The novelty of the approach lies in the simultaneous use of non-parametric approximation for the regression function, which links the hyperparameter values with the corresponding values of the quality criterion, selection of the classiﬁcation quality criterion, and search method for the global extremum of this function. Practical Relevance. The proposed algorithm for hyperparameters tuning can be used in any systems built on the principles of machine learning, for example, in process control systems, biometric systems and machine vision systems.
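The pipeline above can be sketched for a single real hyperparameter: a Nadaraya-Watson kernel estimate plays the role of the nonparametric regression approximation, and, as a deliberate simplification, random probing on the compact stands in for the Nelder-Mead step of the paper. The bandwidth and probe count are assumed parameters.

```python
import math, random

def kernel_regression(trial_x, trial_y, x, bandwidth=0.2):
    """Nadaraya-Watson estimate of the efficiency criterion at point x,
    built from the trial grid sample (Gaussian kernel, 1-D case)."""
    weights = [math.exp(-((x - t) / bandwidth) ** 2) for t in trial_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, trial_y)) / total

def minimize_on_compact(trial_x, trial_y, lo, hi, n_probes=2000, seed=0):
    """Approximate minimizer of the regression surrogate on [lo, hi].
    Random probing is a stand-in for the Nelder-Mead search used in
    the paper; it lets the surrogate be evaluated 'between' grid nodes."""
    rng = random.Random(seed)
    probes = [lo + (hi - lo) * rng.random() for _ in range(n_probes)]
    return min(probes, key=lambda x: kernel_regression(trial_x, trial_y, x))
```

The hyperparameter returned by `minimize_on_compact` is the approximate solution; in the multi-dimensional case the kernel and search would operate over hyperparameter vectors.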
HIERARCHICAL DIAGNOSTIC MODEL SYNTHESIS FOR DATAFLOW REAL-TIME COMPUTING SYSTEM
Subject of Research. The paper considers design issues for diagnostic tools that detect faults in addressing information exchanges between software modules of real-time dataflow computing systems. Despite the decomposition of the design processes in such systems, the issues of diagnostics and fault tolerance remain relevant at each hierarchy level. Method. The proposed synthesis procedures for the hierarchical model of a dataflow computing system result from the development of a test diagnostics method based on a parallel model. Main Results. The paper presents a brief description of the test diagnostics method based on the parallel model. An algorithm for hierarchical diagnostic model synthesis is developed. The model minimizes the amount of diagnostic data transmitted through the exchange channels, reducing the level of redundancy introduced into the system and thereby increasing reliability. Practical Relevance. The developed hierarchical model significantly reduces the design time for diagnostic tools by reducing the required number of diagnostic modules included in them.
COMPARATIVE ANALYSIS OF METHODS FOR IMBALANCE ELIMINATION OF EMOTION CLASSES IN VIDEO DATA OF FACIAL EXPRESSIONS
Elena V. Ryumina, Karpov Alexey A.
Subject of Research. The imbalance of classes in datasets has a negative impact on machine classification systems used in applications of artificial intelligence, such as medical diagnostics, fraud detection, and risk management. This problem also degrades the performance of classification algorithms on facial expression datasets. Method. The paper discusses the main approaches to class imbalance reduction: resampling methods and setting class weights depending on the number of samples observed for each class. A histogram of oriented gradients is used to localize the face area in the frame stream; then an active shape model is applied, which detects the coordinates of 68 key facial landmarks. Using the coordinates of the key landmarks, informative features are extracted that characterize the dynamics of facial expressions. Main Results. The results of the study have shown that the proposed approach to the extraction of visual features exceeds the accuracy of human emotion recognition by facial expressions. The considered methods of class imbalance reduction in the facial expression set have improved machine classifier performance and shown that the class imbalance in a training set has a significant effect on accuracy. Practical Relevance. The proposed approach to the extraction of visual features can be used in automatic systems for human emotion recognition by facial expressions, and the analysis of the methods that reduce class imbalance can be useful for researchers in the field of machine learning.
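The class-weighting approach mentioned above is commonly implemented with inverse-frequency weights. A minimal sketch (the weighting scheme N / (K * n_c) is one standard convention, not necessarily the exact one used in the paper):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: weight_c = N / (K * n_c),
    where N is the sample count, K the number of classes and n_c the
    count of class c. Rare emotion classes thus contribute more to
    the training loss; a balanced set yields weight 1.0 everywhere."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}
```

The resulting dictionary can be passed to most training frameworks as per-class loss weights.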
CSMA/CA PROTOCOL ANALYSIS IN OMNET++ ENVIRONMENT WITH INET FRAMEWORK
Khabarov Sergey P., Maksim I. Dumov
Subject of Research. The paper presents the study of the CSMA/CA access control protocol for a wireless data transmission medium in the OMNeT++ simulation environment using the INET framework. The protocol analysis is performed in two modes: with confirmation of the received packets and without it. Method. Simulation and analysis are used in carrying out the research. The OMNeT++ environment generates statistical data and builds a time chart in the modeling process. The data obtained are analyzed; an explanation of each step of the model behavior is given and, as a result, a general conclusion is drawn from the simulation result. Main Results. An approach to studying CSMA/CA protocol operation is presented on the example of a wireless network simulation model with the “CsmaCaMac” module from the INET framework included in the structure of all its nodes. The possibility of integrating this module without significant changes to the node model is shown. The main results of the analysis of the statistical data and time charts obtained during simulation are presented, and the necessity of an access control protocol for the data transmission medium is demonstrated. Practical Relevance. The considered approach can be used to develop and test new access control protocols for the data medium or to demonstrate the operation of existing protocols in educational use.
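The collision-avoidance core of CSMA/CA can be illustrated with a toy slotted contention model. This is a deliberately simplified sketch of binary exponential backoff, not the behavior of INET's "CsmaCaMac" module; the window sizes are assumed defaults.

```python
import random

def attempts_until_success(n_stations, cw_min=16, cw_max=1024, seed=1):
    """Toy slotted CSMA/CA contention round: every station draws a
    random backoff slot; a unique smallest counter means a successful
    transmission, a tie means collision, after which the contention
    window doubles (binary exponential backoff). Returns the number
    of contention rounds until the first success."""
    rng = random.Random(seed)
    cw, attempts = cw_min, 0
    while True:
        attempts += 1
        slots = [rng.randrange(cw) for _ in range(n_stations)]
        if slots.count(min(slots)) == 1:   # exactly one winner: success
            return attempts
        cw = min(2 * cw, cw_max)           # collision: widen the window
```

Increasing `n_stations` raises the collision probability per round, which is the effect the confirmation/no-confirmation modes in the simulation expose.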
METHOD OF ARTIFICIAL FITNESS LEVELS FOR DYNAMICS ANALYSIS OF EVOLUTIONARY ALGORITHMS
Subject of Research. Currently, in the theory of evolutionary computation, it becomes relevant to analyze not just the runtime of evolutionary algorithms, but also their dynamics. The two most common methods for dynamics analysis are fixed-budget analysis, which studies the fitness an algorithm can reach within a given time limit, and fixed-target analysis, which studies the time an algorithm needs to reach some fixed fitness value. Until now, theoretical studies were systematically carried out only for the first type of analysis. The present work is focused on removing this disadvantage. Method. We proved the following theorem: if bounds on the optimization time of some evolutionary algorithm on some problem are already proven using artificial fitness levels, then bounds on the dynamics of this algorithm on the considered problem derive automatically from the same preconditions. Main Results. Using this theorem, we obtain upper bounds on the fixed-target runtime for the following pairs of algorithms and problems: the family of (1 + 1) evolutionary algorithms on the LeadingOnes and OneMax functions, and the (μ + 1) evolutionary algorithm on OneMax. These bounds either repeat or refine the existing results, but in a much simpler way. Practical Relevance. The main practical achievement of this paper is that it simplifies proving bounds on the dynamics of evolutionary algorithms. In turn, these bounds can be more meaningful for choosing between different evolutionary algorithms for some problem than the time for reaching the optimum, as the latter is mostly infeasible in practice.
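Fixed-target runtimes can be observed empirically for the simplest case analyzed above, the (1 + 1) EA on OneMax with standard bit mutation. This sketch records the first hitting time of every fitness level in a single run; it illustrates the quantity being bounded, not the proof technique.

```python
import random

def one_plus_one_ea_fixed_target(n=50, seed=0):
    """(1 + 1) EA on OneMax with standard bit mutation (p = 1/n).
    Returns a dict mapping each fitness target k in 0..n to the first
    iteration at which fitness >= k, i.e. empirical fixed-target times."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    fitness = sum(x)
    hitting = {k: 0 for k in range(fitness + 1)}  # targets met at start
    t = 0
    while fitness < n:
        t += 1
        # flip each bit independently with probability 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = sum(y)
        if fy >= fitness:                          # elitist acceptance
            for k in range(fitness + 1, fy + 1):
                hitting[k] = t                     # first time fitness >= k
            x, fitness = y, fy
    return hitting
```

By construction the hitting times are nondecreasing in the target value, mirroring the monotonicity that the artificial-fitness-levels argument exploits.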
DETERMINATION OF PACKED AND ENCRYPTED DATA IN EMBEDDED SOFTWARE
Subject of Research. Embedded software research for security faults can be handicapped by various anti-debugging techniques (encryption) and code wrappers (compression). The paper presents an overview of existing tools for the detection of anti-debugging techniques. The disadvantage of existing solutions is the use of signature-based methods for the analysis of executable files, which limits their scope to the set of known signatures. The existing statistical tests based on file entropy analysis give ambiguous results. To determine the data conversion technique, a method is proposed for the detection of packed and encrypted data in an executable firmware file. Method. The embedded software is represented as a finite sequence of bytes, where each byte can take one of 256 possible values. The proposed method combines the use of Pearson's chi-squared test to check the hypothesis of a uniform distribution of bytes in a file with the use of the Monte Carlo method for approximating the number π in order to characterize the byte distribution in the file. The higher the approximation accuracy of π and the closer the byte distribution in the file to a uniform one, the more likely it is that encryption algorithms were applied for the data transformation. Main Results. It is shown that the proposed criteria are more sensitive to deviations from a uniformly distributed random variable than entropy analysis. Applying these approaches to an experimental sample of files of various sizes, compressed or encrypted with a variety of algorithms, has shown correlations that make it possible to state with a high degree of confidence which transformation (compression or encryption) the embedded software was subjected to. Practical Relevance. An approach is presented for the determination of packed and encrypted data obtained as a result of the use of various anti-debugging techniques.
The proposed method is applicable both in the analysis of malicious software and in the search and identiﬁcation of security defects in embedded software.
SEARCH OF CLONES IN PROGRAM CODE
Subject of Research. The paper presents research on existing approaches and methods for the search of clones in program code. As a result of the study, a method is developed that implements a semantic approach to the search for duplicated fragments, covering all kinds of clones. Method. The developed method is based on the analysis of the program dependence graph built from the source code files. To detect duplicate fragments, a program dependence graph is generated for each source code file, with the nodes hashed on the basis of their content properties. A pair of nodes is selected from each equivalence class, and two isomorphic subgraphs that include the pair are identified. If a pair of clones is included in another pair, it is removed from the set of found pairs of duplicated fragments. A set of clones is generated from the pairs of duplicated fragments that share the same isomorphic subgraphs, that is, the pairs of clones are expanded. Main Results. To evaluate the efficiency of the developed clone search method, files have been compared to determine the clone types that a system using this method detects, and testing has been performed on real system components. The output of the developed system has been compared to the expected results. Practical Relevance. The proposed algorithm makes it possible to automate the analysis of source files. Detection of clones in program code is a priority direction in code analysis, since the detection of duplicate fragments helps combat unscrupulous copying of program code.
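The hashing-into-equivalence-classes step can be illustrated on Python sources. The sketch below hashes name-normalized ASTs of whole functions, which catches renamed (type-2) clones; it is a simplified stand-in for the paper's dependence-graph nodes and isomorphic-subgraph expansion, which also handle partial and reordered clones.

```python
import ast
from collections import defaultdict

def clone_candidates(source):
    """Group function definitions whose normalized ASTs coincide.
    Identifiers are erased before hashing so renamed copies still fall
    into the same equivalence class - a simplified analogue of hashing
    dependence-graph nodes by their content properties."""
    tree = ast.parse(source)
    buckets = defaultdict(list)
    for node in ast.walk(tree):
        if not isinstance(node, ast.FunctionDef):
            continue
        original = node.name
        node.name = '_'                       # normalize the function name
        for sub in ast.walk(node):            # erase variable/attribute names
            for field in ('id', 'arg', 'attr'):
                if hasattr(sub, field):
                    setattr(sub, field, '_')
        buckets[ast.dump(node)].append(original)
    return [group for group in buckets.values() if len(group) > 1]
```

Two functions that differ only in identifier names land in one bucket, while a function with a different operation does not.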
CONFIGURABLE IOT DEVICES BASED ON ESP8266 SOC SYSTEM AND MQTT PROTOCOL
Subject of Research. The paper considers popular application layer protocols for Internet of Things devices used in TCP/IP networks. A comparative analysis of these protocols is performed in the context of network resource usage and reliable data transmission. Drawbacks and benefits of these protocols for data transfer inside Internet of Things networks are identified. A review of hardware platforms for Internet of Things devices is performed. SoC systems, which combine a processor, peripherals, and a networking module on one semiconductor chip, can have significant practical value. Method. An approach to the creation of a configurable IoT device using the ESP8266 SoC is proposed. The MQTT protocol is used for connectivity with a management server and for data collection, which can save network bandwidth and logically arrange IoT devices. A simple IoT architecture based on the MQTT protocol and the OpenHAB and Eclipse Mosquitto software is proposed for combining IoT devices in a network. The advantage of the proposed approach is the use of IoT device application templates. Main Results. Template applications for a web-configurable sensor and actuator are created. An access point mode for initial device setup is implemented. Parameters of these devices related to the MQTT message response time are measured. Dependencies of MQTT message sending and receiving time on message length are obtained. The network response time, Transmission Control Protocol packet loss rate, and MQTT message loss rate are measured. Practical Relevance. The following IoT devices have been built based on the mentioned template applications: smart light, motorized curtains, and light, gas, temperature, pressure, and humidity sensors. The parameters of the resulting devices, which characterize the message processing time, have been measured. A demonstration stand combining the developed devices has been built.
The approach used in this work enables rapid creation of a wide variety of IoT devices built on IoT device application templates. The proposed approach also gives the possibility to build simple IoT devices with acceptable operating parameters.
NOISE IMMUNITY OF WIRELESS PERSONAL AREA NETWORKS UNDER DIGITAL PRODUCTION CONDITIONS
Afanasiev Maksim Ya., Fedosov Yury V., Krylova Anastasiya A., Shorokhov Sergey A., Kseniia V. Zimenko
Subject of Research. The paper considers the effect of working-environment factors on wireless personal area networks. A classification of such factors is given, and the noise immunity of wireless personal area networks is determined on a real production example. Method. A method for noise immunity evaluation was proposed based on the received signal strength indicator (RSSI). RSSI values can be obtained natively from almost any receiver and transmitter, which makes this method affordable compared to the application of network analyzers and other specialized equipment. In the experiment, the receiver and transmitter were located at distances ranging from 0.5 to 25 m. Signal transmission was carried out alternately under the impact of each working-environment factor. Then the measured RSSI values were analyzed and converted into the maximum permissible distance between the receiver and the transmitter in accordance with the proposed method. Main Results. Data on the effect of the working environment on the noise immunity of wireless personal area networks is obtained. The most significant factors are girded thick-walled steel obstacles, welding machines, and networks in a similar frequency range. Nevertheless, it is concluded that the effect is not significant enough to decide against the application of wireless personal area networks, since the exposure to many factors can be offset by the use of a mesh topology and a dense arrangement of receivers and transmitters. Practical Relevance. The results are of particular interest in the context of production digitization, where wireless data transmission from field-level sensors becomes preferable to wired transmission due to the requirements for flexibility and mobility of the production process.
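One common way to convert RSSI measurements into a maximum permissible distance is the log-distance path loss model; the paper's own conversion method is not spelled out in the abstract, so this is an illustrative sketch with assumed parameter values.

```python
def max_distance(rssi_at_1m, sensitivity, path_loss_exponent):
    """Maximum transmitter-receiver distance (m) at which the received
    power stays above the receiver sensitivity, from the log-distance
    path loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
    The exponent n is ~2 in free space and larger in cluttered shops."""
    return 10 ** ((rssi_at_1m - sensitivity) / (10 * path_loss_exponent))
```

A working-environment factor that raises the effective path loss exponent (e.g. steel obstacles) shrinks the permissible distance sharply, which matches the mesh-topology mitigation suggested above.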
DISTRIBUTED CONVOLUTIONAL NEURAL NETWORK MODEL ON RESOURCE-CONSTRAINED CLUSTER
Rezeda R. Khaydarova, Mouromtsev Dmitry I., Maxim V. Lapaev, Fishenko Vladislav D.
Subject of Research. The paper considers the problem of distributed deep learning, particularly of convolutional neural networks, on resource-constrained devices. The general architecture of a convolutional neural network and its specificity are considered; the constraints that appear during the deployment of such architectures as LeNet, AlexNet, and VGG-16/VGG-19 are analyzed. Deployment of convolutional neural networks on resource-constrained devices is still a challenging task, as there are no widely used existing solutions. Method. A method for the distribution of feature maps into smaller pieces is proposed, where each piece constitutes a separate task. A general distribution model for overlapped tasks within the scheduler is presented. Main Results. A distributed convolutional neural network model for a resource-constrained cluster and a scheduler for overlapped tasks are developed, with the computations carried out mostly on the convolutional layer, since this layer is one of the most resource-intensive and contains a large number of hyperparameters. Practical Relevance. The development of a distributed convolutional neural network based on the proposed methods provides deployment of the network on a cluster of 24 RockPro64 single-board computers performing tasks related to machine vision, natural language processing, and prediction, and is applicable in edge computing.
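Why the distributed tasks must overlap can be seen from a one-dimensional sketch of feature map splitting: each worker's output tile needs a halo of kernel - 1 extra input rows. This is an illustrative computation under the assumption of a 'valid' convolution with stride 1, not the paper's full distribution model.

```python
def tile_ranges(length, n_tiles, kernel):
    """Split one spatial dimension of a feature map among n_tiles
    workers for a 'valid', stride-1 convolution with the given kernel
    size. Each worker receives the input slice (with halo) needed to
    compute its share of the output, so adjacent slices overlap by
    kernel - 1 rows - the 'overlapped tasks' handed to the scheduler."""
    out_len = length - kernel + 1
    ranges = []
    for t in range(n_tiles):
        out_lo = t * out_len // n_tiles
        out_hi = (t + 1) * out_len // n_tiles
        # output rows [out_lo, out_hi) need input rows [out_lo, out_hi + kernel - 1)
        ranges.append((out_lo, out_hi + kernel - 1))
    return ranges
```

Concatenating the per-tile outputs reproduces the full convolution exactly, at the cost of transmitting the overlapping halo rows to each board.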
TRAFFIC AUTHENTICITY ANALYSIS BASED ON DIGITAL FINGERPRINT DATA OF NETWORK PROTOCOL IMPLEMENTATIONS
Ishkuvatov Sergei M., Igor I. Komarov
Subject of Research. The problem of traffic authenticity determination based on digital fingerprint data of network protocol implementations is considered. Description methods for digital fingerprints of network protocols and characteristic changes in the original fingerprints during transmission over various communication channels are studied. The applicability of digital fingerprint analysis of protocol implementations to the detection of anonymization tools, man-in-the-middle attacks, and malware is researched. Ways of improving the fingerprint record format with a view to avoiding fingerprint collisions are proposed. Method. The features of each implementation of an existing or potentially possible information transfer protocol can be described by a digital fingerprint of this implementation and identified by the receiving party. Communication equipment on the transmission path may be forced to change some of the initial parameters due to its internal limitations or the limitations of the transmitting environment. The receiving party identifies the current implementation of the transmitting party's protocol based on pre-prepared lists of digital fingerprints, taking into account the permissible characteristic changes introduced by nodes along the path of the transmitted data. By comparing the original digital fingerprint with the fingerprint received by the server for certain sets of parameters, the receiving party makes assumptions about the methods of data transmission, the client's use of anonymization tools, or third-party intervention in the transmission process. Based on the information obtained from comparing digital fingerprints, a decision is made about the possibility of communication sessions with the current sender. Within all communication sessions with the current sender, the recipient verifies the immutability of the original digital fingerprint of the protocol by active and passive methods. Main Results.
In the course of the study, network connection methods, anonymization tools, and connections from a potentially dangerous implementation were identified, using mitmproxy as an example. Practical Relevance. Automated analysis of the digital fingerprints of client implementations of network protocols enables the detection of incoming connections from malicious applications and network robots, and confirms whether a client is using anonymization tools. Detection of malicious implementations by their digital fingerprints is possible not only on the receiving side but along the entire network path of the packets, so such connections can be blocked at the network border.
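The comparison logic described above — matching an observed fingerprint against a pre-prepared list while ignoring parameters that path equipment is permitted to rewrite — can be sketched as follows. The record format, parameter names, and fingerprint values here are illustrative assumptions for the sketch, not the paper's actual format.

```python
# Hypothetical fingerprint records: each known implementation maps
# protocol parameter names to their expected observed values.
KNOWN_FINGERPRINTS = {
    "curl/8.0":  {"tls_version": "1.3", "ciphers": "c02b,c02f", "alpn": "h2,http/1.1"},
    "mitmproxy": {"tls_version": "1.3", "ciphers": "1301,1302", "alpn": "h2,http/1.1"},
}

# Parameters that intermediate nodes may legitimately change in transit,
# so they are excluded from the comparison.
MUTABLE = {"tcp_window", "ttl"}

def identify(observed):
    """Return the name of the matching known implementation, or None."""
    for name, ref in KNOWN_FINGERPRINTS.items():
        if all(observed.get(k) == v for k, v in ref.items() if k not in MUTABLE):
            return name
    return None

client = {"tls_version": "1.3", "ciphers": "1301,1302",
          "alpn": "h2,http/1.1", "ttl": 52}
print(identify(client))  # matches the mitmproxy fingerprint despite the TTL
```

A real deployment would compare far richer parameter sets (cipher order, extensions, TCP options) and treat a match against a known-malicious fingerprint as grounds for blocking the connection.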
PROCESS CHARACTERISTICS ESTIMATION IN WEB APPLICATIONS USING K-MEANS CLUSTERING
Victor V. Evstratov, Mikhail S. Ananyevskiy
Subject of Research. The paper studies the problem of estimating process characteristics for the particular case of predicting user activity in online computer games. Various machine learning methods are considered, and the advantages of clustering-based approaches are identified. A variety of metrics for assessing clustering quality is examined. Method. A clustering-based approach to estimating process characteristics was developed from a hypothesis proposed during preliminary analysis of user activity data. Activity data of users with known predicted values was collected. Each user was represented as a pair of vectors: the first corresponded to the user's first days of activity, the second to the days with predicted performance. The vectors representing activity in the first days were used as training data for the K-means algorithm. A purpose-built entropy-like loss function was used to find a value of K suitable for the problem under consideration. Each cluster was matched with the vector of predicted process characteristics averaged over all users in the cluster, and these matches were used to predict the characteristics of new users. Main Results. An approach to determining a suitable number of clusters is proposed, taking into account the specifics of the data considered. A numerical experiment demonstrating the applicability of the developed method is carried out. Practical Relevance. The proposed approach allows simultaneous prediction of multiple characteristics of online-game users and, therefore, the solution of various planning and analytics problems during online-game development. For example, the method developed in this work was used to analyze the payback of new game elements and to predict server load so that available computational resources could be increased beforehand.
The advantages of the developed method include the absence of any need for expert tagging of the training set and a relatively low computational cost, owing to the low computational complexity of the loss function used to estimate the hyperparameter K.
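The prediction scheme described above — cluster users by their early-activity vectors, then predict a new user's later characteristics as the cluster average — can be sketched as follows. The toy data, a plain Lloyd's-algorithm K-means, and a fixed K stand in for the paper's real dataset and its entropy-like criterion for choosing K, which are not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy data: early-activity vectors (e.g. sessions per day over the first
# 3 days) paired with later "predicted-period" activity vectors.
early = np.array([[1., 1, 1], [1, 2, 1], [9, 8, 9], [8, 9, 8]])
late  = np.array([[2., 2], [2, 3], [10, 11], [11, 10]])

centroids, labels = kmeans(early, k=2)
# Each cluster is matched with the mean later-activity vector of its members.
cluster_pred = {j: late[labels == j].mean(axis=0) for j in set(labels.tolist())}

def predict(new_user):
    """Predict later characteristics as the nearest cluster's average."""
    j = int(np.argmin(((centroids - new_user) ** 2).sum(-1)))
    return cluster_pred[j]

print(predict(np.array([1., 1, 2])))  # average later activity of the low-activity cluster
```

The key property the abstract highlights carries over: no labels are needed for training, only the pairing of early and later activity for historical users.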
MODELING AND SIMULATION
MULTILINE BRAILLE DISPLAY CONSTRUCTION MODEL
The paper addresses the design of a multiline Braille display. Bipolar two-phase stepper motors with a planetary gearbox and helical gear are used as actuators. Unlike widely used analogues, the mechanism has a notable constructive feature: it uses fewer actuators than the number of actuated Braille dots. This is achieved by fixing each dot in a given position with a mechanical element (fixator). Even if the power supply is interrupted, the displayed information does not disappear. The actuators require power only when changing a dot's state, which reduces electricity consumption. The proposed display differs from existing ones in its ability to display more than one line simultaneously (any given number of lines). The page is updated line by line; in the process, each dot is fixed mechanically through the use of a multi-channel communication system. A three-dimensional model of the multiline Braille display is designed.
APPLICATION OF LASER RADIATION FOR PLANT GROWTH STIMULATION
Subject of Research. The paper describes a laser stimulation technology for kohlrabi cabbage in the climate of North-West Russia. The genetic potential of the plants is activated by exposure to laser radiation. Method. The method is based on the exciting effect of laser radiation with a wavelength of 650 nm on plant phytochromes. Irradiation accelerated protein and carbohydrate synthesis, which led to an increase in yield. Irradiation was performed at night by a semiconductor laser with a wavelength of 650 nm, a radiation power of 150 mW, and an exposure of 30 seconds. Main Results. The total protein content of the harvested kohlrabi stem crops in the experimental group was 6 % higher than in the control group, carbohydrate content was 27 % higher, and the average weight of the stem crops was 30 % higher. Practical Relevance. The proposed technology reduces the use of chemical agents for plant growth stimulation and protection, thereby increasing the profitability of crop production and improving its quality.
RISK IDENTIFICATION OF SECURITY INFORMATION VIOLATIONS IN CYBER-PHYSICAL SYSTEMS BASED ON ANALYSIS OF DIGITAL SIGNALS
Victor V. Semenov, Sergei A. Arustamov
Subject of Research. The paper presents an approach to the analysis of digital signal sequences related to the functioning of cyber-physical systems. The proposed solution combines a set of machine learning methods for analyzing heterogeneous external digital signal data coming from various system sensors. Method. Methods based on artificial neural networks and the k-nearest neighbors algorithm were studied for digital signal analysis. Main Results. The proposed approach was tested on signals received from a digital three-axis accelerometer mounted on an unmanned vehicle prototype. The digital signals were processed with the studied methods in the MATLAB R2020a environment. A comparison of the methods' accuracy showed that the k-nearest neighbors algorithm reached 96.1 %, whereas artificial neural networks achieved 95.0 %. Practical Relevance. The proposed approach makes it possible to detect risks of information security violations in cyber-physical systems with acceptable accuracy and can be used in systems for state monitoring of objects.
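As an illustration of the classification step only (not the paper's MATLAB pipeline), a k-nearest-neighbors decision over accelerometer window features might look like the sketch below; the features, labels, and threshold behavior are invented for the example.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy features: (mean |acceleration|, variance) per time window; label 1
# marks windows with anomalous vibration suggesting a security violation.
X = np.array([[0.10, 0.02], [0.12, 0.03], [0.11, 0.02],
              [0.90, 0.40], [0.85, 0.35], [0.95, 0.45]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.88, 0.38])))  # → 1 (anomalous window)
```

In a real monitoring system the training set would come from labeled recordings of normal and attacked operation, and the window features would be chosen to separate the two regimes.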