Editor-in-Chief: Nikiforov Vladimir O., D.Sc., Prof.
Summaries of the Issue
OPTICAL ENGINEERING
179
The phase transformations occurring in mill scale exposed to nanosecond laser pulses are investigated. The initial phase composition of the scale and the phase composition of the laser-exposed surface layer have been determined. The surface treatment of the samples was carried out in the evaporative mode of laser exposure and led to scale ablation. Two groups of samples from hot-rolled carbon steel sheets of the St3 grade (E235, Fe 360) were studied. The first group consisted of samples with an as-received scale surface and samples with a mechanically ground surface; on these samples, the phase and elemental composition as well as the morphological parameters of the initial scale were studied. The second group included samples with a scale surface treated with nanosecond laser pulses. A pulsed nanosecond ytterbium fiber laser with a maximum average power of 30 W was used. A two-coordinate scanning system based on galvanometer scanners was used to scan the surface of the samples with the laser beam. The phase composition of the scale was determined by Raman spectroscopy. The morphological parameters of the surface and the elemental composition of the samples were determined by scanning electron microscopy (SEM), atomic force microscopy (AFM), and energy-dispersive X-ray analysis (EDX). Studies of the phase composition of the initial scale showed that it consists mainly of magnetite, while wustite was not detected. It was established that during processing of the scale in the evaporation mode, a crater is formed in the area affected by the laser pulse, the surface of which is covered with solidified scale melt. A phase transformation occurs in the melt with the formation of wustite. Upon solidification, the melt cracks, which is attributed to this phase transformation.
Thus, it is shown that in laser cleaning, the evaporative mechanism of scale removal is accompanied by a phase transformation of a mixture of magnetite and metallic iron into wustite. The results obtained can serve as a basis for a new, highly efficient technology for laser cleaning of steel surfaces from scale.
190
The heating characteristics of a lead selenide (PbSe) film under continuous laser radiation were investigated, accounting for the nucleation and growth in thickness of an oxide phase layer. It is shown that oxidation of the PbSe film reduces the heating rate and lowers the maximum temperature due to a decrease in the fraction of laser radiation absorbed within the oxide phase. The modeling results presented in this work substantiate earlier experimental findings. For the first time, this explanation of the laser heating mechanism of the PbSe film made it possible to determine the most effective laser exposure duration for forming structures with specified optical characteristics. The study was conducted using analytical modeling. A particular solution of the heat conduction equation was employed to describe the heat source. The optical properties of the film were characterized using the Fresnel equations for light reflection and transmission. Based on previously obtained experimental data, an analytical model was developed to describe the heat source in the film, considering changes in its optical properties due to the formation of a lead selenide oxide layer and its increasing thickness. The findings show that, when the PbSe film is exposed to continuous laser radiation with a wavelength of 405 nm, the extinction coefficient of the film, k_f, decreases from 0.488 to 1.62·10⁻³ due to the formation of an oxide layer; the refractive index of the film, n_f, likewise decreases from 3.532 to 1.925. The film absorption coefficient at the laser wavelength decreases from 0.68 to 0.03 during irradiation. As the thickness of the oxide phase increases from 0 to 600 nm, the temperature growth in the irradiated zone slows down, and the maximum temperature shifts from the surface toward the film-substrate interface. When exposed to continuous laser radiation with a power density of about 340 W/cm² for 9 s, the maximum film temperature does not exceed 275 °C.
The obtained results can be applied in the development of mid-infrared photodetectors based on PbSe films. Laser annealing allows local and controlled changes in the optical and electrical properties of the PbSe film within a narrow range of values, thereby influencing the photosensitivity of the film used as a detector for mid- and far-infrared radiation.
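The link between the reported (n_f, k_f) values and the absorbed fraction can be illustrated with the normal-incidence Fresnel reflectance; this is a minimal sketch (not the paper's full multilayer heat-source model), using only the pristine and oxidized film indices quoted above.

```python
# Hedged sketch: normal-incidence Fresnel reflectance of an absorbing
# surface with complex refractive index N = n - i*k. The (n, k) pairs for
# pristine and oxidized PbSe at 405 nm are taken from the abstract.
def fresnel_reflectance(n, k, n0=1.0):
    """Reflectance at the interface between a medium of index n0 (air)
    and a material with complex index n - i*k."""
    return ((n - n0)**2 + k**2) / ((n + n0)**2 + k**2)

R_initial = fresnel_reflectance(3.532, 0.488)     # pristine PbSe film
R_oxidized = fresnel_reflectance(1.925, 1.62e-3)  # after oxide formation
```

As the film oxidizes, both reflectance and the extinction coefficient fall, consistent with the reported drop in the absorbed fraction and the slower heating.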
199
Currently, the development of new nanocomposite materials with improved photocatalytic and antibacterial properties is a pressing task for environmentally friendly water and air purification technologies. This paper presents the results of a study of ZnO-ZnCr2O4 and Cu/ZnO-ZnCr2O4 powder nanocomposites obtained by the polymer-salt method. For the synthesis of the nanocomposites, zinc and chromium nitrate solutions with the addition of polyvinylpyrrolidone as a soluble organic polymer were used. The structure and morphology of the nanocomposites were studied by XRD analysis and electron microscopy, and the optical and luminescent properties by spectroscopic methods. As a result of heat treatment at 550 °C, dispersed nanocomposite powders were obtained, consisting of particles several micrometers in size that include hexagonal ZnO nanocrystals with an average size of about 16 nm and ZnCr2O4 spinel crystals. In the luminescence spectrum of the Cu/ZnO-ZnCr2O4 composite in the visible region, fluorescence bands are observed that are characteristic of ZnCr2O4 crystals and of structural defects in ZnO crystals. It was found that the intensity of singlet oxygen photogeneration by the Cu/ZnO-ZnCr2O4 nanocomposite depends linearly on the power density of the exciting radiation (wavelength 405 nm). Antibacterial activity of the Cu/ZnO-ZnCr2O4 nanocomposite against Staphylococcus aureus ATCC 209P bacteria was also revealed. The obtained nanocomposite powders can be used in water and air purification and disinfection systems.
212
Laser-processing technology has advanced precision surface material processing, but challenges remain in maintaining the laser beam waist position on uneven surfaces. Surface irregularities cause defocus and non-perpendicular alignment, leading to distortions in beam spot size and shape that reduce processing quality. This study develops a mathematical model and simulation framework to analyze beam waist positioning errors during surface processing. Using the MATLAB Partial Differential Equation (PDE) Toolbox and the finite element method, the simulation evaluates how variables such as laser incidence angle and focal distance affect beam spot characteristics. Results reveal that defocus and misalignment enlarge and distort the laser beam spot, with higher incidence angles causing elliptical deformation. The simulation advances the understanding of laser-material interactions under suboptimal conditions such as defocus and misalignment. It provides critical insights into the geometry of the laser beam, enabling the development of precise error detection methods for beam spot irregularities. Furthermore, these findings lay the groundwork for designing adaptive mechanisms that enhance the precision and reliability of laser-based surface material processing, addressing challenges posed by uneven workpiece surfaces. This approach aims to optimize laser processing quality and expand its applicability in high-precision manufacturing.
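The two distortion mechanisms named above can be sketched with the standard Gaussian-beam relations: defocus enlarges the spot via the Rayleigh range, and oblique incidence stretches it into an ellipse. This is an illustrative model, not the paper's MATLAB finite-element simulation, and the waist radius and wavelength below are assumed values.

```python
import math

# Hedged sketch: spot semi-axes of a Gaussian beam on a surface that is
# defocused by z and tilted by theta from normal incidence.
# w0 (waist radius) and the wavelength are illustrative assumptions.
def spot_axes(z, theta_deg, w0=25e-6, wavelength=1.06e-6):
    """Return (minor, major) 1/e^2 spot semi-axes in meters."""
    z_r = math.pi * w0**2 / wavelength             # Rayleigh range
    w = w0 * math.sqrt(1.0 + (z / z_r)**2)         # defocus enlarges the spot
    major = w / math.cos(math.radians(theta_deg))  # tilt stretches one axis
    return w, major
```

At focus and normal incidence the spot is circular with radius w0; both defocus and incidence angle monotonically enlarge the major axis, matching the trend the simulation reports.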
222
This paper presents theoretical and experimental research on improving the sensitivity of a refractometric fiber-optic sensor operating on the principle of surface plasmon resonance (SPR). The sensitive element consists of a multimode–single-mode–multimode (MMF-SMF-MMF) fiber structure. To induce the SPR effect, the single-mode fiber section is sequentially coated with a metal layer (Cu) and dielectric layers (Al2O3, TiO2), which results in narrower resonance peaks. This enhances wavelength shift detection and increases sensor sensitivity. Mathematical modeling of the sensitive element with a multilayer surface structure was conducted using characteristic matrices. Each layer of the sensitive element was individually characterized, followed by the formulation of the overall characteristic matrix to calculate the transmission coefficient. Based on the simulation results, optimal dielectric coatings and layer thicknesses were selected to achieve the narrowest resonance peak. To validate the simulation findings, sensitive element samples with dielectric coatings of Al2O3 (60 nm and 100 nm) and TiO2 (50 nm and 100 nm) were fabricated, and transmission spectra were obtained in air, water, and ethanol. The results demonstrate that the proposed coating increases the sensitivity of the fiber-optic SPR sensor threefold compared to an uncoated sensitive element. The proposed approach not only enhances sensor sensitivity but also shifts the resonance peaks into the infrared spectral region. Additionally, the study highlights the feasibility of using more accessible fiber-optic components for the investigated sensor system.
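The characteristic-matrix technique mentioned above can be sketched for the simplest case of normal incidence: each layer contributes a 2×2 matrix, the matrices are multiplied in order, and reflectance follows from the stack matrix. This is a generic transfer-matrix illustration, not the paper's model of the SPR element; the layer indices and thicknesses below are assumptions.

```python
import cmath

# Hedged sketch of the characteristic-matrix (transfer-matrix) method at
# normal incidence. Layer data (index, thickness) are illustrative.
def layer_matrix(n, d, wavelength):
    delta = 2 * cmath.pi * n * d / wavelength  # phase thickness of the layer
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def reflectance(layers, n_in, n_out, wavelength):
    """layers: list of (refractive index, thickness) tuples, in order."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = matmul(m, layer_matrix(n, d, wavelength))
    num = n_in * m[0][0] + n_in * n_out * m[0][1] - m[1][0] - n_out * m[1][1]
    den = n_in * m[0][0] + n_in * n_out * m[0][1] + m[1][0] + n_out * m[1][1]
    return abs(num / den)**2
```

With no layers this reduces to the bare Fresnel value; a quarter-wave layer of index √(n_in·n_out) drives the reflectance to zero, a quick sanity check that the matrix bookkeeping is right.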
MATERIAL SCIENCE AND NANOTECHNOLOGIES
229
The article presents the results of computer modeling of the melting and crystallization of spherical gold nanoparticles. The molecular dynamics method was used to analyze the thermodynamic characteristics (temperature, heat, and entropy of melting and crystallization) of nanoparticles at different heating and cooling rates. Such studies make it possible to choose the most suitable temperature ranges for the formation of nanocrystalline structures and to predict their sizes. Reducing the size of gold nanoparticles (below 100 nm) leads to a significant increase in the surface-to-volume ratio, as a result of which the physical and chemical characteristics change significantly compared to the same material in bulk form. Interest in gold nanoparticles is due to enhanced photoemission, high electrical and thermal conductivity, and increased catalytic activity of the surface. Gold nanoparticles have strong optical absorption and scattering in the visible region of the spectrum due to surface plasmon oscillations of free electrons. Previous studies have found that the hysteresis between the melting and crystallization temperatures grows with nanoparticle size, whereas in theory the macroscopic melting and crystallization temperatures should coincide. The novelty of the present study consists in revealing a previously unobserved tendency of the macroscopic temperatures, heats, and entropies of melting and crystallization to converge as the heating and cooling rates decrease. The classical molecular dynamics method was used to study the thermodynamic properties of gold nanoparticles. The subjects of modeling were spherical gold nanoparticles of various sizes with a face-centered cubic lattice.
In the modeling, an embedded-atom-method interatomic potential developed for gold using an improved force-matching methodology was used. Heating and cooling of the nanoparticles were modeled at temperature change rates of 0.1 TK/s, 1 TK/s, and 3 TK/s. By analyzing the relationship between the potential energy of the gold nanoparticles and temperature, the dependences of the melting and crystallization temperatures on nanoparticle size were revealed. A relationship was established between nanoparticle size and the heat and entropy of melting and crystallization at different heating and cooling rates. It was shown that as the heating and cooling rates decrease from 3 TK/s to 0.1 TK/s, the macroscopic values of the melting and crystallization temperatures converge (the difference decreases from 467 K to 158 K), as do the macroscopic heats of melting and crystallization (from 4.24 kJ/mol to 0.67 kJ/mol) and the entropies of melting and crystallization (from 1.99 J/(mol·K) to 0.16 J/(mol·K)). This is presumably due to a decrease in the fraction of formed nanostructures other than face-centered cubic ones. Prediction of the temperature regimes of melting and crystallization of gold nanoparticles makes it possible to control phase transitions in the production of nanocrystals with specified properties. This can find application in microelectronics for the formation of highly homogeneous thin films and in catalysis for the formation of nanoparticles with the required structures and properties. The obtained data can be used to verify the results of real experiments on phase transitions and to adjust molecular dynamics models.
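The step of extracting a transition temperature from the potential-energy-versus-temperature relationship can be sketched as locating the steepest rise of the caloric curve. This is a generic post-processing illustration, not the paper's procedure; the synthetic data points are invented for the example.

```python
# Hedged sketch: locating a melting-like transition on a caloric curve
# (potential energy vs. temperature) as the temperature interval with the
# largest energy jump per kelvin. The data below are synthetic, not
# simulation output.
def transition_temperature(temps, energies):
    """Midpoint of the interval with the largest finite-difference dU/dT."""
    best_slope, best_t = float("-inf"), None
    for i in range(len(temps) - 1):
        slope = (energies[i + 1] - energies[i]) / (temps[i + 1] - temps[i])
        if slope > best_slope:
            best_slope = slope
            best_t = 0.5 * (temps[i] + temps[i + 1])
    return best_t

# Synthetic caloric curve with an energy jump near 1200 K (eV/atom scale):
T = [1000, 1100, 1190, 1210, 1300, 1400]
U = [-3.70, -3.65, -3.61, -3.48, -3.44, -3.40]
```

On a cooling run the same scan applied to the steepest energy drop would give the crystallization temperature, and the gap between the two is the hysteresis discussed above.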
236
The technological processes for fabricating inertial elements of microelectromechanical system (MEMS) devices are investigated. The influence of the etched area on the critical parameters of deep reactive ion etching, which allows etching silicon structures with high aspect ratios for fabricating micromechanical accelerometers and gyroscopes, is studied. Inertial sensitive elements of micromechanical accelerometers were manufactured on a 150 mm diameter wafer within an advanced technological process with a minimized etched area at the stage of forming the device layer, which consists of an inertial mass, an elastic suspension, control and measuring electrodes, and an insulating frame. The geometric parameters of the silicon structural layers of the device were obtained by analyzing the profiles of the inertial sensitive elements with a scanning electron microscope. Elements of the device layer were studied in both the radial and tangential directions of the 150 mm diameter substrate to determine the spread of the geometric parameters of the inertial sensitive elements. A technological process for fabricating inertial sensitive elements that reduces the etched area at the stage of device layer formation, using an alternative way of opening the contact areas, is presented. Based on measurements of the geometric parameters of the silicon structures of the device layer, it was found that the dimensions of the elements and their deviations change in the radial direction from the center of the substrate to the edge. The spread of the geometric parameters of the silicon structures of inertial sensitive elements manufactured according to the advanced technological process on a 150 mm diameter substrate was reduced to 0.4 μm, and the spread of their deviations was reduced to 0.2 μm.
The proposed technological process can be used to increase the yield of devices in the manufacture of inertial sensitive elements and to obtain more uniform characteristics of microelectromechanical systems such as accelerometers and gyroscopes. The results can be used in the design of technological processes for the manufacture of new inertial sensitive elements.
AUTOMATIC CONTROL AND ROBOTICS
243
The paper addresses the problem of adaptive frequency estimation for a multisinusoidal time-varying (TV) parameter of a first-order discrete linear system. It is assumed that the amplitudes, frequencies, and phases of the harmonics in the TV parameter are unknown, while the number of harmonics is known. The novelty of the proposed approach is that frequency identification remains possible even when the system output crosses zero and information about the TV parameter is inaccessible. In this case, when the proposed solution is used for adaptive control of the considered system, frequency identification and the operation of the TV parameter observer are independent, which increases the rate and precision of controller parameter tuning. The problem is solved by transforming the plant model into a regression model that is linear with respect to the unknown frequencies and is used for the design of identification algorithms. Two identification algorithms are applied. The first is the standard gradient algorithm, while the second achieves improved parametric convergence by accumulating the regressor over a past period of time and is referred to as the algorithm with memory regressor extension. The control problem is solved using: the certainty equivalence principle; the internal model principle, according to which the TV parameter is represented as the output of an autonomous dynamic model (exosystem) that is embedded into the structure of the control law; an observer of the exosystem state; one of the proposed frequency identifiers; and a formula recalculating the frequency estimates into the adjustable controller parameters. A procedure for transforming the TV system into a regression linear in the unknown frequencies, used for the design of the identification algorithms, is presented.
The obtained solution is applied to the problem of indirect (identification-based) adaptive control of the TV system considered in the paper. The main distinguishing feature of the proposed solution is the independence of the obtained identifiers from the observability of the TV parameter, which improves the transient performance and precision of the indirect adaptive control algorithms designed for the considered class of TV systems. The proposed solution can be used in control problems for parametric resonance systems.
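The first of the two identifiers, the standard discrete-time gradient algorithm for a linear regression y_k = φ_kᵀθ, can be sketched as follows. The scalar regressor, the gain γ, and the data sequence are illustrative assumptions, not the paper's multisinusoidal regression.

```python
# Hedged sketch of the standard gradient identification algorithm for a
# scalar linear regression y_k = phi_k * theta with unknown theta:
#   theta_{k+1} = theta_k + gamma * phi_k * (y_k - phi_k * theta_k)
# gamma and the regressor sequence below are illustrative.
def gradient_estimator(phis, ys, gamma=0.5, theta0=0.0):
    theta = theta0
    history = []
    for phi, y in zip(phis, ys):
        theta = theta + gamma * phi * (y - phi * theta)
        history.append(theta)
    return history

true_theta = 2.0
phis = [1.0, -0.8, 0.6, 1.2, -1.0] * 10   # persistently exciting regressor
ys = [true_theta * p for p in phis]
estimates = gradient_estimator(phis, ys)
```

Convergence relies on the regressor being persistently exciting; the memory-regressor-extension variant mentioned above accelerates this by accumulating past regressor information, at the cost of extra state.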
COMPUTER SCIENCE
253
The article considers the problem of creating a file system with characteristics different from universal ones for storing data of intelligent video surveillance systems. Access to the file system is a determining factor for the performance of the entire system. The speed of data operations is determined not only by a fast data bus and a modern processor but also by the hard disk access driver, which can limit the system's ability to perform its basic functions: surveillance, image analysis, and detection of objects and events. One must either select a more powerful server, which is expensive, or use a specialized driver to increase the speed of writing to and reading from the hard disk. The use of a specialized file system focused on solving one problem, or a limited number of problems, can significantly increase system speed on a server with the same technical characteristics. In intelligent video surveillance systems, a specialized file system can increase the speed of image processing and the accuracy of object detection in the video stream thanks to the increased speed of reading from and writing to disk. An analysis of existing file systems has shown that they do not provide the required speed of working with data in intelligent video surveillance systems on hardware with the same computing characteristics. In this article, the authors propose a specialized file system for storing data in intelligent video surveillance systems. The developed file system is focused on solving one problem: storing data in intelligent surveillance systems. The developed driver increases the speed of accessing data on the hard drive. The new file system works together with a database for this single, separate task.
A comparison of the speed of writing and reading data using the developed driver and existing universal drivers was made. The comparison established that the new driver increased the speed of writing and reading by 43.4 % relative to the NTFS file system. As part of the study, a file system for intelligent video surveillance systems was developed, but similar specialized file systems can be developed for other areas where it is necessary to increase the speed (reduce the time) of writing and reading data.
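The kind of comparison reported above (sequential write and read throughput through the OS file layer) can be sketched with a small benchmark. This is a generic illustration of the measurement, not the authors' driver or test harness; block and file sizes are arbitrary.

```python
import os, tempfile, time

# Hedged sketch of a sequential write/read throughput measurement through
# the standard file API. Block size and count are illustrative.
def measure_throughput(path, block_size=64 * 1024, blocks=256):
    data = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())          # force data to the device
    write_s = time.perf_counter() - start
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    read_s = time.perf_counter() - start
    total_mb = block_size * blocks / 2**20
    return total_mb / write_s, total_mb / read_s  # MiB/s (write, read)
```

Comparing such figures for the same workload under two drivers is what yields a relative speedup figure like the 43.4 % quoted above; note that OS page caching makes the read figure optimistic unless the cache is bypassed.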
261
Clustering is one of the fundamental approaches to data mining, which in geoinformatics and image processing is used to discover knowledge and hidden patterns in spatial information. During automatic vectorization of objects in satellite images, imperfections of these technologies produce missing elements in linear and polygonal objects, which prevent full-fledged data analysis and visualization. The paper considers the problem of clustering geometric primitives with an implicit polygonal structure, with the possibility of eliminating incomplete data in vector models. The proposed method is based on the iterative formation of spatial structures by stretching the original linear objects. Unlike many clustering approaches, elements are grouped into clusters not by the nearest Euclidean distance but by determining the nearest intersection between segments. This approach allows correctly dividing adjacent objects into different clusters. For the spatial structures formed at each iteration, topological features are calculated as a function of the stretching coefficient, which makes it possible to detect and filter implicit polygonal structures. The developed method is tested on clustering of linear geometric primitives in vector models of urban infrastructure. The performance of the method is compared with competitors: k-means, DBSCAN, and agglomerative clustering. The research has shown that, according to clustering quality metrics (inertia and the Jaccard index), the proposed method has an advantage due to the correct separation of closely located clusters.
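Grouping by segment intersection rather than by Euclidean distance rests on a 2D segment-intersection predicate; a minimal orientation-based version is sketched below. This is a standard computational-geometry building block, not the paper's full iterative algorithm, and the coordinates are illustrative.

```python
# Hedged sketch of the geometric core of intersection-based grouping:
# a proper-intersection test for two 2D segments via orientation signs.
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2
    (endpoints strictly on opposite sides of the other segment's line)."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```

In the stretching scheme described above, each segment is extended by the current coefficient and two primitives join the same cluster only when their extensions actually cross, which is what keeps adjacent but parallel objects in separate clusters.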
273
The observable world is hierarchically structured. The interaction technique of counter motion of information flows in hierarchically organized systems was named “adaptive resonance”; it was successfully modeled in an artificial neural network for the analysis of olfactory stimuli and then applied to image recognition. The usefulness of the principles of hierarchical structural analysis and adaptive resonance was then forgotten for a long time. Recently, these principles were applied again in capsule neural networks, which outperformed the best contemporary models of other neural networks. This shows the necessity of systematizing the ways of practically implementing these principles. The experience of applying structural analysis and adaptive resonance to image recognition by artificial neural networks was surveyed through the scientific and technical literature published over the last half-century. The comparative analysis confirmed the efficiency of these principles in automatic image processing. The most efficient ways of realizing structural analysis and adaptive resonance in artificial neural networks were also determined. While successful image recognition results were achieved by convolutional neural networks, their developers set aside the principles of structural analysis and adaptive resonance that follow from the organization of the observable environment. A return to these principles could bring additional success in image processing tasks; thus, further investigation of artificial neural networks in this area is worthwhile.
286
When transmitting information over channels with grouping errors, the traditional approach is channel decorrelation combined with codes correcting independent errors. The decorrelation procedure lowers the achievable rates of reliable transmission; therefore, the problem of using special codes for channels with memory and of constructing computationally efficient decoding methods for correcting error bursts is relevant. For the class of random codes, an approach is known that uses information sets of limited diameter to correct error bursts. The size of the collection of information sets grows linearly with code length, and its construction is described by a probabilistic procedure. This article considers the construction of a collection of information sets for a special class of burst-error-correcting codes called Gilbert codes. Sets of code positions of the smallest possible diameter are considered. Based on the calculation of the ranks of submatrices of the parity-check matrix of a Gilbert code, the probability that a set of positions is an information set is estimated. For a given location of the information set, the positions of the correctable bursts are analyzed. Based on this analysis, a method for constructing a collection of dense information sets for Gilbert codes that corrects all error bursts within the code's correcting capacity is proposed. Using the features of the parameters of Gilbert codes, the size of the resulting collection of dense information sets is estimated. For a prime block size of the parity-check matrix of the quasi-cyclic code, it is shown that for Gilbert codes a dense information set is located at any position. In the case of extended Gilbert codes, it is shown that sets of minimum diameter exist only at the last position of each block. A procedure for constructing a collection of dense information sets of minimum diameter for Gilbert codes and their extensions is proposed.
The collection size and the probability of obtaining it are compared for Gilbert codes and random codes. It is shown that the number of information sets obtained by the proposed procedure does not increase with code length. The results demonstrate the possibility of developing computationally efficient decoders based on information sets for correcting single error bursts. Unlike random linear codes, for which the methods of constructing information sets, including dense ones, are probabilistic, for Gilbert codes a procedure for guaranteed construction of a collection of information sets of minimal diameter is specified. The quasi-cyclic structure of Gilbert codes allows constructing collections of dense information sets of smaller size than for random codes. The obtained results guarantee the correction of error bursts within the correcting capacity of Gilbert codes and their extensions with low computational complexity. The use of computationally efficient encoding and decoding procedures for error bursts will improve the reliability of message delivery in channels with memory.
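The basic check underlying the construction above — whether a set of k code positions is an information set — reduces to a GF(2) rank computation on the corresponding columns. The sketch below uses the [7,4] Hamming code generator matrix purely as a toy stand-in; it is not a Gilbert code.

```python
# Hedged sketch: testing whether a position set is an information set by
# computing the GF(2) rank of the chosen generator-matrix columns.
def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as a list of integer bitmasks."""
    rows = list(rows)
    rank = 0
    nbits = max(rows).bit_length() if rows else 0
    for bit in reversed(range(nbits)):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> bit & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def is_information_set(G, positions):
    """G: list of rows of 0/1; positions: k column indices. True if the
    selected k columns are linearly independent over GF(2)."""
    k = len(G)
    if len(positions) != k:
        return False
    sub = [sum(row[p] << j for j, p in enumerate(positions)) for row in G]
    return gf2_rank(sub) == k

# Toy example: systematic generator matrix of the [7,4] Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

For Gilbert codes the same rank test, applied to submatrices of the parity-check matrix, gives the probability estimates and the guaranteed dense-set construction discussed above.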
295
The automation of action recognition in laboratory animals is a crucial step in simplifying behavioral tests in pathophysiology and rehabilitation research. The most common method of action recognition is to analyze the trajectories of key skeletal points. However, existing methods are strongly tied to the specific animal species, the selected skeletal points, and the set of activities to be recognized. Furthermore, there is a dearth of mathematical formulations of this problem and of research on algorithms for filtering the obtained trajectories. The research task involves the collection of a dataset for key point detection in Wistar rats and the evaluation of algorithms for filtering trajectories from noisy measurements. In the considered skeletal model of the rat, thirteen points were selected for estimating behavior from trajectories. A mathematical description of the dynamics of point movement between frames, for use in a Kalman filter, is provided. Four filtering algorithms are evaluated in terms of accuracy and curve smoothness. A technique for constructing the covariance matrix of the detector noise by analyzing key point detection errors is developed. The comparison of filtering algorithms shows that the Unscented Kalman filter with a nonlinear model and the moving average filter yield the best results in this task. The findings of this study allow using a mathematical description of system dynamics to estimate the actual trajectory from noisy measurements. Furthermore, the described methodologies are not exclusive to laboratory animals but can also be applied to human subjects.
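A minimal version of the frame-to-frame dynamics described above is a constant-velocity Kalman filter applied to one coordinate of a key point; the sketch below assumes illustrative noise covariances q and r, whereas the paper constructs the measurement covariance from measured detector errors.

```python
# Hedged sketch: constant-velocity Kalman filter for one coordinate of a
# tracked key point. State is [position, velocity]; only the position is
# measured. q (process) and r (measurement) covariances are illustrative.
def kalman_1d(measurements, dt=1.0, q=1e-3, r=1.0):
    x, v = measurements[0], 0.0
    p00, p01, p11 = 1.0, 0.0, 1.0        # covariance [[p00, p01], [p01, p11]]
    out = []
    for z in measurements:
        # predict: x' = x + dt*v, P' = F P F^T + Q
        x, v = x + dt * v, v
        p00 = p00 + dt * (2 * p01 + dt * p11) + q
        p01 = p01 + dt * p11
        p11 = p11 + q
        # update with position measurement z
        s = p00 + r                       # innovation covariance
        k0, k1 = p00 / s, p01 / s         # Kalman gain
        resid = z - x
        x += k0 * resid
        v += k1 * resid
        p00, p01, p11 = (1 - k0) * p00, (1 - k0) * p01, p11 - k1 * p01
        out.append(x)
    return out
```

Running two such filters (one per image axis) per skeletal point gives smoothed trajectories; the unscented variant favored in the study replaces the linear predict step with sigma-point propagation through a nonlinear motion model.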
303
Methods for comparing three-dimensional images in reservoir geological modeling problems are studied with the aim of improving model quality. The proposed method combines global representation of shape, invariance to transformations, noise resistance, and computational efficiency. An approach based on image moments for analyzing geological data in reservoir geological modeling problems is developed and substantiated. The problem of comparing three-dimensional images is solved using the mathematical apparatus of algebraic invariants. The essence of the proposed approach is to calculate the moments of three-dimensional images and compare the contour invariants of the reference and the sample. The result of the comparison is a quantitative metric of the conformity of the compared contour to the desired reference. The developed software tools were built into the overall modeling and analysis pipeline of the GemPy package. The method showed satisfactory results on a test geological model. The recognition accuracy allows using the developed tools as a recommender system. The possibility of using the proposed method to search for a given object in a geological model, and its limited applicability for verifying a simplified model during iterative calculations, are confirmed. The proposed method is compared with the Hausdorff metric, the cross-section comparison method, the SIFT and SURF methods, and the grid interpolation method. It is shown that the proposed method can be extended to more complex geological formations for working with heterogeneous structures. The developed tools can be integrated with geological modeling systems, database management systems, and analytical platforms.
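The building block of such moment-based comparison is the central moment of a 3D image, which is translation-invariant by construction. The sketch below computes μ_pqr for a binary voxel volume; it illustrates the ingredient, not the paper's specific algebraic invariants, and the toy volumes are invented.

```python
# Hedged sketch: central moments mu_{pqr} of a 3D binary voxel image,
# the translation-invariant ingredients of moment-based shape invariants.
def central_moment(vol, p, q, r):
    """vol: nested 3D list of 0/1 voxels indexed [x][y][z]."""
    m000 = mx = my = mz = 0.0
    for x, plane in enumerate(vol):
        for y, row in enumerate(plane):
            for z, v in enumerate(row):
                m000 += v
                mx += v * x
                my += v * y
                mz += v * z
    cx, cy, cz = mx / m000, my / m000, mz / m000   # centroid
    mu = 0.0
    for x, plane in enumerate(vol):
        for y, row in enumerate(plane):
            for z, v in enumerate(row):
                mu += v * (x - cx)**p * (y - cy)**q * (z - cz)**r
    return mu
```

Normalizing such moments by powers of μ_000 adds scale invariance, and suitable polynomial combinations yield rotation-invariant quantities that can be compared numerically between a reference body and a sample.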
311
The article discusses how to use the Russian profile of the Fast Healthcare Interoperability Resources (FHIR) protocol, FHIR RU-core, for developing medical information systems. An enhanced qualified electronic signature has long been used for information protection; however, it is now being applied for the first time together with the FHIR RU-core protocol to protect medical information systems. The goal of the research is the integration of an enhanced qualified electronic signature for organizations developing secure software for medical information systems. To reach this goal, the following tasks are solved: previous works, including foreign ones, are analyzed and a table with different variants of using the FHIR protocol is presented; a step-by-step plan for integrating an enhanced qualified electronic signature is elaborated. Software code has been created to ensure the safe transmission of sensitive medical data and thereby meet the challenge of implementing an enhanced qualified electronic signature. Russian standards were used to implement cryptographic protection of information in various medical information systems. To ensure secure data exchange, an enhanced qualified electronic signature was incorporated into the domestic version of the FHIR protocol. The use of the Russian version of the protocol and certificates results in the correct exchange of medical documents. New functionality for medical information systems was standardized through the application of the Russian FHIR profile. Medical information systems deployed in certified data processing centers now use the FHIR RU-core protocol. The medical community readily uses FHIR RU-core, which is the most advanced tool for domestic medical systems. The method is aimed at safely integrating health information systems to develop regional services for doctors, patients, and digital healthcare organizers.
The scientific novelty and relevance of the research lie in adapting international experience of using the FHIR protocol to Russian circumstances and in refining the method of integrating an enhanced qualified electronic signature without capacity loss. The practical result demonstrates that the use of the Russian enhanced qualified electronic signature satisfies the information security requirements of new medical information systems and allows sensitive data to be transmitted without loss of quality or speed. It is concluded that a systematic approach to using the Russian FHIR RU-core profile for new medical information systems, with the aim of implementing digital healthcare, is highly recommended. This article is a valuable resource for medical information system software architects and developers, as well as information security specialists.
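The data flow of attaching a detached signature to a FHIR resource (canonicalize, sign, transmit the signature alongside, verify on receipt) can be illustrated as follows. This is emphatically not the paper's implementation: a real enhanced qualified electronic signature uses GOST certificate-based cryptography, and HMAC-SHA-256 stands in here only to show the flow; the sample resource is hypothetical.

```python
import hashlib, hmac, json

# Hedged illustration only: a real enhanced qualified electronic signature
# relies on GOST certificate-based cryptography; keyed HMAC-SHA-256 is a
# stand-in used purely to demonstrate canonicalize -> sign -> verify.
def sign_resource(resource: dict, key: bytes) -> str:
    """Sign a canonical JSON rendering of a FHIR resource."""
    canonical = json.dumps(resource, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_resource(resource: dict, key: bytes, signature: str) -> bool:
    """Reject any resource whose content was altered after signing."""
    return hmac.compare_digest(sign_resource(resource, key), signature)

patient = {"resourceType": "Patient", "id": "example"}  # illustrative resource
```

Any change to the resource after signing breaks verification, which is the property the qualified signature provides for medical document exchange, with the additional legal weight of a certificate-bound key.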
321
This article is devoted to digital image processing algorithms, namely, the super-resolution task. Currently, various image restoration methods based on deep learning are actively developing. These methods are used to solve image restoration problems such as inpainting, denoising, and super-resolution. One important class of super-resolution methods is reference-based super-resolution, which restores information missing from the main image using reference images. Methods of this class are mainly represented by convolutional neural networks, which are widely used in computer vision. Despite the significant achievements of existing methods, they have one significant drawback: image areas not represented in the reference image often have worse quality than the rest of the image, which is clearly visible to the observer. In addition to convolutional neural networks, diffusion models are actively used in image restoration. They are capable of generating high-quality images with diverse fine details but suffer from a lack of fidelity between the generated details and the real ones. The aim of this work is to improve the quality of reference-based image restoration using a diffusion model. A hybrid architecture of the diffusion model's denoising neural network is proposed, consisting of three main blocks: a basic denoising module, a reference-based module, and a fusion module that generates the final result. Three models were trained: a diffusion model, a reference-based convolutional neural network, and the proposed hybrid model. All three models were trained and evaluated on the Large-Scale Multi-Reference dataset. Based on the results of testing the trained models, a qualitative (visual) and quantitative comparison of the three models was performed.
The hybrid model demonstrated higher image quality, clarity, and consistency than the reference-based convolutional neural network, and better restoration of real details than the diffusion model. According to the quantitative evaluation, the hybrid model also outperformed both pure models. The results of this work can be used to increase the resolution of arbitrary images using reference information.
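The three-block idea (basic denoising module, reference-based module, fusion module) can be sketched minimally on plain Python lists; the branch implementations below are toy stand-ins, not the trained networks from the article:

```python
def base_denoiser(x_noisy, t):
    # toy stand-in for the diffusion branch: a crude denoised estimate
    # parameterized by the diffusion timestep t
    return [[v * (1.0 - t) for v in row] for row in x_noisy]

def reference_module(x_noisy, reference):
    # toy stand-in for the reference branch: borrows detail from the reference
    return [[0.5 * a + 0.5 * b for a, b in zip(xr, rr)]
            for xr, rr in zip(x_noisy, reference)]

def fusion_module(base_out, ref_out, gate):
    # per-pixel convex blend: trust the reference branch where the gate is
    # high (reference matched well), fall back to the diffusion branch elsewhere
    return [[g * r + (1.0 - g) * b for b, r, g in zip(br, rr, gr)]
            for br, rr, gr in zip(base_out, ref_out, gate)]

# tiny 2x2 demo of the full pipeline
x_noisy = [[0.8, 0.2], [0.4, 0.6]]
reference = [[0.9, 0.1], [0.5, 0.7]]
gate = [[0.7, 0.3], [0.5, 0.5]]
restored = fusion_module(base_denoiser(x_noisy, 0.3),
                         reference_module(x_noisy, reference), gate)
```

In the actual architecture all three blocks are learned networks and the gate is predicted, but the fusion step remains a learned combination of the two branch outputs.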
328
The paper studies the problem of automatically verifying that the behavior of a reactive system conforms to formalized requirements. The reactive system is described by interconnected automaton objects, and the formalized requirements are written as conditioned regular expressions; in this case, mathematically rigorous verification of the system is possible. The proposed solution is based on the use of the CIAO (Cooperative Interaction of Automaton Objects) language to specify the interaction of automaton objects. This paper considers the third version of the language, CIAO v.3, which defines the means of describing automaton classes, the means of instantiating automaton objects, and linking these objects into a system using a link diagram. The requirements to be verified are specified using so-called conditioned regular expressions constructed over the set of elementary actions and conditions defined in the system. A software tool has been developed that, using a link diagram, builds a semantic graph, i.e., a directed graph in which all paths from the initial nodes represent the execution protocols of the automata-based program, thereby specifying the semantics of the reactive system. The tool then checks whether the automata-based program complies with the requirement defined by a conditioned regular expression. If a discrepancy is detected, the tool shows exactly where the semantic graph fails to comply with the requirement. Algorithms have been developed that allow automatic verification of reactive systems with respect to formalized requirements of a certain class. An example program demonstrating elevator control in the CIAO v.3 language is given, and the constructed reactive system is verified for compliance with formally specified requirements. The purpose of the article is to demonstrate a software implementation of the tool for automatic verification of programs in the CIAO v.3 language.
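The path-checking step can be sketched as follows, assuming an acyclic semantic graph whose edges are labeled with elementary actions; real conditioned regular expressions and the CIAO toolchain are richer than the plain `re` patterns used here:

```python
import re

def protocols(graph, start):
    """Enumerate action strings along all paths from start to terminal nodes.
    Assumes the semantic graph is acyclic, as in a bounded elevator scenario."""
    succs = graph.get(start, [])
    if not succs:
        yield ""
        return
    for action, nxt in succs:
        for rest in protocols(graph, nxt):
            yield action + rest

def verify(graph, start, requirement):
    """Return the first execution protocol violating the requirement, or None.
    The violating protocol plays the role of the tool's discrepancy report."""
    regex = re.compile(requirement)
    for p in protocols(graph, start):
        if not regex.fullmatch(p):
            return p
    return None

# toy elevator semantic graph: close doors (c), move up (u) or down (d),
# then open doors (o); edges are (action, successor-node) pairs
graph = {
    "idle": [("c", "closed")],
    "closed": [("u", "moved"), ("d", "moved")],
    "moved": [("o", "done")],
    "done": [],
}
```

The requirement "the doors close, the cabin moves up or down, then the doors open" is the regex `c(u|d)o`, which every protocol of this toy graph satisfies.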
339
Information retrieval using machine learning algorithms is based on transforming the original multimodal documents into vector representations. These vectors are then indexed, and the search is performed within this index. A popular indexing method is vector clustering, as used in k-nearest-neighbor search. We propose a clustering method based on an ensemble of Oblivious Decision Trees and introduce a vector search algorithm built on this method. The proposed clustering method is deterministic and supports serialization of the ensemble parameters. The essence of the method is to train an ensemble of binary or ternary Oblivious Trees; this ensemble is then used to compute a hash for each of the original vectors, and vectors with the same hash are considered to belong to the same cluster. For searching, several clusters are selected whose centroids are closest to the vector representation of the search query, followed by an exhaustive search of the vector representations within the selected clusters. The proposed method demonstrates search quality comparable to widely used industry-standard vector search libraries such as Faiss, Annoy, and HNSWlib. For datasets with an angular distance metric, the proposed search method achieves accuracy equal to or better than existing solutions; for datasets with a Euclidean distance metric, the search quality is on par with existing solutions. The developed method can be applied to improve search quality in the development of multimodal search systems. The ability to serialize makes it possible to cluster data on one computational node and transfer the ensemble parameters to another, allowing the proposed algorithm to be used in distributed systems.
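A minimal sketch of the hashing-based clustering and probe search, assuming simple binary oblivious trees over dense float vectors (the class and function names are illustrative, not the authors' API; the trees here use random rather than trained splits):

```python
import math
import random
from collections import defaultdict

class ObliviousTree:
    """Depth-d oblivious tree: one (feature, threshold) split per level,
    shared by all nodes of that level, so each vector yields d bits."""
    def __init__(self, dim, depth, rng):
        self.splits = [(rng.randrange(dim), rng.uniform(-1.0, 1.0))
                       for _ in range(depth)]

    def bits(self, v):
        return "".join("1" if v[f] > t else "0" for f, t in self.splits)

def cluster_hash(ensemble, v):
    # concatenated bits over the ensemble identify the vector's cluster
    return "".join(tree.bits(v) for tree in ensemble)

def build_index(ensemble, vectors):
    clusters = defaultdict(list)
    for i, v in enumerate(vectors):
        clusters[cluster_hash(ensemble, v)].append(i)
    dim = len(vectors[0])
    centroids = {h: [sum(vectors[i][k] for i in idx) / len(idx)
                     for k in range(dim)]
                 for h, idx in clusters.items()}
    return clusters, centroids

def search(query, vectors, clusters, centroids, n_probe):
    # probe the n_probe clusters with the nearest centroids,
    # then do an exhaustive search inside them
    nearest = sorted(centroids, key=lambda h: math.dist(centroids[h], query))[:n_probe]
    candidates = [i for h in nearest for i in clusters[h]]
    return min(candidates, key=lambda i: math.dist(vectors[i], query))

rng = random.Random(42)
vectors = [[rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(50)]
ensemble = [ObliviousTree(dim=4, depth=2, rng=rng) for _ in range(3)]
clusters, centroids = build_index(ensemble, vectors)
```

Since the splits are plain (feature, threshold) pairs, the ensemble serializes trivially, which is what allows clustering on one node and searching on another.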
MODELING AND SIMULATION
345
The identification of parameters of linear electrical circuits belongs to the analysis and inverse problems of electrical engineering. Modern research in this field mainly reduces to determining the impulse response and/or the transfer function of an electrical circuit. The problems of identifying the parameters of linear electrical circuits are usually limited to identifying two or three linear components connected in series and/or in parallel, or as T- and U-shaped four-terminal networks. In this case, the parameters are identified by examining the circuit in short-circuit and open-circuit modes. There are also several particular solutions to the identification problem at a given frequency. In this paper, we propose a solution to the problem of identifying the parameters of linear electrical circuits using polynomials expressing complex power together with additional equations. To solve the problem of identifying the parameters of a passive linear two-terminal network, a method has been developed for synthesizing basic equations from the difference between the complex power calculated analytically and the complex power obtained from instrument measurements and represented by an interpolation polynomial. A method for synthesizing additional equations based on the frequency derivatives of complex impedance and complex admittance has also been developed. The upper bound on the number of independent equations is estimated as the cardinality of the set of powers of the circular frequency occurring in the polynomial under study. Estimating the largest number of independent equations makes it possible to conclude whether the problem is solvable using the basic equations alone or whether additional equations need to be formed. The solutions to the problem of identifying the parameters of linear passive electrical circuits are implemented numerically using computer algebra. The practical application of the developed methods is shown by a numerical example.
As an example of the application of the proposed methods, the solution of the problem of determining the parameters of all elements of an electrical circuit is shown. The simulation is implemented in the Wolfram Mathematica environment. Unlike known approaches, the proposed solution allows determining the parameters of the components of linear passive electrical circuits (two-terminal networks). For the first time, a method is proposed for synthesizing equations based on the differences between complex powers obtained from analytical calculations and from instrument measurements. The method of synthesizing additional equations by differentiating complex impedances and admittances with respect to frequency makes it possible to obtain relatively simple forms of equations. Such equations can be synthesized in advance for the most common typical circuits.
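As a much-simplified illustration of the derivative-based additional equations (not the article's polynomial method), the sketch below recovers R and L of a series R-L two-terminal network from complex impedances at two frequencies, using Z(ω) = R + jωL and dZ/dω = jL:

```python
# Series R-L two-terminal network: Z(w) = R + j*w*L.
# The "measurements" are synthesized here from assumed values
# R = 10 ohm, L = 0.05 H; a real setup would supply measured impedances.
R_true, L_true = 10.0, 0.05

def Z(omega):
    return complex(R_true, omega * L_true)

w1, w2 = 100.0, 200.0          # two measurement frequencies, rad/s
Z1, Z2 = Z(w1), Z(w2)          # measured complex impedances

# Additional equation from the frequency derivative: dZ/dw = jL,
# obtained exactly here as the finite difference of the two measurements
# (exact because Z is linear in omega for this circuit).
L_est = ((Z2 - Z1) / (w2 - w1)).imag

# Basic relation: the real part of Z gives R directly.
R_est = Z1.real
```

For richer circuits the impedance is a rational function of ω, and the article's approach of counting the distinct powers of ω in the complex-power polynomial decides how many such independent equations are available.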
354
The quality of application of active means of protecting information against leakage via the vibroacoustic channel is traditionally assessed by the measured value of the signal-to-noise ratio. In this case, measurements are taken at several control points to check security, while the sample size is relatively small and the measurement process is labor intensive. Based on studies previously conducted by the authors and on modeling of acoustic wave propagation in the protected room, this paper proposes to calculate speech intelligibility at each point of the external enclosing surface, along with the measured values of the signal-to-noise ratio on the surfaces of enclosing structures. A conclusion about the security of the room against information leakage via the vibroacoustic channel is made by correlating the obtained values with the scale for assessing the quality of the intercepted speech message. The proposed method for assessing the security of a room is based on computer modeling of sound propagation. The interaction of acoustic air vibrations with a wall, as one of the basic enclosing elements of the room, is considered; the proposed approach is also applicable to the analysis of other structural elements of the room. The finite element model of the wall is implemented in ANSYS. The model provides for fixing a vibroacoustic emitter on the wall surface on one side of the wall and placing a sound source at some distance from this surface. At each point on the opposite side of the wall surface, the signal-to-noise ratio is calculated: the signal level corresponds to the vibrations of the wall surface under the action of the sound source, and the noise level corresponds to the vibrations of the wall surface under the action of the vibroemitter attached to the wall from the inside.
Based on experimental modeling, the obtained values of speech intelligibility over the entire wall surface are assessed by comparing them with the scale for assessing the quality of intercepted speech messages adopted in security practice. The proposed method allows one to determine the least protected places in the premises under study by compiling maps of the distribution of speech intelligibility over the surfaces of walls and other enclosing structures. The study can find application in creating a system for protecting information against leakage through the vibroacoustic channel. Checking the security of enclosing structures using the proposed computer modeling method makes it possible to choose a rational arrangement and quantity of protective equipment, and also to refine the security assessments of premises obtained by traditional methods.
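The final mapping from per-point signal-to-noise ratios to an intelligibility map can be sketched as follows; the grade labels and thresholds here are illustrative assumptions, not the scale adopted in security practice:

```python
import math

# Illustrative intelligibility grades versus SNR in dB, as (upper bound, label);
# the actual scale comes from the security assessment methodology.
GRADES = [(-10.0, "unintelligible"), (0.0, "poor"), (10.0, "satisfactory")]

def snr_db(signal_rms, noise_rms):
    """SNR in dB from RMS vibration amplitudes at one surface point."""
    return 20.0 * math.log10(signal_rms / noise_rms)

def grade(snr):
    """Bin an SNR value into the first grade whose upper bound exceeds it."""
    for upper, label in GRADES:
        if snr < upper:
            return label
    return "good"

def intelligibility_map(signal_field, noise_field):
    """Per-point map over the wall surface: the signal field holds vibration
    amplitudes due to the sound source, the noise field those due to the
    vibroemitter; each point is graded to reveal the least protected places."""
    return [[grade(snr_db(s, n)) for s, n in zip(srow, nrow)]
            for srow, nrow in zip(signal_field, noise_field)]
```

In the proposed workflow the two fields come from the finite element simulation, and points graded above the acceptable level mark where protective equipment should be added or relocated.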
BRIEF PAPERS
362
The article presents the results of developing a configurable software and hardware platform designed for creating complex cyber-physical systems, such as multi-operational technological and laboratory robotic installations that involve parallel execution of applied tasks under real-time constraints. The developed platform architecture allows combining system and application computing processes with mixed real-time requirements for critical functions. As part of creating the platform, we proposed specialized models of computing process organization with the necessary tools and development languages, methods for creating platform components, and architectural design patterns for target systems based on the platform. Plans for further development and use of the platform are presented.
366
A design is described for a device that moves a sensitive element (compression tube, Faraday cup, or other sensor) relative to an atomic or ion beam to measure its intensity profile. The device is a two-axis manipulator with mechanical adjustment, built on the principle of a vacuum motion feedthrough with a deformable sealing element (a metal bellows). The manipulator-based measurement system has been successfully applied to measure the intensity profiles of beams from the atomic and ion units of the POLIS polarized ion source. The measurements were conducted within the PolFusion experiment, which is aimed at studying double-polarized nuclear fusion and is being carried out at the National Research Center “Kurchatov Institute” — PNPI.