Summaries of the Issue


In modern large-aperture optical systems for tracking astronomical objects, bistatic schemes, in which the laser and the main and auxiliary telescopes are spatially separated, are widely used. This requires additional measures when aligning the optical axes of the telescopes, especially when tracking LEO space objects. The choice of bistatic schemes in astronomical telescopes is motivated by the “tilt uncertainty” problem inherent in monostatic schemes for forming a laser guide star. This problem is caused by the difficulty, or even impossibility, of determining the wavefront tilt from the jitter of the laser guide star in the image plane. The article discusses a monostatic scheme for constructing ground-based adaptive optoelectronic systems. The monostatic scheme combines the optical axis of the laser forming the laser guide star with the optical axis of the telescope, which produces images of space objects by compensating atmospheric phase disturbances using the radiation of the laser guide star. The proposed method for determining the wavefront tilt in a monostatic scheme is based on the analysis of expressions for the variance of the tilt jitter of the images of a laser guide star and a space object in the case when the diameter of the receiving aperture of the telescope is much larger than the diameter of the aperture of the laser forming the laser guide star. This approach relies on the long-established strong correlation between the instantaneous tilt values of the laser beam and of the beam received from a natural star when the beams propagate toward each other. 
When observing low-orbit small-sized space objects and laser guide stars, it is assumed that they are located in the Fresnel zone of the receiving aperture of the optoelectronic system and within the isoplanatic angle of the atmosphere, determined within the framework of an isotropic and locally homogeneous model of atmospheric turbulence. The proposed solution makes it possible to determine the instantaneous tilt angle for the image of a faint space object in the focal plane of the receiving aperture of the telescope from measurements of the instantaneous tilt angle of the actually observed image of the laser guide star. The results can be used in the development of ground-based adaptive optoelectronic systems for tracking low-orbit small-sized space objects.


DREM procedure application for piecewise constant parameter identification
Anton I. Glushchenko, Vladislav A. Petrov, Konstantin A. Lastochkin
The research focuses on the applicability of the Dynamic Regressor Extension and Mixing (DREM) procedure to the identification of the piecewise constant parameters of a linear regression. Unlike previously known studies, it is shown that applying the baseline DREM procedure to the identification of piecewise constant parameters generates scalar perturbed regressions over some time intervals, which significantly deteriorates the quality of the unknown parameter estimates. The methods of the research involve integral and differential calculus and mathematical modeling. To solve the revealed problem, the authors propose a new method of dynamic regressor extension based on interval integral filtering with exponential forgetting and resetting. The study describes the modified DREM procedure, which, unlike the baseline one, allows one to generate scalar regressions with an adjustable level of perturbation. Numerical experiments on the identification of piecewise constant parameters confirmed the correctness of the obtained perturbed regression description and demonstrated overshoot of the perturbed regression parameter estimates obtained with the gradient method and FCT-D (Finite Convergence Time DREM). It is also shown that the magnitude of this overshoot can be adjusted if the proposed modified DREM procedure is applied. The proposed procedure can be applied to the development of identification and adaptive control systems.
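The scalar regressions that DREM reduces the problem to are typically processed with a gradient estimator; a minimal sketch of that building block is given below (the signals, gain, and Euler discretization step are illustrative assumptions, not the authors' modified procedure):

```python
# Scalar gradient estimator: the building block DREM reduces a vector
# regression to. Signals, gain, and step size below are illustrative.
import math

def gradient_estimator(phis, ys, gamma=5.0, dt=0.01, theta0=0.0):
    """Euler-discretized gradient update: theta' = gamma * phi * (y - phi*theta)."""
    theta = theta0
    history = []
    for phi, y in zip(phis, ys):
        theta += dt * gamma * phi * (y - phi * theta)
        history.append(theta)
    return history

theta_true = 2.0                                  # unknown constant parameter
ts = [k * 0.01 for k in range(5000)]
phis = [1.0 + 0.5 * math.sin(t) for t in ts]      # scalar regressor, nonzero
ys = [phi * theta_true for phi in phis]           # unperturbed scalar regression
est = gradient_estimator(phis, ys)
```

For an unperturbed regression with a regressor bounded away from zero, the estimate converges to the true parameter; the paper's point is precisely that perturbed regressions break this clean picture.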


The paper presents the results of an experimental investigation of the morphology of copper and silver thin films synthesized by the substitution reaction method. Silver films were synthesized by immersing polished substrates of copper (M1 brand) into a silver nitrate solution. Copper films were synthesized by immersing substrates of iron (electrolytic iron brand), as well as iron substrates with a vacuum-deposited tin film 5 μm thick, into a copper sulfate (copper vitriol) solution. The morphology of the synthesized films was analyzed with a scanning electron microscope. The research has shown that metal films with a thickness of around 1 μm are formed 2 seconds after the start of the reaction. The films consist of crystalline micro- and nanodendrites. The silver films also contain crystalline plates of silver oxide with a characteristic size of up to 2 μm. With an increase in reaction time, the metal layers become more compact, with minimal pore sizes of 20 nm. The synthesized films can be used for the creation of semiconductor–metal micro- and nanostructures for photocatalytic water splitting. Such films can also be applied in chemical sensors and biosensors for surface-enhanced Raman scattering.


Complex large-scale scientific applications are structured as workflows to execute in the cloud environment. The cloud is an emerging concept that effectively executes workflows, but it has a range of issues that must be addressed for it to progress. Workflow scheduling using nature-inspired metaheuristic algorithms is a recent central theme in the cloud computing paradigm. It is an NP-complete problem that motivates researchers to explore optimum solutions using swarm intelligence. This is a wide area in which researchers have worked for a long time to find an optimum solution, but due to the lack of a clear research direction their objectives become diffuse. Our systematic and extensive analysis of scheduling approaches covers recent highly cited metaheuristic algorithms such as Genetic Algorithms (GA), the Whale Search Algorithm (WSA), Ant Colony Optimization (ACO), the Bat Algorithm, Artificial Bee Colony (ABC), the Cuckoo Algorithm, the Firefly Algorithm, and Particle Swarm Optimization (PSO). Based on various parameters, we not only classify these algorithms but also furnish a comprehensive comparison among them, with the hope that our efforts will assist researchers in selecting an appropriate technique for as-yet-unaddressed issues. We also draw the attention of researchers to some open issues, such as energy consumption, reliability, and security, as directions for future work.
The paper considers the problem of effective microservices interaction and its organization to support data consistency in fault-tolerant and high-load systems. The “Saga” microservices orchestration template was used for microservices management. The authors assessed the expediency of using asynchronous programming principles for designing the Saga coordinator. The simulation of processes in the Saga coordinator was conducted; it considers the specifics of asynchronous and synchronous configurations of distributed transaction (Saga) management. The synchronous configuration group includes a coordinator with a fixed pool of threads and a coordinator that spawns a new thread for each new Saga. The asynchronous configuration group consists of a coroutine-based coordinator and a coordinator that uses the Linux kernel scheduler. A set of simulations with different numbers of Sagas and available coordinator processors was executed. It was shown that the use of asynchronous approaches reduces the Saga execution duration by up to 9.74 times and improves processor time utilization by up to 88 %. The obtained data prove the efficiency of asynchronous programming principles applied to the design of the Saga coordinator. The difference in efficiency between the asynchronous algorithms implemented in this paper was insignificant. Asynchronous programming principles used to build the Saga coordinator allow it to handle a higher load and to use processor resources more efficiently. The outcome of this research can be applied in the design of fault-tolerant and high-load systems. The paper may be of interest to IT specialists and researchers focusing on distributed computing.
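As an illustration of the coroutine-based configuration, the sketch below shows a minimal asynchronous Saga runner in which completed steps are compensated in reverse order on failure; the step functions and numbers are hypothetical, not the coordinator implemented in the paper:

```python
# Minimal coroutine-based Saga runner: steps execute in order; on failure the
# already-completed steps are compensated in reverse. Steps are illustrative.
import asyncio

async def run_saga(steps, compensations):
    done = []
    try:
        for step in steps:
            await step()
            done.append(step)
        return "committed"
    except Exception:
        for step in reversed(done):        # compensate in reverse order
            await compensations[step]()
        return "compensated"

async def main(n_sagas=100):
    log = []

    async def ok():                        # a step that succeeds
        await asyncio.sleep(0)

    async def undo():                      # compensating action for `ok`
        log.append("undone")

    async def fail():                      # a step that always fails
        raise RuntimeError("step failed")

    # One event loop interleaves all Sagas without creating extra threads.
    results = await asyncio.gather(
        *[run_saga([ok, ok], {ok: undo}) for _ in range(n_sagas - 1)],
        run_saga([ok, fail], {ok: undo}),
    )
    return results, log

results, log = asyncio.run(main())
```

The design point matches the abstract: a single event loop multiplexes many Sagas, so no thread is blocked while a Saga waits on I/O.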
The paper deals with the detection and modeling of human faces and facial features in images. A model, an algorithm, and a program are developed for the detection of human facial contours and main facial elements. The preliminary image processing involves methods of color modeling and color measurement. Well-known methods, including hidden Markov models, are used for image recognition and processing. The developed model was trained with neural network machine learning methods on a specially created sample, as well as with color segmentation methods. A factor model of a human face is created, which makes it possible to efficiently select and recognize a face and its elements in an image at high speed and with a given accuracy. The experiments have shown that the accuracy of correct boundary selection was about 95–96 % after training. The developed model can be used in security assurance tasks, namely to search for and identify criminals, to strengthen law and order, to control access to critical infrastructure facilities, etc.
In-depth studies of the topological properties of information and telecommunication networks contribute to the understanding of their functional capabilities, including stability. The study of the stability of complex networks to component failures is based on modeling by sequentially removing nodes or edges of the network (percolation). The paper presents a comparative analysis of sequential and stochastic variants of network node percolation and statistical estimates of the complex two-criterion network stability coefficient. During the study, methods for calculating the average path length based on graph theory were used. In the statistical analysis of network stability, we applied the analysis of variance and pairwise comparisons according to the Tukey criterion, based on the provisions of mathematical statistics. The simulation is performed using the Barabási–Albert and Erdős–Rényi random graph models. The difference between the stochastic percolation method and sequential percolation is shown. The performed statistical analysis proved the influence of the factor changing the structure of networks on their stability under stochastic percolation. The dynamics of network stability reduction under stochastic percolation is shown for different types of networks. It is revealed that in some cases, for example, in networks with high density, the stochastic percolation method is preferable. The study shows possible options for assessing the stability of networks both without and with a priori knowledge about the type of connections between nodes. In the former case, knowing the number of network nodes, one can calculate the limit values of stability, in the same way as if the nodes were deleted at random. 
The latter option can be used to calculate the stability of networks that are subject to random node failures, for example, when diagnosing technical systems.
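The stochastic variant of node percolation can be sketched as follows: nodes are removed in a random order and the relative size of the largest connected component is tracked as a simple stability indicator (the graph model parameters and the indicator choice are illustrative assumptions, not the two-criterion coefficient used in the paper):

```python
# Stochastic node percolation sketch on an Erdős–Rényi random graph.
# Parameters (n, p, seed) and the stability indicator are illustrative.
import random

def erdos_renyi(n, p, rng):
    """Random graph as an adjacency dict of sets."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def largest_component(adj, alive):
    """Size of the largest connected component among surviving nodes."""
    best, seen = 0, set()
    for s in alive:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w in alive and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

rng = random.Random(1)
n = 200
adj = erdos_renyi(n, 0.05, rng)
order = list(range(n))
rng.shuffle(order)                 # stochastic (random) removal order
alive = set(range(n))
curve = []
for v in order:
    alive.discard(v)
    curve.append(largest_component(adj, alive) / n)
```

Sequential percolation would differ only in the removal order (e.g., by descending degree); comparing the resulting curves is the kind of analysis the paper formalizes statistically.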
The study considers the problem of context-free path querying with all-path query semantics. This problem consists in finding all paths in a graph whose edge labels form words from the language generated by the input context-free grammar. There are two approaches to evaluating context-free path queries using linear algebra operations: the matrix multiplication-based and the Kronecker product-based approach. Until now, however, there has been no matrix multiplication-based algorithm capable of handling context-free path queries with the most complex, all-path query semantics, in which all paths that match the query must be provided. The paper proposes an algorithm for context-free path query evaluation using matrix multiplication which is capable of processing queries with the all-path query semantics. In the adjacency matrix of the input graph, for each pair of vertices we store additional information about the paths found between these vertices in the form of a set of possible intermediate vertices. At the first stage, a set of matrices is constructed that stores such information about all paths that satisfy the input query. At the second stage, all queried paths are restored from the constructed set of matrices. The proposed algorithm was implemented in C++ and compared with the other most efficient algorithms for evaluating context-free path queries, namely with the matrix-based algorithm that finds only one such path and with the Kronecker product-based algorithm that finds all such paths in the graph. The results of the experimental study showed that the proposed algorithm is significantly more efficient in restoring the queried paths, but in some cases it consumes a significantly larger amount of memory than the algorithm based on the Kronecker product. 
The described algorithm can be applied in static code analysis, bioinformatics, network analysis, as well as in graph databases, when it is required to find all possible dependencies in the data presented in the form of a labeled graph.
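The matrix-based core of such algorithms can be illustrated with the classic Boolean-matrix CFPQ iteration (reachability-level semantics only, without the path-restoration stage proposed in the paper; the grammar and graph are toy examples):

```python
# Boolean-matrix CFPQ iteration (Azimov-style): one n x n Boolean matrix per
# nonterminal, updated by Boolean matrix products until a fixpoint is reached.
# Grammar and graph are toy examples; path restoration is not shown.

def cfpq(n, edges, terminal_rules, binary_rules):
    """edges: (u, label, v); terminal_rules: label -> set of nonterminals;
    binary_rules: (head, left, right) productions in Chomsky normal form."""
    mats = {}
    def mat(nt):
        return mats.setdefault(nt, [[False] * n for _ in range(n)])
    for u, label, v in edges:
        for nt in terminal_rules.get(label, ()):
            mat(nt)[u][v] = True
    changed = True
    while changed:                           # iterate to fixpoint
        changed = False
        for head, left, right in binary_rules:
            h, l, r = mat(head), mat(left), mat(right)
            for i in range(n):               # h |= l (Boolean product) r
                for k in range(n):
                    if l[i][k]:
                        for j in range(n):
                            if r[k][j] and not h[i][j]:
                                h[i][j] = True
                                changed = True
    return mats

# Grammar S -> a S b | a b (given in CNF below);
# graph: the labeled path 0-a->1-a->2-b->3-b->4.
mats = cfpq(
    5,
    [(0, "a", 1), (1, "a", 2), (2, "b", 3), (3, "b", 4)],
    terminal_rules={"a": {"A"}, "b": {"B"}},
    binary_rules=[("S", "A", "B"), ("S", "A", "S1"), ("S1", "S", "B")],
)
s_pairs = {(i, j) for i in range(5) for j in range(5) if mats["S"][i][j]}
```

The paper's contribution, in these terms, is to enrich each matrix cell with sets of intermediate vertices so that every matching path, not just the reachability fact, can be reconstructed.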
The authors propose a method intended for automating the patient treatment process in a proton therapy center, based on a decision support system. The developed information system provides the implementation of the proton therapy protocols. The technological process of proton therapy is strictly regulated. It includes not only the therapy itself (irradiation of the patient), but also the preliminary stages of treatment, as well as the stages following the irradiation process. Proton therapy centers use information systems, such as radiological and hospital information systems (oncology information systems), that provide information support for separate stages. At the moment, there is no system that controls the entire technological process in a proton therapy center. The proposed system ensures that the operator complies with the protocol of the treatment stage and supports the entire technological process of proton therapy. The system sets the schedule of the stages, their order and the possibility of their parallel implementation, determines the conditions for allowing a stage to start, provides the subsystems with the protocol of the stage and the protocol data necessary for the operator to manage the stage, and records the results of the completed stage. The system is built on the “client-server” technology. The server has a software part (that implements the concept of the server) and a database. For each operator workstation that manages a stage, the client software of the system is additionally installed. The paper describes an approach to creating a decision support system for the personnel of a proton therapy center. Such a system ensures the correct implementation and sequence of the proton therapy stages. The system has been included in the project of the Research Proton Therapy Center for Ophthalmic Oncology at Saint Petersburg Nuclear Physics Institute of National Research Center “Kurchatov Institute”. 
The necessary information is transmitted from the system to the proton therapy center equipment via radiological and hospital information systems using standard protocols. It allows the solution to serve as a part of any automated control system of a radiation therapy center (not only for proton therapy).
Monitoring the driver’s behavior in the cabin of a vehicle is an urgent task for modern Advanced Driver Assistance Systems, which belong to the class of active safety systems. Existing research and solutions in this field are mostly focused on electronic devices such as video cameras, lasers, and radars that provide measurement information about the driver in the cabin. However, the use of wearable electronic devices that measure the heart rate, the electrocardiogram, user movements, and other indicators makes it possible to detect the driver’s dangerous behavior more accurately and reliably. The paper proposes an approach to detecting dangerous states in the driver’s behavior in the cabin of a vehicle based on information from wearable electronic devices. The study shows that heart rate measurements received from wearable electronic devices are sufficient to detect dangerous states such as aggression and stress. The developed mobile application on the Android platform detects signs of aggression and stress in the driver’s behavior using data obtained from the sensors of wearable electronic devices. When the driver shows dangerous behavior in the cabin, the mobile application warns the driver by vibrating the wearable electronic device and playing an audio signal on the smartphone. The developed approach is tested on a data set collected in real driving conditions on public roads in the city and on country roads in various driving conditions. Detection of signs of aggression and stress in the driver’s behavior supplements the available information about the driver and thereby improves the effectiveness of in-cabin driver monitoring systems aimed at preventing and reducing the risk of road accidents and improving the skills of road users. 
The proposed approach can be used in combination with other technologies for monitoring driver behavior when building an intelligent driver support system.
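The idea of threshold-based detection from heart rate alone can be sketched as follows (the baseline, margin, and window length are hypothetical values, not the criteria derived in the paper):

```python
# Threshold sketch: flag a potentially dangerous state when heart rate stays
# above a personal baseline by a margin for `window` consecutive samples.
# Baseline, margin, and window are hypothetical, not the paper's criteria.

def detect_stress(hr_samples, baseline, margin=25, window=5):
    """Indices at which the last `window` samples all exceed baseline+margin."""
    alerts = []
    run = 0
    for i, hr in enumerate(hr_samples):
        run = run + 1 if hr > baseline + margin else 0
        if run >= window:
            alerts.append(i)
    return alerts

# Simulated beats-per-minute stream: calm, then a sustained spike, then calm.
hr = [72, 74, 73, 101, 104, 108, 110, 112, 95, 76]
alerts = detect_stress(hr, baseline=72)
```

Requiring several consecutive elevated samples, rather than a single reading, is a cheap way to suppress false alarms from momentary sensor noise.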
Automata-based programming treats program systems as finite state machines that demonstrate state-based behavior. This paper analyzes approaches to data structures and their implementation in different programming paradigms. The requirements for automata-style implementations are assessed for practical tasks. It is shown that automata-based algorithms need approaches beyond standard object-oriented inheritance and polymorphism; the Liskov substitution principle is considered as an implementation basis instead. The data-oriented programming approach, and in particular the separation of data and code, forms the backbone of the engine. The work describes the automata data structure and the code-data interaction. Dynamically loaded modules and representations of data, code, and schemes provide the main building blocks. The concept of an automata-based programming engine is introduced to tie all of the above together. This engine supports distributed system referencing. In order to implement an automata-based programming engine, the pilot project has to meet a set of requirements, including modular programming support, extended metadata availability, and code-free read-only data access. The Oberon/Component Pascal programming language was therefore chosen, along with the BlackBox Component Builder graphical environment. An automata-based programming engine prototype is implemented as the Abpe subsystem for BlackBox. Several example automata-based modules demonstrate functional interacting programs.
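The data/code separation underlying such an engine can be illustrated with a minimal sketch in Python rather than Component Pascal: the automaton is plain data (a transition table), and a single generic step function is the only code (the turnstile states and events are, of course, illustrative):

```python
# The automaton is pure data (a transition table); the engine is one generic
# function that interprets any such table. States and events are illustrative.

turnstile = {                      # data: (state, event) -> next state
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(table, state, events):     # code: the only executable part
    for event in events:
        state = table[(state, event)]
    return state

final = run(turnstile, "locked", ["coin", "push", "push"])
```

Because the table is plain data, it can be loaded, inspected, or transmitted without executing any code — the property the abstract refers to as code-free read-only data access.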
The paper investigates homoscedastic aleatoric uncertainty modeling for the detection of pollen in images. New uncertainty modeling loss functions are presented, which are based on the focal and smooth L1 losses. The focal and smooth L1 losses have proved their efficiency for object detection in images; however, they do not allow modeling the aleatoric uncertainty, while the proposed functions do, leading to more accurate solutions. The functions are based on Bayesian inference and can be used effortlessly in existing neural network detectors based on the RetinaNet architecture. The advantages of the loss functions are demonstrated on the problem of pollen detection in images. The new loss functions increased the accuracy of pollen detection, namely localization and classification, by 2.76 % on average, which is crucial for pollen recognition in general. This helps to automate the process of determining allergenic pollen in the air and to reduce the time needed to inform patients with pollinosis so as to prevent allergy symptoms. The obtained result shows that modeling homoscedastic aleatoric uncertainty in neural networks makes it possible to separate the noise from the data, increasing the accuracy of the proposed solutions. The developed functions can be applied to train neural network detectors on any other image datasets.
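One common way to fold homoscedastic aleatoric uncertainty into a task loss, following Kendall and Gal, is to learn log σ² jointly with the network weights and weight the loss as L/(2σ²) + ½ log σ²; the sketch below applies this generic recipe to a smooth L1 term and is not the exact formulation proposed in the paper:

```python
# Generic homoscedastic-uncertainty weighting of a task loss (after Kendall
# and Gal): the network would learn log_sigma_sq jointly with its weights.
# Values below are illustrative, not the paper's focal/smooth-L1 functions.
import math

def smooth_l1(pred, target, beta=1.0):
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def uncertainty_weighted(loss, log_sigma_sq):
    # loss / (2 * sigma^2) + 0.5 * log(sigma^2)
    return loss * math.exp(-log_sigma_sq) / 2.0 + 0.5 * log_sigma_sq

base = smooth_l1(2.5, 2.0)                    # plain smooth L1 term
confident = uncertainty_weighted(base, -2.0)  # small assumed noise level
noisy = uncertainty_weighted(base, 2.0)       # large assumed noise level
```

The trade-off is visible in the two terms: a large σ² down-weights the residual (noise is absorbed into uncertainty) but pays a log-penalty, so the optimizer cannot declare everything noisy.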
The speech synthesis detection algorithm based on cepstral coefficients and convolutional neural network
Roman A. Murtazin, Kouznetsov Alexander Yu., Evgeny A. Fedorov, Ilnur M. Garipov, Anna V. Kholodenina, Yulia B. Baldanova, Vorobeva Alisa A.
The existing approaches to detecting synthesized speech, based on the current issues of synthesizing voice sequences, are considered. The stages of the algorithm for detecting spoofing attacks on voice biometric systems are described, and its final workflow is presented. The research focuses mainly on detecting synthesized speech, as it is the most dangerous type of attack. The authors designed a software application for an experimental study, present its structure, and propose an algorithm for detecting synthesized speech. This algorithm uses mel-frequency and constant Q cepstral coefficients to extract speech features. A Gaussian mixture model is used to construct a user model. A convolutional neural network was chosen as the classifier that determines the voice’s authenticity. Two basic methods for combating spoofing attacks, proposed by the authors of the ASVspoof2019 competition, were selected for comparison. One of these methods uses linear frequency cepstral coefficients as speech features, while the other uses constant Q cepstral coefficients. Both solutions use Gaussian mixture models for classification. To evaluate the effectiveness of the proposed solution and compare it with the other methods, a voice database was created, and the EER and minDCF metrics were applied. The experimental results demonstrated the advantages of the proposed algorithm in comparison with the other algorithms. An advantage of the proposed solution is that it uses extracted speech features that perform efficiently in user identification. This makes it possible to use the algorithm to optimize a voice biometric system with embedded protection against spoofing attacks based on speech synthesis. In addition, the proposed method can be used for voice identification with minimal modifications. Voice biometric identification systems offer excellent opportunities in the banking sector. 
Such systems allow banks to simplify and accelerate the process of financial transactions and provide their users with advanced banking functions remotely. The implementation of voice biometric systems is complicated by their vulnerability to spoofing attacks, particularly those conducted by means of speech synthesis. The proposed solution can be integrated into voice biometric systems to improve their security.
Risk assessment methodology for information systems, based on the user behavior and IT-security incidents analysis
Bezzateev Sergey V., Tatyana N. Elina, Vladimir A. Myl’nikov, Ilya I. Livshitz
Obtaining trustworthy estimates of the reliability and security of corporate information systems is an urgent problem. It is not enough to have estimates of the security of software and hardware components alone; constant monitoring of users’ actions and a comprehensive analysis of their behavior in the system are necessary. The novelty of the proposed approach consists in the application of psychological profiling methods, neuro-fuzzy inference models, and mechanisms of multidimensional data analysis. Vulnerabilities of computer information systems are determined on the basis of a retrospective analysis of information security incidents. The user’s profile is based on the analysis of his or her behavior, and the patterns of this behavior in a particular computer information system are determined. The work studies the influence of intentional and unintentional user behavior on the probability of information security threats and identifies the threshold values of the number and frequency of events indicating an information security incident. These data helped to build a model for identifying an intruder during an information security incident. The proposed method was tested in the MATLAB software package. The experimental calculations of potential vulnerabilities were performed in the “1C: Enterprise 8.3” system of programs. As the initial data for the calculation, we used the log entries of the actions of more than 100 users with different roles over a period of one year. It is noted that the risk management policy should include a continuous analysis of user actions, as well as of the consequences of these actions, in order to identify the goals of such behavior and prevent information security incidents. 
It is shown that, when implementing the proposed methodology, it is necessary to constantly identify users who should not have access to sensitive information, assuming that a potential violator is already located within the boundaries of the computer information network. The application of the proposed methodology allows us to increase the level of information security under constant changes in the “working environment” of the information system. It helps to significantly simplify the process of making an objective and reasonable management decision about the most likely information security incidents, which allows one to take appropriate preventive measures in advance.
Identification of user accounts by image comparison: the pHash-based approach
Valerii D. Oliseenko, Maxim V. Abramov, Tulupyev Aleksander L.
The study presents a new approach to the identification of users across online social networks that allows matching accounts belonging to the same person. To achieve this goal, images extracted from users’ digital footprints are used. The proposed approach compares not only the main profile images but all elements of the graphic content published in a user’s account. The described approach requires a pairwise, all-to-all comparison of the images published in two accounts from different online social networks to assess the probability that these accounts belong to the same user. The comparison of the labeled graphic content elements is performed using the well-known perceptual hash method pHash. A computational experiment was conducted to evaluate the proposed approach: the F1-score reached 0.886 for three matched images. It is shown that the results of pHash image comparison can be used for account identification both as a standalone approach and as a complement to other identification approaches. The proposed algorithm can supplement the existing methods for comparative analysis of accounts. Automation of the proposed approach provides a tool for aggregation and makes it possible to obtain more information about users and their personality features. The results can be applied to forming a digital twin of the user for further description of his or her traits in tasks of protection against social engineering attacks, targeted advertising, creditworthiness assessment, and other studies related to online social networks and the social sciences.
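The principle behind perceptual hashing can be illustrated with a simplified average-hash variant (real pHash first applies a discrete cosine transform; here an already-downscaled 8×8 grayscale block is hashed to keep the example dependency-free):

```python
# Simplified average-hash: bit i is 1 when pixel i exceeds the block mean.
# Real pHash applies a DCT first; this variant only illustrates the idea of
# comparing images by the Hamming distance between compact hashes.

def average_hash(pixels):
    """pixels: 8x8 grayscale values -> 64-bit integer hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]   # gradient block
brighter = [[min(255, p + 10) for p in row] for row in img]     # re-exposed copy
inverted = [[255 - p for p in row] for row in img]              # unrelated content

h_img = average_hash(img)
h_bright = average_hash(brighter)
h_inv = average_hash(inverted)
```

Comparing against the mean makes the hash invariant to global brightness shifts, which is why the re-exposed copy matches while genuinely different content does not.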
A study of human motion in computer vision systems based on a skeletal model
Sophia A. Kazakova, Polina A. Leonteva, Maria I. Frolova, Donetskaya Julia V., Popov Ilya Yu., Kouznetsov Alexander Yu.
Methods of studying human motion in computer vision systems can be divided into two types: analysis in two-dimensional and in three-dimensional space. The former uses a single camera image and/or multiple body sensors. Such an approach leads to a rapid accumulation of error and, consequently, low accuracy of the figure representation. Multiple cameras are usually used in the case of three-dimensional space analysis, while the objects are represented as sets of volumetric elements. Despite the high accuracy of this method, it entails high computational complexity and internal network load. The purpose of the paper is to develop a model that uses a single camera while approaching three-dimensional space analysis methods in terms of accuracy. In this paper a human figure is represented as a skeleton described by an acyclic connected graph. The general structure of a human figure is analyzed, and fifteen basic points are selected. The physical and logical connections between them are studied and mathematically described. The velocity and spatial characteristics of the points and connections outline the general dynamics of motion. The study describes a model of human motion and demonstrates model construction on the example of a particular image. The developed algorithm for the collection and analysis of information estimates the relative locations and velocity characteristics of the graph elements. The model can be used for the acquisition of information about the reference dynamics of human movements. If major differences between the reference and the observed motion are detected, the behavior is classified as deviant. Thus, the obtained algorithm can be applied in computer vision systems for the detection and analysis of human movements.
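The skeletal representation described above can be sketched directly: fifteen basic points connected by an acyclic connected graph, i.e. a tree, with per-point velocities estimated from consecutive frames (the joint names and helper functions are illustrative assumptions):

```python
# Fifteen basic points and the bones connecting them; the skeleton must be an
# acyclic connected graph (a tree). Joint names are illustrative.
bones = [
    ("torso", "neck"), ("neck", "head"),
    ("neck", "l_shoulder"), ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
    ("neck", "r_shoulder"), ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist"),
    ("torso", "l_hip"), ("l_hip", "l_knee"), ("l_knee", "l_ankle"),
    ("torso", "r_hip"), ("r_hip", "r_knee"), ("r_knee", "r_ankle"),
]
points = {p for bone in bones for p in bone}

def is_tree(nodes, edges, root="torso"):
    """Acyclic connected graph: |E| = |V| - 1 and all nodes reachable."""
    if len(edges) != len(nodes) - 1:
        return False
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {root}, [root]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(nodes)

def speeds(frame_a, frame_b, dt):
    """Per-point speed between two frames of 2D coordinates."""
    return {p: ((frame_b[p][0] - frame_a[p][0]) ** 2 +
                (frame_b[p][1] - frame_a[p][1]) ** 2) ** 0.5 / dt
            for p in frame_a}

frame0 = {p: (0.0, 0.0) for p in points}
frame1 = dict(frame0, r_wrist=(3.0, 4.0))   # only the wrist moved
v = speeds(frame0, frame1, dt=0.5)
```

Deviation detection then reduces to comparing such per-point trajectories against stored reference dynamics.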


Solution of super- and hypersonic gas dynamic problems with a model of high-temperature air
Konstantin N. Volkov, Yuriy V. Dobrov, Anton G. Karpenko, Mikhail S. Yakovchuk
The study considers the solution of a number of problems of supersonic and hypersonic gas dynamics using a model that takes into account the dissociation and ionization of air. The results of verification and validation of the developed numerical method using various difference schemes (the Roe, Rusanov, and AUSM schemes) for discretizing convective fluxes are presented. The mathematical model for high-temperature air accounts for equilibrium chemical reactions of dissociation and ionization. For this purpose, at high incoming flow velocities, the Kraiko model is applied, which includes equilibrium chemical reactions in air at high temperatures. To discretize the basic equations, the finite volume method on an unstructured grid is applied. One of the features of the constructed mathematical model is the implementation of the transition between physical and conservative variables. Relationships are given with the help of which the transition from conservative variables to physical ones and vice versa is carried out when the high-temperature air model is used. To ensure the stability of numerical calculations, an entropy correction is introduced. A decrease in entropy in the solution of the hyperbolic equations is excluded by introducing von Neumann artificial viscosity, as well as by using the Godunov method with an exact solution of the Riemann problem and methods based on the approximate solution of the problem of the decay of an arbitrary discontinuity. A number of problems of supersonic gas dynamics (supersonic flow in a channel with a forward-facing step and supersonic flow around a sphere) are numerically solved taking into account high-temperature effects. The criteria for the accuracy of numerical calculations related to the location of shock-wave structures are discussed. 
The calculated shock-wave structure of the flow is compared with the data available in the literature, as well as with calculations using the perfect gas model. Some results of the numerical calculations are compared with the available experimental data. The shock-wave flow patterns obtained in the framework of the inviscid model, the viscous model that takes into account the dependence of viscosity on temperature, and the turbulent flow model are compared. On the basis of numerical simulation data, the influence of viscous effects on the flow characteristics in a channel with a forward-facing step and in hypersonic flow around a sphere is considered. The influence of various numerical factors on the shape of the bow shock and on the presence of fluctuations in the solution behind the shock is emphasized. As part of the work, a computational module was prepared for the commercial package Ansys Fluent, implemented with the help of user programming tools. The prepared module expands the standard capabilities of commercial software focused on solving computational gas dynamics problems and is available to Ansys Fluent users for solving hypersonic aerodynamics problems. The developed means of numerical simulation can be useful in the design and optimization of hypersonic aircraft.
Modeling security violation processes in machine learning systems
Maxim A. Chekmarev, Stanislav G. Klyuev, Viktor V. Shadskiy
The widespread use of machine learning, including at critical information infrastructure facilities, entails security risks in the absence of reliable means of protection. The article views the processes in machine learning systems as processes in information systems that are susceptible to malicious influences. The results of modeling the events that lead to a security breach in machine learning systems operating at critical information infrastructure facilities are presented. For modeling, the SADT (Structured Analysis and Design Technique) technology of creating functional models and the IDEF0 (Integration Definition for Function Modeling) methodology were used as a tool for the transition from a verbal functional description of the process under study to a description in terms of a mathematical representation. To study the scenarios of transition of machine learning systems to a dangerous state and to obtain a numerical estimate of the probability of a security violation, mathematical modeling of threats was carried out using the logical-probabilistic method. The authors obtained a visual functional model of a system security violation in the form of a context diagram of the system and two levels of decomposition. The hazard function of the system is determined, and the arithmetic polynomial of the probability function is derived. In further work, the described models will allow researchers to develop methods and algorithms for protecting machine learning systems from malicious influences, as well as to apply them in assessing the level of security.
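The logical-probabilistic step can be illustrated with a toy hazard function. Assuming, purely for illustration (the actual events and structure come from the article's IDEF0 decomposition), three independent initiating events where a breach occurs if event x1 happens, or events x2 and x3 happen together, the probability polynomial and a brute-force check over all outcomes might be sketched as:

```python
from itertools import product

def hazard(x1, x2, x3):
    """Hypothetical hazard function: breach if x1, or x2 and x3 together."""
    return x1 or (x2 and x3)

def breach_probability(p1, p2, p3):
    """Arithmetic polynomial of the probability function for the hazard above,
    assuming independent events: P = p1 + p2*p3 - p1*p2*p3."""
    return p1 + p2 * p3 - p1 * p2 * p3

def breach_probability_enumerated(p1, p2, p3):
    """Check of the polynomial by summing over all 2^3 event outcomes."""
    total = 0.0
    for x1, x2, x3 in product((0, 1), repeat=3):
        weight = ((p1 if x1 else 1 - p1) *
                  (p2 if x2 else 1 - p2) *
                  (p3 if x3 else 1 - p3))
        if hazard(x1, x2, x3):
            total += weight
    return total
```

The enumeration and the polynomial agree exactly, which is the property that makes the polynomial form useful for fast numerical assessment once the real hazard function is derived from the functional model.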
Mathematical modeling of an optimal oncotherapy for malignant tumors
Igor A. Narkevich, Ekaterina V. Milovanovich, Olga V. Slita, Vladimir Yu. Tertychny-Dauri
The paper presents a mathematical model of the optimal treatment of malignant neoplasms. The neoplasm is considered as a distributed-parameter object. A scheme of optimal oncotherapy based on a system of partial differential equations of parabolic type is analyzed. The authors propose a solution to the problem using Bellman optimization and the method of adjustable parameters. The optimal control law for the oncotherapy mode is derived. The main results include a scheme for the formation of the Bellman optimal strategy for the regulation of control and dynamic parameters under which the target conditions are guaranteed over time. The work describes an optimization criterion that reflects the total costs of the control system for the oncological treatment. Simulation results demonstrate the efficiency of the optimal control of the treatment process. The results of this work can be used in modern clinical practice at the stage of predictive selection of the most effective treatment strategy.
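The setting can be conveyed by a minimal sketch that is not the authors' model: a tumor cell density governed by a parabolic PDE with a growth term, suppressed by a dose chosen through a simple feedback law that stands in for the Bellman-optimal control discussed in the abstract. All parameter values and the feedback form are illustrative assumptions:

```python
import numpy as np

def simulate_therapy(D=0.1, r=1.0, gain=10.0, L=1.0, nx=51, dt=1e-4, steps=2000):
    """
    Illustrative sketch (not the article's model): density n(x, t) obeys
    n_t = D * n_xx + (r - u(t)) * n on [0, L] with zero boundary values,
    where the dose u(t) = gain * (total tumor mass) is a stand-in feedback law.
    Returns the initial and final tumor mass.
    """
    dx = L / (nx - 1)
    x = np.linspace(0.0, L, nx)
    n = np.exp(-((x - 0.5 * L) / 0.1) ** 2)  # initial tumor profile (assumed)
    n[0] = n[-1] = 0.0
    mass0 = n.sum() * dx
    for _ in range(steps):
        u = gain * n.sum() * dx                        # feedback dose
        lap = np.zeros_like(n)
        lap[1:-1] = (n[2:] - 2.0 * n[1:-1] + n[:-2]) / dx ** 2
        n = n + dt * (D * lap + (r - u) * n)           # explicit Euler step
        n[0] = n[-1] = 0.0
        n = np.clip(n, 0.0, None)                      # density stays non-negative
    return mass0, n.sum() * dx
```

With these assumed parameters the dose initially exceeds the growth rate, so the total tumor mass decreases monotonically toward an equilibrium set by the feedback gain; the article's contribution is precisely how to choose such a law optimally with respect to a cost criterion.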
The article studies new phenomena that accompany the free or wall-bounded expansion, with axial symmetry, of a mixture of gas and particles of various sizes that is in velocity and temperature nonequilibrium. The dynamics of the gas suspension is considered within a multifluid model of a calorically perfect inviscid gas and incompressible monodisperse spherical particles. The Eulerian approach is used to describe the motion of each phase of the mixture. For numerical simulation, a hybrid large-particle method of second-order approximation in space and time is implemented, with nonlinear correction of artificial viscosity, an additive combination of fluxes, and a semi-implicit scheme for calculating interphase friction and heat transfer. The efficiency and accuracy of the method for the two-dimensional problem with axial symmetry are confirmed by comparison with solutions obtained in a one-dimensional formulation in a cylindrical coordinate system. In the case of small particles (with a diameter d = 0.1 µm), the relaxation time of the phases is much less than the characteristic time of the problem, and the gas-particle mixture behaves as a homogeneous medium similar to a gas flow. For sufficiently large particles (d = 20 µm), the effects of the difference in the inertia of the phases and of the nonequilibrium associated with the mismatch of the velocities and temperatures of the gas and particles are manifested. These effects cause the splitting of the initial interface of the media into a contact discontinuity in the gaseous phase and a surface between the suspension and the pure gas (a jump in porosity). At subsequent moments, the flow pattern changes to the opposite, which is explained by the deceleration of the gas, the appearance of a reverse flow toward the center of expansion due to rarefaction in the vicinity of the axis of symmetry, and the formation of a secondary shock wave.
Then oscillations arise, with a change in the relative position of the phase boundaries. In this case, kinks in the trajectories of the gas contact surface (discontinuities of the first derivative) are observed, which are associated with the passage of the shock wave reflected from the wall and from the plane of symmetry. Over time, due to baroclinic instability (the mismatch of the density and pressure gradients), vortex structures begin to appear at the interface boundaries. In addition, in the case of expansion of the gas suspension in a closed volume, a complex shock-wave structure is formed due to multiple reflections of shock waves from the walls and their interaction with the contact surfaces. The practical significance of the results lies in identifying the fundamental physical effects that should be taken into account when formulating and solving problems in chemical technology, pneumatic transport, and other areas. In addition, the numerical solutions can be useful in testing the resolution of other difference schemes in reproducing the vortex instability of contact boundaries and shock-wave structures in flows of relaxing gas suspensions.
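The contrast between the d = 0.1 µm and d = 20 µm regimes can be made concrete with a back-of-the-envelope Stokes velocity relaxation time, tau = rho_p * d^2 / (18 * mu). The particle density and gas viscosity below are assumed nominal values, not parameters taken from the article:

```python
def stokes_relaxation_time(d, rho_p=2500.0, mu=1.8e-5):
    """Stokes velocity relaxation time tau = rho_p * d^2 / (18 * mu).
    rho_p is an assumed particle density (kg/m^3), mu an assumed gas
    dynamic viscosity (Pa*s)."""
    return rho_p * d ** 2 / (18.0 * mu)

tau_small = stokes_relaxation_time(0.1e-6)  # ~1e-7 s: particles track the gas
tau_large = stokes_relaxation_time(20e-6)   # ~3e-3 s: velocity and temperature lag
```

Since tau scales as d^2, the 20 µm particles relax four orders of magnitude more slowly than the 0.1 µm ones, which is why the fine suspension behaves as a homogeneous gas-like medium while the coarse one exhibits the interface splitting described above.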


The study of a birefringence modulator based on lithium niobate
Ida L. Kublanova, Vladimir A. Shulepov, Stanislav M. Aksarin, Andrey V. Kulikov, Vladimir E. Strigalev
This work investigates a lithium niobate X-cut birefringence modulator with a channel waveguide formed by titanium diffusion. The wave voltage of the modulator depends on the growth and processing conditions of the lithium niobate crystal and on the waveguide formation technology. The tolerance in determining the length of the electrodes and the gap between them exceeds 1 %. Consequently, calculated values of the wave voltage can differ significantly, and an experimental measurement of the wave voltage is required for practical use. The authors present an experimental refinement of the wave voltage of the modulator and compare it with the theoretical value. In the experiment, the wave voltage was determined using a scanning Michelson interferometer. It is shown that the experimentally measured value of the wave voltage diverges from the calculated one by more than 26 %. The discrepancy is attributed to the assumptions underlying the calculation: that the electric field vector inside the crystal is directed perpendicular to the axis of propagation of the optical radiation, and that the magnitude of the electric field does not change over the depth of the crystal. Under these assumptions, the overlap integrals of the ordinary and extraordinary waves are equal. In real modulators with a channel waveguide formed by titanium diffusion, these assumptions are not fulfilled. The refractive indices of lithium niobate and the electro-optical coefficients may vary between crystal samples, depending on the conditions of their growth and processing and on the waveguide formation technology. The results of the work can find application in interferometric measuring devices that use a birefringence modulator, since the value of the wave voltage is necessary for the design of the control electronics.
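For orientation, a textbook-style estimate of the wave (half-wave) voltage of an X-cut LiNbO3 birefringence modulator can be sketched as below. The formula relies on exactly the simplifications the article questions (a uniform transverse field and equal overlap integrals, collapsed into a single effective factor Gamma), and all material constants and geometry are nominal literature values rather than parameters of the device studied:

```python
def half_wave_voltage(wavelength, gap, length, overlap,
                      n_o=2.21, n_e=2.14, r33=30.8e-12, r13=8.6e-12):
    """
    Estimate V_pi = lambda * g / ((n_e^3 * r33 - n_o^3 * r13) * Gamma * L)
    for a birefringence (phase-difference) modulator. Refractive indices and
    electro-optic coefficients (m/V) are nominal LiNbO3 values near 1550 nm.
    """
    delta = n_e ** 3 * r33 - n_o ** 3 * r13  # net electro-optic response
    return wavelength * gap / (delta * overlap * length)

# Example: 1550 nm light, 10 um electrode gap, 2 cm electrodes, assumed Gamma = 0.5
v_pi = half_wave_voltage(1.55e-6, 10e-6, 0.02, 0.5)
```

With these assumed numbers the estimate comes out at a few volts; as the abstract stresses, deviations of the real field distribution and of the two overlap integrals from these idealizations can shift the measured value by tens of percent, so such an estimate is only a starting point for designing the control electronics.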
Copyright 2001-2023 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.