Summaries of the Issue


A study of a silicone film deposited on quartz glass under laser radiation 
Andrey V. Belikov, Ivan S. Klochkov, Ivan V. Alekseev, Sergey A. Kapralov
The paper studies the structure, optical, and operational properties of a silicone film deposited on the surface of quartz glass as a result of the action of laser radiation on volatile substances released from a silicone rubber sample in a closed volume. The research was carried out within the framework of the laser multiparameter method using an original setup that includes a solid-state neodymium laser with a wavelength of 1064 nm and the following pulse parameters: an energy of 105 mJ, a duration of 11–14 ns, and a repetition rate of 10 Hz. A sealed test-cuvette is placed at the output of the laser, and a silicone rubber sample is placed inside the test-cuvette. When laser radiation passes through the test-cuvette, volatile substances that are released from the sample over time interact with the laser radiation and create deposition zones on the optical elements of the test-cuvette, which affect the optical characteristics of these elements. The topology of the deposition zones was studied using a profilometer. The structural composition of the original silicone rubber and the deposition zone was determined using a scanning electron microscope. The main results show the dependences of the area and attenuation coefficients of the deposition zone on the temperature and the number of laser pulses. The elemental composition, color, resistance to the action of the solvent, and the thickness of the deposition zones were investigated. It was found that with an increase in the temperature and the number of laser pulses, the area and attenuation coefficients of the deposition zones increase, the color does not change, and the resistance to the action of the solvent increases. With an increase in temperature, the deposition zone, initially consisting of micro-fragments, becomes continuous, and with an increase in the number of laser pulses, its thickness increases. The thickness of the deposition zone is unevenly distributed along its diameter.
The results obtained can be applied in the development of silicone-containing biochips for health diagnostics and therapy.
Optical composites based on organic polymers and semiconductor pigments
Valery M. Volynkin, Sergei K. Evstropiev, Dmitry V. Bulyga, Artyom V. Morkovsky, Stanislav S. Pashin, Konstantin V. Dukelskiy, Anton V. Bourdine, Igor B. Bondarenko
The aim of the work was the development of optical organic-inorganic composite materials with high absorption of light in the visible part of the spectrum and high reflection in the near infrared region. Such materials are used in industry and construction as coatings. To create these optical composites, epoxy and epoxy-polyurethane polymer matrices containing inorganic semiconducting particles (CuS, PbS, Fe3O4) were used. Highly dispersed powders of inorganic pigments were used for the preparation of homogeneous composite materials. The wet precipitation method with organic stabilizing additives was applied for the preparation of dispersed CuS and PbS powders. Optical microscopy and X-ray diffraction analysis were used to study the crystal structure and morphology of the obtained semiconductor pigments. A PMT-3 device was used for microhardness measurements of the prepared composite materials. Based on the X-ray diffraction data, the average crystallite size was calculated using the Scherrer formula. It was found that freshly precipitated CuS and PbS powders consist of nanocrystals with a size of 11–20 nm. Optical microscopy data indicate the formation of aggregates of semiconductor nanocrystals in the powders. Experiments have shown that all synthesized composites have a low light reflection coefficient (less than 0.06) in the visible part of the spectrum and an increased light reflection coefficient in the near infrared region (0.13–0.15 and more). The results of the study showed that the use of epoxy-polyurethane polymer matrices provides greater microhardness of composite materials compared to the composites based on epoxy polymers. The highest microhardness values were observed in composite materials based on epoxy-polyurethane polymers containing highly dispersed Fe3O4 particles.
Obtained organic-inorganic composites could be used as materials for light-absorbing coatings in different industrial applications.
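The crystallite-size estimate quoted above follows from the Scherrer formula D = Kλ/(β·cos θ). A minimal sketch, assuming Cu Kα radiation and an illustrative peak position and width that are not taken from the paper:

```python
import math

def scherrer_size(wavelength_nm, fwhm_rad, two_theta_deg, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), Scherrer formula."""
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle, half of 2-theta
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

# Illustrative values (not from the paper): Cu K-alpha radiation (0.15406 nm),
# a diffraction peak at 2-theta = 30 degrees with an FWHM of 0.008 rad.
d = scherrer_size(0.15406, 0.008, 30.0)
print(round(d, 1))  # size in nm, within the 11-20 nm range reported above
```

A peak broadening of this order indeed corresponds to crystallites of 10–20 nm, consistent with the abstract's reported sizes.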


A new algorithm for the identification of sinusoidal signal frequency with constant parameters
Nguyen Huc Tung, Sergey M. Vlasov, Anton A. Pyrkin, Ivan V. Popkov
The paper presents a solution for identifying the frequency of a sinusoidal signal with constant parameters. The issue is relevant for the compensation of disturbances, the control of dynamic objects, and other tasks. The authors propose a method to improve the quality of the estimation of the sinusoidal signal frequency and to ensure exponential convergence of the estimation errors to zero. At the first stage, the sinusoidal signal is represented as the output of a linear generator of finite dimension. The signal parameters (amplitude, phase, and frequency) are unknown. At the second stage, the Jordan form of the matrix and the delay operator are applied to parameterize the sinusoidal signal. After a series of special transformations, a simple equation is obtained containing the product of one frequency-dependent unknown parameter and a known function of time. A new algorithm for the parametrization of a sinusoidal signal is thus presented: the solution is based on transforming the signal model into a linear regression equation, and the unknown parameter is found by gradient descent and least squares tuning methods. The results involve the analysis of the capabilities of the proposed estimation method using computer modeling in the Matlab environment (Simulink). The results confirmed the convergence of the frequency estimates to the true values. The developed method can be effectively applied to a wide class of tasks related to compensating or suppressing disturbances described by sinusoidal or multisinusoidal signals, for example, to control a surface vessel with compensation of sinusoidal disturbances.
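The delay-operator parameterization can be illustrated with a simplified discrete-time variant (an assumption, not the authors' exact derivation): samples of y(t) = A·sin(ωt + φ) satisfy y[k] + y[k−2] = 2cos(ωΔt)·y[k−1], a linear regression with a single frequency-dependent unknown parameter that can be found by least squares:

```python
import math

def estimate_frequency(samples, dt):
    """Least-squares estimate of omega from y[k] + y[k-2] = 2*cos(omega*dt)*y[k-1]."""
    num = sum((samples[k] + samples[k - 2]) * samples[k - 1]
              for k in range(2, len(samples)))
    den = sum(samples[k - 1] ** 2 for k in range(2, len(samples)))
    theta = num / den                      # theta = 2*cos(omega*dt)
    return math.acos(theta / 2.0) / dt     # recover the frequency

dt = 0.01
omega_true = 5.0                           # rad/s, unknown to the estimator
y = [2.0 * math.sin(omega_true * k * dt + 0.3) for k in range(500)]
print(estimate_frequency(y, dt))           # converges to 5.0 on noise-free data
```

On noisy data the same regression would be tuned recursively (gradient descent or recursive least squares), which is where the exponential convergence argument of the paper applies.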


A study of silicon p-n structures with mono- and multifacial photosensitive surfaces
Avazbek Mirzaalimov, Jasurbek Gulomov, Rayimjon Aliev, Navruzbek Mirzaalimov, Suhrob Aliev
Increasing the efficiency and reducing silicon consumption in the production of solar cells (SC) are relevant problems. Designing two- and three-facial solar cells can be seen as a solution for such tasks. Compared to a conventional SC, the output power of two- and three-facial solar cells is higher by 1.72 and 2.81 times, respectively. Illumination of solar cells with high intensity light makes their heating temperature an important characteristic. Therefore, the paper investigates the influence of temperature on the properties of multifacial solar cells. We defined the nature of the change of the temperature coefficients for the main photovoltaic parameters inherent to silicon solar cells under various (one-, two-, and three-facial) conditions of lighting. The temperature coefficients of three-facial solar cells are 2.52·10–3 V/K for the open circuit voltage and 1.8·10–3 K–1 for the fill factor of the I-V curve. At a temperature change of the SC from 300 K to 350 K, the density of the short circuit current decreases only by 4 %.


Among the factors that usually cause road accidents in the world is driver fatigue, which accumulates during the trip or is present even before it begins. One of the most common signs of fatigue or tiredness of a vehicle driver is yawning. The detection of signs of yawning in human behavior can further characterize the driver’s state of fatigue. Computer image processing methods are actively used to detect the openness of the mouth and yawning. However, this approach has many disadvantages, which include varying environmental conditions and a variety of situational yawning patterns for different people. The paper presents a scheme of a detector of signs of yawning, which is focused on processing images of the driver’s face using data analysis methods, computer image processing, and a convolutional neural network model. The essence of the proposed method is to detect yawning in the driver’s behavior in the cabin of a vehicle based on the analysis of a sequence of images obtained from a video camera. It is shown that the driver’s yawning state is accompanied by a wide and prolonged opening of the mouth. Prolonged openness of the mouth signals the appearance of signs of yawning. A conceptual model for detecting the openness of the mouth of a vehicle driver is presented, and a scheme for processing and labeling the YawDD and Kaggle Drowsiness Dataset datasets is developed. The developed convolutional neural network model showed an accuracy of 0.992 and a recall of 0.871 on a test set comprising 10 % of the data. The proposed scheme for detecting the yawning state has been validated on a test video subset extracted from the YawDD: Yawning Detection Dataset. The detection scheme successfully detected 124 yawns among all video files from the test dataset. While detecting signs of yawning in driver behavior, the proportion of correctly classified objects (accuracy) is 98.2 %, precision is 96.1 %, recall is 98.4 %, and the F-score is 97.3 %.
Detecting signs of yawning in the driver’s behavior allows one to clarify information about the driver and thereby to increase the effectiveness of existing driver monitoring systems in the vehicle cabin, aimed at preventing and reducing the risk of road accidents. The proposed approach can be combined with other technologies for monitoring driver behavior when building an intelligent driver support system.
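The reported precision, recall, and F-score above are consistent (up to rounding) with the standard harmonic-mean definition of the F-score:

```python
def f1_score(precision, recall):
    """F-score as the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract, expressed as fractions
p, r = 0.961, 0.984
print(round(f1_score(p, r), 3))  # ~0.972, matching the reported 97.3 % up to rounding
```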
Cyber-physical systems’ security and safety assurance is a challenging research problem for Smart City concept development. Technical faults or malicious attacks on communication between its elements can jeopardize the whole system and its users. Implementation of reputation systems is an effective measure to detect such malicious agents. Each agent in the group has an indicator which reflects how trustworthy it is to the other agents. However, in the scenario when it is not possible to calculate the Reputation indicator based on objective characteristics, malicious or defective agents can negatively affect the system’s performance. In this paper, we propose an approach based on Game Theory to address the challenge of calculating the initial Reputation and Trust values. We introduce a mixed-strategies game concept and a probability indicator. The possible outcomes of using different strategies by the system agents are represented with a payoff matrix. To evaluate the effectiveness of the approach, an empirical study using a software simulation environment was conducted. As a Cyber-physical system implementation scenario, we considered an intersection management system with a group of unmanned autonomous vehicles whose aim is to perform conflict-free optimal intersection traversal. To simulate the attack scenario, some vehicles were able to transmit incorrect data to other traffic participants. The obtained results showed that the Game Theory approach allowed us to increase the number of detected intruders compared to the conventional Reputation and Trust model.
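The payoff-matrix and mixed-strategy machinery can be sketched as follows; the 2×2 matrix and the probabilities below are purely illustrative and are not taken from the paper:

```python
def expected_payoff(payoff, p_row, p_col):
    """Expected payoff of a 2x2 game under mixed strategies.
    p_row / p_col are the probabilities of each player's FIRST pure strategy."""
    probs_row = (p_row, 1 - p_row)
    probs_col = (p_col, 1 - p_col)
    return sum(probs_row[i] * probs_col[j] * payoff[i][j]
               for i in range(2) for j in range(2))

# Hypothetical payoffs for the system: rows = {trust, verify},
# columns = {agent honest, agent malicious}. Values are illustrative only:
# trusting a malicious agent is costly (-5), verifying always yields +1.
payoff = [[2, -5],
          [1,  1]]
# The system trusts with probability 0.3; the agent is honest with probability 0.8
print(expected_payoff(payoff, p_row=0.3, p_col=0.8))
```

Comparing such expected payoffs across mixtures is what lets the approach assign initial Reputation/Trust values when no objective history is available yet.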
The paper presents a model for assessing the state of transport hubs of public rail transport and investigates the dependence of the movement speed of urban rail transport on the influence of external random human-based factors. The study considers the following factors: the movement of other vehicles and pedestrians, repair on road sections, and the density of traffic of vehicles and pedestrians. The proposed model of the transport hub contains many intersections, rail traffic lines, and mixed traffic lanes within the framework of the traffic rules. The solution to the problem is based on the methodology of multi-agent systems. The basis of the proposed approach is the definition of the architecture of individual agents and the input parameters of the expected system responses. The software platform PTV Vissim, which allows building models of traffic flows with various types of vehicles, is used. During the simulation of the multi-agent system, a significant dependence of the speed of urban rail transport on the traffic density and the presence of repair work was revealed. A distinctive feature of the proposed approach is that it considers the influence of the human factor. The obtained approach can be used to design transport hubs for the unimpeded movement of unmanned urban rail transport.
An algorithm for detecting RFID-duplicates
Natalia V. Voloshina, Aleksandr A. Lavrinovich
The problem of attackers using duplicate RFID tags is becoming increasingly urgent with the expansion of RFID technology for marking imported goods. A duplicate may contain information about goods that differs from their actual characteristics. This paper proposes an algorithm for detecting RFID duplicates as a method for ensuring the integrity of information that enters the information systems of international goods transportation. The relevance of creating the algorithm stems from the need to reduce the risk of creating and using RFID duplicates by importers during the cross-border movement of marked goods. Existing duplicate detection algorithms are unsuitable for use in the RFID-marking system of goods imported into Russia. The algorithm hinders an attacker from reading data from the original RFID tag, which is necessary to create an RFID duplicate. The proposed algorithm is based on dividing the EPC memory area of an RFID tag into parts and using the tag self-destruction command (kill) to prevent unauthorized readings. The authors considered the scenarios for implementing the algorithm and identified the risks of using it. The algorithm is presented as a graphical model based on BPMN notation. The efficiency of the proposed algorithm was evaluated using the hypergeometric probability formula. The results of a selective check of RFID tags by the customs authorities were taken as the initial data. It is shown that, in comparison with the existing approach, the implementation of the algorithm in a software and hardware complex increases the probability of detecting RFID duplicates, provided that control is carried out only in relation to high-risk declarants. The use of the algorithm reduces the risk of receiving distorted or inaccurate data in the information systems dealing with international goods transportation and increases the validity of legal and economic decisions taken in the information systems of customs authorities.
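The hypergeometric evaluation can be sketched as follows: the probability that a random inspection of n out of N tags catches at least one of K duplicates is 1 − C(N−K, n)/C(N, n). The numbers below are illustrative, not the customs data used in the paper:

```python
from math import comb

def detection_probability(total, duplicates, inspected):
    """P(at least one duplicate in the sample), hypergeometric distribution."""
    if inspected > total - duplicates:
        return 1.0  # the sample cannot avoid every duplicate
    return 1.0 - comb(total - duplicates, inspected) / comb(total, inspected)

# Illustrative: a batch of 1000 tags, 10 of them duplicates,
# customs inspects 50 tags chosen at random.
print(round(detection_probability(1000, 10, 50), 3))  # ~0.403
```

Raising either the inspection rate or, as the paper proposes, concentrating control on high-risk declarants (effectively raising the duplicate share within the inspected population) pushes this probability up.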
Reduction of LSB detectors set with definite reliability
Roman A. Solodukha, Gennadiy V. Perminov, Igor V. Atlasov
The article focuses on decreasing the set of steganalytic methods that determine the payload value in the image spatial domain using quantitative detectors of Least Significant Bit (LSB) steganography. It is supposed that the methods can trace the same image regularities, and hence their results can correlate. The work presents the results of the development and testing of a technique for reducing the set of steganalytic methods, taking accuracy and reliability into account, in order to decrease the computational complexity of steganalytic expertise. The theoretical basis of the proposed solution is the approximation of a regression of the first kind by a linear regression of the second kind for multivariate random variables. To verify the results, a computational experiment was performed. The payloads were embedded in 10 % increments by automating the freeware steganographic programs CryptArkan and The Third Eye with AutoIt. Steganalytic methods such as Weighted Stego, Sample Pairs, Triples analysis, Asymptotically Uniformly Most Powerful detection, and Pair of Values were used. The datasets were built in the MATLAB environment; the program was implemented in Python. For the experiment’s reproducibility, the datasets and program code are provided on Kaggle. Interval estimates of the correlation between methods are calculated based on experimental data for various payload values. The developed technique includes a mathematical model, an algorithm for implementing the model, and a computer program. The proposed technique can be applied in those tasks where accuracy and reliability are taken into account. One of the subject areas demanding such assessments is computer forensics dealing with expertise with probabilistic conclusions. These estimates allow the analyst to vary the number of methods depending on the available computing resources and the time frame of the research.
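An interval estimate of the correlation between two detectors' payload estimates can be sketched with the Pearson coefficient and the Fisher z-transform; the detector outputs below are illustrative, not the experimental data from the paper:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def r_confidence_interval(r, n, z_crit=1.96):
    """Approximate 95 % interval for a correlation via the Fisher z-transform."""
    z = math.atanh(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)

# Illustrative payload estimates of two detectors on the same ten stego images
ws = [0.11, 0.19, 0.32, 0.41, 0.48, 0.61, 0.72, 0.79, 0.88, 0.97]
sp = [0.09, 0.22, 0.28, 0.44, 0.52, 0.58, 0.69, 0.83, 0.86, 0.99]
r = pearson_r(ws, sp)
print(r, r_confidence_interval(r, len(ws)))
```

If the lower bound of such an interval is high, one of the two detectors can be dropped from the set with little loss of information, which is the reduction idea of the paper.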
The authors propose a method for automatic classification of spatial objects in images under conditions of a limited data set. The robustness of the method to distortions appearing in images due to natural phenomena and partial overlap of urban infrastructure objects is investigated. High classification accuracy, when using existing approaches, requires a large training sample, including data sets with distortions, which significantly increases computational complexity. The paper proposes a method for a two-step topological analysis of images. Topological features are initially extracted by analyzing the image in the brightness range from 0 to 255, and then from 255 to 0. These features complement each other and reflect the topological structure of the object. Under certain deformations and distortions, the object preserves its structure in the form of the extracted features. The advantage of the method is a small number of patterns, which reduces the computational complexity of training compared to neural networks. The proposed method is investigated and compared with a modern neural network approach. The study was performed on the DOTA dataset (Dataset for Object deTection in Aerial images) containing images of spatial objects of several classes. In the absence of distortion in the image, the neural network approach showed a classification accuracy of over 98 %, while the proposed method achieved about 82 %. Distortions such as a 90-degree rotation, a 50 % narrowing, a 50 % edge truncation, and their combinations were then applied in the experiment. The proposed method showed its robustness and outperformed the neural network approach. In the most difficult combination of the test, the decrease in classification accuracy of the neural network was 46 %, while that of the described method was 12 %. The proposed method can be applied in cases with a high probability of distortion in the images.
Such distortions arise in the field of geoinformatics when analyzing objects of various scales, under different weather conditions, partial overlap of one object with another, in the presence of shadows, etc. It is possible to use the proposed method in vision systems of industrial enterprises for automatic classification of the parts that belong to superimposed objects.
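The two-direction brightness sweep can be sketched as follows; here the extracted feature is simply the number of connected components per threshold in each sweep direction, a much cruder signature than the paper's actual topological features:

```python
from collections import deque

def components(img, thresh, above=True):
    """Count 4-connected components of pixels >= thresh (above=True) or < thresh."""
    h, w = len(img), len(img[0])
    mask = [[(img[y][x] >= thresh) == above for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                      # new component found, flood-fill it
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

def topological_signature(img, thresholds=(64, 128, 192)):
    """Features from both sweep directions: bright components (0 -> 255 sweep)
    followed by dark components (255 -> 0 sweep) at each threshold."""
    up = [components(img, t, above=True) for t in thresholds]
    down = [components(img, t, above=False) for t in thresholds]
    return up + down

# Tiny synthetic image: two bright blobs on a dark connected background
img = [[0,   0, 0,   0, 0],
       [0, 200, 0, 200, 0],
       [0, 200, 0, 200, 0],
       [0,   0, 0,   0, 0]]
print(topological_signature(img))  # two bright components, one dark background
```

Because component counts are invariant to rotation and fairly stable under cropping, such features degrade more gracefully under the distortions listed above than raw pixel patterns do.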
Big data cybersecurity has attracted growing attention in recent years with the development of advanced machine learning and deep learning classifiers. These new classifier algorithms have significantly improved Intrusion Detection Systems (IDS). In these classifiers, the performance is positively influenced by highly relevant features, while less relevant features negatively influence the performance. However, considering all the attributes, especially high dimensional ones, increases computational complexity. Hence it is essential to reduce the dimensionality of the attributes to improve the classifier performance. To achieve this objective, an efficient dimensionality reduction approach is presented through the development of the Fuzzy Optimized Independent Component Analysis (FOICA) technique. The standard Independent Component Analysis (ICA) is coupled with fuzzy entropy to transform the high dimension attributes into low dimension attributes and helps in selecting highly informative low-dimensional attributes. These selected features are fed to efficient hybrid classifiers, namely Hyper-heuristic Support Vector Machines (HH-SVM), Hyper-Heuristic Improved Particle Swarm Optimization based Support Vector Machines (HHIPSO-SVM), and Hyper-Heuristic Firefly Algorithm based Convolutional Neural Networks (HHFA-CNN), to classify the cybersecurity data and identify the intrusions. Experiments are conducted over two cybersecurity datasets and real-time laboratory data, and the outcomes indicate the superiority of the suggested IDS model based on FOICA dimensionality reduction.
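The fuzzy-entropy criterion used to rank components can be sketched with the De Luca–Termini fuzzy entropy (an assumption about the exact form used in FOICA): components whose membership degrees stay close to 0 or 1 carry little fuzziness and are therefore more informative for selection.

```python
import math

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzzy entropy of a set of membership degrees in [0, 1].
    0 for crisp memberships (all 0 or 1), maximal when all degrees are 0.5."""
    h = 0.0
    for mu in memberships:
        if 0.0 < mu < 1.0:
            h -= mu * math.log(mu) + (1.0 - mu) * math.log(1.0 - mu)
    return h / len(memberships)

# Two candidate components (illustrative membership degrees):
crisp = [0.02, 0.97, 0.05, 0.99, 0.01]   # near-crisp -> low entropy, keep
vague = [0.45, 0.55, 0.50, 0.48, 0.52]   # ambiguous -> high entropy, drop
print(fuzzy_entropy(crisp), fuzzy_entropy(vague))
```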
An optimal swift key generation and distribution for QKD
Mallavalli Raghavendra Suma, Perumal Madhumathy
Secured transmission between users is essential for communication system models. Recently, cryptographic schemes were introduced for secured and secret transmission between cloud users. In a cloud environment, many security issues occur among cloud users, such as account hacking, data breaches, broken authentication, compromised credentials, and so on. Quantum mechanics has been implemented in cryptography, which made it efficient for strong security concerns over outsourced data in a cloud environment. Therefore, the present research focuses on providing excellent security for cloud users utilizing a swift key generation model for QKD cryptography. The proposed Quantum Key Distribution (QKD) scheme, known as Cloud QKDP, is designed to be entirely secure. Initially, a random bit sequence is generated to synchronize the channel; an eavesdropper is unable to synchronize these parameters between the parties. In this key reconciliation technique, the random bit sequence is concatenated with the photon polarisation state. The BB84 protocol is improved by optimizing its bit size using FireFly Optimization (FFO) at the compatibility state, and in the next state both transmitter and receiver generate a raw key. Once the key is generated, it is used for the transmission of messages between cloud users. Furthermore, a Python environment is utilized to execute the proposed architecture; the accuracy rate of the proposed model attained 98 %, and the error rate is 2 %. This proves that the proposed firefly optimization based swift key generation model for QKD performs better than previous algorithms.
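The raw-key generation step of BB84, on which the scheme builds, can be sketched as basis sifting: both sides pick random bases, and only the bit positions where the bases coincide survive into the raw key. The FFO bit-size optimization and the reconciliation steps of the paper are omitted here.

```python
import random

def bb84_sift(n, seed=7):
    """Toy BB84 sifting: keep Alice's bits where Alice's and Bob's bases coincide.
    With matching bases and no eavesdropper, Bob measures Alice's bit exactly."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]   # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    raw_key = [bit for bit, ba, bb in zip(alice_bits, alice_bases, bob_bases)
               if ba == bb]
    return raw_key

key = bb84_sift(64)
print(len(key), key[:8])  # roughly half of the positions survive sifting
```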
The widespread increase in the volume of processed information at the objects of critical information infrastructure, presented in text form in natural language, causes a problem of its classification by the degree of confidentiality. The success of solving this problem depends both on the classifier model itself and on the chosen method of feature extraction (vectorization). It is required to transfer to the classifier model the properties of the source text containing the entire set of demarcation features as fully as possible. The paper presents an empirical assessment of the effectiveness of linear classification algorithms depending on the chosen method of vectorization, as well as on the number of configurable parameters in the case of the Hash Vectorizer. State text documents, conditionally treated as confidential, are used as a dataset for training and testing the classification algorithms. The choice of such a text array is due to the presence of specific terminology found everywhere in declassified documents. Terminology, being a primitive demarcation boundary and acting as a classification feature, facilitates the work of classification algorithms, which in turn allows one to focus on the share of the contribution made by the chosen method of vectorization. The metric for evaluating the quality of the algorithms is the magnitude of the classification error, which is the complement of the proportion of correct answers of the algorithm (accuracy). The algorithms were also evaluated according to the training time. The resulting histograms reflect the magnitude of the error of the algorithms and the training time. The most and least effective algorithms for a given vectorization method are identified. The results of the work make it possible to increase the efficiency of solving real practical problems of classifying small-volume text documents characterized by specific terminology.
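The configurable parameter of the Hash Vectorizer is the number of hash buckets. The hashing trick it relies on can be sketched in a few lines; this is a toy unigram version, not the exact implementation evaluated in the paper:

```python
import hashlib

def hash_vectorize(text, n_features=16):
    """Map a text to a fixed-length count vector with the hashing trick:
    each token is hashed into one of n_features buckets, no vocabulary needed."""
    vec = [0] * n_features
    for token in text.lower().split():
        digest = hashlib.md5(token.encode("utf-8")).digest()
        idx = int.from_bytes(digest[:4], "big") % n_features
        vec[idx] += 1
    return vec

doc = "secret directive secret order"
v = hash_vectorize(doc)
print(v, sum(v))  # 4 tokens total; repeated 'secret' lands in the same bucket
```

Fewer buckets mean more hash collisions between distinct terms, which is exactly the accuracy/size trade-off the paper measures when varying this parameter.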
The paper proposes a new solution for recognizing the emotional state of a person (joy, surprise, sadness, anger, disgust, fear, and neutral state) by facial expression. Along with traditional verbal communication, emotions play a significant role in determining true intentions during a communicative act in various areas. There is a large number of models and algorithms for recognizing human emotions by class and applying them to accompany a communicative act. The known models show low accuracy in recognizing emotional states. To classify facial expressions, two classifiers implemented in the Keras library (ResNet50, MobileNet) were built, and a new architecture of a convolutional neural network classifier was proposed. The classifiers were trained on the FER 2013 dataset. Comparison of the results for the chosen classifiers showed that the proposed model has the best result in terms of validation accuracy (60.13 %) and size (15.49 MB), with a loss function of 0.079 on training and 2.80 on validation. The research results can be used to recognize signs of stress and aggressive human behavior in public service systems and in areas characterized by the need to communicate with a large number of people.
The paper proposes an approach to managing personnel development in service-oriented IT companies, which is based on a parametric model of training highly qualified personnel and implemented using intelligent algorithms. The parameterization of the training model is based on rough set theory. The implementation of the intelligent algorithms required the following technologies: thematic modeling with additive regularization (Additive Regularization Thematic Model), a special environment for the ongoing development of information system configurations, and Formal Concept Analysis. The approach was developed as a set of techniques and implemented as a software library. The efficiency assessment was carried out on a dataset that contains the results of processing 2,948 service requests handled by employees of a service-oriented IT company over 4 months. The results of the experimental evaluation showed that the use of the developed set of methods and the library of software tools increased the efficiency of the work of service engineers in terms of key indicators by 31 to 54 %. Application of the developed approach will make it possible to quickly adapt personnel qualifications in service-oriented IT companies in the context of a rapid change in production tasks and work environment without interrupting the work process.


In sailing conditions under rolling, dynamic compass errors may appear due to the influence of the redistributed magnetic masses of the ship, as well as of centripetal and tangential accelerations. The influence of these errors can be compensated by introducing a correction system into the measuring circuit of the compass which uses one gyroscopic angular rate sensor. The correction method is based on the use of a gyroscopic angular rate sensor with a vertical axis of sensitivity in the circuit. The generated signal is the difference between the output reading of the magnetic compass and the integrated signal of the angular rate sensor, which is insensitive to the action of translational acceleration and redistributed magnetic masses during the ship’s heel. The resulting difference contains the compass error caused by the rolling effect, which is then compensated for in the compass output signal. The choice of the parameters of the dynamic links of the circuit for the implementation of the correction system and the study of its operation were carried out by simulation using the proposed analytical expression for the error of the magnetic compass caused by the rolling effect. Comparison of the simulation results with the results of experimental studies of the compass was carried out using a specialized stand that simulates the yaw of a ship and allows one to change the value of the vector of the Earth’s magnetic field acting on the magnetic system of the compass. Experimental studies of the compass showed that the rolling error correction coefficient, which characterizes the degree of rolling suppression, is in the range 0.16–0.48 (average value 0.35) when the object’s angular oscillation periods change from 6 to 28 s, while in modeling the average value of the coefficient is 0.21.
The underestimation of the correction coefficient in modeling is due to the fact that the dynamic properties of the compass rose are not taken into account, and it depends on the ratio of the periods of natural oscillations of the compass to the periods of disturbing influences. The results confirmed the high efficiency of the considered compass correction system and the required quality of the developed specialized stand for evaluating its work. The results of the study can be used in the development of modern magnetic compasses to ensure high accuracy of directional guidance through the use of the proposed correction system.
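The correction principle can be sketched in an idealized discrete-time simulation, assuming a drift-free rate gyro, a steady course, and none of the dynamic links the paper actually tunes; the heading and error values are illustrative:

```python
import math

def simulate_correction(psi_true, err_amp, omega, dt, n):
    """Idealized correction: the difference between the compass reading and an
    integrated rate-gyro signal estimates the rolling-induced error, which is
    then subtracted from the compass output. Returns the worst residual."""
    integrated = psi_true            # gyro integration, initialized once
    yaw_rate = 0.0                   # steady course: the true yaw rate is zero
    max_residual = 0.0
    for k in range(n):
        t = k * dt
        compass = psi_true + err_amp * math.sin(omega * t)  # disturbed reading
        rolling_error = compass - integrated                # error estimate
        corrected = compass - rolling_error                 # corrected heading
        max_residual = max(max_residual, abs(corrected - psi_true))
        integrated += yaw_rate * dt
    return max_residual

# Illustrative: a 3-degree rolling error with a 10 s period on a steady 45-degree course
print(simulate_correction(45.0, 3.0, 2 * math.pi / 10, 0.1, 600))
```

In this ideal setting the rolling error is removed exactly; the finite suppression coefficients (0.16–0.48) reported above arise precisely from the gyro and filter dynamics that this sketch ignores.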
The paper proposes a new analytical model of the drain current in AlGaN-GaN high-electron-mobility transistors (HEMT) on the basis of a polynomial expression for the Fermi level as a function of the concentration of charge carriers. The study investigated the influence of parasitic resistances (on the source and drain sides), velocity saturation, the amount of aluminum in the AlGaN barrier, and low-field mobility. The output characteristics, cut-off frequency, and transconductance were derived from the parameters of the microwave signal. Comparison of analytical calculations with experimental measurements confirmed the validity of the proposed model.
Imputation and system modeling of acid-base state parameters for different groups of patients 
Dmitry I. Kurapeev, Mikhail S. Lushnov, Tianxing Man, Nataliya A. Zhukova
The paper investigated the possibility of correct replacement of missing values in sets of acid-base state parameters measured in arterial and venous blood for groups of patients with different outcomes of the disease (“discharged”, “died”, “transferred to another medical institution”), as well as the prospects of applying individual optimization multidimensional estimates of these biomedical parameters in the form of projections onto a one-dimensional space. The relevance of these tasks is determined by the need for the full use of medical data in the analysis of large repositories of information of medical organizations and the provision of verified multidimensional assessments of biomedical systems to doctors from a large range of patient health indicators. A statistical method based on discriminant analysis procedures has been applied to verify the correctness of the imputed data sets. Further, the imputed data set was processed to obtain a symmetric correlation matrix optimized in a certain way and the accompanying logarithms of criterion functions, which serve as individual system assessments of the condition of each patient in different groups of patients at a certain point in the study. After that, to identify differences in the logarithms of the criterion functions of the acid-base state parameters between groups of patients with different outcomes, the authors used the method of calculating the multidimensional Hotelling T2 statistics. The correctness of the application of discriminant analysis procedures to verify the imputation of data sets is shown. Differences in the logarithms of the criterion functions of the acid-base state indicators between venous and arterial blood by patient outcome groups were revealed. Significant differences in the parameters of the acid-base state based on the multidimensional Hotelling T2 statistics between groups of patients with different outcomes were revealed.
It is found that data imputation significantly increases the volume and representativeness of the sample under study. It is demonstrated that the imputed data make it possible to carry out a systematic statistical assessment of the totality of body parameters based on the calculation of the logarithms of the criterion functions of the acid-base state. These logarithms make it possible to reliably distinguish patients by outcome across the three groups: “discharged”, “died”, “transferred to another medical institution”. Differences in biochemical parameters according to the multivariate Hotelling T2 statistic were shown in 100 % of comparisons between these three groups of patients with COVID-19. The results of the study can be applied in the future development of information systems for individual medical biochemical and hematological devices and analyzers, and of corresponding artificial intelligence systems.
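The group comparisons above rest on the two-sample Hotelling T2 test for equality of multivariate means. A minimal sketch of that statistic (not the authors' full pipeline, which also involves imputation, criterion-function logarithms, and discriminant analysis) might look like this:

```python
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Two-sample Hotelling T2 test for equality of mean vectors.

    x, y: (n_patients, p) arrays of p biomedical parameters per patient.
    Returns the T2 statistic and the p-value of its F approximation.
    """
    nx, p = x.shape
    ny = y.shape[0]
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled within-group covariance matrix
    s = ((nx - 1) * np.cov(x, rowvar=False) +
         (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(s, diff)
    # T2 follows a scaled F distribution with (p, nx + ny - p - 1) dof
    f_stat = (nx + ny - p - 1) / (p * (nx + ny - 2)) * t2
    p_value = stats.f.sf(f_stat, p, nx + ny - p - 1)
    return t2, p_value
```

A small p-value indicates that the two outcome groups differ in their mean parameter vectors, which is the sense in which the abstract reports "significant differences between groups".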
The paper considers an approach, in terms of the time-optimal problem for Dubins cars, to forming control trajectories of moving objects (airplanes, ships) that have control restrictions and operate under external disturbances that are constant in magnitude and direction, with constant control values on each part of the trajectory. Instead of solving the optimization problem of the Pontryagin maximum principle, it is proposed to use a simple comparison of possible control strategies in order to determine the best among them in terms of speed. For each strategy, the control switching points on the trajectory are calculated by minimizing the difference between the specified coordinates of the endpoint and the coordinates of the point at which the trajectory arrives, depending on the choice of the parameters of two intermediate control switching points. The problem of finding the fastest trajectory for an object from one point to another is solved using the Dubins approach, with the coordinates and heading angles given for both points. All calculations were carried out taking into account wind and water disturbances that are constant in magnitude and direction and distort the trajectory. The problem of finding the Dubins paths is reduced to finding the parameters of two intermediate points at which the control changes. Different possibilities for changing controls are considered, taking into account the existing restrictions. The lengths of the trajectories are calculated, and the best travel time is selected. The proposed method considers several trajectories acceptable in terms of constraints, taking the external disturbances into account, from which the optimal path is selected by comparison. Having multiple feasible strategies is beneficial when choosing a trajectory depending on the environment.
Instead of solving the nonlinear optimization problem of the Pontryagin maximum principle, a simple comparison of possible control strategies is used to determine the best among them in terms of speed; each possible strategy is found from the condition of minimizing the residual between the analytical solution and the boundary condition at the end of the trajectory. When searching for possible trajectories, the control constraints, the external disturbances that are constant in magnitude and direction, and the constancy of the control value on each part of the trajectory are taken into account. Together, these factors make it possible to adequately simulate the motion of a ship. Physically, restrictions on control (the turning radius) are associated with a limited steering angle. Restrictions can be associated not only with the turning radius but also with the geographical features of a specific area: for unmanned aerial vehicles these may be buildings and terrain, and for ships the coastline, shoals, islands, etc. It may therefore turn out that the time-optimal solution found cannot be realized in practice. In that case, the proposed method makes it possible to choose another trajectory among those that are less optimal in terms of speed.
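The comparison-of-strategies idea can be illustrated on the classical wind-free Dubins geometry. The sketch below computes only two of the candidate words (LSL and RSR, i.e., two same-direction turns joined by a straight segment) and picks the shorter; the paper's method additionally handles the remaining words, the control restrictions, and the constant wind/current drift:

```python
import math

def mod2pi(angle):
    return angle % (2.0 * math.pi)

def dubins_csc_length(start, goal, r):
    """Length of the shorter of the LSL and RSR Dubins words.

    start, goal: (x, y, heading) poses; r: minimum turning radius.
    Wind drift and the remaining Dubins words are deliberately omitted.
    """
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    d = math.hypot(dx, dy) / r              # distance normalized by the radius
    theta = math.atan2(dy, dx)
    a, b = mod2pi(start[2] - theta), mod2pi(goal[2] - theta)
    lengths = []
    # LSL word: left arc, straight segment, left arc
    p_sq = 2.0 + d * d - 2.0 * math.cos(a - b) + 2.0 * d * (math.sin(a) - math.sin(b))
    if p_sq >= 0.0:
        ang = math.atan2(math.cos(b) - math.cos(a), d + math.sin(a) - math.sin(b))
        lengths.append(mod2pi(ang - a) + math.sqrt(p_sq) + mod2pi(b - ang))
    # RSR word: right arc, straight segment, right arc
    p_sq = 2.0 + d * d - 2.0 * math.cos(a - b) + 2.0 * d * (math.sin(b) - math.sin(a))
    if p_sq >= 0.0:
        ang = math.atan2(math.cos(a) - math.cos(b), d - math.sin(a) + math.sin(b))
        lengths.append(mod2pi(a - ang) + math.sqrt(p_sq) + mod2pi(ang - b))
    return r * min(lengths)
```

In the same spirit as the paper, the "best" strategy is selected simply by comparing the candidate lengths (and hence travel times at constant speed), so an infeasible candidate can be discarded and the next-best one chosen instead.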
The paper proposes a mathematical model of the epidemic process that takes into account the dependence of the rates of cure and of loss of immunity on time. Today, mathematical models of epidemics based on the basic Kermack–McKendrick model have become widespread. The best-known models are Susceptible-Infected-Recovered (SIR) and Susceptible-Exposed-Infected-Recovered (SEIR). These models are based on dividing the population into separate groups that are in different epidemic states. The models are described by differential equations similar to the birth and death equations of radioactive transformations of elements in a radioactive chain. However, this approach does not take into account the dependence of the probabilities of the population's transition from group to group on the time spent in treatment and in the process of losing acquired immunity. The known models do allow analyzing the course of the epidemic over long periods of time, when the process can enter a stationary state. The paper proposes a mathematical model based on dividing the population into separate groups. The first group consists of healthy people susceptible to infection through contact with members of the second group, which includes the infected population. Members of the third group are undergoing treatment; the fourth group includes members of society who have recovered and have antibodies, as well as those who are vaccinated. The fifth group consists of deceased members of society. In contrast to the SIR and SEIR models, the proposed approach takes into account that immunity is lost over time, so that people who have recovered move back to the group susceptible to infection. The dependences of the transition probabilities between groups on the time spent both in treatment and in the loss of acquired immunity have been taken into account.
Thus, the proposed mathematical model is based on five integro-differential equations, two of which are partial differential equations. A new mathematical model has been formulated that makes it possible to take into account the dependence of the cure rate and of the probability of transition from the vaccinated state to the initial state on the time spent in the corresponding state. It is shown that the proposed model is autocatalytic. With increasing time, a state of bistability is observed, when, under the same boundary conditions, two stable states coexist. Switching between the states is determined by the epidemic spread control parameter found in the work. One of the stable states is stationary and leads to the end of the epidemic; the other leads to the population's extinction. It has been shown that, for the stationary regime, the form of the distribution function over treatment time and over time spent in the vaccinated state does not affect the final result in any way. The conditions for suppressing the epidemic and for managing the process of its development are formulated. Numerical experiments were carried out to simulate the epidemic spreading process under the assumption that all transition probabilities are constant. Integration of the original system of equations was carried out using the Radau algorithm for stiff differential equations. The results of the numerical simulations confirmed that the experimental data agree with the theoretical predictions for the control parameter. The results of the work can be used to organize the management of the epidemic spreading process in order to suppress it as soon as possible by changing the value of the control parameter.
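For the special case of constant transition probabilities used in the numerical experiments, the model reduces to an ODE system of the SIR family with loss of immunity. A minimal sketch with hypothetical rate values, integrated with the same Radau method for stiff equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical constant rates; the paper's general model lets the cure and
# immunity-loss rates depend on the time already spent in the state.
beta, gamma, delta, mu = 0.35, 0.10, 0.01, 0.002

def sirs(t, y):
    s, i, r, d = y            # susceptible, infected, recovered/vaccinated, deceased
    n = s + i + r             # living population
    return [
        -beta * s * i / n + delta * r,        # immunity is lost at rate delta
        beta * s * i / n - (gamma + mu) * i,  # cure at rate gamma, death at rate mu
        gamma * i - delta * r,
        mu * i,
    ]

sol = solve_ivp(sirs, (0.0, 400.0), [0.99, 0.01, 0.0, 0.0], method="Radau")
```

The term `delta * r` returning recovered individuals to the susceptible group is the qualitative feature that distinguishes the paper's model from plain SIR/SEIR; the population fractions sum to one by construction.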
The paper studies the regularities of the pulsed outflow of a mixture of air and fine powder, which partially fills a cylindrical ejection channel, in both one-dimensional and two-dimensional formulations. The dynamics of the gas-dispersed medium are described within the framework of the Eulerian continuum approach, with different velocities and temperatures for the gas and the powder particles. Analytical self-similar solutions are constructed in the equilibrium approximation. For the numerical solution of the problem, a hybrid large-particle method of second-order accuracy in space and time is used. Comparison of the exact self-similar and numerical solutions confirmed the reliability of the method. The outflow of the mixture of high-pressure gas and powder particles has a pronounced wave character, which is associated with the decomposition of the initial discontinuity, the movement and refraction of waves at the interface of the media inside the channel, and the reflection of waves from its bottom. The characteristic time intervals of the wave process and the corresponding distributions of gas-dynamic quantities are established. As functions of the generalized self-similar variable, the pressure, density, and velocity of the mixture are monotonic, while the profile of the specific (per unit cross-section) mass flow has a maximum in the critical section. Dimensionless parameters and the specific mass flow of the two-phase medium in the outlet section of the discharge channel are determined. For a channel limited by the size of the high-pressure chamber, a two-dimensional physical picture of the formation and evolution of the gas-dispersed mixture was studied. At the initial stage of the outflow, an “anomalous” grouping of powder particles is observed, with the formation of a shock-wave structure in the subsonic regime of the carrier gas flow.
After the powder layer leaves the channel, the pure gas flowing out of it accelerates to supersonic speed, and an intense vortex motion develops in the wake of the gas-dispersed jet. The calculated parameter values make it possible to justify the achievable level of technical characteristics (speed, mass flow rate) of the flow of the working gas-dispersed medium of pulsed powder devices. The proposed methodology and the results obtained provide a basis for making rational decisions at the early stages of design and for preparing initial data on design and operating parameters for testing prototypes of pulsed powder technical devices.
Vectorized numerical algorithms for the solution of continuum mechanics problems
Nikita A. Brykov, Konstantin N. Volkov, Vladislav N. Emelyanov
The aim of the work is to study the possibilities provided by new information technologies, object-oriented programming tools, and modern operating systems for solving boundary value problems of continuum mechanics described by partial differential equations. To discretize the governing equations, we applied the finite difference and finite volume methods, which are widely used to solve problems in fluid and gas mechanics. The paper considers the implementation of finite difference methods and the finite volume method with vectorized grid structures, including access to the inner and boundary cells of the grid, as well as the features of implementing the algorithms at singular points of the computational domain. To solve boundary value problems described by partial differential equations, we developed an approach to the construction of vectorized algorithms and considered the features of their software implementation in the MATLAB package. Vectorization in such problems, which eliminates nested loops, is ensured by appropriate data organization and the use of vectorized operations. On the one hand, the developed algorithms make wide use of MATLAB functions designed for processing vectors and sparse matrices; on the other hand, they are distinguished by high efficiency and a computation speed comparable to those of programs written in C/C++. The main results include the numerical solution of a number of continuum mechanics problems associated with the calculation of stresses in a solid body and of the velocity and temperature fields in the flow of a viscous incompressible fluid. The features of the discretization of the governing equations and the implementation of the corresponding finite-difference and finite-volume algorithms are shown.
The use of the MATLAB system opens up new possibilities for the formalization and implementation of finite-difference and finite-volume methods for the numerical solution of boundary value problems in continuum mechanics. Although the capabilities of the developed algorithms are illustrated with fairly simple problems, they admit a relatively straightforward generalization to more complex ones, for example, solving the Euler and Navier–Stokes equations. As part of the work, computational modules were prepared using the package's user programming tools; they expand the capabilities of the package and are focused on solving problems in continuum mechanics.
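The vectorization pattern described above, replacing nested loops over grid cells with operations on whole index ranges, carries over directly from MATLAB to NumPy, which is used here for concreteness. A minimal sketch (Jacobi iteration for the 2-D Laplace equation, standing in for the temperature-field problems mentioned above):

```python
import numpy as np

# Jacobi iteration for the 2-D Laplace equation on the unit square:
# one hot boundary (t = 1), the other three cold (t = 0). The interior
# update uses array slices instead of nested i, j loops -- the same
# vectorization pattern the MATLAB implementation relies on.
n = 51
t = np.zeros((n, n))
t[0, :] = 1.0                      # Dirichlet condition on the hot wall
for _ in range(2000):
    t[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1] +
                            t[1:-1, :-2] + t[1:-1, 2:])
```

Because the right-hand side is evaluated into a temporary array before assignment, this is a true Jacobi sweep; the boundary rows and columns are never written, so the Dirichlet conditions are preserved automatically.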
Precise modelling and accurate estimation of long-term evolution (LTE) channels are essential for numerous applications such as video streaming and the efficient use of bandwidth and power. This is driven by the fact that data traffic is increasing continuously with advances in the Internet of Things. Previous works focused mainly on designing models that estimate the channel using traditional minimum mean square error (MMSE) and least squares (LS) algorithms. The proposed model enhances LTE channel estimation by combining the LS and MMSE methods with the Taguchi genetic algorithm (GA) and particle swarm optimization (PSO). We consider LTE operating in the 5.8 GHz range. Pilot signals are sent randomly along with the data to obtain information about the channel; they help to decode the signal at the receiver and to estimate the channel using LS and MMSE combined with the Taguchi GA and PSO, respectively. The performance of the computational-intelligence-based model was evaluated in terms of the bit error rate (BER), signal-to-noise ratio, and mean square error. The proposed model achieved gains of 2.4 dB and 5.4 dB in BER performance compared to the MMSE and LS algorithms, respectively.
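The LS and MMSE baselines that the model builds on can be sketched for a single set of pilot tones. The snippet below uses a hypothetical flat-fading setup and a scalar MMSE shrinkage (a full MMSE estimator uses the channel correlation matrix, and the paper further tunes the combination with the Taguchi GA and PSO):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots, snr_db = 4096, 10
snr = 10.0 ** (snr_db / 10.0)

# QPSK pilot symbols, Rayleigh channel taps, complex Gaussian noise
x = np.exp(1j * 2 * np.pi * rng.integers(0, 4, n_pilots) / 4)
h = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)
noise = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2 * snr)
y = h * x + noise                      # received pilot observations

h_ls = y / x                           # least-squares estimate per pilot tone
h_mmse = (snr / (snr + 1.0)) * h_ls    # scalar MMSE shrinkage, unit channel power

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
mse_mmse = np.mean(np.abs(h_mmse - h) ** 2)
```

The LS estimate simply inverts the known pilot symbol, while the MMSE shrinkage trades a small bias for lower variance, which is why MMSE outperforms LS at moderate SNR and why the abstract reports different gains over each baseline.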


Implementation of a clinical decision support system to improve the medical data quality for hypertensive patients
Mikhail V. Ionov, Ekaterina V. Bolgova, Nadezhda E. Zvartau, Natalia G. Avdonina, Marina A. Balakhontceva, Sergey V. Kovalchuk, Alexandra O. Konradi
The digitalization of healthcare relies heavily on data analytics from medical information systems. Such systems aggregate information from heterogeneous sources, including electronic medical records. Improving the quality of data from electronic medical records is a modern challenge for developers of medical information systems. The authors have designed a decision support system with an expanded set of auxiliary functions to address problems of human-computer interaction and to increase the completeness and reliability of medical information. In this paper, the applicability of the existing decision support system is investigated using the example of medical data of patients with arterial hypertension. The decision support system was tested among medical specialists, and the impact of its implementation on the number of errors made when filling out an electronic medical record was assessed. A software module was created and integrated into the working version of the medical information system at the Almazov National Medical Research Centre. Test implementation of the system made it possible to reduce the number of errors and to increase satisfaction with the presented information among patients with arterial hypertension.
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.