Summaries of the Issue


Recently, television cameras operating in the near-infrared range have become increasingly widespread. The advantage of shooting in the short-wave infrared range is the ability to observe objects in low light and difficult weather conditions. Such cameras can use hybrid sensors that consist of an infrared photocathode and an electron-sensitive silicon-based matrix in a single vacuum volume. The paper investigates the capabilities of one of the latest samples of such a camera, created at JSC “NRI Electron”. Images of a human, water, ice, snow and other objects in the wavelength range of 0.95–1.7 microns have been analyzed. The images were taken with a television camera based on a hybrid television infrared sensor, which consists of a photocathode and a silicon charge-coupled device placed in close proximity to each other in a single vacuum volume. Illumination of the objects in the near-infrared range was produced by a continuous-spectrum lamp with a maximum of the detected radiation at a wavelength of 1.55 μm. The authors compared the images obtained in the near-infrared range with those obtained in the visible region. An explanation is given for the differences between snow images and water and ice images in the near-infrared range. As an example, the difference in light transmission between the surfaces of materials for diving equipment, such as coated and open-cell neoprene, is demonstrated. Due to the significant contrast shown in the near-infrared range by images of various objects on the surface of water and ice, it is possible to create an effective system for searching for objects on the water. The paper discusses the advantages of the proposed visual search system compared to other systems, including passive systems and those operating in the MWIR and LWIR ranges. The research outlined the prospects of using the new camera for building an effective search system for objects and people on water surfaces.
The most effective way to improve the optical properties of silicon-based solar cells is to form textures on their surface. In this paper, the authors studied the influence of the geometric sizes of periodic pyramidal textures, formed on the surface of a silicon-based solar cell, on its photoelectric properties. Based on optical theory, it was determined that the angle at the base of the pyramid should be equal to 73°7ʹ12ʺ. However, using the Sentaurus TCAD program, it was found that the angle at the base of the pyramid should be 70°21ʹ0ʺ in order to reach the maximum efficiency, because the model takes into account all the electrical, optical and thermal properties of the solar cell. The modeling showed that the output power of a simple planar silicon-based solar cell was 6.13 mW/cm², while the output power of the solar cell covered with a pyramidal texture with a height of 1.4 μm was 10.62 mW/cm². It was found that the efficiency of the solar cell increases 1.6 times when it is covered with pyramids whose angle at the base is equal to 70°21ʹ0ʺ.
Scintillation gamma radiation sensors based on solid-state photomultipliers in wireless industrial internet networks
Ilya O. Bokatyi, Victor M. Denisov, Andrey V. Timofeev, Alexander B. Titov, Joel Jose Puga Coelho Rodrigues, Valery V. Korotaev
The article examines the principles of developing wireless networks of autonomous gamma sensors in order to create systems for spatial environmental radiation monitoring. The main task of such systems is to control the level of gamma radiation in areas where potential sources of ionizing radiation are located. An autonomous gamma-ray spectrometer is used as a measuring sensor. The authors propose to apply measuring sensors based on a silicon photomultiplier to create autonomous wireless industrial Internet networks for radiation monitoring. To confirm the possibility of using this class of receivers as part of gamma spectrometers, the main structural elements of the system were modeled, and an experimental model of the gamma spectrometer was prototyped. The linearity and energy resolution of the experimental sample were also investigated. To test the gamma spectrometer design, a CsI(Tl) scintillation crystal and a SensL Array-60035-4P photomultiplier were used. The range of recorded energies extends from 121 keV to 1332 keV, the relative energy resolution for the 137Cs peak is 11.07 %, and the linearity of the transfer characteristic is 99.91 %. Based on this sensor, the architecture of an automated wireless system for monitoring the spatial distribution of gamma radiation has been developed. The results of the work allow the use of radiation monitoring systems in accordance with the requirements of Industry 4.0.
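The relative energy resolution quoted above can be illustrated with a short sketch (not the authors' code; the Gaussian peak width used below is a hypothetical value chosen to reproduce a resolution close to the reported 11.07 %):

```python
# Illustrative sketch: relative energy resolution of a spectrometer peak,
# defined as FWHM divided by the peak energy.
import math

def relative_resolution(fwhm_kev: float, peak_kev: float) -> float:
    """Relative energy resolution in percent: R = FWHM / E * 100."""
    return fwhm_kev / peak_kev * 100.0

def fwhm_from_sigma(sigma: float) -> float:
    """For a Gaussian peak, FWHM = 2*sqrt(2*ln 2) * sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# A 137Cs photopeak at 662 keV with a hypothetical sigma of 31.1 keV
# gives R close to 11 %.
fwhm = fwhm_from_sigma(31.1)
print(round(relative_resolution(fwhm, 662.0), 2))
```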


Modern control systems use digital networks for data transmission. Such systems are subject to random delays and loss of data packets. The aim of the research is to study, by simulation modeling, the impact of data buffering on the quality of process control in systems with a limited buffer volume for data packets, and to compensate for this influence using the Smith predictor. A distinctive feature of the proposed solution is the compensation of random delay. To improve the quality of networked control of technological processes, the authors propose to use the Smith predictor. The Smith predictor includes a model of the object and a buffer for data packets. The buffer is used to generate a random delay time; its operation is determined by the mode of data transmission over the network channel. The functioning of the networked control system was simulated in the Simulink environment of the Matlab system. The novelty of the developed simulation model lies in the fact that it is based on modeling the time break of the information flow. The simulation was carried out for data packet buffer volumes ranging from 1 to 5 and data transmission probabilities ranging from 0.9 to 0.4. The results of the study showed that using the Smith predictor to compensate for random delay significantly improves the quality of transients in networked control systems. The developed simulation models can be used in the design of new networked control systems and in the modernization of systems already used in practice.
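The compensation idea can be sketched in a few lines of discrete-time simulation. This is a minimal illustration, not the authors' Simulink model: the paper treats random, buffer-induced delays, while a fixed delay and a hypothetical first-order plant are used here for clarity.

```python
# Minimal discrete-time Smith predictor sketch for a constant transport delay.
from collections import deque

a, b = 0.9, 0.1          # first-order plant: y[k+1] = a*y[k] + b*u[k-d]
d, kp, r = 5, 5.0, 1.0   # delay steps, proportional gain, setpoint

y = ym = 0.0
u_buf = deque([0.0] * d, maxlen=d)    # network delay on the control signal
ym_buf = deque([0.0] * d, maxlen=d)   # delayed copy of the model output

for _ in range(200):
    # Smith predictor feedback: measured output plus the difference between
    # the undelayed and the delayed internal-model outputs.
    fb = y + ym - ym_buf[0]
    u = kp * (r - fb)
    # the internal model (assumed perfect) advances without delay
    ym_buf.append(ym)
    ym = a * ym + b * u
    # the real plant receives the control signal after d steps
    y = a * y + b * u_buf[0]
    u_buf.append(u)

print(round(y, 3))  # settles near r*kp*b/(1 - a + kp*b) = 5/6
```

With the delay moved outside the feedback loop, the closed loop behaves like the delay-free loop with a delayed output, so gains that would destabilize the naive loop remain usable.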
The paper considers an approach to the formation of program control trajectories of moving objects (UAVs, ships) as a solution to the time-optimal problem in terms of Dubins path search. Instead of directly applying Pontryagin’s maximum principle, it is proposed to use a simple analysis of possible control strategies in order to determine among them the one optimal in terms of time spent on the trajectory. The problem of finding the shortest trajectory of an object from one point to another is solved; for both points their coordinates and heading angles are given, as well as three absolute values of the circulation radii corresponding to the given control signals on each of the three sections of the trajectory. The problem of finding the Dubins curves is reduced to determining the parameters of two intermediate points at which the control changes. All possible directions of control change are considered, taking into account the existing constraints; the lengths of the corresponding motion trajectories are calculated, and the optimal one is selected. The problem of constructing a trajectory that ensures a smooth conjugation of two linear trajectory fragments and passes through the point of their intersection is solved as well. The solution of the optimal trajectory problem using the Dubins car gives a single trajectory. In contrast, the proposed method considers several trajectories admissible by the constraints, from which the optimal one is selected by exhaustive search. The presence of several feasible strategies provides an advantage in each specific situation of choosing a trajectory depending on the environment.
The approach is motivated by the limited number of possible control strategies for Dubins paths, as well as by the simplicity of the analytical calculations for each of them, which allows performing these calculations in real time. The high speed of calculations for determining the optimal trajectory is due to the fact that the proposed method does not require the complex computations of the nonlinear optimization problem that follows from Pontryagin’s principle.
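The strategy-enumeration idea can be sketched as follows. This is a simplified illustration, not the authors' algorithm: only two of the six Dubins path types (LSL and RSR) are computed analytically and the shorter is chosen; a full planner would enumerate all six. The configurations and turning radius are made up for the example.

```python
# Enumerate candidate Dubins strategies, compute each length analytically,
# and pick the minimum. States are (x, y, heading); heading in radians.
import math

TWO_PI = 2 * math.pi

def lsl_length(p0, p1, r):
    """Left turn, straight, left turn."""
    x0, y0, t0 = p0
    x1, y1, t1 = p1
    # centers of the left-turning circles
    c0 = (x0 - r * math.sin(t0), y0 + r * math.cos(t0))
    c1 = (x1 - r * math.sin(t1), y1 + r * math.cos(t1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    straight = math.hypot(dx, dy)
    phi = math.atan2(dy, dx)
    arc1 = (phi - t0) % TWO_PI      # counter-clockwise arcs
    arc2 = (t1 - phi) % TWO_PI
    return r * (arc1 + arc2) + straight

def rsr_length(p0, p1, r):
    """Right turn, straight, right turn (mirror of LSL)."""
    x0, y0, t0 = p0
    x1, y1, t1 = p1
    c0 = (x0 + r * math.sin(t0), y0 - r * math.cos(t0))
    c1 = (x1 + r * math.sin(t1), y1 - r * math.cos(t1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    straight = math.hypot(dx, dy)
    phi = math.atan2(dy, dx)
    arc1 = (t0 - phi) % TWO_PI      # clockwise arcs
    arc2 = (phi - t1) % TWO_PI
    return r * (arc1 + arc2) + straight

start, goal, radius = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0), 1.0
candidates = {"LSL": lsl_length(start, goal, radius),
              "RSR": rsr_length(start, goal, radius)}
best = min(candidates, key=candidates.get)
print(best, round(candidates[best], 3))  # a straight run of length 10.0 here
```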


The paper presents an experimental study of morphological changes of the Si(100) surface under electromigration conditions, carried out in situ by ultrahigh-vacuum reflection electron microscopy. The study aims to determine the temperature dependence of the effective electric charge of an adsorbed atom on the Si(100) surface. A system of concentric two-dimensional vacancy islands was formed on the surface of Si(100) samples by low-energy argon-ion sputtering and subsequent high-temperature annealing. Quasi-equilibrium conditions were created on the sample surface by compensating the sublimating flow from the surface with an external silicon source. Video images of the drift of vacancy islands were recorded under electromigration conditions with compensation of sublimation. Based on the processing and analysis of the video images, the authors described the dependence of the velocity of vacancy islands on the Si(100) surface for various temperatures and for the direction of the electric current along and across the dimer rows of the (2 × 1) superstructure inside the island. It is shown that the drift rate of vacancy islands does not depend on their size under quasi-equilibrium conditions. A simplified one-dimensional theoretical model has been constructed. It includes a single atomic step that moves via detachment of atoms from the step and their drift under the electromigration force in the absence of desorption and deposition of atoms on the surface. Based on the proposed model, the effective electric charge is estimated, and the temperature dependence of the effective charge in the range of 1010 to 1120 °C is obtained. The absolute value of the effective charge decreases linearly with increasing temperature. The sign of the effective charge is negative, and its average value is Z = –0.5 ± 0.3 elementary charges.
The obtained results can be used for creating structures with a countable number of atomic steps and act as secondary measures of height with reference to the silicon crystal lattice.
A study of the photocatalytic properties of chitosan-TiO2 composites for pyrene decomposition
Danila A. Tatarinov, Sofia R. Sokolnikova, Natalia A. Myslitskaya
In this work, nano- and microcomposites of chitosan-TiO2 were developed for the photocatalytic decomposition of pyrene, one of the polycyclic aromatic hydrocarbons. TiO2 nanoparticles were synthesized by laser ablation, and their sizes were determined by photon correlation spectroscopy. Nano- and microcomposites based on chitosan with different TiO2 particle contents were manufactured. The work studies the effect of TiO2 nano- and microparticles in the manufactured composites on the photodegradation of pyrene in model solutions of dimethyl sulfoxide under ultraviolet radiation. To assess the decrease in pyrene concentration in the solutions, the authors used luminescent analysis. Based on the results of the studies, pseudo-first-order kinetic graphs for pyrene degradation in the solutions were plotted. The analysis proves the efficiency of the obtained chitosan-TiO2 composites for the photocatalytic decomposition of pyrene. In 60 minutes, 68 % and 55 % of pyrene were photodegraded under ultraviolet irradiation using chitosan-TiO2 composites with TiO2 nanoparticles and with TiO2 microparticles, respectively. The developed chitosan-TiO2 composites are promising photocatalytic materials for the decomposition of polycyclic aromatic hydrocarbons in aqueous media. The method of manufacturing the composites does not require expensive equipment, and the composites are convenient for performing photocatalytic reactions.
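The pseudo-first-order description mentioned above implies ln(C0/C) = k·t, so the apparent rate constants can be backed out from the reported degradation fractions. This is a reconstruction from the abstract's figures, not the authors' calculation:

```python
# Back out the apparent pseudo-first-order rate constant k from the
# degraded fraction after time t, using ln(C0/C) = k*t.
import math

def rate_constant(degraded_fraction: float, t_min: float) -> float:
    """k = ln(C0/C) / t, with C/C0 = 1 - degraded_fraction."""
    return math.log(1.0 / (1.0 - degraded_fraction)) / t_min

k_nano = rate_constant(0.68, 60.0)   # chitosan-TiO2 with nanoparticles
k_micro = rate_constant(0.55, 60.0)  # chitosan-TiO2 with microparticles
print(round(k_nano, 4), round(k_micro, 4))  # ~0.019 and ~0.0133 min^-1
```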
The quality of epitaxial structures is significantly influenced by both the initial surface morphology and its transformation during growth. One of the roughening phenomena on silicon surfaces occurring during annealing, growth, exposure to electric current, and adsorption of foreign material is the formation of step bunches. The paper presents experimental studies of the shape transformation kinetics of atomic step bunches on the Si(001) surface, carried out under electromigration conditions when heated by a constant electric current directed down the steps in the temperature range of 1000–1150 °С. The samples were annealed in the ultra-high-vacuum chamber of a reflection electron microscope, followed by quenching to room temperature. The dependence of the average distance between steps on the number of steps in the bunch was measured with an atomic force microscope under atmospheric conditions. It was found that the experimentally obtained dependence obeys a power law (l ∝ N^α), where α varies from –0.68 to –0.36. The study confirmed the change in the elastic interaction potential of steps in bunches with increasing temperature. The results of the work advance the understanding of the bunching process on Si(001) at elevated temperatures.
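An exponent such as α in l ∝ N^α is typically extracted by linear regression on logarithms. The sketch below illustrates the fitting procedure with synthetic data (exactly α = −0.5); the experimental (N, l) pairs from the paper are not reproduced here:

```python
# Fit the exponent of a power law l = c * N**alpha by least squares
# on log(l) versus log(N).
import math

def fit_power_law_exponent(ns, ls):
    """Least-squares slope of log(l) vs log(N)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(l) for l in ls]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

ns = [2, 4, 8, 16, 32]
ls = [n ** -0.5 for n in ns]          # synthetic data with alpha = -0.5
print(round(fit_power_law_exponent(ns, ls), 6))  # -0.5
```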
Abnormal diffusion profile of adatoms on extremely wide terraces of the Si(111) surface
Evgeniia O. Soloveva, Dmitry I. Rogilo, Dmitry V. Sheglov, Alexander V. Latyshev
The authors experimentally studied the distribution of the adatom concentration on extremely wide terraces of the Si(111) surface whose dimensions are comparable with the diffusion length of adatoms. The extremely wide terraces were created during in situ experiments carried out by ultrahigh vacuum reflection electron microscopy by high-temperature annealing of Si(111) samples (more than 1000 °C) followed by rapid cooling to 750 °C to form 7 × 7 superstructure domains. A detailed analysis of the surface morphology of the terraces was carried out by ex situ atomic force microscopy under ambient conditions. Based on high-resolution (1.2 nm/pixel) atomic force microscopy images, panoramic topographic images of the terraces were formed. Digital processing of the panoramic images visualized the distribution of the adatom concentration n. For a terrace cooled from 1070 °C, central terrace regions show minimum n values around 0.13 BL (1 bilayer (BL) = 1.56 × 10¹⁵ cm⁻²); close to the monatomic step bordering the terrace, n increases to about 0.14 BL. The authors determined that this radial distribution n(r) at 1070 °C corresponds to the adatom diffusion coefficient D = 59 ± 12 μm²/s. It was found that, for a terrace cooled from 1090 °C, the approach assuming the same adatom diffusion length over the entire terrace does not describe the experimental n(r) distribution. For its analysis, the authors used the solution of the stationary diffusion equation under the assumption that D is not constant. Based on a numerical solution, the dependence of D on the experimentally measured n values was obtained. Under the assumption that adatom lifetime does not depend on n at 1090 °C, the adatom diffusion coefficient was found to decrease from 140 μm²/s at n = 0.093 BL (in the central terrace regions) to 5 μm²/s at n = 0.118 BL (near the step). 
The results of this work experimentally demonstrated that the control over adatom concentration can be used to significantly vary the diffusion properties of the adsorption layer on the crystal surface.


An experimental methodology for assessing the probability and danger of network attacks in automated systems
Irina G. Drovnikova, Elena S. Ovchinnikova, Anton D. Popov, Ilya I. Livshitz, Oleg O. Basov, Evgeniy A. Rogozin
The paper proposes a new method of conducting an experiment to assess the dynamics of the information conflict “Network attack – Protection system” in automated systems. The application of the methodology yields the quantitative values of the initial data necessary for assessing the probability and danger of network attacks in automated systems. The authors developed a methodology to determine the quantitative values of the characteristics, as well as the amount of damage from standard network attacks that affect the elements of automated systems. The use of the results makes it possible to observe the course of the information conflict “Network attack – Protection system” in dynamics, to calculate the probabilistic and temporal characteristics of network attacks, and to carry out an accurate quantitative assessment of the danger of their implementation in automated systems in the CPN Tools and MathCad software environments. The prospects for using the obtained results lie in the construction of models of actual attacks and in increasing the stability of automated systems.
In recent years, the task of selecting and tuning machine learning algorithms has been increasingly solved using automated frameworks. This is motivated by the fact that, when dealing with large amounts of data, classical methods are not efficient in terms of time and quality. This paper discusses the Auto-sklearn framework as one of the best solutions for automated selection and tuning of machine learning algorithms. A shortcoming of Auto-sklearn 1.0, which is based on Bayesian optimization and meta-learning, is investigated, and a solution to this problem is presented. A new method of operation based on meta-database optimization is proposed. The essence of the method is to use the BIRCH clustering algorithm to separate datasets into different groups. The selection criteria are the silhouette measure and the minimum number of initial Bayesian optimization configurations. The next step uses a random forest model trained on a set of meta-features and the resulting labels. Important meta-features are selected from the entire set. As a result, an optimal set of important meta-features is obtained, which is used to find the initial Bayesian optimization configurations. The described method significantly speeds up the search for the best machine learning algorithm for classification tasks. The experiments were conducted with datasets from OpenML to compare Auto-sklearn 1.0, 2.0 and a new version that uses the proposed method. According to the results of the experiment and Wilcoxon signed-rank tests, the new method outperforms the original versions in terms of time: it surpasses Auto-sklearn 1.0 and competes with Auto-sklearn 2.0. The proposed method helps to speed up finding the best solution for machine learning tasks. Optimization of such frameworks is reasonable in terms of saving time and other resources, especially when working with large amounts of data.
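The grouping step can be sketched with scikit-learn's own BIRCH implementation. This is an illustration of the idea only, not the paper's pipeline: the "meta-feature" vectors are synthetic blobs, and the number of groups is chosen purely by the silhouette measure.

```python
# Split synthetic meta-feature vectors into groups with BIRCH and choose
# the number of clusters by the silhouette measure.
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# stand-in for a meta-database: 300 meta-feature vectors in 4 groups
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = Birch(n_clusters=k).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(best_k, round(best_score, 3))
```

In the paper's method, a random forest over the meta-features of each group would then select the initial Bayesian optimization configurations.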
In this paper, we propose a method for automatically determining the tree structure and the key topics of nodes in the process of building a dialog tree from unlabeled text corpora. Building a dialog tree is one of the time-consuming tasks in creating an automatic dialog system and in most cases is performed on the basis of manual markup, which takes a lot of time and resources. The proposed method of hierarchical clustering of dialogs takes into account the semantic proximity of messages, allows one to allocate a different number of nodes at each level of the hierarchy, and limits the dialog tree in width and depth. The algorithm for constructing annotations of dialog tree nodes takes into account the hierarchy of topics by building thematic chains. The method is based on the combined use of natural language processing techniques (tokenization, lemmatization, part-of-speech tagging, word embeddings, etc.), principal component analysis for dimensionality reduction, and methods of cluster analysis. Experiments on constructing the structure of the dialog tree and annotating its nodes have demonstrated the strong potential of the proposed method. The recognition accuracy on a reference dialog tree containing 13 nodes at the first level, 381 nodes at the second level, and 299 nodes at the third level was 0.8, 0.7 and 0.5, respectively. Automatic construction of dialog trees can be useful in developing automatic dialog systems and for improving the quality of generated answers to user questions.
Generic programming with combinators and objects
Dmitry S. Kosarev, Dmitry Yu. Boulytchev
The generic programming approach is popular in functional languages (for example, OCaml, Haskell). In essence, it is compile-time generation of code that performs transformations of user-defined data types. The generated code can carry out various kinds of transformations of values. Usually, transformations are represented as functions that implement the transformation algorithms. New transformations can be built from these transformation functions and user-defined ones. The representation based on functions has a downside: functions behave as final representations, and hence it is not possible to modify the behavior of an already built function. If the existing set of transformations is not a good fit, software developers are obliged to write a completely separate transformation, even when the new transformation is almost identical to an existing one. This work proposes to build transformations that remain extensible after construction. Following the object-oriented programming paradigm, transformations are represented not as functions but as objects: instead of calling a transformation function, one of the object’s methods is called, and the transformation itself is split into methods. Extensibility is supported by adding new methods and overriding existing ones. In this paper, the authors propose an approach to representing transformations as objects in the OCaml functional programming language. Every alternative in a data type definition has a corresponding object method. This design allows the construction of many distinct transformations. The cases where too many methods are undesirable are also discussed. The method is applicable to representing extensible transformations for polymorphic variant data types in OCaml when other methods fail to do so. The method is not bound to any particular domain and allows the creation of extensible transformations in OCaml. 
It could be ported to other functional languages supporting object-oriented programming.
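The core idea can be sketched in Python (the paper itself works in OCaml): a transformation over a data type is an object with one method per constructor, so its behaviour can be changed by inheritance instead of rewriting the whole function. The expression type and method names below are illustrative.

```python
# Transformations as objects: one method per data-type alternative,
# extensible by overriding instead of rewriting.

class Add:
    def __init__(self, l, r):
        self.l, self.r = l, r

class Const:
    def __init__(self, v):
        self.v = v

class Show:
    """Base transformation: pretty-print an expression."""
    def visit(self, e):
        if isinstance(e, Add):
            return self.on_add(e)
        return self.on_const(e)

    def on_add(self, e):
        return f"({self.visit(e.l)} + {self.visit(e.r)})"

    def on_const(self, e):
        return str(e.v)

class ShowHex(Show):
    """Extension: override a single method, reuse everything else."""
    def on_const(self, e):
        return hex(e.v)

e = Add(Const(10), Add(Const(2), Const(3)))
print(Show().visit(e))     # (10 + (2 + 3))
print(ShowHex().visit(e))  # (0xa + (0x2 + 0x3))
```

Had `Show` been a plain function, printing constants in hexadecimal would have required duplicating its whole body; as an object, one overridden method suffices.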
The paper considers the problem of estimating the frequency of processes whose mathematical model is a stochastic process consisting of a series of sequential episodes with a known class of distributions of the time interval between them. In the previously proposed approach, the input data included information about the interval between the last episode and the end of the study period, which could lead to inaccurate results. This interval differs from the intervals between successive episodes, and hence its representation and processing require approaches that take this feature into account. The accuracy of the frequency estimates was improved by developing a new model based on a Bayesian belief network that includes nodes corresponding to the intervals between the last episodes of the process and the minimum and maximum intervals between episodes, and by correctly accounting for the interval between the last episode and the end of the study period at the model training stage. The authors propose a Bayesian belief network that includes a random element characterizing the interval between the end of the study period and the last episode of the process during the study period; data on this interval can be available at the training stage. They used the R programming language and the bnlearn package to model the Bayesian belief network. A new approach to the estimation of process frequency based on a Bayesian belief network generated by machine learning methods is proposed. It increases the accuracy of the results by correctly considering the interval between the last episode and the end of the period under study using a special scheme in training the Bayesian belief network which includes a “hypothetical” episode after the end of the study period. 
To test the proposed approach, data was collected on 5608 Instagram users, including the times of posting for the year 2020 and the time of publishing the first post in 2021. 70 % of the sample was used to train the model, and 30 % was used to compare the posting frequency values predicted by the model with the known values. The results can be used in various fields of science where it is necessary to estimate a process frequency under information deficit, when the whole process is observed for no more than some limited time. Obtaining such estimates is often an important issue in medicine, epidemiology, sociology, etc. The approach shows good agreement between the theoretical model and the results of learning from the social network data, which makes it possible to automate obtaining process frequency estimates.
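Why the open interval after the last episode matters can be shown with a toy calculation. The paper handles this with a Bayesian belief network; here a simple exponential-rate estimate illustrates the effect, and the interval values are made up for the example.

```python
# Toy illustration: ignoring the censored interval after the last episode
# biases a frequency estimate upward.

complete = [2.0, 3.0, 2.0, 3.0]   # intervals between observed episodes
censored_tail = 10.0              # last episode to end of study period

# Naive estimate: events per unit time over the complete intervals only.
naive_rate = len(complete) / sum(complete)

# For exponential waiting times, the censored tail adds exposure time
# without adding an event, so it belongs in the denominator.
corrected_rate = len(complete) / (sum(complete) + censored_tail)

print(naive_rate, corrected_rate)  # 0.4 vs 0.2 events per time unit
```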
Advances in the domain of software-based technology pave the way for widespread use of object-oriented programs. There is a need to develop well-structured software systems that reduce maintenance costs and enhance the usability of components. As a software system evolves, its internal structure deteriorates due to prolonged or delayed maintenance activities. In such situations, restructuring of the software is a superior approach to improve the structure without changing the external behaviour of the system. One way to carry out restructuring is to apply refactoring to the existing source code. Code refactoring is an effective technique for software development that improves the software’s internal structure without changing its external behaviour. The purpose of refactoring is to improve the cohesion of existing code and minimize coupling in the existing modules of a software system. Among numerous methods, clustering is one of the effective approaches to increasing the cohesion of the system. Hence, in this paper, the authors suggest extracting member functions and member variables and propose to measure their similarity with the Frequent Usage Pattern approach. Next, the proposed fuzzy-based clustering algorithm performs effective code refactoring. The proposed method utilizes multiple refactoring methods to increase the cohesion of a component without any change in the meaning of the software system. The proposed system offers automated support for converting low-cohesion functions into high-cohesion ones. Finally, the proposed model has been experimentally tested on object-oriented programs.
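The similarity step can be sketched as follows. This is a simple stand-in for the Frequent Usage Pattern approach, not the paper's exact measure: each member function is represented by the set of member variables it uses, and functions are compared by Jaccard similarity. The class, function, and variable names are hypothetical.

```python
# Member functions represented by the member variables they use;
# pairwise Jaccard similarity suggests which functions belong together.

usage = {
    "read_config": {"path", "cache"},
    "reload":      {"path", "cache", "timestamp"},
    "render":      {"widget", "theme"},
}

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|, with 0.0 for two empty sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

sim_close = jaccard(usage["read_config"], usage["reload"])
sim_far = jaccard(usage["read_config"], usage["render"])
print(round(sim_close, 3), sim_far)  # 0.667 0.0
```

Functions with high mutual similarity are candidates for the same cluster (and thus the same class after refactoring), which raises cohesion and lowers coupling.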
A modern electric power system is a complex organizational structure that coordinates its intelligent components through the definition of roles, communication channels and authorities. The management system of the intelligent components of the electric power system should ensure the consistency of their work at the technological stages of generation, transport, distribution and consumption of electric energy, while achieving the targets and reducing resource consumption. The disadvantage of the process management system currently used in electric power systems is that a hierarchical management structure is imposed on the network topology. Thus, there is a conflict between the resources and processes of generation, transport, distribution and consumption of electricity. The authors propose a concept of a distributed resource and process management system in electric power systems using digital twin technology. The electric power system is modeled as a polystructural one. The concepts of the system of polystructure indicators, the metric system of the polystructure, and the body of the polystructure are used. Representation of the electric power system components and the technological processes of generation, transport, distribution and consumption by means of digital twin technology makes it possible to exclude conflicts of resources and processes in the electric power system while maintaining the requirements for reliability and safety of the system. Digital twin technology, as applied to polystructured systems, provides developers of distributed management systems with a methodology for creating a modern management system in which the production of management decisions does not lead to conflicts between the components of the power system. The proposed distributed management system is built as a polystructure whose body ensures the consistency of technological processes, equipment resources and electricity consumption.
The paper deals with the problem of unauthorized use in deep learning of facial images from social networks and analyses methods of protecting such images from use and recognition based on de-identification procedures, the newest of which is the “Fawkes” procedure. The proposed solution uses a comparative analysis of images subjected to the Fawkes transformation, with representation and description of textural changes and features of structural damage in the facial images. Multilevel parametric estimates of these damages were applied for their formal and numerical assessment. The reasons why facial images distorted by the Fawkes procedure cannot be used in deep learning tasks are explained. It has been theoretically proven and experimentally shown that facial images subjected to the Fawkes procedure are well recognized outside of deep learning methods. It is argued that the use of simple preprocessing methods for facial images (subjected to the Fawkes procedure) at the input of convolutional neural networks can lead to their recognition with high efficiency, which dispels the myth that the Fawkes procedure reliably protects facial images.


Diagnostic issues receive a lot of attention in the design of information processing and control systems, since the systems’ reliability and fault tolerance depend on the quality of their solution. The article presents the results of the development of a synthesis algorithm for a model designed to solve the problem of test diagnostics and focused on distributed computing systems. The model is integrated into the system and executed in parallel with the main software of the system, which makes it possible to simplify the process of testing the system. The description of a distributed computing system, complemented by an integrated diagnostic model, is a redundant model of the system. The proposed algorithm requires a reduced amount of diagnostic information. The diagnostic model has a hierarchical structure and is synthesized in two stages. At the first stage, the algorithm calculates the set of paths that make up a coverage of the edges of the graph of intermodular connections in the system. It matches a chain of dynamic links with each of the obtained paths, the number of links being equal to the number of software modules through which the path passes. At the second stage, the type of dynamic links is determined. It is taken into account that the desired dynamic model of the system is used to generate tests. The test design procedure is simplified if the system model is linear, controllable, and observable. Based on this, the requirements for the links of the chains of the model are formulated. The proposed algorithm makes it possible to obtain a discrete-event model of the system characterized by a reduced amount of diagnostic information.
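The first stage described above can be sketched with a greedy edge-covering routine. This is a minimal illustration on a small hypothetical module graph; the paper's actual coverage algorithm may differ.

```python
# Greedily extract paths from a graph of intermodular connections until
# every edge is covered by at least one path.

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}  # hypothetical module graph

def edge_cover_paths(graph):
    """Return a list of paths that together cover all edges."""
    unused = {(u, v) for u, vs in graph.items() for v in vs}
    paths = []
    while unused:
        u = min(a for a, _ in unused)          # deterministic start node
        path = [u]
        while True:
            outs = sorted(v for a, v in unused if a == u)
            if not outs:
                break                          # no unused outgoing edge
            v = outs[0]
            unused.discard((u, v))
            path.append(v)
            u = v
        paths.append(path)
    return paths

print(edge_cover_paths(graph))  # [[1, 2, 4], [1, 3, 4]]
```

Each resulting path would then be matched with a chain of dynamic links, one link per software module the path passes through.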
The Sentaurus TCAD software package is widely used in the modeling of semiconductor optoelectronic devices. The main part of simulating solar cells is creating a correct geometric model. A geometric model can be built using the SDE module in two different ways, i.e. by writing code or by using standard shapes in a graphical environment. Creating complex structures from simple shapes is time-consuming and labour-intensive. Therefore, this paper provides data on how to develop algorithms for creating geometric models of structurally complex solar cells. A universal algorithm has been developed for creating a geometric model of solar cells with a sinusoidal p-n junction and a rear multiple structure. Using these algorithms, it is possible to create geometric models of various solar cells, from simple to complex structures. By applying this algorithm, the authors studied the dependence of the photoelectric parameters of p-n and n-p junction silicon solar cells on their thickness in order to find the optimum thickness for both structures. It was found that the optimum thickness is 256 μm for the p-n junction and 75 μm for the n-p junction silicon solar cell. At the optimum thickness, the maximum efficiency of the p-n junction silicon solar cell is 1.4 times greater than that of the n-p junction solar cell.
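A geometric model like the sinusoidal p-n junction above ultimately reduces to generating boundary vertices that the SDE module can consume. The sketch below generates such vertices for a sinusoidal junction profile; the function name, parameters, and dimensions are illustrative assumptions, not the paper's code.

```python
import math

def sinusoidal_junction_points(width_um, amplitude_um, period_um,
                               depth_um, n_points=100):
    """Return (x, y) vertices of a sinusoidal p-n junction boundary:
    y(x) = depth + amplitude * sin(2*pi*x / period), in micrometers."""
    pts = []
    for i in range(n_points + 1):
        x = width_um * i / n_points
        y = depth_um + amplitude_um * math.sin(2 * math.pi * x / period_um)
        pts.append((x, y))
    return pts

# e.g. a 10 um wide cell with a junction oscillating +/- 0.2 um around 0.5 um
pts = sinusoidal_junction_points(10.0, 0.2, 2.0, 0.5)
ys = [y for _, y in pts]
# the boundary stays within depth +/- amplitude
print(min(ys) >= 0.3 - 1e-9 and max(ys) <= 0.7 + 1e-9)  # True
```

Sweeping parameters such as the cell thickness then only requires regenerating the vertex list, which is what makes an algorithmic model preferable to assembling shapes by hand.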
In a series of numerical experiments involving the Shu-Osher problem of nonlinear acoustics and the Woodward-Colella problem of two interacting blast waves, the author studied the computational properties of a new algorithm for the hybrid large-particle method. The numerical method is a two-step predictor-corrector scheme in time. Spatial derivatives are split by physical processes. At the first stage of splitting, the gradient and deformation terms of the conservation laws are taken into account; at the second stage, convective flows are taken into account. The proposed balanced algorithm of the method includes a more dissipative upwind reconstruction of fluxes at the "predictor" step and a centered (non-dissipative on smooth solutions) approximation at the correction step: CDP2-UC (Customizable Dissipative Properties — Upwind-Centered). For more flexible control of the numerical viscosity, a nonlinear correction of the scheme based on a parametric combination of known limiters is implemented. The numerical scheme has a second-order approximation in space and time on smooth solutions. The balanced algorithm of the hybrid large-particle method demonstrated a monotonic solution with a qualitative resolution of the details of the gas flow in the entire domain of the test problems. No spurious oscillations occurred during mesh refinement, and convergence to the reference density profile was observed. The influence of the limiter on the numerical dissipation of the CDP2-UC scheme is analyzed.
The results are compared with the following variants of the schemes: MUSCL (Monotone Upstream Scheme for Conservation Laws), MUSCL-CABARET with a NOLD limiter (Non-Oscillatory Low-Dissipative), the discontinuous Galerkin method with various forms of nonlinear correction, the hybrid weighted nonlinear scheme of the fourth order of approximation (CCSSR-HW4), and the popular fifth-order WENO5 scheme (Weighted Essentially Non-Oscillatory Scheme). The proposed algorithm successfully competes with modern numerical methods that have a formally higher (fourth and fifth) order of approximation. The hybrid large-particle method combines algorithmic simplicity, uniformity, and computational economy with high resolution. The test calculations allowed the author to estimate the range of parametric control of the numerical dissipation of the method for correct numerical modeling of applied problems with nonlinear wave fields and strong shock waves.
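The "parametric combination of known limiters" used for dissipation control can be illustrated with a minimal sketch that blends a dissipative limiter (minmod) with a compressive one (superbee) through a single parameter. This is a generic example of the technique, under the assumption of these two classical limiters; the paper's actual combination may differ.

```python
def minmod(a, b):
    """Most dissipative classical TVD limiter."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def superbee(a, b):
    """Compressive TVD limiter: sharpens discontinuities."""
    if a * b <= 0.0:
        return 0.0
    s1 = minmod(2.0 * a, b)
    s2 = minmod(a, 2.0 * b)
    return s1 if abs(s1) > abs(s2) else s2

def blended_limiter(a, b, theta=0.5):
    """Parametric combination: theta=0 gives minmod (max dissipation),
    theta=1 gives superbee (min dissipation on discontinuities)."""
    return (1.0 - theta) * minmod(a, b) + theta * superbee(a, b)

# at a local extremum the neighbouring slopes have opposite signs,
# so the blended slope is zero for any theta (no new oscillations)
print(blended_limiter(1.0, -0.5))  # 0.0
# for equal smooth slopes both limiters return the slope unchanged
print(blended_limiter(0.7, 0.7))   # 0.7
```

Tuning `theta` is one way to realize the "customizable dissipative properties" the CDP2-UC name refers to: more dissipation where monotonicity matters, less where resolution matters.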


This paper presents the architecture of a system for full-text search over speech data based on a global search index that combines information about all speech recordings in the archive. The architecture includes two independent blocks: an indexing block and a block for building and performing a search query. To process speech recordings, it uses an automatic speech recognition (ASR) system with a linguistic decoder based on the weighted finite-state transducer (WFST) framework, which generates word lattices. The lattices are sequentially converted to confusion networks and inverted indexes. This makes it possible to take into account all the word hypotheses generated during decoding. The proposed solution expands the applicability of speech analytics systems to cases where the word error rate is high, such as processing speech recordings collected under difficult acoustic conditions or in low-resource languages.
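The conversion of confusion networks into an inverted index can be sketched as follows: each slot of the network lists competing word hypotheses with posterior probabilities, and the index maps each word to the utterances and slots where it was hypothesized. The data structures and function names below are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def index_confusion_network(utt_id, confusion_net, inverted_index):
    """Add every word hypothesis of a confusion network to an inverted
    index, keeping its slot position and posterior probability.
    A confusion network is a list of slots; each slot is a list of
    (word, posterior) hypotheses."""
    for slot, hypotheses in enumerate(confusion_net):
        for word, posterior in hypotheses:
            inverted_index[word].append((utt_id, slot, posterior))

def search(word, inverted_index, min_posterior=0.1):
    """Return hits whose posterior exceeds a threshold, best first."""
    hits = [h for h in inverted_index.get(word, []) if h[2] >= min_posterior]
    return sorted(hits, key=lambda h: -h[2])

index = defaultdict(list)
# one utterance: each slot holds competing ASR hypotheses with posteriors
cn = [
    [("please", 0.9), ("police", 0.1)],
    [("call", 0.6), ("fall", 0.4)],
]
index_confusion_network("utt001", cn, index)

print(search("call", index))    # [('utt001', 1, 0.6)]
print(search("police", index))  # [('utt001', 0, 0.1)]
```

Because the index keeps low-posterior alternatives such as "police", a query can still find a word the one-best transcript missed, which is exactly why this approach helps when the word error rate is high.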
Assessment of cerebral circulation through an intact skull using imaging photoplethysmography
Maxim A. Volynsky, Alexey Y. Sokolov, Nikita B. Margaryants, Anastasiia V. Osipchuk, Valeriy V. Zaytsev, Oleg V. Mamontov, Alexey A. Kamshilin
The feasibility of assessing the parameters of cerebral hemodynamics without skull trepanation using imaging photoplethysmography with illumination in the near-infrared spectrum range was demonstrated for the first time. The results were obtained when studying changes in blood flow in the rat brain in response to short-term respiratory failure (apnea test). Translation of the results into clinical practice will be useful for the development of a method for non-invasive assessment of cerebral blood flow in patients with cerebrovascular diseases.
Copyright © 2001–2021
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.