Summaries of the Issue
PHOTONICS AND OPTOINFORMATICS
Investigation of congruent lithium niobate crystal dispersion properties in the terahertz frequency range
Vladimir S. Shumigai, Egor N. Oparin, Aleksandra O. Nabilkova, Maxim V. Melnik, Anton N. Tsypkin, Sergei A. Kozlov 635
Dispersion curves of the refractive index of a congruent lithium niobate (cLN) crystal cut perpendicular to the x and z axes are considered in the terahertz frequency range. The study uses time-resolved terahertz time-domain spectroscopy: the probe beam passes through an initially isotropic detecting crystal which becomes birefringent when exposed to a terahertz field, and the magnitude of the induced birefringence is proportional to the amplitude of the terahertz field. Using Fourier analysis of a terahertz pulse transmitted through the cLN crystal and of a reference pulse that does not interact with the object, the frequency dependences of the refractive index and the absorption coefficient of the object under study are constructed. Dispersion curves are presented for the real part of the refractive index of cLN crystals cut along the (100) and (001) planes in the frequency range 0.25–1.25 THz. Propagation of a one-and-a-half-cycle pulse in dispersive media is simulated using the dispersion data published by other authors, and the temporal forms of the output signals are found. It is concluded that the dispersion curves from the selected works are inaccurate. The parameters whose optimization makes it possible to eliminate these inaccuracies in the high-frequency region of the terahertz spectrum are identified. The results obtained are important for the design of devices based on nonlinear optical effects. These data will be useful for difference-frequency generation, optical rectification and generation of terahertz radiation, as well as for other areas where accurate data on the terahertz dispersion properties of nonlinear crystals, including cLN, are required.
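As an illustration of the extraction step described above, the sketch below applies the standard thin-sample transfer-function relations of terahertz time-domain spectroscopy to obtain n(f) and the absorption coefficient from a reference and a transmitted pulse. The waveform arrays, thickness d, and sign convention are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumed waveforms and thickness): extract n(f) and alpha(f) from
# reference and transmitted THz-TDS pulses via the complex transmission spectrum.
import numpy as np

c = 2.998e8  # speed of light, m/s

def thz_tds_parameters(t, e_ref, e_sample, d):
    """t: time axis [s]; e_ref, e_sample: reference and transmitted fields; d: thickness [m]."""
    freq = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    transfer = np.fft.rfft(e_sample) / np.fft.rfft(e_ref)    # complex transmission T(f)
    omega = 2 * np.pi * freq
    phase = np.unwrap(np.angle(transfer))
    # the sign of the phase term depends on the chosen Fourier convention
    n = 1.0 + c * np.abs(phase) / (omega * d + 1e-30)        # real refractive index
    fresnel = 4.0 * n / (n + 1.0) ** 2                       # interface (Fresnel) losses
    alpha = -(2.0 / d) * np.log(np.abs(transfer) / fresnel)  # absorption coefficient, 1/m
    return freq, n, alpha
```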
OPTICAL ENGINEERING
Polarization extinction ratio in polarization maintaining fiber sealed with glass solder
Evgeniy E. Kalugin, Azamat B. Mukhtubayev, Igor K. Meshkovsky 643
The paper considers the effect of sealing a pair of polarization maintaining optical fibers with an elliptical stress cladding using glass solder on the value of the polarization extinction ratio. A variant with the placement of non-working fibers in the sealing area to create symmetry of the induced mechanical stresses is proposed. An experimental study of the contribution of the induced mechanical stresses to the value of the polarization extinction ratio has been performed, and the effect of temperature on the polarization extinction ratio at the sealing point of the polarization maintaining fibers has been assessed. The fibers were sealed in a metal tube using a glass solder preform and an induction heater. The polarization extinction ratio was evaluated by white-light interferometry using a scanning Michelson interferometer; its value was measured on 4 samples with a working fiber length of 4 meters. The experiment shows that creating a symmetric structure in the sealing area by adding non-working fibers makes it possible to decrease the polarization extinction ratio change from 0.082 dB/K to 0.035 dB/K in the temperature range from –15 °C to +70 °C. The method also allows several fibers to be sealed in one tube, reducing the size of the devices. The performed research can be useful in the development of optoelectronic devices in which optical birefringent fibers must be introduced into a sealed housing.
Method for remote control of radiation parameters of spacecraft based on X-ray fluorescence analysis
Lyudmila A. Lukyanova, Igor V. Svitnev, Elena A. Kharitonova, Ilya E. Gavrilov 650
Existing international legal acts, instruments and procedures do not guarantee equal conditions for the exploration and use of outer space. There is a need for means of objective control of spacecraft carrying products with fissile materials. Inspection of such objects can be carried out by X-ray fluorescence methods; however, in the subject area under consideration, the use of such methods has been little studied. In this paper, a method is proposed for obtaining the X-ray fluorescence spectra of the materials of the object under study based on the calculation of the spatial and energy characteristics of X-ray radiation. The spectra are obtained using original geometric (mathematical) and simulation models developed by the authors. The calculations account for the complex layered structure of the object, including the proportions of high-energy fluorescent radiation from the overlying layers. An original numerical experiment is proposed using a program that allows one to choose the projections of the object subjected to X-ray irradiation and the wavelength and intensity of the emitter. Using the obtained spatial-energy distribution of radiation quanta and the physical properties of the radiation transmission medium, the problem of finding the coordinates and angles of intersection of the tracks of quanta beams in each area of the object is solved. The result of the software processing is displayed as the resulting spectrum, which makes it possible to draw a conclusion about the chemical composition of the materials of the inspected object. The X-ray beam reaching the object is modeled as a spot with an area commensurate with the cross section of the device, in the form of a selected geometric primitive (a square). The spot area of the incident photons is calculated from a predetermined divergence angle. On the basis of open literature sources, a physical model of an object with nuclear fissile materials, W88 (USA), was chosen. The following characteristics of the X-ray beams (sub-beams) are accepted: a coherent beam of photons with a wavelength of 0.005 nm, a beam scattering angle of 1 degree, and an emitter detector area of 4 m2. The resulting X-ray fluorescence spectrum gives an idea of the chemical composition of the units and blocks of the apparatus and of the object inside. The presence of specific products on board is confirmed by characteristic lines with normalized wavelengths indicating chemical elements belonging to the radioactive series. The results obtained can be used in the development of hardware and software for spacecraft devices that monitor the presence of fissile materials on board the inspected vehicle.
Fiber-optic amplitude bend direction and magnitude sensor
Andrey A. Dmitriev, Kirill V. Grebnev, Daniil S. Smirnov, Sergey V. Varzhel 659
A variant of the implementation of a fiber-optic sensor for the direction and magnitude of bending is proposed. Unlike existing spectral measuring systems, the considered solution uses an amplitude interrogation technique, which makes it possible to increase the speed of the sensor while using simpler and more affordable components. A sensitive element based on special diffraction structures consisting of pairs of chirped fiber Bragg gratings has been studied. The sensing elements are mounted on a fixture, a steel rod subjected to bending. The ability of the sensor to determine the magnitude and direction of bending in the deflection range from 0 to 30 mm was demonstrated with a standard deviation of the measured values from the real values of 0.536 mm. This measurement result is achieved by processing the data obtained from three measuring devices with a neural network having a hidden layer of 10 neurons and the sigmoid as the activation function. The research results are essential for modern monitoring systems. Implementing the bend direction and magnitude sensor as a fiber-optic device makes it possible to overcome the limitations of piezoelectric sensors owing to high noise immunity and resistance to environmental influences. The proposed technological solution avoids the spectral measurement technique that has become widely used in fiber-optic sensor systems. The use of an amplitude sensor for the magnitude and direction of bending allows its use in devices where precise positioning of control elements or structural components subjected to bending is required. In addition, because the desired bending effect is measured by estimating the optical power of the signal, the sensor design does not require a complex measuring device, and the sensor's performance can be ensured using a cascade of inexpensive but at the same time high-speed and durable photodetectors.
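A minimal sketch of the data-processing step mentioned in the abstract follows: a one-hidden-layer network (10 neurons, sigmoid activation) mapping the optical powers of the three measuring channels to bend magnitude and direction. The data files, target encoding and hyperparameters other than the layer size and activation are assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming synthetic/recorded data arrays: regress bend deflection and
# direction from three photodetector powers with a 10-neuron sigmoid hidden layer.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

powers = np.load("powers.npy")     # hypothetical file: (n_samples, 3) optical powers
targets = np.load("targets.npy")   # hypothetical file: (n_samples, 2) deflection [mm], direction

X_train, X_test, y_train, y_test = train_test_split(powers, targets, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     solver="adam", max_iter=5000, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2, axis=0))
print("RMSE per output:", rmse)    # the paper reports ~0.536 mm deviation for deflection
```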
AUTOMATIC CONTROL AND ROBOTICS
Compensation of external disturbances for MIMO systems with control delay
Nguyen Khac Tung, Sergey M. Vlasov, Anton A. Pyrkin, Aleksandra V. Skobeleva 666
The problem of compensating external disturbances for a MIMO system with input delay is important and relevant. Such problems arise in the control of dynamic objects and in a number of other applications. The proposed method is based on the internal model principle and requires identification of the disturbance parameters. At the first stage, a scheme is presented for extracting the disturbance, which is represented as a sinusoidal signal with unknown frequency, amplitude, and phase. At the second stage, the problem of identifying the frequencies of sinusoidal and multisinusoidal signals is solved. At the last stage, an algorithm for stabilizing the state of the object to zero is developed using feedback. A new scheme for compensating external disturbances for a MIMO system with input delay is proposed, together with a new algorithm for identifying the frequencies of a multisinusoidal signal. The capabilities of the proposed estimation method are analyzed using computer simulation in the MATLAB Simulink environment. The developed method can be effectively applied to a wide class of applied tasks related to the control of robots and robotic manipulators for various purposes.
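To make the identification task concrete, the sketch below estimates the unknown frequencies of a synthetic multisinusoidal disturbance by simple FFT peak picking. This is only an illustrative stand-in for the estimation problem; the paper's own adaptive identification algorithm is not reproduced here, and all signal parameters are assumed.

```python
# Illustrative sketch only: naive FFT-based estimation of the frequencies of a
# multisinusoidal disturbance on synthetic data (not the authors' algorithm).
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# synthetic disturbance: two sinusoids with "unknown" frequency, amplitude and phase
d = 1.5 * np.sin(2 * np.pi * 3.0 * t + 0.4) + 0.8 * np.sin(2 * np.pi * 7.5 * t - 1.1)

spectrum = np.abs(np.fft.rfft(d))
freqs = np.fft.rfftfreq(len(d), 1 / fs)
peaks, _ = find_peaks(spectrum, height=0.1 * spectrum.max())
print("estimated frequencies, Hz:", freqs[peaks])   # expect ~3.0 and ~7.5
```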
COMPUTER SCIENCE
Building cryptographic schemes based on elliptic curves over rational numbers
Vadim V. Davydov, Jean-Michelle N. Dakuo, Ivan D. Ioganson, Altana F. Khutsaeva 674
The possibility of using elliptic curves of non-zero rank over the field of rational numbers in cryptographic schemes is studied. For the first time, the construction of cryptosystems is proposed whose security is based on the complexity of solving the knapsack problem on elliptic curves of non-zero rank over the rational numbers. A new approach to the use of elliptic curves for cryptographic schemes is proposed. A number of experiments have been carried out to estimate the heights characteristic of points of infinite order on elliptic curves. A model of a cryptosystem that is resistant to computations on a quantum computer and is based on rational points of infinite order is proposed. A study of the security and efficiency of the proposed scheme has been carried out. An attack on the secret in such a cryptosystem is implemented, and it is shown that the complexity of the attack is exponential. The proposed solution can be applied in the construction of real cryptographic schemes as well as cryptographic protocols.
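For readers unfamiliar with the underlying objects, the sketch below performs point arithmetic on a toy rank-1 curve over the rationals and forms a knapsack-style subset sum of public points, the kind of hard problem the abstract refers to. The curve, base point and parameters are illustrative assumptions, not the authors' cryptosystem.

```python
# Minimal sketch (assumed toy curve y^2 = x^3 - 2 of rank 1 over Q): exact rational
# point arithmetic and a knapsack-style subset sum of points.
from fractions import Fraction

A, B = Fraction(0), Fraction(-2)

def add(P, Q):
    """Add two affine points (None is the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None
    lam = (3 * x1 * x1 + A) / (2 * y1) if P == Q else (y2 - y1) / (x2 - x1)
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

def mul(k, P):
    """Scalar multiplication by repeated doubling."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (Fraction(3), Fraction(5))                 # a point of infinite order on the toy curve
points = [mul(k, G) for k in (2, 3, 5, 7)]     # public "knapsack" points
secret = [1, 0, 1, 1]                          # private subset-selection bits
S = None
for bit, P in zip(secret, points):
    if bit:
        S = add(S, P)
print(S)   # note how the height (size of numerators/denominators) grows quickly
```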
An algorithm for generating design solutions for data and design-production procedures management at the stages of the lifecycle of an electronic product
Julia V. Donetskaya 681
The integration of automated systems at enterprises provides information support for the stages of the product life cycle and electronic interaction between employees in the process of performing work. This means that, when performing design and production procedures, enterprise employees solve various design problems. These problems are related to the analysis of a large amount of information about the product presented in the form of an ontology, which requires the development of an algorithm for extracting information from the ontology based on given requirements. The developed algorithm consists of several stages. At the first stage, a search space of design solutions is formed. At the second stage, the value of the objective function is calculated for each variant of a design solution, and the best design solution is selected; the best solution is the one that minimizes the value of the objective function. The third stage is associated with the choice of design solutions that are close to the found best solution, with closeness determined by the computed Hamming distance. The fourth and fifth stages are characterized by the analysis of the elements of the set of design solution variants and the formation of the desired design solution. Sequences of actions performed at the stages of the algorithm for generating design solutions are proposed. The proposed algorithm can be implemented at enterprises to support the procedure of solving design problems. The presented algorithm allows the development of signatures and semantics of unified services for the use of a digital passport.
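The sketch below illustrates stages two and three as described: select the design solution that minimizes an objective function over a search space, then keep candidates within a Hamming-distance threshold of it. The binary encoding, requirement vector, weights and threshold are illustrative assumptions only.

```python
# Minimal sketch under assumed encodings: objective-function minimization followed by
# Hamming-distance filtering of close alternatives.
from itertools import product

REQUIREMENTS = [1, 1, 0, 1, 0]            # hypothetical binary requirement vector
WEIGHTS = [3.0, 1.0, 2.0, 1.5, 0.5]       # hypothetical penalty per mismatched feature

def objective(solution):
    """Weighted number of mismatches with the requirements (lower is better)."""
    return sum(w for s, r, w in zip(solution, REQUIREMENTS, WEIGHTS) if s != r)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

search_space = list(product([0, 1], repeat=len(REQUIREMENTS)))   # stage 1
best = min(search_space, key=objective)                          # stage 2
close = [s for s in search_space if 0 < hamming(s, best) <= 1]   # stage 3
print("best:", best, "close alternatives:", close)
```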
A method for improving the efficiency of integrated processing of Earth remote sensing data in solving problems of spatial objects monitoring
Sergey A. Karin, Alexandr I. Karin 691
A method is proposed for improving the efficiency of a system for complex processing of Earth remote sensing data under limited resources when solving problems of spatial object monitoring. The efficiency of the system is increased through rational allocation of its resources among the tasks to be solved, giving priority to tasks with a higher relative importance coefficient. It is also proposed to improve the efficiency of solving each task by excluding from its work plan those resources that are overloaded with other tasks but make an insignificant contribution to the formation of the integral result. The simulation results show that the proposed method for improving the efficiency of the system for complex processing of Earth remote sensing data under limited resources, when solving problems of monitoring spatial objects, makes it possible to ensure the required quality of managerial decisions, especially when the characteristics of the monitored objects change dynamically.
Development of a model for detecting network traffic anomalies in distributed wireless ad hoc networks
Leonid V. Legashev, Lubov S. Grishina, Denis I. Parfenov, Arthur Yu. Zhigalov 699
Mobile ad hoc networks are one of the promising directions of the edge computing technology and they are used in various applications, in particular, in the development of intelligent transport systems. A feature of mobile ad hoc networks lies in the constantly changing dynamic network topology, as a result of which it is necessary to use reactive routing protocols when transmitting packets between nodes. Mobile ad hoc networks are vulnerable to cyber-attacks, so there is a need to develop measures to identify network threats and develop rules for responding to them based on machine learning models. The subject of this study is the development of a dynamic model for detecting network traffic anomalies in wireless distributed ad hoc networks. Within the framework of this study, methods and algorithms of data mining and machine learning were applied. The proposed approach to traffic monitoring in wireless distributed ad hoc networks consists in the implementation of two stages: initial traffic analysis to identify anomalous events and subsequent in-depth study of cybersecurity incidents to classify the type of attack. Within the framework of this approach, the corresponding models are constructed based on ensemble methods of machine learning. A comparative analysis and selection of the most efficient machine learning algorithms and their optimal hyperparameters has been carried out. In this paper, a formalization of the traffic anomaly detection model in distributed wireless ad hoc networks is carried out, the main quantitative metrics of network performance are identified, a generalized algorithm for detecting traffic anomalies in mobile ad hoc networks is presented, and an experimental study of the network segment simulation is carried out from the point of view of performance degradation during the implementation of various network attack scenarios. Network distributed denial of service attacks and cooperative blackhole attacks have the greatest negative impact on the performance of the mobile ad hoc network segment. In addition, the network simulation results were used to build a machine learning model to detect anomalies and classify types of attacks. The results of a comparative analysis of machine learning algorithms showed that the use of the LightGBM method is the most effective for detecting network traffic anomalies with an accuracy of 91 %, and for determining directly the type of attack being carried out with an accuracy of 90 %. The proposed approach for network anomalies detection through the use of trained traffic analysis models makes it possible to identify the considered types of attacks in due time. The future development direction of this research is the consideration of new scenarios for the emergence of network attacks and online additional training of the constructed identification models. The developed software tool for detecting network traffic anomalies in distributed mobile ad hoc networks can be used for any type of wireless ad hoc networks.
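The two-stage approach described above (flag anomalous traffic, then classify the attack type) can be sketched as follows with the LightGBM classifier named in the abstract. The CSV file, column names and hyperparameters are assumptions for illustration; they are not the authors' dataset or configuration.

```python
# Minimal sketch with assumed feature columns: stage 1 detects anomalies, stage 2
# classifies the attack type, both with LightGBM.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("manet_traffic.csv")            # hypothetical simulation export

# Stage 1: binary anomaly detection on network performance metrics.
features = df.drop(columns=["is_anomaly", "attack_type"])
X_tr, X_te, y_tr, y_te = train_test_split(features, df["is_anomaly"], test_size=0.3, random_state=0)
detector = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
print("anomaly accuracy:", accuracy_score(y_te, detector.predict(X_te)))      # ~0.91 reported

# Stage 2: multiclass attack classification on traffic flagged as anomalous.
anom = df[df["is_anomaly"] == 1]
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(
    anom.drop(columns=["is_anomaly", "attack_type"]), anom["attack_type"],
    test_size=0.3, random_state=0)
classifier = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(Xa_tr, ya_tr)
print("attack-type accuracy:", accuracy_score(ya_te, classifier.predict(Xa_te)))  # ~0.90 reported
```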
Applying the FN-corrector to improve the quality of audio event classification
Alexander M. Golubkov, Evgeniy V. Shuranov 708
The paper deals with the problem of acoustic event classification, which is actively applied to the problems of the safe city, the smart home, IoT devices, and the detection of industrial accidents. A solution that improves the accuracy of classifiers without changing their structure and without collecting additional data is proposed. The main data source for the experiments was the TUT Urban Acoustic Scenes 2018, Development Dataset. The paper presents a way to increase the accuracy of audio event classification by using an FN-corrector. The FN-corrector is a linear two-stage classifier that transforms the feature space into a linearly separable space and linearly separates one class from another. If a corrector is applied, the responses of the original classifier fall into four classes: positive (P), negative (N), false positive (FP), and false negative (FN). As a result, it becomes possible to train two types of correctors: the FP-corrector, separating positive and false positive classifier responses, and the FN-corrector, separating negative and false negative classifier responses. In the experiments, the VGGish convolutional neural network was used as the initial classifier. The audio signal is converted into a spectrogram and fed to the input of the neural network, which forms the feature description of the spectrogram and performs classification. As an example, two "confused" classes are selected to demonstrate the increase in classification accuracy. Using the feature description of audio recordings of these classes, an FN-corrector was built, trained and connected to the original classifier. The response of the classifier, together with the feature description, is passed to the corrector input. The corrector then translates the feature space into a new basis (a linearly separable space) and classifies the classifier response, answering the question of whether the original classifier makes a mistake on such a feature vector. If the original classifier made a mistake, its answer is changed by the corrector to the opposite one; otherwise the answer remains the same. The results of the experiments demonstrated a decrease in the level of class confusion and, accordingly, an increase in the accuracy of the original classifier without changing its structure and without collecting an additional data set. The results obtained can be used on IoT devices that have significant limitations on the size of the models used, as well as in solving the problems of domain adaptation, which is relevant in audio analytics.
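The FN-corrector idea can be sketched as follows: a linear model is trained to separate the base classifier's correct negative responses from its false negatives, and a negative answer is flipped whenever the corrector predicts an error. The data files, shapes and binary class setup are assumptions for illustration.

```python
# Minimal sketch under assumed data shapes: an FN-corrector for a binary "confused"
# class pair, trained on the feature descriptions of negative base-classifier responses.
import numpy as np
from sklearn.svm import LinearSVC

feats = np.load("features.npy")     # hypothetical (n, d) feature descriptions
y_true = np.load("y_true.npy")      # true labels (0/1) from validation data
y_base = np.load("y_base.npy")      # base-classifier answers (0/1) on the same data

negatives = y_base == 0                                   # N and FN responses
is_false_negative = (y_true == 1) & negatives
corrector = LinearSVC(C=1.0).fit(feats[negatives], is_false_negative[negatives])

def corrected_predict(x, base_answer):
    """Flip a negative base answer if the corrector says the base classifier erred."""
    if base_answer == 0 and corrector.predict(x.reshape(1, -1))[0]:
        return 1
    return base_answer
```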
Strengthening the role of microarchitectural stages of embedded systems design
Maxim V. Kolchurin, Vasiliy Yu. Pinkevich, Alexey E. Platunov 716
The growing variety of computing systems, the rapid increase in their complexity, and their integration into objects and processes of the physical world require a dramatic increase in the productivity of their creators. The quality, timing, and degree of reuse of design results in the field of information technologies strongly depend on design methodologies and routes at the stages of choosing and/or creating stacks of platforms, technologies and tools. The most important role belongs to the ways of describing the organization of the computing system at various levels and to the systems of abstractions used. The problem of filling the semantic gap between the conceptual (architectural) level and the implementation levels is still very acute, which requires the creation of industrial techniques and design tools at these “intermediate” levels. The paper suggests ways of presenting design solutions that are aimed at a holistic, end-to-end description of both the logic of the computing process organization and the steps, technologies, and tools of the design process. The content and necessity of the stages of microarchitectural design of computing systems are justified and explained in detail. A classification of projects in the field of information technologies according to the degree of variability of the project platform is introduced. Several concepts representing a set of abstractions for microarchitectural design within projects with great variability are suggested. The following abstractions are described in detail: project, design and aspect spaces, project platforms, and cross-level mechanisms. Examples of presentations of several proposed abstractions (design documentation tools) for the microarchitectural design stages are discussed; these are most relevant in the design of computing systems in the “limited resources” model: embedded systems, cyber-physical systems, and the “edge” and “fog” levels of Internet of Things systems.
A multivariate binary decision tree classifier based on shallow neural network
Avazjon R. Marakhimov, Jabbarbergen K. Kudaybergenov, Kabul K. Khudaybergenov, Ulugbek R. Ohundadaev 725
In this paper, a novel decision tree classifier based on shallow neural networks with piecewise and nonlinear transformation activation functions is presented. A shallow neural network is recursively employed in linear and non-linear multivariate binary decision tree methods to generate splitting nodes and classifier nodes. Firstly, a linear multivariate binary decision tree with a shallow neural network is proposed which employs the rectified linear unit activation function. Secondly, a new activation function with a non-linear property is presented which provides good generalization ability in the learning process of neural networks. The presented method shows high generalization ability for linear and non-linear multivariate binary decision tree models, which are called Neural Network Decision Trees (NNDT). The proposed models with high generalization ability ensure classification accuracy and performance. A novel split criterion for generating the nodes, which focuses more on the majority class objects at the current node, is presented and employed in the new NNDT models. Furthermore, the shallow-neural-network-based NNDT models are converted into hyperplane-based linear and non-linear multivariate decision trees, which provide high speed when processing classification decisions. Numerical experiments on publicly available datasets have shown that the presented NNDT methods outperform existing decision tree algorithms and other classifier methods.
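A simplified sketch of the general idea (not the authors' NNDT algorithm or split criterion) is shown below: each internal node of a binary tree learns a multivariate split with a small shallow network and routes samples by its predicted class. Binary labels, depth limits and the per-node network size are illustrative assumptions.

```python
# Illustrative sketch only: a multivariate binary decision tree whose internal nodes use
# shallow neural networks as learned splits. Assumes class labels 0 and 1.
import numpy as np
from sklearn.neural_network import MLPClassifier

class Node:
    def __init__(self, depth=0, max_depth=3, min_samples=20):
        self.depth, self.max_depth, self.min_samples = depth, max_depth, min_samples
        self.net = self.left = self.right = self.label = None

    def fit(self, X, y):
        if self.depth >= self.max_depth or len(y) < self.min_samples or len(set(y)) == 1:
            self.label = int(np.bincount(y).argmax())          # leaf: majority class
            return self
        self.net = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                                 max_iter=2000, random_state=0).fit(X, y)
        side = self.net.predict(X)                              # learned multivariate split
        if side.min() == side.max():                            # degenerate split -> leaf
            self.label = int(np.bincount(y).argmax())
            return self
        self.left = Node(self.depth + 1, self.max_depth, self.min_samples).fit(X[side == 0], y[side == 0])
        self.right = Node(self.depth + 1, self.max_depth, self.min_samples).fit(X[side == 1], y[side == 1])
        return self

    def predict_one(self, x):
        if self.label is not None:
            return self.label
        side = self.net.predict(x.reshape(1, -1))[0]
        return (self.left if side == 0 else self.right).predict_one(x)
```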
Improvement and comparison of the performance of fuzzing testing algorithms for applications in Google Thread Sanitizer
Oleg V. Doronin 734
It is difficult to imagine modern information systems without the use of multithreading. Multithreading can both improve the performance of the system as a whole and slow down the execution of multithreaded applications because of multithreaded programming errors. To find such errors in C/C++ programs, the Google Thread Sanitizer compiler module exists. The order of execution of threads can change every time the program is run and can affect whether such errors appear. To repeatedly change the order of execution of threads during program execution, Google Thread Sanitizer has a fuzzing testing module that makes it possible to increase the probability of finding errors. However, all the thread scheduling algorithms in this module are implemented as sequential execution of threads, which can lead to a significant slowdown of Google Thread Sanitizer and can affect the testing of applications that depend on timers (waiting for network events, deadlines for operations, ...). To speed up the work of fuzzing schedulers, a method for parallelizing independent transitions is proposed. From the point of view of multithreaded programming errors, only changes of shared state between threads are important, and local calculations do not affect the reproduction of multithreaded errors. The changes of shared state themselves occur at synchronization points (places in the code where threads are switched according to the principle of cooperative multitasking). The method suggests ordering only the changes of shared state at synchronization points and performing local calculations in parallel, which is how parallelization is achieved. For the analysis of the theoretical complexity of the algorithm, the method of combinatorial counting is used. A new approach to the organization of fuzzing testing based on the method of parallelization of independent transitions is proposed, the implementation of which, according to theoretical and practical estimates, shows a noticeable acceleration of the work of fuzzing schedulers. According to the results of the experiment, for the algorithm that iterates through all execution variants, the acceleration of execution reaches 1.25 times for two threads; for an arbitrary number of threads, an estimate is presented in the form of a formula. The proposed approach allows fuzzing tests to cover multithreaded applications for which execution time is important, namely applications that rely on timers, which improves the quality of the software.
A method for protecting neural networks from computer backdoor attacks based on the trigger identification
Artem B. Menisov, Aleksandr G. Lomako, Andrey S. Dudkin 742
Modern technologies for the development and operation of neural networks are vulnerable to computer attacks involving the introduction of software backdoors. Software backdoors can remain hidden indefinitely until activated by the input of modified data containing triggers. These backdoors pose a direct threat to information security for all components of an artificial intelligence system, and such influences of intruders lead to a deterioration in the quality, or a complete cessation, of the functioning of artificial intelligence systems. This paper proposes an original method for protecting neural networks, the essence of which is to create a database of ranked synthesized backdoor triggers for the target class of backdoor attacks. The proposed method for protecting neural networks is implemented through a sequence of protective actions: detecting a backdoor, identifying a trigger, and neutralizing a backdoor. Based on the proposed method, software and algorithmic support for testing neural networks has been developed that makes it possible to identify and neutralize computer backdoor attacks. Experimental studies have been carried out on various convolutional neural network architectures trained on datasets of aerial photographs (DOTA), handwritten digits (MNIST), and photographs of human faces (LFW). The decrease in the effectiveness of backdoor attacks (no more than 3 %) and the small losses in the quality of the functioning of the neural networks (by 8–10 % of the quality of functioning of a neural network without a backdoor) showed the success of the developed method. The use of the developed method for protecting neural networks allows information security specialists to purposefully counteract computer backdoor attacks on artificial intelligence systems and to develop automated information protection tools.
Software development system for creating adaptive user interfaces
Liliya F. Tagirova, Andrey V. Subbotin, Tatiana M. Zubkova 751
To improve the efficiency of the design engineer, the use of design automation systems is required. Currently, computer-aided design tools are multifunctional and have an extensive user interface. Depending on the scope of the task to be solved and the level of training, the design engineer does not need all the capabilities of computer-aided design systems. In this case, an adaptive interface can serve as a means of increasing labor productivity: it can be customized for a particular user, taking into account his experience and physiological features (system experience, computer literacy, experience with similar programs, typing skills, color blindness, memory, hand motor skills). The characteristics by which a user is evaluated have different degrees of uncertainty, ambiguity, and internal inconsistency; they are difficult to formalize and very specific. To perform the evaluation, it is advisable to use intelligent systems based on fuzzy logic and fuzzy sets. The most suitable in this case is the Mamdani method, which uses a minimax composition of fuzzy sets. The proposed mechanism includes a sequence of actions: fuzzification, fuzzy inference, composition, and defuzzification. A software development system has been created that makes it possible to form the interface part of the software taking into account the capabilities of a particular user. The implementation of the developed software system makes it possible to select a set of elements individually for each design engineer and to form an adaptive prototype of the application program interface. In this case, it becomes possible to improve the interaction between a person and a computer, make it more comfortable, reduce the time spent searching for the necessary functions and the number of erroneous actions, and improve the quality of the work done.
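The Mamdani sequence named above (fuzzification, inference, min-max composition, centroid defuzzification) is sketched below for a tiny rule base mapping user characteristics to interface complexity. The membership functions, 0..10 scales and the two rules are illustrative assumptions, not the paper's actual rule base.

```python
# Minimal sketch of Mamdani-style fuzzy inference with two assumed rules.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

z = np.linspace(0, 10, 501)                          # output universe: interface complexity
out_simple, out_full = tri(z, -5, 0, 5), tri(z, 5, 10, 15)

def infer(experience, typing_skill):
    """Inputs are assumed to be rated on 0..10 scales."""
    exp_low, exp_high = tri(experience, -5, 0, 5), tri(experience, 5, 10, 15)
    typ_low, typ_high = tri(typing_skill, -5, 0, 5), tri(typing_skill, 5, 10, 15)
    # Rule 1: low experience OR low typing skill -> simplified interface.
    # Rule 2: high experience AND high typing skill -> full-featured interface.
    w_simple = max(exp_low, typ_low)
    w_full = min(exp_high, typ_high)
    # Mamdani (min-max) composition: clip each output set, aggregate with max.
    aggregated = np.maximum(np.minimum(w_simple, out_simple), np.minimum(w_full, out_full))
    return np.sum(z * aggregated) / (np.sum(aggregated) + 1e-12)  # centroid defuzzification

print("suggested interface complexity:", infer(experience=3.0, typing_skill=8.0))
```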
A method of detecting information security incidents based on anomalies in the user’s biometric behavioral characteristics
Dmitry A. Esipov, Nargiz Aslanova, Egor E. Shabala, Daniil S. Shchetinin, Ilya Yu. Popov 760
Nowadays, a significant share of attacks on information systems are multi-stage attacks, and in many cases the key subjects of the attacks are insiders. The actions of an insider differ from the activity of a legitimate user, so a model of legitimate user behavior can be formed, and deviations from this model can then be classified as information security events or incidents. Existing approaches to anomaly detection in user activity use separate characteristics of user behavior without taking into account their interdependencies and their dependencies on various factors. The task of the study is to form a comprehensive characteristic of the user's behavior when using a computer, a “digital pattern”, for detecting information security events and incidents. The essence of the method is the formation of a digital pattern of the user's activity by analyzing his behavioral characteristics and their dependencies selected as predictors. The developed method involves forming the model through unsupervised machine learning. The following algorithms were considered: one-class support vector machine, isolation forest, and elliptic envelope. The Matthews correlation coefficient was chosen as the main quality metric for the models, but other indicators were also taken into consideration. According to the selected quality metrics, a comparative analysis of the algorithms with different parameters was conducted. An experiment was carried out to evaluate the developed method and compare its effectiveness with the closest analogue. Real data on the behavior of 138 users was used to train and evaluate the models within the studied methods. According to the results of the comparative analysis, the proposed method showed better performance for all the considered metrics, including an increase in the Matthews correlation coefficient by 0.6125 compared to the anomaly detection method based on keystroke dynamics. The proposed method can be used for continuous user authentication against unauthorized access and for identifying information security incidents related to the actions of insiders.
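A minimal sketch of the model comparison described above follows: the three unsupervised detectors are fitted on behavioral features and scored with the Matthews correlation coefficient. The data files, labels and hyperparameters are assumptions for illustration, not the study's dataset or tuned settings.

```python
# Minimal sketch with assumed feature matrices: compare one-class SVM, isolation forest
# and elliptic envelope on user-behavior features, scored with the Matthews coefficient.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.covariance import EllipticEnvelope
from sklearn.metrics import matthews_corrcoef

X_train = np.load("user_behavior_train.npy")   # hypothetical "digital pattern" features
X_test = np.load("user_behavior_test.npy")
y_test = np.load("labels_test.npy")            # 1 = legitimate, -1 = anomalous (insider)

models = {
    "one-class SVM": OneClassSVM(kernel="rbf", nu=0.05),
    "isolation forest": IsolationForest(n_estimators=200, contamination=0.05, random_state=0),
    "elliptic envelope": EllipticEnvelope(contamination=0.05),
}
for name, model in models.items():
    pred = model.fit(X_train).predict(X_test)  # sklearn returns 1 (inlier) / -1 (outlier)
    print(name, "MCC =", matthews_corrcoef(y_test, pred))
```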
Lightweight recommendation system for social networking analysis using a hybrid BERT-SVM classifier algorithm
Nallichery Subramanian Kiruthika, Ganapathy Thailambal 769
Social media platforms, such as Twitter, Instagram, and Facebook, have facilitated mass communication and connection. With the development and advancement of social platforms, the spread of fake news has increased. Many studies have been performed on detecting fake news with machine learning algorithms, but the existing methods face several difficulties, such as rapid propagation, access methods, insignificant feature selection, and low accuracy of text classification. To overcome these issues, this paper proposes a hybrid Bidirectional Encoder Representations from Transformers and Support Vector Machine (BERT-SVM) model with a recommendation system that is used to predict whether information is fake or real. The proposed model consists of three phases: preprocessing, feature selection, and classification. The dataset is gathered from the Twitter social network and is related to real-time COVID-19 data. The preprocessing stage comprises splitting, stop-word removal, lemmatization, and spell correction. A Term Frequency-Inverse Document Frequency (TF-IDF) converter is utilized to extract the features and convert text to binary vectors. A hybrid BERT-SVM classification model is used to predict the data, and the predicted data is compared with the preprocessed data. The proposed model is implemented in MATLAB, and several performance metrics are evaluated; the model attains an accuracy of 98 %, an error of 2 %, a precision of 99 %, a specificity of 99 %, and a sensitivity of 98 %, which shows that it is more effective than existing approaches. The proposed social networking analysis model provides effective fake news prediction that can be used to identify whether Twitter comments are real or fake.
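The feature-extraction and classification steps can be sketched as below with TF-IDF features feeding an SVM. This is only the TF-IDF plus SVM portion in Python; the paper's hybrid model additionally uses BERT representations and is implemented in MATLAB, and the file and column names here are assumptions.

```python
# Minimal sketch, not the authors' implementation: TF-IDF features and an SVM classifier
# for fake/real tweet labels on an assumed dataset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("covid_tweets.csv")   # hypothetical columns: text, label ("fake"/"real")
X_tr, X_te, y_tr, y_te = train_test_split(df["text"], df["label"], test_size=0.2, random_state=0)

model = make_pipeline(
    TfidfVectorizer(stop_words="english", lowercase=True, max_features=20000),
    SVC(kernel="linear", C=1.0),
)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```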
MODELING AND SIMULATION
Modeling of random processes based on Karhunen-Loeve decomposition
Alexandr S. Efimov 779
The problem of digital modeling of random processes with a given correlation function or spectral density is considered. These two functions of a random process are related by the Wiener–Khinchin theorem, so a solution obtained for one of them can be used for the other. A mathematical representation of a stationary random process with a given correlation function has been developed based on the Karhunen–Loève transformation, which is most often used to decorrelate the original process in order to describe it more concisely (the data compression problem). It is proposed to use the Karhunen–Loève transformation to impart the required correlation properties to an initially uncorrelated random process by inverting this transformation. The form of the required transformation for a discrete (in time) representation of input and output processes of various lengths, and methods for ensuring the required modeling accuracy, are substantiated. A procedure for obtaining a correlation function from a given spectral density of the simulated random process is presented. An experimental study of the proposed method was carried out by computer simulation in the Mathcad package, which simplified the solution of the required computational problems. The initial random process was obtained as a sequence of independent (and therefore uncorrelated) random numbers, and the output process was obtained as a result of the transformation. The calculated approximate correlation function is compared with the given one, and the error variance is determined. The results of modeling random processes with given correlation functions and a homogeneous Markov process with a given transition probability are given, as well as an example of the transition from a given spectral density of a random process to its correlation function. The results obtained confirm the effectiveness and feasibility of the developed modeling methods, which allows them to be used in computer research and the design of various systems.
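The inverse transformation described above can be sketched directly: build the covariance matrix from the target correlation function, take its eigendecomposition (the Karhunen-Loève basis), and map an uncorrelated sequence through it. The exponential correlation function and its parameters are illustrative assumptions (the paper uses Mathcad; this sketch uses NumPy).

```python
# Minimal sketch (assumed exponential correlation function): impart a given correlation
# to an uncorrelated sequence via x = Phi * sqrt(Lambda) * xi, with Phi, Lambda from the
# eigendecomposition of the target covariance matrix.
import numpy as np

n, sigma2, tau = 512, 1.0, 20.0                 # length, variance, correlation scale (assumed)
k = np.arange(n)
R = sigma2 * np.exp(-np.abs(k[:, None] - k[None, :]) / tau)   # target covariance matrix

eigvals, eigvecs = np.linalg.eigh(R)            # Karhunen-Loeve basis of the target process
xi = np.random.standard_normal(n)               # uncorrelated input sequence
x = eigvecs @ (np.sqrt(np.clip(eigvals, 0, None)) * xi)       # correlated output sequence

# quick check: the sample correlation at lag 1 should approach exp(-1/tau)
print(np.corrcoef(x[:-1], x[1:])[0, 1], np.exp(-1 / tau))
```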
Numerical dissipation control of a hybrid large-particle method in vortex instability problems
Dmitry V. Sadin 785
Current trends in the development of numerical schemes are associated with a decrease in dissipative and dispersion errors as well as an improvement in the grid convergence of the solution. Achieving these computational properties is not an easy problem, since a decrease in scheme viscosity is often associated with an increase in the oscillations of gas dynamic parameters. The paper presents a study of numerical dissipation control in gas dynamics problems aimed at increasing the resolution of the numerical reproduction of vortex instability at contact boundaries. To solve this problem, a hybrid large-particle method of the second order of approximation in space and time on smooth solutions is used. The method is constructed by splitting into physical processes in two stages: gradient acceleration and deformation of the finite volume of the medium, and convective transfer of the medium through its facets. An increase in the order of approximation in time is achieved by a time correction step. The regularization of the numerical solution at the first stage of the method consists in the nonlinear correction of the artificial viscosity which, regardless of the grid resolution, tends to zero in the regions where the solution is smooth. At the convective transport stage, the fluxes are reconstructed by an additive combination of central and upwind approximations. A mechanism for regulating the numerical dissipation of the method based on a new parametric limiter of artificial viscosity is proposed. The optimal tuning of the method with respect to the ratio of dissipative and dispersive properties of the numerical solution is achieved by setting the parameter of the limiting function. The efficiency of the method was tested on two-dimensional demonstration problems. In one of them, the contact surfaces are twisted into a spiral on which the Kelvin-Helmholtz vortex instability develops. The other is the classic problem of double Mach reflection of a strong shock wave. Comparison with modern numerical schemes has shown that the proposed variant of the hybrid large-particle method is highly competitive. For example, in the double Mach reflection problem, the considered version of the method surpasses the popular fifth-order WENO (Weighted Essentially Non-Oscillatory) scheme in terms of vortex resolution and is comparable to the numerical solution of the ninth-order WENO scheme. The proposed method can serve as the convective block of a numerical scheme when constructing a computational technology for modeling turbulence.
Numerical model of a pulsed subcritical streamer microwave discharge for problems of plasma ignition of fuel mixtures in the gas phase
Pavel V. Bulat, Konstantin N. Volkov, Anzhelika I. Melnikova, Maksim E. Renev 792
An approximate model for estimating plasma heating and conversion caused by a subcritical streamer microwave discharge has been considered and verified. Ignition occurs in an environment with a pressure of 13 kPa and a temperature of 150 K; there is an external air flow at a speed of up to 500 m/s; an initiator antenna and a flat mirror are used to focus the electromagnetic radiation; and a stoichiometric propane-air mixture or pure propane is supplied through a cavity in the antenna. The radiation power is 3 kW. The model is three-stage and semi-empirical. The plasma region and its conductivity are set based on experimental statistics; this is a key feature that reduces the consumption of computing resources. The finite element method is used. At the first stage, the Boltzmann equation for the electron gas in the medium is solved in the zero-dimensional formulation for the given parameters of the external electric field; the electron energy distribution functions and the reaction coefficient functions are obtained. At the second stage, the Helmholtz equations are solved to obtain the distribution of the electromagnetic fields near the initiator antenna, taking into account the given conducting “plasma” region. Based on the obtained distributions of the electric field, the Joule heating powers and the values of the reaction coefficients are calculated. At the third stage, the Navier-Stokes equations and the transfer equations of various types of particles for a compressible medium are solved, taking into account the combustion processes for the given local heating and plasma reactions. The results are compared with the data of a physical experiment. The distributions of temperature, medium composition, and medium velocity are considered for the given local heating power and the additional reactions in the plasma region. Under the conditions considered, a stoichiometric propane-air mixture and pure propane supplied through the antenna are ignited by the plasma: the mixture burns in a small area, and the propane is oxidized in a thin mixing layer with air. The temperature fields and the composition of the medium are compared with photographs of the flame from the experiment. The numerical study shows that under all the conditions considered the model gives results close to the experiment, but the power required for ignition is overestimated by up to a factor of two. The study of the processes of ignition of gaseous mixtures by a subcritical microwave discharge is of interest for the design of propulsion systems with increased reliability and with the possibility of using hardly flammable mixtures. The proposed model gives approximate estimates, while the requirements for computational resources and time are significantly reduced compared to classical models.
Numerical study on the straight, helical and spiral capillary tube for the CO2 refrigerant
Pravin Jadhav, Anjan Kumar Sahu, Sunita Ballal 804
A numerical study has been carried out for straight, spiral and helical capillary tubes, and their performance has been compared for the CO2 refrigerant. The numerical models are developed based on the fundamental conservation principles of mass, momentum, and energy. Within the outer loop, the ordinary differential equations are solved from the inlet to the exit of the capillary tube. The mass flow rate is calculated by the bisection method, in which the mass flow rate is iteratively calculated for the specified capillary length, or vice versa. The in-house code employs the finite difference approach for the numerical solution. The capillary tube is characterized either by calculating the length for a given mass flow rate or by calculating the mass flow rate for a given length. A comparison of the straight capillary with the helical capillary tube (50 mm coil diameter) and the spiral capillary tube (50 mm pitch) is reported. For changes in tube diameter, surface roughness, and length, the percentage reduction in mass flow rate in the capillary tubes (straight, helical, and spiral) is calculated. The percentage reduction in mass flow rate in a helical capillary tube compared to the straight capillary tube is about 7–9 %. The percentage reduction in mass flow rate in a spiral tube compared to the straight capillary tube is nearly 23–26 %, and compared to the helical capillary tube it is almost 17–19 %. Additionally, the percentage reduction in length in a spiral tube compared to the straight capillary tube ranges from 37 % to 43 %, and compared to the helical capillary tube it ranges from 25 % to 32 %.
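The bisection idea mentioned in the abstract can be sketched as follows: the mass flow rate is adjusted until the computed capillary length matches the specified one. The function computed_length() here is only a toy monotonic surrogate so the sketch runs; in the actual model it would integrate the conservation ODEs from inlet to exit, and the bracketing values are assumptions.

```python
# Minimal sketch of the bisection search for mass flow rate at a specified capillary length.
def computed_length(m_dot):
    """Toy surrogate: capillary length [m] for mass flow rate m_dot [kg/s]; stands in for
    the marching solution of the mass, momentum and energy equations along the tube."""
    return 2.4e-7 / m_dot ** 1.8

def mass_flow_by_bisection(target_length, m_low, m_high, tol=1e-6, max_iter=200):
    """Find m_dot such that computed_length(m_dot) = target_length.
    Assumes the length decreases monotonically as the mass flow rate increases."""
    for _ in range(max_iter):
        m_mid = 0.5 * (m_low + m_high)
        err = computed_length(m_mid) - target_length
        if abs(err) < tol:
            break
        if err > 0:            # tube comes out longer than required -> flow rate too small
            m_low = m_mid
        else:
            m_high = m_mid
    return m_mid

print(mass_flow_by_bisection(target_length=1.5, m_low=1e-4, m_high=1e-2), "kg/s")
```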
BRIEF PAPERS
Information reconstruction from noisy channel using ghost imaging method with spectral multiplexing in visible range
Egor N. Oparin, Vladimir S. Shumigai, Azat O. Ismagilov, Anton N. Tsypkin 812
The ghost imaging technique makes it possible to recover information about an object through noisy transmission channels when the noise is commensurate with the intensity of the speckle structures involved in the reconstruction. One of the main disadvantages of this technique is its relatively slow reconstruction speed, which limits its applicability to the study of dynamic processes or fast-moving objects. In this paper, we propose a modification of the computational ghost imaging technique that allows us to overcome this limitation. It is shown that spectral multiplexing of the speckle patterns speeds up the image reconstruction. Increasing the number of spectral channels from 4 to 10 increases the signal-to-noise ratio by a factor of 6. At the same time, under the same conditions and with the same number of measurements, classical monochrome ghost imaging does not reconstruct the image at all. This makes the proposed technique attractive for demanding high-speed applications such as communications and remote sensing.
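The basic computational ghost imaging reconstruction underlying the technique is sketched below: the bucket (single-pixel) signal is correlated with the known speckle patterns. Spectral multiplexing is imitated only by pooling patterns from several independent channels; the toy object, noise level and channel count are assumptions, and the paper's actual optical scheme is not reproduced.

```python
# Minimal sketch of correlation-based computational ghost imaging, G = <I*S> - <I><S>.
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
obj = np.zeros((H, W)); obj[10:22, 12:20] = 1.0          # toy transmissive object

def reconstruct(n_patterns, noise_level=0.5):
    patterns = rng.random((n_patterns, H, W))              # known (computed) speckle patterns
    bucket = patterns.reshape(n_patterns, -1) @ obj.ravel()            # single-pixel signal
    bucket += noise_level * bucket.std() * rng.standard_normal(n_patterns)  # channel noise
    # correlation reconstruction over the measurement ensemble
    return np.tensordot(bucket - bucket.mean(), patterns - patterns.mean(axis=0), axes=1) / n_patterns

single_channel = reconstruct(2000)                         # "monochrome" measurement budget
multiplexed = sum(reconstruct(2000) for _ in range(10))    # 10 pooled channels (assumed)
print(single_channel.shape, multiplexed.shape)
```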