Summaries of the Issue


Influence of dynamic range restrictions of the fiber-optic towed seismic streamer on seismogram quality
Alina N. Arzhanenkova, Mikhail Yu. Plotnikov, G.P. Miroshnichenko, Pavel Yu. Dmitraschenko
In this paper, we present the influence of the dynamic range of the fiber-optic streamer signal processing circuit on the quality of recorded seismograms. In contrast to existing hydroacoustic systems, which are based on piezoceramic transducers, dynamic range limitations in a fiber-optic towed streamer do not lead to clipping of acoustic signals but to complex nonlinear distortions that affect both the amplitude and phase frequency characteristics of the recorded signals. Therefore, the main task of this work is to assess the distortion of seismograms obtained from acoustic signals recorded under the limited dynamic range conditions of a fiber-optic towed seismic streamer. To solve this problem, seismic signals obtained during field tests of various types of seismic streamers in the water area of Kola Bay are used. The recorded acoustic signals, in the form of digital samples converted into radians using the known sensitivity coefficient of the hydrophones of the towed seismic streamer, are converted into digital samples of optical interference signals. These interference signals in digital form are equivalent to the real optical signals recorded by the receiving path in the signal processing unit of the fiber-optic streamer. The digital form of the recorded acoustic signals makes it possible to amplify them by multiplying by a given gain factor, simulating various energy levels of an acoustic source. These signals then served as input data for the mathematical model of the signal processing circuit of the fiber-optic towed streamer, which takes into account the fundamental limitations of the dynamic range due to the finite sampling frequency of the interference signal, the fixed frequency of the auxiliary phase modulation, and the finite bandwidth of the low-pass filters used.
Thus, it is possible to simulate the recording of acoustic signals by a fiber-optic towed seismic streamer both without distortion and under conditions of limited dynamic range. Signals from the output of the processing circuit model are then used to construct seismograms of the same shelf area at different levels of acoustic amplification using the reflected wave method. The results demonstrate that the dynamic range limitations of the signal processing circuit of a fiber-optic towed seismic streamer have a significant impact on seismogram quality, reducing signal detail and decreasing the amplitudes of the recorded waves (in the presented data, the amplitude decreases fivefold). Seismogram quality drops significantly in areas of sharp transition between layers of different densities, which generate the most distinct and strong reflected seismic vibrations. The results obtained are of great practical importance, since they make it possible to evaluate the effect of complex nonlinear distortions of acoustic signals, arising under the limited dynamic range of the signal processing circuit of a fiber-optic towed seismic streamer, on the resulting seismograms. These results are presented for the first time, as the developed fiber-optic towed seismic streamer has no analogues worldwide. In addition, taking into account the known sensitivity of the streamer's fiber-optic hydrophones, the constructed model of the signal processing path allows choosing the optimal energy of the acoustic source for seismic exploration with the most efficient use of the dynamic range of the fiber-optic streamer.
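The conversion described above — phase samples in radians turned into digitized interference signals with an auxiliary phase-modulation carrier — can be sketched as follows. This is a minimal illustration; the sampling rate, modulation frequency, modulation depth, and interference visibility below are assumed values, not the parameters of the actual streamer.

```python
import math

def interference_signal(phase_rad, fs=100_000.0, f_mod=20_000.0,
                        m=2.37, a=1.0, b=0.8):
    """Convert phase samples (radians) into interferometric intensity samples
    I[n] = a + b*cos(phi[n] + m*cos(2*pi*f_mod*n/fs)).
    fs, f_mod, m, a, b are illustrative values only."""
    out = []
    for n, phi in enumerate(phase_rad):
        carrier = m * math.cos(2.0 * math.pi * f_mod * n / fs)
        out.append(a + b * math.cos(phi + carrier))
    return out

def amplify(phase_rad, gain):
    """Simulate a higher acoustic source energy by scaling the phase signal."""
    return [gain * p for p in phase_rad]
```

The nonlinearity is visible directly in the formula: the phase enters through a cosine, so an over-amplified phase signal distorts the interference samples rather than clipping them.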


In this paper, we propose a new method for synthesizing the control of multi-input multi-output linear plants with a guarantee that controlled signals remain in given sets under conditions of unknown bounded disturbances. The problem is solved in two stages. At the first stage, the coordinate transformation method is used to reduce the original constrained problem to the problem of studying the input-to-state stability of a new extended system without constraints. At the second stage, the control law for the extended system is obtained by solving a series of linear matrix inequalities. To illustrate the effectiveness of the proposed method, a simulation in MATLAB/Simulink is given. The simulation results show that the controlled signals remain in the given sets and that all signals in the control system are bounded. The proposed method is recommended for use in control problems where it is required to maintain controlled signals in given sets, for example, control of an electric power network, control of the reservoir pressure maintenance process, etc.
The problem of studying the sensitivity of control processes to parameter variations is considered. To solve the problem, the trajectory sensitivity apparatus is used, which, together with the state-space method, makes it possible to construct sensitivity models. Based on these models, ellipsoidal estimates of the trajectory sensitivity functions in terms of the state, output, and error of linear multidimensional continuous systems are determined in the form of majorants and minorants. Calculations are performed using the generalized singular value decomposition of matrices composed of trajectory parametric sensitivity functions. The resulting ellipsoidal estimates, owing to the capabilities of the generalized singular value decomposition, have the property of minimal sufficiency. Using the left singular basis corresponding to the extremal generalized singular values, the estimation method makes it possible to determine subspaces in the state, output, and error spaces that are characterized at each moment of time by the largest and smallest additional motion in terms of the norm. The right singular basis allows determining subspaces in the parameter space that generate the largest and smallest additional motion in the norm. The proposed approach solves the “optimal nominal” problem, that is, the problem of choosing the nominal value of the parameter vector of the plant units that provides the multidimensional controlled process with the smallest ellipsoidal estimates of the trajectory sensitivity functions, and also makes it possible to compare multidimensional controlled processes according to the ellipsoidal estimates of trajectory parametric sensitivity.


The nonlinear viscoelasticity of uniaxially oriented polymer materials is considered. To explain the deformation mechanisms of oriented polymers and enable prediction of their mechanical behavior in various operating modes, new nonlinear rheological models are proposed. The application of the simplest rheological model of a real viscoelastic solid to the description and explanation of the recovery process in polymer materials is studied. From the standpoint of rheology, a model of an ideal viscoelastic solid is introduced. Using the balance equation for the number of transitions through energy barriers, a method for calculating the new nonlinear rheological model is proposed. To eliminate the shortcomings of the ideal viscoelastic solid model, associated with the impossibility of predicting creep and stress relaxation at long times, a generalized rheological model of a real viscoelastic solid was obtained. In the corrected model, the simplest elements are connected in parallel, which corresponds to the presence of not one but several energy barriers in the material, transitions through which have their own relaxation times. To describe the recovery processes in polymer materials, the model of an ideal viscoelastic solid is supplemented by a parallel-connected elastic spring. The additional Hooke spring represents the interfibrillar interaction between individual elements of the structure and is responsible for possible obstacles during jump-like transitions through the energy barrier. Using the method of rheological models and describing interfibrillar bonds within the framework of the theory of elasticity, a constitutive equation was obtained for the case of recovery processes. Based on the constitutive equation of viscoelasticity for uniaxially oriented polymer materials, a new nonlinear highly elastic element is introduced, which replaces the Maxwell element of the theory of linear viscoelasticity.
A new rheological model of parallel-connected elastic elements is shown. An explanation of the retardation of the recovery process in polymers is given. The simplest rheological model of a real viscoelastic solid is proposed, in which an elastic spring accounts for interfibrillar bonds. A constitutive equation is obtained that describes the recovery process in polymers. This equation can be integrated by quadratures and gives a solution that is an analogue of the Newton-Leibniz formula for the proposed model. It is shown that the deformation recovery process in polymers does not depend on the level of initial deformation or the loading method. This result is confirmed by experimental data for polyamide and polyethylene film yarns. By specifying a certain initial level of deformation, generalized recovery curves of these materials are obtained. The proposed simplest rheological model of a real viscoelastic solid makes it possible to predict the recovery properties of polymer materials, as well as to determine the height of the energy barrier and the magnitude of the elastic modulus in the model. Based on the new rheological models, it is planned to consider modeling and forecasting of different deformation modes in the future.
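For orientation, strain recovery in the classical linear case can be sketched numerically. The sketch below uses a single Kelvin-Voigt element, which is only a linear baseline: the paper's nonlinear model, with energy-barrier transitions and an interfibrillar Hooke spring, is not reproduced here.

```python
import math

def kelvin_voigt_recovery(eps0, modulus, viscosity, times):
    """Strain recovery of one linear Kelvin-Voigt element after unloading:
    eps(t) = eps0 * exp(-t / tau), tau = viscosity / modulus.
    Shown only as a classical baseline, not the paper's nonlinear model."""
    tau = viscosity / modulus
    return [eps0 * math.exp(-t / tau) for t in times]
```

In the linear element the recovery curve scales with the initial strain eps0, whereas the abstract's key experimental observation is that, in the proposed nonlinear model, the shape of the recovery process does not depend on the initial deformation level.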


Speech coding is one of the methods to represent a digital speech signal in as few bits as possible while maintaining its quality and clarity. Encryption and analysis of speech play a crucial role in various acoustic-based coding systems. In this paper, subband and Huffman coding techniques are used to describe speech signals and reduce the memory occupied by the speech data. The amplitude values of the captured speech are segregated after pre-processing, windowing and decomposition. These data are converted into the frequency domain using the discrete cosine transform (DCT). The 90 leading coefficients, which contain the most valuable information of the speech signals, are then coded by the Huffman method. The signals are then segregated and subband coding techniques are applied. To reconstruct the input speech, the coded speech is transformed back into the time domain using the inverse discrete cosine transform (IDCT). The experiment is carried out on speech data sampled at 8 kHz with 16 bits per sample. The SNR (Signal to Noise Ratio) shows the efficiency of the applied technique.
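Two of the building blocks mentioned above — the DCT/IDCT pair and Huffman coding — can be sketched in pure Python. This is a minimal illustration (naive O(N²) transforms; windowing, subband filtering, and the exact quantization used in the paper are omitted).

```python
import heapq
import math
from collections import Counter

def dct(x):
    """Naive DCT-II: X[k] = sum_n x[n]*cos(pi*k*(2n+1)/(2N))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X):
    """Inverse of the DCT-II above (DCT-III with the usual 2/N scaling)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol list."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol case
        return {next(iter(heap[0][2])): "0"}
    tie = len(heap)                         # tie-breaker so dicts never compare
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]
```

Keeping only the leading coefficients then amounts to truncating `dct(x)` to the first K entries (90 in the paper) before entropy coding; more frequent quantized coefficients receive shorter Huffman codewords.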
Web-based attacks exploit vulnerabilities of end users and their systems and perform malicious activities such as stealing sensitive information, injecting malware, and redirecting to malicious sites without the user's knowledge. Malicious website links are spread through social media posts, emails and messages. The victim can be an individual or an organization, and such attacks cause huge financial losses every year. A recent Internet Security report states that 83 % of systems on the internet were infected by malware during the last 12 months because users are not aware of malicious URLs (Uniform Resource Locators) and their impact. There are several methods to detect and prevent access to malicious domain names on the internet; they fall into three categories: blacklist-based approaches, heuristic-based methods, and machine/deep learning-based methods. This study provides a machine learning-based lightweight solution to classify malicious domain names. Most of the existing research is focused on increasing the number of features for better classification accuracy, whereas the proposed approach uses a smaller number of features, including lexical, content-based, bag-of-words, and popularity features, for malicious domain classification. Experimental results show that the proposed approach performs better than the existing one.
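As a sketch of what lexical URL features look like in practice, the snippet below extracts a few common ones; the feature names, suspicious-word list, and thresholds are illustrative assumptions, and the paper's content-based, bag-of-words, and popularity features are not reproduced.

```python
import math
from urllib.parse import urlparse

# Hypothetical keyword list, not the one used in the paper
SUSPICIOUS = ("login", "verify", "update", "secure", "account", "free")

def entropy(s):
    """Shannon entropy of a string, in bits per character."""
    if not s:
        return 0.0
    freqs = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freqs.values())

def lexical_features(url):
    """A few illustrative lexical features of a URL."""
    host = urlparse(url).netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_digits": sum(c.isdigit() for c in url),
        "num_dots": host.count("."),
        "has_ip_host": host.replace(".", "").isdigit(),
        "host_entropy": round(entropy(host), 3),
        "suspicious_words": sum(w in url.lower() for w in SUSPICIOUS),
    }
```

A classifier such as an SVM would then be trained on vectors of these values; keeping the feature set small, as the abstract argues, keeps both extraction and inference lightweight.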
A simulation model of a computer system built in the Simulink (SimEvents) environment is considered. According to queuing theory, the system is classified as G/G/n/∞. This means that there are multiple input streams in the system, the queue is infinite, and two feedbacks are applied. These feedbacks reflect repeated processing in case of failure or the lack of a solution at the first processing attempt. The system architecture under consideration is focused on parallel processing of a certain class of tasks, while the tasks themselves are data-independent. The model is investigated for uniformly distributed and exponential input streams. The situation of continuous streams is considered for several types of tasks, for which the priorities and the numbers of partitioning fragments vary. The number of fragments determines the degree of parallelism in the execution of a task. The paper shows a method for automatically determining the optimal number of task fragments to guarantee completion within the target period. The use of sporadic control mechanisms for the number of task fragments received in a continuous stream, and for managing the priorities of each of the task fragments, is proposed. The proposed sporadic management mechanism made it possible to significantly speed up task completion within the target deadline. As a result, the load on the computing system has been reduced and the efficiency of its operation has been increased. The use of the proposed algorithms significantly simplifies the scheduling mechanisms in the computer system, making it possible to eliminate the scheduler.
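The core sizing step — picking the number of fragments so a task finishes by its deadline — reduces, under an idealized linear speed-up assumption, to a one-line calculation. This simplified sketch ignores queuing effects, which the paper's simulation model accounts for.

```python
import math

def fragments_for_deadline(serial_time, deadline, overhead_per_fragment=0.0):
    """Smallest fragment count k with serial_time / k + overhead <= deadline,
    assuming data-independent fragments run fully in parallel (idealized)."""
    if deadline <= overhead_per_fragment:
        raise ValueError("deadline too tight for any fragment count")
    return max(1, math.ceil(serial_time / (deadline - overhead_per_fragment)))
```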
Methods of local feature extraction for person authentication by face thermographic images
Nikita I. Belov, Maxim A. Ermak, Evgeny A. Dubinich, Alexander Yu. Kouznetsov
The paper presents a study of image local feature extraction methods in relation to the problem of person authentication by face thermograms. As part of the study, two datasets were formed for training and testing the methods: photographic images and face images in the long-wavelength infrared (LWIR) spectrum, captured under various conditions. The novelty of this study lies in the approach to collecting datasets for verifying the accuracy of authentication methods. The dataset was collected under realistic conditions that affect the quality of authentication, such as changing facial expressions, wearing glasses or medical masks, applying make-up/cosmetics, changing the lighting and temperature conditions of the environment, and rotating the head. The core of the methods is the idea of constructing a vector of image features while reducing the dimension and highlighting boundaries. Methods of this group cope well with the feature extraction task and are widely used in authentication by 2D face images, as well as in other computer vision tasks. In this paper, four classical methods of local feature extraction are considered: local binary patterns, Gabor wavelets, the scale-invariant feature transform (SIFT), and the Weber local descriptor. The classifiers used for comparing feature vectors are SVM and the simplest perceptron, the basic methods of machine learning. A comparative analysis of each method was carried out on the collected datasets. The methods were trained and tested on a collected face dataset of over 632,000 images of 152 people. The comparative analysis shows that the local binary patterns method demonstrates the best result among the considered methods for both types of data: for face thermograms (0.57 for SVM, 0.58 for the perceptron) and for photographic images (0.71 for SVM, 0.73 for the perceptron).
Furthermore, the SIFT method showed similar results: for face thermograms (0.58 for SVM, 0.55 for the perceptron) and for photographic images (0.72 for SVM, 0.74 for the perceptron). Gabor filters and the Weber local descriptor demonstrate low accuracy in the authentication task for both types of data. The results of the work can be used in access control and management systems to increase the robustness of person authentication. The considered methods are effective in tasks such as processing thermograms for authenticating a person by so-called “secondary” features, for example, by the patterns of veins and vessels on the face, in cases where facial expression and appearance change.
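The best-performing descriptor above, the local binary pattern, is simple enough to sketch directly: each pixel is encoded by comparing it with its 8 neighbours, and the codes are pooled into a histogram feature vector. The neighbour ordering below is one common convention, an assumption rather than the paper's exact variant.

```python
def lbp_image(img):
    """Basic 3x3 local binary pattern code for each interior pixel of a
    grayscale image given as a list of lists."""
    h, w = len(img), len(img[0])
    # 8 neighbours, clockwise from top-left; the bit order is a convention
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            out[y - 1][x - 1] = code
    return out

def lbp_histogram(codes, bins=256):
    """Pool LBP codes into a normalized histogram feature vector."""
    hist = [0.0] * bins
    n = 0
    for row in codes:
        for c in row:
            hist[c] += 1
            n += 1
    return [v / n for v in hist]
```

The normalized histogram is what would be fed to the SVM or perceptron; because the comparison is with the centre pixel, the code is invariant to monotonic brightness changes, which helps under the varying thermal conditions the dataset was built to capture.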
Classification of short texts using a wave model
Anastasia S. Gruzdeva, Igor A. Bessmertny
Quantum computing algorithms are actively developed and applied in the field of natural language processing. The authors of the paper propose a new quantum-like method for classifying short texts. The basis of the method is the representation of the text as an ensemble of elementary particles. The value of the detection probability amplitude of a given ensemble at selected points in space is chosen as the classification criterion. Here, the space is understood as a vector space described using a distributional-semantic model of the language. The authors suggest one possible interpretation of the parameters of the wave function that describes the behavior of an elementary particle, as well as an algorithm for calculating the probability amplitude taking these parameters into account. For the experimental evaluation of the described method, the authors classified Internet communities by topic. The names and the “information” section of the communities were used for the analysis. In total, 100 groups of the social network “VKontakte” belonging to five different topics were taken. The proposed model showed rather high classification accuracy (91 % overall on the data set and from 75 % to 95 % within individual classes). The proposed model is intended to be used to classify user comments about goods, services and events, as well as to determine some properties of the psychological portraits of users of online communities.
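The quantum-like criterion — a squared sum of complex amplitudes, maximal under constructive interference — can be illustrated with a toy sketch. The phase encoding and the class scoring rule below are illustrative assumptions, not the paper's actual wave-function parameterization.

```python
import cmath

def detection_probability(phases, weights=None):
    """Squared magnitude of a weighted sum of unit complex amplitudes:
    maximal when the phases align (constructive interference)."""
    n = len(phases)
    weights = weights or [1.0 / n] * n
    amp = sum(w * cmath.exp(1j * p) for w, p in zip(weights, phases))
    return abs(amp) ** 2

def classify(doc_phases, class_refs):
    """Superpose the document's mean wave with each class's reference wave
    and pick the class with the largest detection probability (a toy
    stand-in for the paper's criterion)."""
    n = len(doc_phases)
    doc = sum(cmath.exp(1j * p) for p in doc_phases) / n
    scores = {c: abs(doc + cmath.exp(1j * ref)) ** 2
              for c, ref in class_refs.items()}
    return max(scores, key=scores.get)
```

The point of the quantum-like formulation is visible here: unlike an ordinary probability sum, terms with opposed phases cancel, so a text whose words "interfere destructively" with a class reference scores near zero.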
Algorithm for energy-efficient interaction of wireless sensor network nodes
Tatiana M. Tatarnikova, Farabi Bimbetov, Elena V. Gorina
The topical problem of developing energy-saving interaction methods for wireless sensor networks is discussed. It is shown that the operation of a wireless sensor network is built around compromise mechanisms that make it possible to extend the life of the network given the low-power sensor nodes on which the network is built. It is concluded that new algorithms need to be introduced into the operation of wireless sensor networks, which reduce the number of operations when calculating a route, transmitting data, or performing other operations without losing functionality, thereby reducing energy consumption. The paper proposes one such algorithm, which develops the idea of clustering wireless sensor networks in order to reduce the power consumption of sensor nodes by transferring some of their functions to the cluster head nodes. Unlike the well-known low-energy adaptive clustering algorithm LEACH, the proposed algorithm is based on swarm intelligence and allows choosing not only the cluster head nodes for the current round of wireless sensor network operation, but also promising nodes that become cluster heads in subsequent rounds. If one cycle of the wireless sensor network consists of a certain predetermined number of rounds, then the procedure for searching for cluster heads can be performed not at the beginning of each round, but only at the beginning of each cycle of the wireless sensor network. It is shown that determining the heads of wireless sensor network clusters in advance makes it possible to reduce the total energy consumption and thereby increase the duration of the network life cycle.
The advantage of adding the bee swarm algorithm to the wireless sensor network clustering procedure is demonstrated in terms of such indicators as the time of death of the first sensor node, the dependence of the number of functioning nodes on the network operation time, and the data packet delivery coefficient. The clustering procedure with the bee swarm algorithm selecting cluster heads in advance can be useful when deploying a wireless sensor network in real applications.
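The key scheduling idea — electing cluster heads for the current round plus reserve heads for future rounds, so the election runs once per cycle — can be sketched with a simple fitness ranking. The fitness function and the greedy ranking below are illustrative stand-ins for the paper's bee-swarm search, not its actual algorithm.

```python
import math

def fitness(node, base_station):
    """Toy fitness: residual energy divided by distance to the base station.
    A real bee-swarm search would iteratively refine candidate heads."""
    d = math.dist((node["x"], node["y"]), base_station)
    return node["energy"] / (1.0 + d)

def select_heads(nodes, base_station, n_heads, rounds_ahead=2):
    """Rank nodes by fitness and reserve head sets for the current round plus
    `rounds_ahead` future rounds, so re-election runs once per cycle."""
    ranked = sorted(nodes, key=lambda n: fitness(n, base_station), reverse=True)
    schedule = []
    for r in range(rounds_ahead + 1):
        schedule.append([n["id"] for n in ranked[r * n_heads:(r + 1) * n_heads]])
    return schedule
```

Rotating through the precomputed schedule spreads the energy-hungry head role across high-fitness nodes without repeating the election every round.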
Auxiliary arbitrary waveform generator for fiber optic gyroscope
Vladimir N. Kuznetsov, Elisey V. Litvinov, Evgenii V. Vostrikov, Ivan G. Deyneka
Constructing twitter corpus of Iraqi Arabic Dialect (CIAD) for sentiment analysis
Mohammed M. Hassoun Al-Jawad, Hasaneun Alharbi, Ahmed Almukhtar, Anwar A. Alnawas
The number of Twitter users in Iraq has increased significantly in recent years. Major events and the political situation in the country have had a significant impact on the content of Twitter and affected the tweets of Iraqi users. Creating an Iraqi Arabic Dialect corpus is crucial for sentiment analysis of such behavior. Since no such corpus existed, this paper introduces the Corpus of Iraqi Arabic Dialect (CIAD). The corpus has been collected, annotated, and made publicly accessible to other researchers for further investigation. Furthermore, the created corpus has been validated using eight different combinations of four feature-selection approaches and two versions of the Support Vector Machine (SVM) algorithm. Various performance measures were calculated. The obtained accuracy of 78 % indicates promising potential applications.
Problems of Wireless Sensor Networks (WSN) are associated with a significant increase in the number of devices on these networks. In this regard, the requirements for protecting and securing WSN from external influences are increasing significantly. WSN security problems are addressed by solving problems of optimal path routing, energy conservation, and so on. This paper proposes a hybrid model of an efficient packet routing and delivery system to prevent black-hole attacks. This type of attack is considered the most common on the network due to its unique characteristics. To detect such attacks, a deep learning model using a Convolutional Neural Network (CNN) is proposed. The learning algorithm must be reliable and trustworthy so that attack analysis can be considered at different levels to study the intelligent behavior of network attacks. The paper considers the problem of finding the optimal shortest path using Deep Q-Learning and convolutional neural networks to perform efficient routing and delivery of packets in a safer way. As a result of simulation, the achieved accuracy reached 98.57 %.
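The shortest-path-by-reinforcement-learning idea can be illustrated in its simplest form. The sketch below uses a plain Q-table instead of the paper's deep CNN approximator, with a reward of -1 per hop so the greedy policy converges to a shortest path on an unweighted graph; all hyperparameters are illustrative.

```python
import random

def q_route(adj, src, dst, episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning toy for shortest-path routing (a stand-in for the
    paper's Deep Q-Learning): each hop costs -1, so maximizing return
    minimizes hop count."""
    rng = random.Random(seed)
    q = {(u, v): 0.0 for u in adj for v in adj[u]}
    for _ in range(episodes):
        u = rng.choice(list(adj))
        for _ in range(4 * len(adj)):           # cap episode length
            if u == dst:
                break
            v = (rng.choice(adj[u]) if rng.random() < eps
                 else max(adj[u], key=lambda w: q[(u, w)]))
            best_next = 0.0 if v == dst else max(q[(v, w)] for w in adj[v])
            q[(u, v)] += alpha * (-1.0 + gamma * best_next - q[(u, v)])
            u = v
    path, u = [src], src                         # extract the greedy route
    while u != dst and len(path) <= len(adj):
        u = max(adj[u], key=lambda w: q[(u, w)])
        path.append(u)
    return path
```

In the security setting described above, nodes flagged as black holes by the detection model would simply be dropped from `adj` before (or during) learning, steering traffic around them.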
Modern variations of McEliece and Niederreiter cryptosystems
Vadim V. Davydov, Vladislav V. Beliaev, Elizar F. Kustov, Anton G. Leevik, Sergey V. Bezzateev
Classical cryptosystems proposed by Robert McEliece (1978) and Harold Niederreiter (1986) and their modern variations are studied. A detailed review of five code-based public key cryptosystems is presented. It is shown that some modern interpretations of the classical McEliece and Niederreiter cryptosystems have significant issues. In particular, it has been established that the XGRS cryptosystem based on extended Reed-Solomon codes does not provide the declared level of security against the information set decoding attack and also contains a number of inaccuracies. It is shown that the key generation and decryption times in modern cryptosystems are quite large, and the public and private keys occupy a large amount of memory. The inaccuracies of the considered schemes revealed in this work can be used to improve and adjust the systems, as well as to build a more accurate assessment of their security level and efficiency. The presented cryptosystems can be considered as candidates for post-quantum cryptography standards and can be used to protect data after the development of powerful quantum computers.
The paper deals with Wireless Sensor Networks (WSN) registered in a specific Internet of Things (IoT) network that have different kinds of applications. They come into use once they are successfully registered within a specific IoT network. Elliptic Curve Cryptography (ECC) with a token-based security scheme is proposed here for secure and authenticated communication. A lightweight authentication mechanism is also proposed to protect the network from unauthorized access. Network nodes are issued token keys immediately after login, and the gateway generates a token ID for each individual node. Elliptic curve cryptography is then applied to completely remove malicious nodes if any adversaries were missed during the token key verification process. If a user needs to access the data, he must go through the token key generation and verification phase, as well as the data integrity and transmission phase.
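The elliptic-curve primitive underlying such schemes can be shown on a deliberately tiny curve. The sketch below implements point addition and double-and-add scalar multiplication and uses them for a toy Diffie-Hellman key agreement; the curve, base point, and key sizes are illustrative only (far too small for real security), and the paper's token protocol itself is not reproduced.

```python
# Toy elliptic curve y^2 = x^3 + 2x + 3 over F_97 (illustrative size only)
P, A = 97, 2
G = (3, 6)  # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def _inv(x):
    """Modular inverse via Fermat's little theorem (P is prime)."""
    return pow(x, P - 2, P)

def point_add(p, q):
    """Add two points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                               # p + (-p) = infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * _inv(2 * y1) % P
    else:
        lam = (y2 - y1) * _inv((x2 - x1) % P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, p):
    """Double-and-add scalar multiplication k*p."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, p)
        p = point_add(p, p)
        k >>= 1
    return result
```

Both parties compute the same shared point because scalar multiplication commutes: a\*(b\*G) = b\*(a\*G). In a real deployment, a standardized curve and a vetted library would be used instead of hand-rolled arithmetic.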


Model of the acoustic path of a separate-combined optical-acoustic transducer
Alexei V. Fedorov, Vladimir A. Bychenok, Igor Berkutov, Irina E. Alifanova
Ultrasonic testing methods occupy one of the key positions in flaw detection, structural inspection, and assessment of the strength characteristics of materials and the stress-strain state of products. The method is based on the phenomenon of acoustoelasticity and makes it possible to monitor the stress-strain state of products through changes in the propagation velocity of a longitudinal subsurface ultrasonic wave. To excite acoustic waves, a separate-combined optical-acoustic transducer and a laser-ultrasonic flaw detector are used. The design of the separate-combined optical-acoustic transducer should ensure accurate measurement of the time it takes for a longitudinal subsurface wave to reach the receiver of acoustic oscillations. To analyze the recorded acoustic signals and extract from them the signal of the longitudinal subsurface wave, this work proposes and develops a finite element model of the acoustic path of the separate-combined optical-acoustic transducer. The finite element model was implemented in the COMSOL Multiphysics software package using an explicit solver based on the discontinuous Galerkin method. The developed finite element model makes it possible to visualize the displacement fields of acoustic oscillations, obtain A-scans, and calculate the arrival time of the longitudinal subsurface wave at the receiver of the optical-acoustic transducer. The calculated arrival times of the longitudinal subsurface wave at the receiver of the optical-acoustic transducer are compared with the results of a full-scale experiment. Calculations and full-scale experiments were performed for steel plates of various thicknesses. The adequacy of the model was confirmed using the Fisher criterion (F-test).
The A-scans obtained as a result of the simulation made it possible to identify the signals recorded by the optical-acoustic transducer: the signal of the longitudinal subsurface wave, the signals of the head and reflected transverse waves, and the intrinsic noise of the transducer. The developed model makes it possible to single out the signal of the longitudinal subsurface wave among the recorded signals of the optical-acoustic transducer. The proposed model can be used in the design of new optical-acoustic transducers, as well as in non-destructive testing (NDT) and materials science.
In large cities, the coverage and quality of telecommunication services inside buildings is low and needs further improvement. Attenuating factors such as the number of walls, the number of floors, and others affect the quality of the received signal. Therefore, femtocells are used to improve the poor network performance inside buildings and in other dead-zone areas. The aim of this work is to compare the received signal strength of a femtocell with rectangular and circular microstrip patch antennas designed at 2.55 GHz. The performance analysis uses the Multi Wall Multi Floor indoor propagation model. The results show that the total gains of the rectangular and circular microstrip patch antennas are 3.6528 dB and 2.924 dB, respectively. The received signal strength of the femtocell with the rectangular microstrip patch antenna is larger than that with the circular microstrip patch antenna designed at the same frequency. The effect of the number of walls and floors on the received signal strength of the femtocell is also clearly indicated. Overall, a much higher received signal strength is observed with the rectangular microstrip patch antenna than with the circular one. The outcome of this work shows that femtocells are capable of enhancing signal quality for indoor users.
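The Multi Wall Multi Floor analysis above can be sketched with the standard form of the model: free-space-like distance loss plus per-wall and per-floor penalties. The reference loss, path-loss exponent, and per-wall/per-floor figures below are common illustrative values, not the ones used in the paper; only the antenna gains (3.6528 dB and 2.924 dB) are taken from the abstract.

```python
import math

def mwmf_path_loss(d_m, n_walls, n_floors, pl0=40.0, n_exp=2.8,
                   wall_loss=3.4, floor_loss=18.3):
    """Multi Wall Multi Floor path loss in dB:
    PL = PL0 + 10*n*log10(d) + walls*L_wall + floors*L_floor.
    Constants are illustrative indoor figures."""
    return (pl0 + 10.0 * n_exp * math.log10(max(d_m, 1.0))
            + n_walls * wall_loss + n_floors * floor_loss)

def received_power_dbm(tx_dbm, antenna_gain_db, d_m, n_walls, n_floors):
    """Received signal strength = Tx power + antenna gain - path loss."""
    return tx_dbm + antenna_gain_db - mwmf_path_loss(d_m, n_walls, n_floors)
```

Because everything else cancels, the rectangular antenna's RSS advantage at any point equals its gain advantage, 3.6528 - 2.924 ≈ 0.73 dB, while each extra wall or floor lowers both curves by the same penalty.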
Whirlpool Hash Mutual Biometric Serpent Authentication (WPHMBSA) for secured data access in cloud environment
Krishnan Mohana Prabha, Perumal Raja Vidhya Saraswathi, Balamurali Saminathan
Cloud systems allow data sharing capabilities that provide several benefits to users and organizations. However, existing methods did not improve authentication accuracy (AA) or reduce time consumption. To increase authentication accuracy, the Whirlpool Hash Mutual Biometric Serpent Authentication (WPHMBSA) technique is designed to access data on a server in a secure manner. During the registration process, users' data are registered and stored on the server. After registration, the cloud server generates an ID and password for every registered user. For authentication, the user needs to log in to the cloud server with the ID and password. During authentication, the WPHMBSA technique authenticates the biometric keys of the users. When a user is legitimate, the WPHMBSA technique confirms their authenticity to the server. Experimental evaluation of the WPHMBSA technique and existing methods is performed on various parameters with varying amounts of cloud user information. The experimental results show that the WPHMBSA technique achieves a high accuracy and confidentiality rate within minimum time.
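The general pattern behind such hash-based authentication — store a salted credential hash, then verify a challenge-response instead of transmitting the secret — can be sketched with standard-library primitives. Whirlpool is not guaranteed to be available in every `hashlib` build, so SHA-512 (the same 512-bit digest size) is used here as a stand-in; the Serpent cipher and biometric key handling from the paper are not reproduced.

```python
import hashlib
import hmac
import os

def register(password, salt=None):
    """Server side: store a salt and a salted hash, never the raw secret."""
    salt = salt or os.urandom(16)
    return salt, hashlib.sha512(salt + password.encode()).digest()

def prove(password, salt, challenge):
    """Client side: answer a server challenge with an HMAC keyed by the
    salted credential hash, so the secret itself is never sent."""
    key = hashlib.sha512(salt + password.encode()).digest()
    return hmac.new(key, challenge, hashlib.sha512).digest()

def verify(stored_hash, challenge, response):
    """Server side: recompute the expected response and compare in
    constant time."""
    expected = hmac.new(stored_hash, challenge, hashlib.sha512).digest()
    return hmac.compare_digest(expected, response)
```

Mutual authentication follows by running the same challenge-response in both directions: the server proves knowledge of the stored credential to the client with a second, client-issued nonce.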
Mobile Ad hoc Networks (MANET) are infrastructure-less, autonomous wireless networks of mobile nodes that dynamically establish data transmission connections. Because of dynamic topology changes, MANET routes are unstable and break repeatedly. Hence, providing efficient and reliable data delivery with effective utilization of network resources is a challenging issue in MANET. This paper proposes an Instant-runoff Ranked Decision Forests Probit Regression-based Connectionist Multilayer Deep Neural Network (IRDFPR-CMDNN) for efficient data transmission and higher data delivery with minimal end-to-end delay. The IRDFPR-CMDNN method performs route identification, data delivery, and route maintenance using more than three layers. The mobile nodes are fed into the input layer of the Connectionist Multilayer Deep Neural Network. In the first hidden layer, the Instant-runoff Ranked Decision Forests algorithm classifies the mobile nodes according to their residual energy and load capacity. For the selected mobile nodes, Probit Regression is applied in the second hidden layer to find the nearest neighboring nodes, based on link quality and received signal strength, for route path establishment. Multiple routing paths are then established from the source to the destination node, and data transmission begins. If a link fails during transmission, an alternative route with better link quality is selected. In this way, energy-efficient data transmission is performed from source to destination with a higher data delivery rate and minimal time consumption. Experimental evaluation is carried out on energy consumption, packet delivery ratio, packet drop rate, throughput, and end-to-end delay with varying numbers of mobile nodes and data packets. Simulation results show that the IRDFPR-CMDNN technique effectively enhances data delivery and throughput while minimizing energy consumption, packet loss rate, and delay compared with conventional methods.
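The two selection stages described above can be sketched with simple rules: filter nodes by residual energy and load capacity, then rank the remaining candidates by link quality and received signal strength. The thresholds, field names, and scoring below are illustrative stand-ins, not the paper's trained IRDFPR-CMDNN model.

```python
def eligible_nodes(nodes, min_energy=0.4, max_load=0.8):
    """First stage: keep nodes with enough residual energy and spare
    load capacity (hypothetical thresholds on normalized values)."""
    return [n for n in nodes if n["energy"] >= min_energy and n["load"] <= max_load]

def best_neighbor(candidates):
    """Second stage: pick the neighbor with the best combined link
    quality and received signal strength for the next hop."""
    return max(candidates, key=lambda n: n["link_quality"] + n["rss"])
```

On link failure, the same ranking would simply be re-run over the surviving candidates to pick the alternative route.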
The electromagnetic-acoustic method is applied to thickness control of electrically conductive products. The method is based on the electrodynamic interaction of eddy currents, induced in an electrically conductive material, with an external magnetic field. The generated acoustic waves undergo multiple reflections from the media interfaces, and the recorded reflected signal allows determining the product thickness. An electromagnetic-acoustic transducer includes a magnetic system and generating and receiving coils. The thickness measurement accuracy for the control object is determined by the geometry of the generating and receiving coils, as well as by the size of the gap between them. Assessing this effect from experimental data alone is a rather difficult task. To solve this problem, a numerical model of acoustic wave propagation in a plate under electromagnetic-acoustic thickness measurement is proposed and developed. The numerical model is implemented in the COMSOL Multiphysics software environment using a discontinuous high-order Galerkin method with an explicit time integration scheme. Model adequacy was confirmed by comparing the simulation results with a full-scale experiment, for which a specialized electromagnetic-acoustic thickness gauge with a transducer was used. To estimate the uncertainty of the thickness measurements, the array of received signal values was processed in the MathCad software environment. The influence of the transducer design on the thickness measurement accuracy was estimated. Based on the investigation results, conclusions are drawn and general recommendations are given for the design of an electromagnetic-acoustic transducer and for the development of object thickness measurement techniques.
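The measurement principle behind the multiple reflections can be shown with the standard time-of-flight relation: successive echoes arrive at intervals equal to the round-trip time 2d/v, so the thickness is d = v·Δt/2. The echo times and wave velocity below are made-up illustrative values, not data from the paper.

```python
def thickness_from_echoes(echo_times_s, wave_velocity_m_s):
    """Estimate plate thickness from successive echo arrival times.

    Averages the gaps between consecutive echoes (one round trip each)
    and converts the mean round-trip time to thickness: d = v * dt / 2.
    """
    gaps = [t2 - t1 for t1, t2 in zip(echo_times_s, echo_times_s[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return wave_velocity_m_s * mean_gap / 2.0
```

Averaging over several echo pairs rather than using a single echo is one simple way to reduce the measurement uncertainty that the abstract discusses.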
Detection of quadcopter propeller failure by machine learning methods
Ivan I. Kirilenko, Ekaterina A. Kosareva, Aleksandr A. Nikolaev, Artemii M. Zenkin, Iana M. Selezneva, Nikolay A. Nikolaev
The paper presents a study of options for detecting a failure or defect in a propeller of an unmanned aircraft system (quadcopter) using machine learning methods. An original accuracy evaluation of well-known algorithms is performed using data obtained from the quadcopter under real flight conditions. The proposed method is based on the classification of three propeller states (all propellers serviceable, one propeller artificially deformed, one propeller broken) using machine learning algorithms. The input information is data obtained from the quadcopter measuring system in real time: speed, acceleration, and rotation angle relative to three axes. For the presented algorithm to work correctly, the data were preprocessed by dividing them into time intervals and applying the fast Fourier transform to each interval. On the processed data, machine learning algorithms were trained using the support vector machine, the k-nearest neighbors algorithm, the decision tree algorithm, and the multilayer perceptron. The accuracy values obtained by the proposed methods are compared. It is shown that machine learning methods can detect and classify the propeller states with an accuracy of up to 96 %. The best result is achieved with the decision tree algorithm. The results of the study can be of practical importance for real-time systems that detect propeller defects and breakage in unmanned aerial vehicles: such systems make it possible to predict propeller wear with high accuracy and to improve flight stability and safety.
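The preprocessing stage described above, splitting a sensor stream into time intervals and applying the FFT to each, can be sketched as follows. The window length is an illustrative choice, not the paper's; the resulting magnitude rows would be fed to any of the four classifiers the study compares.

```python
import numpy as np

def fft_features(signal, window_len=64):
    """Split a 1-D sensor signal into non-overlapping windows and return
    FFT magnitude features, one row per window (real-input FFT)."""
    n_windows = len(signal) // window_len
    windows = np.reshape(signal[: n_windows * window_len], (n_windows, window_len))
    return np.abs(np.fft.rfft(windows, axis=1))
```

Working in the frequency domain exposes the vibration signature of a deformed or broken propeller, which in the time domain is buried in the normal flight dynamics.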
The paper discusses wireless cellular radio communication systems along highways that use Multiple-Input Multiple-Output (MIMO, multiple transmit/receive antennas) technologies, in particular spatial multiplexing and diversity reception, and proposes a model for assessing the potential gains of a multi-antenna MIMO system that takes into account the multipath nature of the channel and the relative orientation of the antennas. The work was carried out by transferring the correlation matrix calculated from a stochastic channel model into a physical-layer simulator of the cellular system protocol. A methodology for evaluating the performance of multi-antenna systems with spatial multiplexing in roadside cellular networks has been developed. The correlation properties of the channel between the antennas of two roadside units (RSU), each with two perpendicular linearly polarized antennas, and a user terminal with the same two orthogonally polarized antennas have been investigated. A prediction scheme for the type of correlation matrix has been developed, which makes it possible to set the correlation matrix in physical-layer simulators more accurately. The obtained results showed that, for properly designed systems, the throughput will be close to that of the low-spatial-correlation case, and the high-correlation case proposed by the standard does not need to be modeled. It is also shown that channels between Tx/Rx pairs that undergo similar polarization changes (the same relative spatial rotation of the antennas) will be strongly correlated, which must be taken into account when developing MIMO systems.
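The throughput contrast between the low- and high-correlation cases can be illustrated with the standard MIMO Shannon capacity, C = log2 det(I + (SNR/Nt)·HHᴴ). The channel matrices and SNR below are made-up textbook examples, not the paper's measured roadside channels.

```python
import numpy as np

def mimo_capacity_bps_hz(H, snr_linear):
    """Shannon capacity of a MIMO channel with equal power per Tx antenna:
    C = log2 det(I + (SNR / Nt) * H H^H), in bit/s/Hz."""
    n_rx, n_tx = H.shape
    gram = H @ H.conj().T
    return float(np.log2(np.linalg.det(np.eye(n_rx) + (snr_linear / n_tx) * gram).real))
```

A rank-2 (uncorrelated) 2x2 channel supports two spatial streams, while a fully correlated rank-1 channel collapses to one, which is why the correlation matrix chosen in a physical-layer simulator directly drives the predicted throughput.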
Visual display system of changes in physiological state for patients with chronic disorders
Svetlana A. Vostrikova, Kira O. Pogorelova, Daniil S. Shiryaev, Ivan S. Polukhin, Yurii S. Andreev, Irina G. Smirnova, Ekaterina A. Kondratieva, Vladislav E. Bougrov
Recovery of patients with chronic disorders of consciousness, namely the vegetative and minimally conscious states, is an acute issue in critical care medicine. Such states develop in patients after coma and are characterized by the presence of wakefulness with a complete or almost complete absence of signs of purposeful behavior. Capturing small reflexes and bodily signs makes it possible to observe changes in the patient's physiological state and the effectiveness of the treatment course, but this requires continuous observation, which is not always possible given the high workload of medical staff. In this regard, visual display and data transmission systems for patients are more efficient. The display device should be safe and mobile, should not restrict the patient's movement, and should not affect the patient negatively. The display method should be intuitive and clear for prompt decision-making. The main parts of the visual system are sensors of physiological signs on the patient's body, a control module, a visual display object, and a data transmission module. RGB Arlight 5060 LED strips with pixel addressing on a textile substrate are used as display elements. The visual display object consists of three layers: a duvet cover on the underside, a wool blanket, and a cotton backing with the LED strips. The paper presents an assessment of the heating of the proposed visual display system for patients with chronic disorders and a comparison of the results with the maximum permissible heating of human skin. The heating was estimated in COMSOL Multiphysics, a cross-platform finite element analysis and multiphysics simulation environment. The simulation results are compared with an experimental study using a Centre 304 contact temperature meter and an Optris PI 640i infrared camera. The safety of the system has been confirmed, and indicator operation scenarios that reduce the thermal impact are proposed.
The proposed visual display system may be relevant for monitoring the physiological state of patients in Intensive Care Units. Its implementation will help reveal changes in the physiological state and increase the patients' chances of emerging from a chronic condition in the future.
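The display idea, turning a sensor reading into an at-a-glance LED indication, can be sketched as a mapping from a physiological value to an RGB color for an addressable strip. The green-to-red gradient and the thresholds are hypothetical illustrations, not the authors' clinical color scheme.

```python
def reading_to_rgb(value, low, high):
    """Map a physiological reading in [low, high] to an 8-bit RGB color
    on a green (normal) -> red (alarming) gradient; out-of-range values
    are clamped to the nearest endpoint."""
    frac = min(max((value - low) / (high - low), 0.0), 1.0)
    return (round(255 * frac), round(255 * (1.0 - frac)), 0)
```

In a pixel-addressable strip such as the one described, each sensor channel could drive its own segment of LEDs, so a nurse can read several vital signs from across the room.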


A method of arm aiming direction estimation for low-performance Internet of Things devices is proposed. It uses Human Pose Estimation (HPE) algorithms to retrieve human skeleton key points, from which the arm aiming direction is calculated. Two well-known HPE methods (PoseNet and OpenPose) are examined; these algorithms have been tested and compared by the average angular error. The system includes a Raspberry Pi 4B single-board computer and an Intel RealSense D435i depth sensor. The developed approach may be utilized in “smart home” gesture control systems.
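The geometric step implied above can be sketched as follows: with shoulder and wrist key points from an HPE model (plus depth from the sensor), the aiming direction is the normalized shoulder-to-wrist vector, and accuracy is the angle between the estimated and reference directions. The coordinates and helper names are illustrative, not the paper's implementation.

```python
import math

def aim_direction(shoulder, wrist):
    """Unit vector from the shoulder key point to the wrist key point,
    both given as 3-D coordinates (e.g. from an RGB-D skeleton)."""
    v = [w - s for s, w in zip(shoulder, wrist)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def angle_error_deg(u, v):
    """Angle in degrees between two unit direction vectors (the metric
    used to compare HPE methods by average angular error)."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))
```

Averaging `angle_error_deg` over a test set of aimed gestures gives the per-method score by which PoseNet and OpenPose could be compared.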
Copyright 2001-2022 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.