Summaries of the Issue
Structural and spectral properties of YAG:Nd, YAG:Ce and YAG:Yb nanocrystalline powders synthesized via modified Pechini method. Moussaoui Amir, Bulyga Dmitry V., Ignatiev Alexander I., Sergei K. Evstropiev, Nikonorov Nikolay V.
Synthesis of nanocrystalline yttrium-aluminum garnet doped with neodymium was performed via a modified Pechini method. The evolution of the material during synthesis was studied using differential thermal analysis; the structure and morphology of the synthesized nanopowders were studied using scanning electron microscopy and X-ray diffraction. It was shown that the use of an additional low-temperature stabilizer leads to the formation of the crystalline yttrium-aluminum garnet phase at lower temperatures. It was shown that the formation of nanocrystals occurs at a temperature of about 883 °C. The obtained powders can be used as precursors for ceramics sintering or be introduced into optical fiber in order to fabricate optical amplifiers.
Computational prediction in the problem of stereo image identification. Samoilenko Marina V., Hachikian Vladimir A.
The paper examines ways of increasing the efficiency and reliability of stereo image identification through computational prediction of the position and size of the uncertainty zone in which the desired correspondence point is known to be located. A control point is selected on one of the stereo images, for which a correspondence point must be found on the second stereo image. Based on the known parameters of the stereoscopic television system and the coordinates of the control point, the coordinates of the boundaries of the uncertainty zone on the second stereo image are calculated using the mathematical apparatus proposed in the work. The correspondence point is then found by a search procedure that compares identically sized small areas centered at the control point on the first stereo image and at the points of the uncertainty zone on the second; the comparison is made according to the criterion of minimum quadratic mismatch of intensities. The a priori information needed to implement the method is the maximum height of the relief displayed on the stereo images. The ratios of linear dimensions on a flat relief and on an image formed according to the principle of central projection are obtained. Relationships are derived that make it possible to calculate the coordinates of the correspondence points and the stereoscopic mismatch for stereo images of a flat relief. For stereo images of a volumetric relief, calculation formulas are obtained for determining the boundaries of the uncertainty zone on the second stereo image within which the search for the correspondence point is carried out. The correctness and performance of the obtained relationships are confirmed by computer modeling. Limiting the size of the search area by means of calculated prediction of the uncertainty zone reduces the computational and time costs of the search procedure.
As a result, the efficiency of identifying stereo image points increases, and the likelihood of false identification decreases.
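The search step described in the abstract, finding the minimum quadratic mismatch of intensities over the uncertainty zone, can be sketched as follows. The patch size, the representation of the zone as a list of candidate points, and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def patch(img, y, x, half):
    """Square (2*half+1)^2 neighborhood centered at (y, x)."""
    return img[y - half:y + half + 1, x - half:x + half + 1].astype(float)

def find_correspondence(left, right, control, zone, half=3):
    """Scan the uncertainty zone on the second image and return the point
    whose neighborhood minimizes the quadratic intensity mismatch with the
    neighborhood of the control point on the first image."""
    ref = patch(left, control[0], control[1], half)
    best, best_cost = None, np.inf
    for (y, x) in zone:
        cost = np.sum((patch(right, y, x, half) - ref) ** 2)
        if cost < best_cost:
            best, best_cost = (y, x), cost
    return best, best_cost
```

With the zone predicted by the derived relationships, only a few candidate points need to be scanned instead of an entire image row, which is where the computational savings come from.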
Comparison of application results of two speckle methods for studying multi-cycle fatigue of structural steel. Vladimirov Alexandr P., Kamantsev Ivan S., Drukarenko Nikita A., Myznov Konstantin E., Naumov Konstantin V.
The development of plastic deformations occurring during multicycle fatigue of structural steel was studied by a new method of time-averaged speckle images and by the well-known method of speckle-field interferometry. The correctness of strain determination by the new method is evaluated by comparing the data obtained by the two methods. Two optical systems including laser modules with different wavelengths were investigated. The proposed optical system makes it possible to determine the components Δuy, Δuz of the relative displacement vector of two surface points located at a distance (measurement base) Δs = 66 μm. The known scheme makes it possible to measure deformations by the traditional method over a base of 470 μm. The object of study was a flat specimen made of 09Г2С steel with two side notches. Fatigue tests were carried out on a resonance-type machine at different cycle amplitudes. It is shown that at all cycle amplitudes the development of plastic deformations occurs by the mechanism of cyclic creep. There is a good correlation between the data obtained by the two speckle methods. At the same time, the strain estimated by the new method is in some cases an order of magnitude higher than the strain calculated by the known method. This is evidently due to the existence of local small-sized (of the order of 10¹ µm) strain areas which cannot be measured by conventional methods. The ultimate tensile strain Δuy/Δs calculated by the new method is of the order of 10⁻¹, which coincides with the similar strain occurring in tensile testing of standard specimens. The results obtained by the new method justify the need to develop sensors and nondestructive testing devices of a new generation capable of estimating the time to fatigue crack initiation from the rate of change of physical quantities and from their limit values.
Laser-induced thermal effect on the electrical characteristics of photosensitive PbSe films. Olkhova Anastasiia A., Patrikeeva Alina A., Butyaeva Maria A., Pushkareva Alexandra E., Avilova Ekaterina A., Moskvin Mikhail K., Sergeev Maxim M., Veiko Vadim Pavlovich
The paper presents a study of the effect of laser irradiation of crystalline chalcogenide films of lead selenide (PbSe) on their electrical characteristics, caused by irreversible modification of the structure due to valence reconfiguration of lead as a result of its oxidation. The modification of the electrical properties of the films was induced by laser exposure to nanosecond pulses with a wavelength of 1064 nm. Measurements of the electrical characteristics of the PbSe films were carried out using the four-probe method. It was shown that when the current was directed parallel to the laser tracks recorded in the darkening mode, the resistance of the modified film decreased by 44 % compared to the original sample, while with the current directed perpendicular to the tracks, the resistance increased by 153 %. The resistance of the film increased more than 27 times after laser irradiation in the bleaching mode, regardless of the direction of the current relative to the laser tracks. The experimentally measured temperature and its gradient along the laser spot on the film in the darkening and bleaching modes were in good agreement with the proposed mathematical model of the thermal effect of laser pulses. It was shown that the processes of laser modification of films occur at lower temperatures than during standard heat treatment in a furnace. The obtained results can be applied in the development of photodetectors for the mid-IR range of the spectrum based on PbSe films.
Homograph recognition algorithm based on Euclidean metric. Izrailova Elisa S., Astemirov Arslanbek V., Badaeva Ayshat S., Sultanov Zelimhan A., Umarkhadzhiev Salaudin M., Khekhaev Mokhmad-Salekh L., Yasaeva Madina L.
The problem of resolving the ambiguities associated with homonymy in the Chechen language has become especially relevant after the creation of speech synthesis systems. The main disadvantage of speech synthesizers for the Chechen language is errors in reading homograph words that differ in the length/brevity of vowels: the length of such sounds is not reflected in writing in any way. The reproduction of diphthongs, which are written in the same way as the monophthongs close to them in sound, also causes problems. To improve the quality of synthesized speech in the Chechen language, an automatic homograph recognition program is needed. To solve this problem, the article considers the task of Word Sense Disambiguation (WSD). Algorithmic (supervised) methods based on a pre-labeled database were selected for the Chechen language. These methods are the most common solutions for word sense disambiguation; however, their implementation is possible only in the presence of large annotated corpora, which are inaccessible for most languages of the world, including Chechen. The Chechen language belongs to the low-resource languages, for which the optimal approach in terms of saving labor and time is a semi-supervised hybrid method of homograph recognition based on a combination of algorithmic and statistical methods. The algorithm created by the authors for recognizing homographs by six adjacent words in a sentence is presented. The method is implemented as a program. Preliminary preparation of the initial data for the algorithm includes labeling the sentences with homograph senses, performed manually. The results of the program were evaluated using generally recognized accuracy metrics and amounted to F1 — 39 %, Accuracy — 45 %.
A comparative analysis of the obtained data with the results of other methods and models showed that the accuracy of the algorithm presented in this article is closest to that of algorithms based on the Lesk method. Using the Lesk method for English, F1 scores of 41.1 % (simple Lesk) and 51.1 % (extended Lesk) were obtained. Methods using neural network algorithms provide higher WSD accuracy for most languages; however, their implementation requires large corpora, which are not always available for low-resource languages, including Chechen.
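A minimal sketch of Euclidean-metric disambiguation over a window of adjacent words: each context is mapped to a vector, and the query is assigned the sense whose labeled example contexts are closest on average. The bag-of-words features, the toy vocabulary, and the nearest-sense rule are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def context_vector(words, vocab):
    """Bag-of-words vector over the context window (e.g., six adjacent words)."""
    v = np.zeros(len(vocab))
    for w in words:
        if w in vocab:
            v[vocab[w]] += 1
    return v

def disambiguate(context, sense_examples, vocab):
    """Assign the sense whose labeled example contexts have the minimum
    mean Euclidean distance to the query context."""
    q = context_vector(context, vocab)
    best_sense, best_dist = None, float("inf")
    for sense, examples in sense_examples.items():
        d = np.mean([np.linalg.norm(q - context_vector(e, vocab)) for e in examples])
        if d < best_dist:
            best_sense, best_dist = sense, d
    return best_sense
```

In a hybrid semi-supervised setting, the small manually labeled sentence set supplies `sense_examples`, while statistics from unlabeled text can refine the feature vectors.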
An improved performance of RetinaNet model for hand-gun detection in custom dataset and real time surveillance video. Khin Pyone Pyone, Htaik Nay Min
The prevalence of armed robberies has become a significant concern in today's world, necessitating the development of effective detection systems. While various detection devices exist on the market, they are not capable of automatically detecting and signaling the presence of guns during robbery activities. To address this issue, a deep learning-based gun detection approach using the RetinaNet model is proposed. The objective is to accurately detect guns and subsequently alert either the police station or the bank owner. RetinaNet, the core of the system, comprises three main components: the Residual Neural Network (ResNet), the Feature Pyramid Network (FPN), and Fully Convolutional Networks (FCN). These components work together to enable real-time detection of guns without the need for human intervention. The proposed implementation uses a custom robbery detection dataset consisting of gun, no-gun, and robbery activity classes. Evaluation of the proposed model on this custom dataset shows that the ResNet50 backbone architecture outperforms the others, reaching a Mean Average Precision (mAP) of 0.92 in robbery detection. The model's effectiveness lies in its ability to accurately identify the presence of guns during robbery activities.
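Detection metrics such as mAP rest on matching predicted and ground-truth boxes by Intersection over Union (IoU). A minimal IoU sketch follows; the corner-coordinate `(x1, y1, x2, y2)` box format is an assumption, not necessarily the authors' convention:

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A prediction counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); averaging precision over recall levels and over classes yields the reported mAP.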
Solving the problem of preliminary partitioning of heterogeneous data into classes in conditions of limited volume. Sharamet Andrei V.
When heterogeneous data that differ significantly in nature are formed, even in small volumes, it becomes necessary to analyze them for decision-making. This is typical for many high-tech industrial fields of human activity. The problem can be solved by bringing the heterogeneous data to a single representation and then dividing it into clusters. Instead of searching for a solution for each data element, it is proposed to divide the entire set of normalized data into clusters and thereby simplify the process of isolating a cluster and making a decision on it. The essence of the proposed solution is the automatic grouping of objects with similar data into clusters. This reduces the amount of analyzed information by combining many data items and performing mathematical operations at the cluster level. Fuzzy logic is used for the partitioning. This approach is possible because different objects always have several characteristics by which they can be combined; these features are often not obvious and are poorly formalized. A hierarchical modification of the AFC fuzzy clustering method based on the (max-min) composition of the fuzzy similarity relation is proposed. The basic concepts and definitions of the proposed method of automatic partitioning of a set of input data and a step-by-step scheme of the corresponding clustering procedure are considered. The efficiency of the proposed method is demonstrated by the example of solving the problem of forming a traffic flow. A numerical experiment has shown that the developed algorithm makes it possible to automatically analyze heterogeneous data and stably divide them into classes. The proposed modification enables preliminary partitioning of data into clusters and reduces the volume of data to be analyzed subsequently; there is no need to consider each object separately.
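The (max-min) operation on a fuzzy similarity relation can be sketched as follows: repeated (max-min) composition yields the transitive closure of the relation, after which an alpha-cut partitions the objects into clusters. The matrix values and threshold below are illustrative; this is a generic sketch, not the authors' hierarchical AFC modification itself:

```python
import numpy as np

def maxmin_closure(R):
    """Transitive closure of a fuzzy similarity relation via (max-min) composition."""
    R = np.array(R, dtype=float)
    while True:
        # (max-min) composition: R2[i, j] = max_k min(R[i, k], R[k, j])
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        R2 = np.maximum(R, R2)
        if np.allclose(R2, R):
            return R2
        R = R2

def alpha_cut_clusters(R, alpha):
    """Group i and j together when R[i, j] >= alpha. Assumes R is the
    max-min transitive closure, so the cut is an equivalence partition."""
    n = len(R)
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] == -1:
            for j in range(n):
                if R[i][j] >= alpha:
                    labels[j] = cluster
            cluster += 1
    return labels
```

Lowering alpha merges clusters and raising it splits them, which is what gives the method its hierarchical character.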
Correction of single error bursts beyond the code correction capability using information sets. Isaeva Maria N., Andrei A. Ovchinnikov
The most important method of ensuring data integrity is correcting errors that occur during information storage, processing, or transmission. Error-correcting coding methods are used to correct such errors. In real systems, noise processes are correlated; however, traditional coding and decoding methods use decorrelation, and it is known that this procedure reduces the maximum achievable characteristics of coding. Thus, constructing computationally efficient decoding methods that correct grouped errors for a wide class of codes is a relevant problem. In this paper, decoding by information sets is used to correct single bursts. This method has exponential complexity when correcting independent errors. The proposed approach uses a number of information sets that grows linearly with code length, which provides polynomial decoding complexity. A further reduction in the number of information sets is possible with the proposed method of using dense information sets. It allows evaluating both the set of errors potentially correctable by the code and the characteristics of the decoder. An improvement of the decoding method using an error vector counter is proposed, which in some cases increases the number of corrected error vectors. This method significantly reduces the number of information sets or increases the number of corrected error vectors according to the minimum burst length criterion. The proposed decoders correct single error bursts in polynomial time for arbitrary linear codes. The results of experiments based on the standard array show that the decoders not only correct all errors within the burst-correcting capability of the code but also a significant number of error vectors beyond it.
Possible directions for further research include the analysis of the proposed decoding algorithms for long codes, where the standard-array method of analysis is not applicable, as well as the development and analysis of decoding methods for multiple bursts and for the joint correction of grouped and random errors.
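The core information-set idea can be sketched over GF(2): for each candidate information set, assume those positions are error-free, re-encode from them, and keep the codeword closest to the received word. A burst confined outside at least one set is then corrected. The [7,4] Hamming code and the particular sets below are illustrative assumptions, not the paper's burst-oriented construction or its dense-set refinement:

```python
import numpy as np

def gf2_inv(A):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination.
    Raises StopIteration if the matrix is singular (no pivot found)."""
    n = len(A)
    M = np.concatenate([A.copy() % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, n:]

def is_decode(received, G, info_sets):
    """Information-set decoding: for each candidate set of k positions,
    assume they are error-free, re-encode, and keep the closest codeword."""
    best, best_w = None, np.inf
    for S in info_sets:
        try:
            inv = gf2_inv(G[:, S])
        except StopIteration:        # columns not invertible: not an information set
            continue
        msg = received[list(S)] @ inv % 2
        cw = msg @ G % 2
        w = np.sum(cw != received)
        if w < best_w:
            best, best_w = cw, w
    return best
```

A burst of length b is corrected whenever some tried set avoids all b burst positions, which is why a linearly growing family of sets suffices for single bursts.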
A novel strategic trajectory-based protocol for enhancing efficiency in wireless sensor networks. Gopalakrishnan Rangaraj, Angamuthu Senthil Kumar
This research presents a comprehensive approach to enhancing the efficiency and performance of Wireless Sensor Networks (WSNs) by addressing critical challenges such as race conditions, reservation problems, and redundant data. A novel protocol combining Self-Adaptive Redundancy Elimination Clustering and Distributed Load Bandwidth Management is proposed to mitigate these challenges. The work intelligently extracts transmission hops and any-cast transmission features from diverse traffic information obtained through trace files in order to eliminate nodes harboring redundant data. To optimize network organization, the number of clusters is dynamically adjusted according to node density using the affinity propagation technique. Furthermore, load balancing is achieved by reallocating available bandwidth through bandwidth re-segmentation. The research also describes the proposed network infrastructure and channel coordination. The architecture encompasses cooperative clustering of nodes, strategic access point selection, data compression, and channel migration. By fostering collaboration among nodes within clusters, selecting access points judiciously, and employing efficient data compression techniques, the network's overall efficiency is significantly improved. Channel migration strategies further bolster the network's agility and responsiveness. The integration of channel sensing enriches the approach by collecting channel state information augmented with spatial and temporal node information. This added insight empowers the network to make more informed decisions regarding channel allocation and coordination, contributing to reduced interference and optimized data transmission. As a result, the proposed methodology achieves remarkable results, including an average Packet Delivery Ratio of 99.1 % and an average reduction of packet loss by 4.3 % compared to existing studies.
Additionally, the proposed protocol exhibits an average throughput improvement of 4.7 % and reduces average network delay to 52 milliseconds, highlighting its significant contribution to the enhancement of WSN performance.
Automation of complex text CAPTCHA recognition using conditional generative adversarial networks. Zadorozhnyy Alexander S., Anastasia A. Korepanova, Maxim V. Abramov, Sabrekov Artem A.
With the rapid development of Internet technologies, network security problems continue to worsen. One of the most common methods of maintaining security and preventing malicious attacks is CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). A CAPTCHA most often consists of some kind of security code: to pass it, a simple task must be performed, such as entering a word displayed in an image or solving a basic arithmetic equation. The most widely used type of CAPTCHA is still the text type. In recent years, the development of computer vision and, in particular, neural networks has reduced the resistance of text CAPTCHAs to automated attacks. However, the security and recognition resistance of complex CAPTCHAs containing a lot of noise and distortion is still insufficiently studied. This study examines CAPTCHAs whose distinctive feature is the use of a large number of different distortions, with each individual image using its own set of distortions, so that even the human eye cannot always recognize what is depicted. The purpose of this work is to assess the security of sites using text-type CAPTCHAs by testing their resistance to automated solving. This testing will be used for the subsequent development of recommendations for improving the effectiveness of protection mechanisms. The result of the work is an implemented synthetic-data generator and discriminator of the CGAN (conditional generative adversarial network) architecture, as well as a decoder program, a trained convolutional neural network that solves this type of CAPTCHA. The recognition accuracy of the model constructed in the article was 63 % on an initially very limited data set, which demonstrates the information security risks carried by sites using a similar type of CAPTCHA.
Deep attention based Proto-oncogene prediction and Oncogene transition possibility detection using moments and position based amino acid features. Vijayalakshmi Manickam, Vallinayagi Mahesh
The loss of the regulatory function of tumor suppressor genes and mutations in proto-oncogenes are the common underlying mechanisms of uncontrolled tumor growth in the varied complex of disorders known as cancer. Oncogenic disease can be made treatable by diagnosing and addressing proto-oncogene abnormalities at earlier stages. Recently, machine learning approaches have helped to provide information about the possibility of proto-oncogenes changing into oncogenes in different cancer types. This study helps to diagnose, at an early stage, the proto-oncogenes that may turn into oncogenes. The present study proposes an efficient predictor of proto-oncogenes based on a Bi-Directional Long Short-Term Memory network with an attention mechanism. The approach also estimates the probability of a proto-oncogene becoming an oncogene using statistical moments, a position-based amino-acid composition representation, and deep features extracted from the sequence. Consequently, this study suggests that, using a K-Nearest Neighbor classifier, it is possible to estimate the probability of a proto-oncogene changing into a cancerous oncogene.
A method of storing vector data in compressed form using clustering. Tomilov Nikita A., Turov Vladimir P., Babayants Alexander A., Alexey V. Platonov
The development of machine learning algorithms for information search in recent years has made it possible to represent text and multimodal documents in the form of vectors. These vector representations (embeddings) preserve the semantic content of documents and allow search to be performed as the calculation of distances between vectors. Compressing embeddings can reduce the amount of memory they occupy and improve computational efficiency. The article discusses existing methods for compressing vector representations, both lossless and lossy. A method is proposed to reduce the error of lossy compression by clustering the vector representations. The essence of the method is to perform a preliminary clustering of the vector representations, save the center of each cluster, and save the coordinates of each vector representation relative to the center of its cluster. The cluster centers are then compressed losslessly, and the resulting shifted vector representations are compressed lossily. To restore the original vector representations, the coordinates of the center of the corresponding cluster are added to the coordinates of the shifted representation. The proposed method was tested on the fashion-mnist-784-euclidean and NYT-256-angular datasets. Vector representations compressed lossily by reducing the bit depth were compared with vector representations compressed using the proposed method. With a slight (around 10 %) increase in the size of the compressed data, the absolute value of the error from loss of accuracy decreased by four and two times, respectively, for the tested sets. The developed method can be applied in tasks where it is necessary to store and process vector representations of multimodal documents, for example, in the development of search engines.
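The scheme can be sketched as follows, with plain k-means standing in for the clustering stage and int8 residual quantization standing in for the lossy stage; both are illustrative assumptions, not the exact methods of the paper:

```python
import numpy as np

def compress(X, centers_k=4, seed=0):
    """Cluster vectors, keep centers in full precision, and store each
    vector as a coarsely quantized residual from its cluster center."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), centers_k, replace=False)].astype(float)
    for _ in range(10):  # plain k-means iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(centers_k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    residual = X - centers[labels]                 # shifted representations
    scale = np.abs(residual).max() / 127 or 1.0    # avoid zero scale
    q = np.round(residual / scale).astype(np.int8)  # lossy part
    return centers, labels, q, scale

def decompress(centers, labels, q, scale):
    """Add the cluster center back to each dequantized residual."""
    return centers[labels] + q.astype(float) * scale
```

Because residuals within a tight cluster are much smaller than the raw coordinates, the same quantizer bit depth yields a much smaller absolute error, which is the effect the paper measures.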
ARTIFICIAL INTELLIGENCE AND COGNITIVE INFORMATION TECHNOLOGIES
Monocular depth estimation for 2D mapping of simulated environments. Barhoum Majd, Pyrkin Anton Alexandrovich
This article addresses the problem of constructing maps of 2D simulated environments. An algorithm based on monocular depth estimation is proposed that achieves accuracy comparable to methods utilizing expensive sensors such as RGBD cameras and LIDARs. To solve the problem, we employ a multi-stage approach. First, a neural network predicts a relative disparity map from the RGB stream provided by an RGBD camera. Using depth measurements from the same camera, two parameters are estimated that relate the relative and absolute disparity maps through a linear regression. Then, using a simpler RGB camera, the neural network and the estimated scaling parameters produce an estimate of the absolute disparity map, from which an estimate of the depth map is obtained. Thus, a virtual scanner has been designed that provides Cartographer SLAM with depth information for environment mapping. The proposed algorithm was evaluated on a ROS 2.0 simulation of a simple mobile robot. It achieves faster depth prediction compared to other depth estimation algorithms. Furthermore, maps generated by our approach demonstrated a high overlap ratio with those obtained using an ideal RGBD camera. The proposed algorithm can be applied in crucial tasks for mobile robots, such as obstacle avoidance and path planning. Moreover, it can be used to generate accurate cost maps, enhancing safety and adaptability in mobile robot navigation.
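The two-parameter linear regression linking relative disparity to metric depth can be sketched like this, assuming, as one common convention but not necessarily the authors', that inverse depth is affine in the network's relative disparity:

```python
import numpy as np

def fit_scale_shift(d_rel, depth):
    """Least-squares fit of 1/Z = a * d_rel + b, linking the network's
    relative disparity to metric depth via sparse RGBD measurements."""
    A = np.stack([d_rel, np.ones_like(d_rel)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, 1.0 / depth, rcond=None)
    return a, b

def depth_from_disparity(d_rel, a, b):
    """Metric depth estimate from relative disparity alone, once (a, b) are known."""
    return 1.0 / (a * d_rel + b)
```

After the one-time calibration against the RGBD camera, only the cheap RGB camera and the network are needed at run time; the fitted (a, b) turn every new relative disparity map into a metric depth map for the virtual scanner.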
Segmentation of muscle tissue in computed tomography images at the level of the L3 vertebra. Teplyakova Anastasia R., Shershnev Roman V., Starkov Sergey O., Agababian Tatev A., Kukarskaya Valeria A.
With the increasing routine workload on radiologists associated with the need to analyze large numbers of images, there is a need to automate part of the analysis process. Sarcopenia is a condition in which there is a loss of muscle mass. To diagnose sarcopenia, computed tomography is most often used, from whose images the volume of muscle tissue can be assessed. The first stage of the analysis is contouring, which is performed manually, takes a long time, and is not always performed with sufficient quality, affecting the accuracy of estimates and, as a result, the patient's treatment plan. The subject of the study is the use of computer vision approaches for accurate segmentation of muscle tissue in computed tomography images for the purpose of sarcometry. The purpose of the study is to develop an approach to segmenting the collected and annotated images. An approach is presented that includes the stages of image pre-processing, segmentation using neural networks of the U-Net family, and post-processing. In total, 63 configurations of the approach are considered, differing in the data supplied to the model input and in the model architectures. The influence of the proposed method of post-processing the resulting binary masks on segmentation accuracy is also evaluated. The approach, which includes pre-processing with table masking and anisotropic diffusion filtering, segmentation with an Inception U-Net architecture model, and post-processing based on contour analysis, achieves a Dice similarity coefficient of 0.9379 and an Intersection over Union of 0.8824. Nine other configurations, whose experimental results are reported in the article, also demonstrated high values of these metrics (in the ranges of 0.9356–0.9374 and 0.8794–0.8822, respectively).
The approach proposed in the article based on preprocessed three-channel images allows us to achieve metrics of 0.9364 and 0.8802, respectively, using the lightweight U-Net segmentation model. In accordance with the described approach, a software module was implemented in Python. The results of the study confirm the feasibility of using computer vision to assess muscle tissue parameters. The developed module can be used to reduce the routine workload on radiologists.
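The two reported metrics can be computed from binary masks as follows; this is the standard definition, included for reference:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice similarity coefficient and Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return dice, iou
```

The two metrics are monotonically related (IoU = Dice / (2 - Dice)), so the paired ranges reported above move together.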
MODELING AND SIMULATION
Providing operating modes for Coriolis vibration gyroscopes with low-Q resonators. Matveev Valery V., Likhosherst Vladimir V., Kalikanov Alexey V., Pogorelov Maxim G., Kirsanov Maxim D., Telukhin Sergey V.
Coriolis vibration gyroscopes are a class of promising inertial primary information sensors that respond to rotation of the resonator base through Coriolis inertial forces arising in the vibrating shell. Currently, two directions for the production of resonators for such gyroscopes have been developed: from quartz glass, a material with extremely low internal friction, and based on the processing of a metal alloy. The first direction, thanks to the high quality factor of quartz, makes it possible to create navigation-class integrating gyroscopes. Existing samples of Coriolis vibration gyroscopes with metal resonators are, as a rule, angular velocity sensors. The problem of creating an integrating mode for a gyroscope with a metal resonator is associated with the low quality factor of metal alloys, which usually does not exceed 35,000. With this value of the quality factor, the duration of operation of the gyroscope in the angular deviation sensor mode is several seconds. The paper presents methods for ensuring the functioning of Coriolis vibration gyroscopes, including the integrating gyroscope mode. A mathematical description of Coriolis vibration gyroscopes with a cylindrical cavity resonator is given based on the dynamic model of Dr. D. Lynch using the method of envelope amplitudes of oscillations. The mathematical model is supplemented with corrections that compensate for the energy dissipation of the resonator oscillations in order to implement the integrating mode of the gyroscope. The conditions for complete compensation of vibration energy dissipation are shown. Methods for exciting a standing wave in a resonator using periodic forcing and by creating self-oscillations are described. It is shown that the duration of the transient excitation process is determined by the time constant of the resonator.
The results of experimental studies of Coriolis vibration gyroscopes with a low-Q metal resonator are presented, confirming the possibility of implementing an integrating mode of operation of the gyroscope. The initial excitation of the resonator oscillations is carried out by a self-oscillating circuit. According to the results of the experimental studies, the quality factor of the metal resonator was increased by a factor of 17, and the operating time of the Coriolis vibration gyroscope was increased by the same factor. The possibility of constructing Coriolis vibration gyroscopes operating in the integrating gyroscope mode based on a low-Q metal resonator has been shown theoretically and experimentally. The solution was based on a circuitry method for increasing the quality factor. In principle, the quality factor of the resonator can be increased significantly beyond the figure achieved in the experiment, which would ensure an even longer operating time of Coriolis vibration gyroscopes in the integrating mode.
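The link between the quality factor and the usable operating time mentioned above can be made concrete with the standard ring-down relation; the resonant frequency below is an illustrative assumption, not a figure from the paper:

```latex
% Ring-down time constant of a resonator with quality factor Q
% and resonant frequency f_0 (angular frequency \omega_0 = 2\pi f_0):
\tau = \frac{2Q}{\omega_0} = \frac{Q}{\pi f_0},
\qquad
Q = 35\,000,\; f_0 = 5~\text{kHz (assumed)}
\;\Rightarrow\;
\tau \approx \frac{35\,000}{\pi \cdot 5000~\text{s}^{-1}} \approx 2.2~\text{s}.
```

Since the usable time in the angular deviation sensor mode scales with the time constant, raising the effective Q by a factor of 17 through circuitry lengthens the operating time by the same factor, consistent with the experimental result reported above.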
Collection and processing of environmental information in oil and gas production areas and solving other applied problems using active search methods (Review article). Svitnev Igor V., Naydanov Alexander F., Vilkov Alexey V., Sokolov Dmitry A., Lebedev Mikhail Yu., Elena A. Kharitonova, Lyudmila A. Lukyanova
Methods of monitoring the environmental situation, as well as ways of solving related applied environmental and resource problems in hard-to-reach areas of oil and gas production and in other sectors of the national economy using unmanned aerial vehicles, are investigated. Methods of studying the types and thicknesses of the layers of the underlying surface by probing them with electromagnetic pulses of the radio-frequency range and with gamma radiation are considered. Based on the existing theoretical dependencies of the interaction of electromagnetic radiation with the Earth's surface, diagrams of the passage of electromagnetic waves in the decimeter and centimeter ranges through various landscape structures (snow-ice-water-frozen soil) are presented. It is shown that the use of gamma radiation makes it possible to determine the effective altitude of an aircraft during environmental monitoring due to the high energy of photon radiation and its albedo from various surfaces, including snow cover. A method for calculating the pollutant content on the underlying surface with a given probability of its reliable detection is presented. It is noted that the reliability of the readings of measuring instruments is significantly influenced by their geometric placement on the transport platform. It is shown that the proposed solution is best implemented using two unmanned aerial vehicles or a small-sized unmanned airship. Based on the review, the composition of the technical means of a complex for recognizing the types and thicknesses of layers of contamination of the underlying surface is proposed. A possible methodology for assessing the environmental situation is presented. The results of the work can be used in environmental exploration of infrastructure for transporting oil and gas resources under conditions of difficult access, as well as for solving similar military-applied and engineering-construction tasks.
At the same time, the joint use of the radio-frequency range of electromagnetic waves and gamma radiation is proposed for the first time. The radio-frequency range makes it possible to study the structure of the landscape, while gamma radiation makes it possible to determine the type of pollutant from the backscattered ionizing radiation, as well as to ensure high accuracy in measuring the distance from the module to the upper layer of the underlying surface.
Using machine learning technologies to solve the problem of classifying infrasound background monitoring signalsFrolov Ivan N., Nikolai G. Kudryavtsev, Safonova Varvara Yu., Kudin Dmitry V.
It is widely known that among sound signals generated by natural and anthropogenic phenomena, the longest-lived are waves with frequencies below 20 Hz, called infrasound. This property allows infrasound monitoring to track, at a distance, the occurrence of high-energy events on regional scales (up to 200–300 km). At the same time, separating useful infrasound signals from background noise is a non-trivial task in both real-time and post-facto signal processing. In this paper we propose a new method for classifying specific signals in infrasound monitoring data using Shannon permutation entropy and vectors of the occurrence frequencies of permutations of consecutive sample values of rank 3 (the number of permutation elements). To evaluate the validity of the proposed entropy-based classification method, two machine learning methods — the random forest method and a classical neural network approach — implemented in the Python language using the Scikit-learn, TensorFlow and Keras libraries were used. The classification quality was evaluated against the traditional frequency-based method of class extraction based on the Fourier transform. Recognition was performed on prepared infrasound monitoring data from the Altai Republic. The results of a computational experiment on the separation of 5 classes of signals showed that classification by the proposed method yields neural-network recognition results comparable to those obtained with frequency classification of the original data; the recognition accuracy was 51–58 %. For the random forest method, the recognition accuracy of frequency classes was slightly higher: 51 % vs. 45 % for classes obtained using the permutation entropy method. The analysis of the results of the computational experiment shows that the permutation-entropy classification method is sufficiently competitive in the recognition of infrasound signals.
In addition, the proposed method is much easier to implement for inline signal processing in low-consumption microcontroller systems. The next step is to test the method at infrasound signal registration points and as part of the infrasound monitoring data processing system for real-time event detection.
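The rank-3 ordinal-pattern features described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' implementation: it only shows the standard construction of the ordinal-pattern frequency vector (six patterns for rank 3) and the normalized Shannon permutation entropy; in the paper these vectors feed the Scikit-learn/Keras classifiers.

```python
from collections import Counter
from itertools import permutations
from math import log2

def ordinal_pattern_distribution(signal, rank=3):
    """Relative occurrence frequencies of ordinal patterns of `rank`
    consecutive samples (rank 3 gives a 6-element feature vector)."""
    patterns = list(permutations(range(rank)))
    counts = Counter()
    for i in range(len(signal) - rank + 1):
        window = signal[i:i + rank]
        # The pattern of a window is the order in which its samples rank.
        pattern = tuple(sorted(range(rank), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    return [counts[p] / total for p in patterns]

def permutation_entropy(signal, rank=3):
    """Shannon entropy of the ordinal-pattern distribution,
    normalized to [0, 1] by dividing by log2(rank!)."""
    freqs = ordinal_pattern_distribution(signal, rank)
    h = -sum(f * log2(f) for f in freqs if f > 0)
    return h / log2(len(freqs))

# A strictly monotone signal contains a single ordinal pattern,
# so its permutation entropy is 0; noisy signals approach 1.
print(permutation_entropy([1, 2, 3, 4, 5, 6]))  # 0.0
```

Either the entropy value or the six-element frequency vector itself can serve as the input feature for a random forest or neural-network classifier.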
Study of the influence of the optical fiber output end shape on hydroacoustic processes in a liquid stimulated by microsecond pulses of Yb,Er:Glass laser radiationNasser Raed, Smirnov Sergey N.
The results of a study of hydroacoustic processes stimulated in a volume of distilled water by powerful microsecond pulses of Yb,Er:Glass laser radiation, delivered through optical fibers with two different shapes of the output end, are presented. A comparison is made of the volume of the vapor-gas cavity formed in the liquid and of the pressure drops that occur at the moment of the laser pulse action and in the “collapse-rebound” phase of the vapor-gas cavity. The results obtained are useful for the development of technologies for laser endosurgical interventions that require effective destruction of pathological biological tissues, for example, laser cataract extraction.