Summaries of the Issue
REVIEW ARTICLE
Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)
Denis N. Biryukov, Andrey S. Dudkin 373
The issues of trust in decisions made (formed) by intelligent systems are becoming increasingly relevant. A systematic review of Explainable Artificial Intelligence (XAI) methods and tools aimed at bridging the gap between the complexity of neural networks and the need for interpretability of results for end users is presented. A theoretical analysis of the differences between explainability and interpretability in the context of artificial intelligence, as well as their role in ensuring the security of decisions made by intelligent systems, is carried out. It is shown that explainability implies the ability of a system to generate justifications understandable to humans, whereas interpretability focuses on the passive clarity of internal mechanisms. A classification of XAI methods is proposed based on their approach (preliminary/subsequent analysis: ante hoc/post hoc) and the scale of explanations (local/global). Popular tools, such as Local Interpretable Model-agnostic Explanations (LIME), Shapley values, and integrated gradients, are considered, with an assessment of their strengths and limits of applicability. Practical recommendations are given on the choice of methods for various fields and scenarios. The architecture of an intelligent system based on the V.K. Finn model and adapted to modern requirements for ensuring “transparency” of solutions, where the key components are the information environment, the problem solver, and the intelligent interface, is discussed. The problem of a compromise between the accuracy of models and their explainability is considered: transparent models (“glass boxes”, for example, decision trees) are inferior in performance to deep neural networks but provide greater certainty of decision-making. Examples of methods and software packages for explaining and interpreting machine learning data and models are provided. It is shown that the development of XAI is associated with the integration of neuro-symbolic approaches combining deep learning capabilities with logical interpretability.
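As a hedged illustration of the ante hoc/post hoc distinction surveyed above (not taken from the article; it assumes scikit-learn and a toy dataset), the following sketch contrasts a transparent “glass box” decision tree, whose rules can be read directly, with a post hoc, global, model-agnostic explanation obtained via permutation importance.

# Minimal sketch: a "glass box" model plus a post hoc, global, model-agnostic explanation.
# Assumes scikit-learn; the dataset and model choice are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: the learned rules can be printed and read directly (ante hoc interpretability).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Post hoc, global explanation: permutation importance treats the model as a black box.
result = permutation_importance(tree, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]:
    print(f"{name}: {imp:.3f}")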
OPTICAL ENGINEERING
Intensification of sol-gel synthesis of Mn-containing MgO-Al2O3-ZrO2-SiO2 system materials
Sergei K. Evstropiev, Valentina L. Stolyarova, Dmitry V. Bulyga, Artem S. Saratovskii, Nikolay B. Knyazyan, Goharik G. Manukyan 387
Glass and glass-crystalline materials of the MgO-Al2O3-SiO2 system have many practical applications, including their use as luminophores. Lowering the synthesis temperature of such materials is a topical task. In this work, Mn-containing materials of the MgO-Al2O3-ZrO2-SiO2 system were synthesized by the sol-gel method. The analytical chemical composition, crystal structure, morphology, and luminescence spectra were investigated by X-ray phase analysis, scanning electron microscopy, energy dispersive analysis, and luminescence spectroscopy. It was found that the introduction of a fluoride component into the sols significantly accelerates the crystallization of Mn-containing gels during their heat treatment and has a significant effect on the morphology of the xerogels. Fluorides play the role of additional nucleation centers and ensure the formation of numerous small oxide crystals. Energy dispersive analysis showed that fluoride is completely removed from the structure of the materials during heat treatment of the gels up to 900 °C. According to the X-ray phase analysis data, the incorporation of manganese ions into the structure of the forming oxide crystals and the deformation of their crystal lattice occur at the initial stages of the crystallization process. Emission bands of both manganese ions and structural defects formed in the crystal lattice of the oxide crystals are observed in the photoluminescence spectra of the xerogels. It was shown that, within the well-known sol-gel approach, the addition of a fluorine-containing precursor significantly accelerates crystallization of gels of the MgO-Al2O3-ZrO2-SiO2 system, promotes the formation of a dispersed material structure, and increases the intensity and improves the resolution of the emission bands in the luminescence spectra.
MATERIAL SCIENCE AND NANOTECHNOLOGIES
Conformational properties of polymer brushes with aggrecan-like macromolecules under strong stretching conditions on a cubic lattice
Ivan V. Lukiev, Ivan V. Mikhailov, Oleg V. Borisov 396
Comb-like polymers are used to modify various surfaces due to their branched structure and a number of unique physical and chemical properties. At sufficiently dense grafting, the macromolecules form a homogeneous polymer brush that completely covers the surface to be modified. Comb-like polymer brushes find applications as biomedical coatings, lubricants, sensors, targeted drug delivery systems, and many others. Given the wide demand for comb-like polymer coatings, it is of practical importance to predict their conformational properties as a function of the architecture of the grafted polymers. Comb-like polymer brushes have been reasonably well studied both theoretically and experimentally at low grafting densities. However, there are no analytical models that quantitatively describe the properties of these brushes under conditions of high grafting densities and near-limit stretching of the macromolecular backbones. To study the conformational properties of planar polymer brushes made of comb-like polymers, two complementary approaches have been applied: analytical and numerical self-consistent field methods. The former was used for an analytical description of the volume fraction profile of monomeric units of grafted macromolecules under their stretching on a body-centered cubic lattice, and the latter was used for validation of the proposed analytical model by comparing its results with numerical calculation data on a simple cubic lattice. A universal analytical formula has been obtained that describes the profile of the volume fraction of monomeric units of grafted comb-like macromolecules over a wide range of grafting densities in an athermal low-molecular-weight solvent. The study then quantitatively estimated the average thickness of polymer brushes and the average density of monomeric units at different effective grafting densities of comb-like polymers, defined as the ratio of the actual grafting density to the maximum possible grafting density of macromolecules with a given architecture, as well as at different degrees of branching of these macromolecules. It has been demonstrated that, under athermal solvent conditions, the average thickness of the polymer brush increases and the average density of monomer units decreases as the branching degree of the grafted macromolecules increases at a fixed grafting density and contour length of the main chain. Furthermore, at elevated levels of branching in the grafted chains, the observed dependence of the average density on the effective grafting density approaches a linear relationship. The proposed analytical stretching model on a body-centered cubic lattice showed high agreement with the data obtained by numerical simulation on a simple cubic lattice. The findings of this study provide a foundation for predicting the conformational properties of polymer brushes at high grafting densities and degrees of branching of the grafted comb-like macromolecules.
Atmospheric air-phase singlet oxygen generator for practical multifunctional applications
Larisa L. Khomutinnikova, Egor P. Bykov, Semyon A. Plyastsov, Sergei K. Evstropiev, Igor K. Meshkovsky, Sergei G. Zhuravskii, Vladimir N. Baushev 406
Singlet oxygen is a metastable reactive oxygen species involved in numerous biochemical reactions and physiological processes. This suggests its potential applicability in addressing practical challenges in medicine and human safety. Due to its oxidative properties, singlet oxygen effectively eliminates pathogenic organisms, including bacteria, fungi, and viruses, and is utilized in photodynamic therapy for the treatment of various diseases, including oncological and dermatological pathologies. Traditionally, photosensitizers are employed for its generation; however, they exhibit significant drawbacks, such as toxicity, low selectivity toward affected cells, and the requirement for high-intensity optical radiation. One promising solution involves the use of photocatalytic materials capable of generating singlet oxygen in both liquid and gaseous phases. The lifetime of singlet oxygen molecules in the gas phase is substantially longer than in liquids. Investigating methods for generating singlet oxygen in the gas phase represents a pressing scientific challenge. Currently, there is a lack of publications in scientific literature describing the qualitative and quantitative characteristics of air mixtures enriched with reactive oxygen species. The development of singlet oxygen generators in the gas phase of atmospheric air is an urgent task with multiple functional applications in medicine and safety technologies. This study presents and examines an experimental prototype of a device designed for generating singlet oxygen in the gas phase of atmospheric air. The design incorporates the authors’ research on the development of an original photocatalytic nanocrystalline coating based on ZnO-SnO2-Fe2O3, capable of producing singlet oxygen under irradiation with optical radiation near the visible spectrum (405 nm). A novel device model has been developed, featuring a reusable photocatalyst. The materials were characterized using X-ray diffraction analysis and atomic force microscopy. Singlet oxygen generation activity was assessed via electron paramagnetic resonance spectroscopy. The achieved photogeneration rate of singlet oxygen was 100 (μmol/L)/min. The calculated concentration of singlet oxygen in the air at the device outlet under normal conditions, determined based on the photodegradation rate of rhodamine 6G dye in porous glass, reached 10 (μmol/L)/min. The presented prototype exhibits low energy consumption, environmental safety, cost-effective materials, utilization of near-visible spectrum radiation, and the ability to generate singlet oxygen without toxic oxidizing byproducts. The developed prototype allows for the creation of multiple modifications, enabling a range of multifunctional devices for individual or group therapeutic use as well as for engineering solutions aimed at ensuring a safe living environment. Selective singlet oxygen generation permits the application of these devices in medical settings, both for direct tissue contact and for establishing breathable air environments conducive to human life.
COMPUTER SCIENCE
Two-stage algorithm for underwater image recovery for marine exploration
Ivan V. Semernik, Christina V. Samonova 417
The paper explores the problems of restoring underwater images exposed to distortions in the form of color and contrast deformations, the presence of haze, etc., arising from the interaction of optical radiation with the aquatic environment. Restoring underwater images is a non-trivial task due to the large variability of the parameters of the aquatic environment and photography conditions. The proposed method, unlike other underwater image recovery algorithms based on an imaging model, does not rely on the simplified exponential Beer-Lambert law for estimating optical radiation attenuation in water, but on a more accurate physical approach that simulates the propagation of optical rays in water using the Monte Carlo method, taking into account the main parameters of the water environment and the camera. The results of numerical simulation of optical ray propagation in an aquatic environment are used for image processing in the spatial domain by editing the histograms of each image channel in the RGB color space. To test the developed algorithm, six real underwater images were selected, obtained under various lighting conditions (natural and artificial) and various parameters of the aquatic environment (clear ocean and turbid coastal water). For the purpose of qualitative and quantitative analysis of the obtained results, the following similar underwater image processing methods were used: Fusion, UDCP, IATP, Retinex, HE, and UWB VCSE. The Underwater Colour Image Quality Evaluation (UCIQE) and Underwater Image Quality Measure (UIQM) indicators were used to quantify the results obtained. The results of the qualitative assessment demonstrate the high efficiency of the proposed method: regardless of the initial image parameters, the application of the developed method improves visual perception and does not lead to excessive contrast enhancement, color distortion, loss of detail, the appearance of artifacts, etc. Quantitative assessment of the underwater image processing results demonstrates comparable or superior results when comparing the efficiency of the algorithm with similar methods. For the UCIQE parameter, the developed method provided an improvement from 9 % to 51 % relative to the value for the original image, while similar methods demonstrated results from −10 % to 82 %. For the UIQM parameter, the developed method provided an improvement from 24 % to 99 % relative to the value for the original image, while similar methods demonstrated results from −10 % to 123 %. Unlike its analogues, the developed method did not demonstrate the worst values of the UCIQE and UIQM parameters for any processed image, which indicates the stability of the method regardless of the parameters of the aquatic environment and shooting conditions. By dividing the developed method into preliminary and main stages, high image processing speed is ensured: 0.073 seconds for images with a resolution of 400 × 300 pixels and from 8.02 to 8.23 seconds for images with a resolution of 5184 × 3456 pixels. Similar methods demonstrated values from 0.19 to 10.81 seconds for an image with a resolution of 400 × 300 pixels and from 7.65 to 937.83 seconds for an image with a resolution of 5184 × 3456 pixels. The introduction of the proposed method into geological exploration operations will increase their efficiency and reliability and will provide more accurate data for further exploration of solid mineral deposits.
Such a technique, integrated into the machine vision systems of underwater vehicles, will significantly expand their functionality by enabling automation of operations and improving the efficiency of recognition systems.
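The histogram-based channel correction stage can be illustrated by the following minimal sketch (an assumption for illustration, not the authors' Monte Carlo-informed algorithm): it performs percentile-based contrast stretching of each RGB channel with NumPy.

# Illustrative per-channel histogram stretching for an RGB underwater image (NumPy only).
# This is a generic baseline, not the Monte Carlo-informed correction described in the paper.
import numpy as np

def stretch_channels(img, low=1.0, high=99.0):
    """Clip each channel to its [low, high] percentiles and rescale to [0, 1]."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    return out

# Synthetic greenish "underwater" image used only to exercise the function.
rng = np.random.default_rng(0)
raw = rng.uniform(0.2, 0.6, size=(300, 400, 3)) * np.array([0.4, 1.0, 0.8])
corrected = stretch_channels(raw)
print(corrected.min(), corrected.max())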
Analysis of the cryptographic strength of the SHA-256 hash function using the SAT approach
Vadim V. Davydov, Michail D. Pikhtovnikov, Anastasia P. Kiryanova, Oleg S. Zaikin 428
Cryptographic hash functions play a significant role in modern information security systems by ensuring data integrity and enabling efficient data compression. One of the most important and widely used cryptographic hash functions is SHA-256, which belongs to the SHA-2 family. In this regard, studying the cryptographic resistance of SHA-256 with modern cryptanalysis approaches to preimage and collision attacks, with an emphasis on the practical feasibility of such attacks, is an urgent scientific task. To search for preimages of round-reduced versions of the SHA-256 compression function, logical cryptanalysis was applied, i.e., cryptanalysis problems were reduced to the Boolean satisfiability problem (SAT). For collision attacks, a combination of logical and differential cryptanalysis was utilized. The work presents a comparison between various methods for reducing the SHA-256 compression function to SAT and evaluates their efficiency. As a result of the work, preimages for the 17- and 18-round SHA-256 compression functions were found for the first time, as well as preimages for a weakened 19-round compression function. Basic differential paths were constructed, which facilitated faster finding of collisions for the 18-round compression function. Known differential paths were reduced to SAT, which led to finding collisions for the 19-round compression function. The work demonstrates the possibility of combining two cryptanalysis methods to enhance the efficiency of analyzing cryptographic algorithms. The results of the study confirm that the full-round SHA-256 hash function remains resistant to preimage and collision attacks within the scope of the applied SAT-based approach.
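The reduction-to-SAT idea can be sketched at toy scale (assuming the PySAT package is available; the encoding below is a deliberately tiny Tseitin-style example, not the actual SHA-256 encoding): the output of a single AND gate is fixed and the solver searches for a consistent input assignment, a miniature analogue of a preimage search.

# Toy logical-cryptanalysis sketch with PySAT (assumed installed as `python-sat`):
# encode c <-> (a AND b) with Tseitin clauses, fix the "output" c = 1,
# and let the SAT solver recover an input assignment.
from pysat.solvers import Glucose3

a, b, c = 1, 2, 3  # Boolean variables
solver = Glucose3()
solver.add_clause([-a, -b, c])   # a & b -> c
solver.add_clause([a, -c])       # c -> a
solver.add_clause([b, -c])       # c -> b
solver.add_clause([c])           # constrain the output to 1

if solver.solve():
    print("model:", solver.get_model())  # expected: [1, 2, 3], i.e. a = b = c = True
solver.delete()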
Investigation of the possibility of using evolutionary algorithms for conditional generation of attributed graphs
Irina Yu. Deeva, Polina O. Andreeva, Egor N. Shikov, Anna V. Kaluzhnaya 438
The field of synthetic generation of attributed graphs is actively developing due to advances in generative modeling. However, a key problem of current methods remains the limited diversity of synthesized graphs, due to the dependence on the characteristics of real data used to train generative models. This is a problem because topological properties of graphs and statistical characteristics of attributes critically affect the performance of graph-based machine learning models. In this paper, we test the hypothesis that a combination of evolutionary algorithms and Bayesian networks can provide flexible control over the generation of both graph topology and attributes. The proposed approach includes two key components: evolutionary algorithms to control topological characteristics of the graph (e.g. average vertex degree, clustering coefficient) and Bayesian networks to generate attributes with given statistical parameters such as assortativity or average correlation between attributes. The method allows explicitly setting constraints on graph properties, providing variability independent of the original data. Experiments confirmed that the approach can generate attributed graphs with a wide range of topological characteristics and given statistical parameters of the attributes with sufficiently low generation error. The results demonstrate the promising use of evolutionary and Bayesian methods for conditional graph generation. The main advantage of the approach is the ability to decompose the problem into independent control of topology and attributes, which opens new possibilities for testing machine learning algorithms under controlled conditions. A limitation is the computational complexity of evolutionary optimization, which requires further work to optimize the algorithm. In the future, the method can be extended to generate dynamic graphs and integrate with deep generative models.
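A minimal sketch of the evolutionary component (assuming NetworkX; a (1+1)-style hill climber toward a target clustering coefficient, far simpler than the method in the paper, and without the Bayesian attribute generation):

# Toy (1+1) evolutionary search that mutates a graph toward a target average clustering
# coefficient. Assumes NetworkX; the real method also controls other topological and
# attribute statistics via Bayesian networks.
import random
import networkx as nx

random.seed(0)
target = 0.4
G = nx.gnp_random_graph(60, 0.08, seed=0)
fitness = abs(nx.average_clustering(G) - target)

for _ in range(2000):
    H = G.copy()
    if random.random() < 0.5 and H.number_of_edges() > 0:
        H.remove_edge(*random.choice(list(H.edges())))
    else:
        H.add_edge(*random.choice(list(nx.non_edges(H))))
    f = abs(nx.average_clustering(H) - target)
    if f <= fitness:          # accept non-worsening mutations
        G, fitness = H, f

print("clustering:", round(nx.average_clustering(G), 3))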
Analysis of the applicability of existing secret sharing schemes in the post-quantum era
Elizar F. Kustov, Sergey V. Bezzateev 446
Modern approaches to secret sharing have been examined, encompassing both classical and post-quantum cryptographic schemes. The study explores methods for distributing secret information among multiple participants using various mathematical primitives, such as Lagrange and Newton polynomials, the Chinese remainder theorem, error-correcting codes, lattice theory, elliptic curve isogenies, multivariate equations, and hash functions. A comparative analysis of different schemes is provided in terms of their resistance to quantum attacks, efficiency, and compliance with Shamir's criteria. Special attention is given to assessing the schemes' resilience against attacks using quantum computers, which is particularly relevant given the advancement of quantum technologies. The advantages and disadvantages of each scheme are discussed, including their computational complexity, flexibility, and adaptability to various conditions. It is shown that classical schemes, such as those based on Shamir's construction and Newton polynomials, remain efficient and easy to implement but are vulnerable to quantum attacks. Meanwhile, post-quantum schemes based on lattice theory demonstrate a high level of security but require more complex computations.
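For reference, the classical Shamir scheme mentioned above can be sketched as follows (a toy implementation over a prime field using Lagrange interpolation at zero; purely illustrative, not a post-quantum or production construction):

# Toy Shamir (k, n) secret sharing over GF(p): shares are points of a random
# degree-(k-1) polynomial; any k shares reconstruct the secret via Lagrange
# interpolation at x = 0. Illustrative only (no side-channel or parameter hardening).
import random

P = 2**61 - 1  # a Mersenne prime, large enough for the demo

def split(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice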
Deep learning-enhanced contour interpolation techniques for 3D carotid vessel wall segmentation
Nouar Ismail, Alexandra S. Vatyan, Tatyana A. Polevaya, Alexander A. Golubev, Dmitriy A. Dobrenko, Aleksei A. Zubanenko, Natalya F. Gusarova, Almaz G. Vanyurkin, Mikhail A. Chernyavskiy 457
When studying human vessels using the contour interpolation method, there is a problem of insufficient data for training neural networks for automatic segmentation of the carotid artery wall. In this paper, automated methods of contour interpolation are proposed to expand the datasets, which allows for improved segmentation of vessel walls and atherosclerotic plaques. In this study, the performance of various interpolation methods is compared with the traditional nearest-neighbor technique. A theoretical description and comparative evaluation of Linear, Polar, and Spline interpolation are presented. Quantitative metrics, including the Dice Similarity Coefficient, area and index differences, and normalized Hausdorff distances, are used to evaluate the performance of the methods. Performance evaluations are carried out on various vessel morphologies for both the lumen and the outer wall boundaries. The study showed that Linear interpolation achieves better geometric performance (Cohen's Kappa 0.92) and improved neural network performance (Score 0.86) compared to the state-of-the-art model. The proposed interpolation methods consistently outperform nearest-neighbor interpolation. The Polar and Spline methods are effective in generating anatomically plausible contours with improved smoothness and continuity, eliminating transition artifacts between slices. Statistical analysis confirmed the good agreement and reduced variation of these methods. The results of the study are useful for the development of automated tools for assessing atherosclerotic plaque in carotid arteries, which is important for stroke prevention. Implementation of the improved interpolation methods into clinical imaging workflows can significantly improve the reliability, accuracy, and clinical utility of vessel wall segmentation.
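The linear variant can be sketched as follows (an illustrative NumPy implementation under the assumption that both contours are resampled to the same number of corresponding points; not the exact pipeline of the paper):

# Illustrative linear interpolation between two closed vessel-wall contours.
# Assumes each contour is an (N, 2) array of points on adjacent slices;
# the intermediate contour at fraction t is a point-wise convex combination.
import numpy as np

def resample_closed(contour, n=128):
    """Resample a closed 2D contour to n points by arc length."""
    pts = np.vstack([contour, contour[:1]])            # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, s[-1], n, endpoint=False)
    x = np.interp(u, s, pts[:, 0])
    y = np.interp(u, s, pts[:, 1])
    return np.column_stack([x, y])

def interpolate_slice(c0, c1, t):
    a, b = resample_closed(c0), resample_closed(c1)
    return (1.0 - t) * a + t * b

# Two synthetic circular contours of different radius standing in for adjacent slices.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
inner = np.column_stack([3.0 * np.cos(theta), 3.0 * np.sin(theta)])
outer = np.column_stack([4.0 * np.cos(theta), 4.0 * np.sin(theta)])
mid = interpolate_slice(inner, outer, 0.5)
print(np.mean(np.linalg.norm(mid, axis=1)))  # approximately 3.5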
Detecting fraud activities in financial transactions using SMOTENN model
Irfan Syamsuddin, Sirajuddin Omsa, Andi Rustam, Dahsan Hasan 466
The financial industry plays an important role in national economic growth. Because of their critical function, banks have become prime targets for numerous financial crimes. Among these, fraudulent financial transactions are regarded as a severe issue in the financial industry. Conventional approaches are frequently criticized for being ineffective in dealing with fraud in finance; therefore, machine learning approaches offer a potential answer to this problem. The goal of this research is to introduce a novel SMOTENN model for accurate early detection of cyber fraud activities in financial transactions. Two methods are used in this study: first, the Neural Network algorithm is applied to a dataset that contains unbalanced classes; second, the dataset is balanced using the SMOTE (Synthetic Minority Over-sampling Technique) algorithm and the Neural Network algorithm is then applied, a combination we refer to as SMOTENN. Both models are assessed using the evaluation metrics of Area Under the Curve, F1-score, precision, recall, specificity, accuracy, and processing time. The comparative analysis shows that the performance of the new SMOTENN model with a balanced dataset is significantly better than that of the neural network approach with an imbalanced dataset, implying that the new SMOTENN model is effective in detecting fraudulent activities in financial transactions.
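A hedged sketch of the SMOTE-then-neural-network pipeline (assuming imbalanced-learn and scikit-learn; the dataset, class ratio, and network size are illustrative choices, not the study's configuration):

# Illustrative SMOTE + neural network ("SMOTENN"-style) pipeline on a synthetic
# imbalanced dataset. Library choice and hyperparameters are assumptions for the sketch.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance only the training split, then fit the neural network on the resampled data.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))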
A deep learning approach for adaptive electrocardiogram-based authentication in an Internet of Things-enabled telehealth system
Mohamed Abdalla Elsayed Azab 475
As telehealth services have become integral to healthcare applications, robust authentication mechanisms are critical for safeguarding sensitive patient data and services. Conventional authentication techniques, including passwords and tokens, are susceptible to theft and security breaches. This vulnerability highlights the need for alternative methods that offer improved security and ease of use. Biometric authentication, which leverages unique physical and behavioral traits, has emerged as a promising alternative. Among various biometric modalities, electrocardiogram (ECG) signals stand out because of their uniqueness, stability, and noninvasive nature. This study introduces an innovative deep-learning-based authentication system that utilizes ECG signals to enhance security in Internet of Things (IoT)-powered telehealth environments. The proposed model employs a hybrid architecture, starting with a Siamese Neural Network (SNN) for dynamic verification, followed by a Convolutional Neural Network (CNN) for feature extraction, utilizing an optimized Sequential Beat Aggregation approach for robust ECG-based authentication. The system operates securely and adaptively and performs real-time authentication without requiring human intervention. The research approach involved the acquisition and processing of electrocardiogram data from the ECG-ID dataset, which encompasses 310 ECG recordings obtained from 90 individual subjects. This dataset provided a comprehensive set of samples for training and evaluation. The model achieved high authentication accuracy (98.5 %–99.5 %) and a false acceptance rate of 0.1 % with minimal computational overhead, validating its feasibility for real-time applications. This study integrates ECG-based authentication into telehealth systems, creating a secure foundation for safeguarding patient data. The innovative use of ECG signals advances the development of secure and adaptable personalized remote health monitoring systems.
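As a hedged architectural sketch (assuming PyTorch; layer sizes, beat length, and the distance-based verification rule are illustrative, not the paper's tuned model), a Siamese CNN embeds two ECG beat windows with shared weights and compares their distance against a threshold:

# Minimal Siamese CNN sketch for ECG-based verification (PyTorch assumed).
# Two beat windows are embedded by a shared 1D CNN; a small embedding distance
# means "same person". Shapes and threshold are illustrative only.
import torch
import torch.nn as nn

class BeatEncoder(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, x):          # x: (batch, 1, samples)
        return self.net(x)

encoder = BeatEncoder()

def verify(beat_a, beat_b, threshold=1.0):
    """Siamese comparison: shared weights, Euclidean distance, fixed threshold."""
    with torch.no_grad():
        d = torch.norm(encoder(beat_a) - encoder(beat_b), dim=1)
    return d < threshold

# Two random 250-sample "beats" standing in for aggregated ECG segments.
a, b = torch.randn(1, 1, 250), torch.randn(1, 1, 250)
print(verify(a, b))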
MODELING AND SIMULATION
Method for identifying the active module in biological graphs with multi-component vertex weights
Dmitrii A. Usoltsev, Ivan I. Molotkov, Mykyta N. Artomov, Alexey A. Sergushichev, Anatoly A. Shalyto 487
An active module in biological graphs is a connected subgraph whose vertices share a common biological function. To identify an active module, one must first construct a weighted biological graph. The weight of each vertex is calculated based on biological experiments investigating the target biological function. However, the results of a single experiment may not fully describe the desired active module, covering only part of it and potentially introducing uncertainty into the vertex weights. This work demonstrates that employing Fisher's method to integrate data from multiple experiments, followed by applying a Markov chain Monte Carlo (MCMC) and machine learning-based approach to the results of Fisher's method, enables more effective identification of active modules in biological graphs. The study utilizes the InWebIM protein-protein interaction graph, a human brain reconstruction graph from the BigBrain project, and a gene graph for the organism Caenorhabditis elegans. To combine the results of several experiments into a single outcome within one graph, Fisher's method is applied. Afterwards, the search for active modules is conducted using the MCMC and machine learning-based method. To validate the proposed method on real data, results from Genome-Wide Association Studies on schizophrenia and smoking are used, along with the gene expression matrix of patients with skin melanoma from the TCGA project. Applying Fisher's method makes it possible to consider the results of multiple biological experiments simultaneously. Subsequent use of the MCMC and machine learning-based method improves the accuracy of identifying active modules compared to ranking graph vertices solely by Fisher's method. Considering the results of multiple biological experiments when determining active modules plays a crucial role in increasing the accuracy of identifying the vertices of the active module. This, in turn, promotes a deeper understanding of the biological mechanisms of diseases, which can be of great significance for the development of new diagnostic and therapeutic methods.
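The data-integration step can be illustrated with SciPy's implementation of Fisher's method (a hedged sketch on synthetic p-values, not the study's graph pipeline): per-vertex p-values from several experiments are combined, and vertices are then ranked by the combined evidence.

# Illustrative use of Fisher's method to merge per-vertex p-values from several
# experiments before searching for an active module. SciPy assumed; data are synthetic.
import numpy as np
from scipy.stats import combine_pvalues

rng = np.random.default_rng(0)
n_vertices, n_experiments = 1000, 4
pvals = rng.uniform(size=(n_vertices, n_experiments))
pvals[:20] *= 0.01          # a small planted "module" with consistently low p-values

combined = np.array([combine_pvalues(row, method='fisher')[1] for row in pvals])
ranking = np.argsort(combined)           # vertices ordered by combined significance
print(ranking[:10])                      # most of the planted module should appear here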
Modeling of nonlocal porous functionally graded nanobeams under moving loads
Ridha A. Ahmed, Wael N. Abdullah, Nadhim M. Faleh, Mamoon A. Al-Jaafari 498
This study focuses on the dynamic response of porous functionally graded nanomaterials to moving loads. The analysis was performed using two approaches: the Ritz method with Chebyshev polynomials in cosine form, and the differential quadrature method followed by an inverse Laplace transformation. Both approaches utilize a nano-thin beam formulation based on an improved higher-order beam model and nonlocal strain gradient theory with two characteristic length scales, referred to as the nonlocality and strain gradient length scales. The constituent gradation of the pore-graded materials follows power-law dependencies, with porosity factors that control the pore volume under either a uniform or non-uniform distribution of pores. Moreover, a variable scale modulus was adopted to further improve accuracy by considering the scale effects for graded nano-thin beams. The first part of the study addresses the equation of motion, which is solved by applying the Ritz technique with Chebyshev polynomials. In the second part, the governing equations for nanobeams are discussed; the differential quadrature method is used to discretise them, and the inverse Laplace transform is used to obtain the dynamic deflections. The results of the present study elucidate the effects of the moving load speed, nonlocal strain gradient factors, porosity, pore number and distribution, and elastic medium on the dynamic deflection of functionally graded nanobeams.
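For orientation, the two length scales mentioned above usually enter through the standard nonlocal strain gradient constitutive relation, quoted here in its commonly used one-dimensional form as an assumption; the paper's exact higher-order beam formulation may differ:

\[
\left[1-(e_{0}a)^{2}\nabla^{2}\right]\sigma_{xx}
= E\left[1-l^{2}\nabla^{2}\right]\varepsilon_{xx},
\]

where \(e_{0}a\) is the nonlocality length scale, \(l\) is the strain gradient length scale, and \(E\) is the (graded) elastic modulus; setting \(e_{0}a=l=0\) recovers the classical local relation.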
Design of the microelectromechanical logic element based on a comb-drive resonator
Alexander A. Solovev, Evgeny F. Pevtsov, Vladimir A. Kolchuzhin 508
CMOS technology has nearly reached the physical limits of transistor scaling and exhibits significant operational limitations at extreme temperatures and under ionizing radiation. This work proposes a methodology for designing logic elements based on an alternative technology utilizing comb-drive microelectromechanical resonators that operate on a non-contact principle and are reconfigurable during operation. A method is proposed for calculating the geometric parameters of the device using analytical expressions and considering the technological norms necessary to achieve the specified characteristics: the natural frequency of resonator oscillations (100 kHz) and the quality factor (20) at atmospheric pressure. Optimal geometric parameters of the device and the characteristics of the capacitive cells affecting its sensitivity and quality factor are determined, taking air damping into account. The accuracy of the calculations is sufficient for designing photomasks without using specialized software. A compact model of a logic microelectromechanical element has been developed, allowing for system-level analysis of dynamic characteristics and implementation of a functionally complete set of logic operations. The developed design flow can be applied to create logic microelectromechanical elements with the possibility of reprogramming during operation and further cascading of such devices for constructing complex digital circuits. The article is useful for developers of microelectromechanical accelerometers and gyroscopes and proposes an alternative approach to creating three-dimensional models based on a library of parametric components and to generating compact models for system analysis.
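The target resonance frequency and quality factor quoted above are typically tied to the lumped parameters through the standard relations (given here as a hedged reminder, not the authors' full design equations):

\[
f_{0}=\frac{1}{2\pi}\sqrt{\frac{k}{m}},\qquad
Q=\frac{\sqrt{km}}{c}=\frac{m\,\omega_{0}}{c},
\]

where \(k\) is the suspension stiffness, \(m\) the effective proof mass, and \(c\) the air-damping coefficient; for the stated \(f_{0}=100\) kHz and \(Q=20\), these relations fix the admissible damping once \(k\) and \(m\) are chosen.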
Critical loads of antisymmetric and mixed forms of buckling of a CCCC-nanoplate under biaxial compression
Mikhail V. Sukhoterin, Irina V. Voytko, Anna A. Sosnovskaya 520
The calculation of the spectrum of critical loads of antisymmetric and mixed equilibrium forms after loss of stability of a highly elastic rectangular nanoplate clamped along its entire contour (CCCC-plate, C denotes a clamped edge) under biaxial compression is studied for various values of the nonlocal Eringen parameter. The desired forms of supercritical equilibrium are represented by two hyperbolic-trigonometric series with indeterminate coefficients for the corresponding combinations of odd and even functions. Each of the series satisfies the basic differential equation of the physical state according to Eringen, and their sum then satisfies all the boundary conditions of the problem. As a result, an infinite homogeneous system of linear algebraic equations is obtained with respect to a single sequence of unknown series coefficients, containing the value of the compressive load as the main parameter. To find the eigenvalues (critical loads), the iterative process of finding non-trivial solutions proposed by the authors was used in combination with the "shooting" method. For a number of values of the nonlocal parameter e0a [nm] from the operating range [0–2] of the Eringen theory (0 corresponds to the classical theory), with a step of 0.25, a spectrum of 10 relative critical loads was obtained for the first time. It was found that the critical loads decrease as the nonlocal parameter increases. No edge effects were detected. The accuracy of the computer calculations was analyzed. The variable parameters of the computational program are the relative compressive load, the ratio of the plate sides, the values of the nonlocal Eringen parameter, the number of iterations, the number of terms retained in the series, and the number of significant digits of the computational process. The proposed technique and the numerical results obtained can be used in the design of sensitive elements of various sensors in smart structures.
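For context, buckling problems of this type are usually governed by the Eringen nonlocal Kirchhoff-plate equation under biaxial in-plane compression, reproduced here in its commonly used form as an assumption about the starting point (the paper's working equation may be stated differently):

\[
D\nabla^{4}w+\left[1-(e_{0}a)^{2}\nabla^{2}\right]
\left(N_{x}\frac{\partial^{2}w}{\partial x^{2}}+N_{y}\frac{\partial^{2}w}{\partial y^{2}}\right)=0,
\]

where \(D\) is the bending stiffness, \(w\) the deflection, \(N_{x}\) and \(N_{y}\) the in-plane loads taken positive in compression, and \(e_{0}a\) the nonlocal parameter (\(e_{0}a=0\) recovers the classical plate); this form reproduces the reported trend of critical loads decreasing as the nonlocal parameter grows.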
On the properties of compromise M-estimators optimizing weighted L2-norm of the influence function
Daniil V. Lisitsin, Konstantin V. Gavrilov 527
The paper develops a theory of M-estimators optimizing the weighted L2-norm of the influence function. This estimation-quality criterion is quite general and, in addition, allows obtaining solutions belonging to the class of redescending estimators, i.e., estimators possessing the property of stability to asymmetric contamination. Such estimators, in particular, were studied within the framework of the locally stable approach of A.M. Shurygin, based on the analysis of the estimator instability functional (the L2-norm of the influence function), or his approach based on the model of a series of samples with random point contamination (the point Bayesian contamination model). In this paper, a compromise family of estimators is studied for which the optimized functional is a convex linear combination of two basic criteria. The compromise family is similar to the conditionally optimal family of estimators proposed by A.M. Shurygin, but the criteria used can be squares of the weighted L2-norms of the influence function with arbitrary pre-specified weight functions. The considered subject area has remained little studied to date. In the course of the research, we used a theory we had developed earlier, which describes the properties of estimators that optimize the weighted L2-norm of the influence function. As a result of the study, a number of properties of compromise estimators were obtained, and the uniqueness of the family elements was shown. A family member that delivers equal values of the two criteria was considered separately: it was shown that this estimator corresponds to the saddle point of the optimized functional and is also a minimax solution with respect to the basic criteria on the set of all regular score functions. The constructed theory is illustrated using the example of estimating the mathematical expectation of a normal distribution under conditions of targeted malicious influence on a data set (similar to a data poisoning attack in adversarial machine learning).
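Schematically, in notation assumed here purely for illustration (not taken from the paper), the compromise family minimizes a convex combination of two squared weighted L2-norms of the influence function:

\[
J_{\lambda}(\psi)=\lambda\int \lVert \mathrm{IF}(x;\psi)\rVert^{2}\,w_{1}(x)\,dF(x)
+(1-\lambda)\int \lVert \mathrm{IF}(x;\psi)\rVert^{2}\,w_{2}(x)\,dF(x),
\qquad \lambda\in[0,1],
\]

where \(w_{1}\) and \(w_{2}\) are the pre-specified weight functions of the two basic criteria; \(\lambda=1\) and \(\lambda=0\) recover those criteria, and the member discussed above is the one for which both integrals take equal values at the optimum.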
Combined approach to fault detection in complex technical systems based on bond-graph model
Valentin A. Dmitriev, Maria Ya. Marusina 536
A new fault detection approach for complex technical systems has been developed and investigated, enabling the identification and classification of single and multiple simultaneous faults. The challenge of reliable and timely identification of both single and multiple simultaneous faults under conditions of limited access to labeled data has been addressed. Faults threaten the safe operation of autonomous equipment under field operating conditions, where traditional model-based or data-driven approaches used individually prove ineffective. This work presents a hybrid approach to fault detection. The proposed solution combines an analytical bond-graph model and a Convolutional Neural Network (CNN). The bond-graph model generates residuals, i.e., the differences between values calculated from the physical laws of the system and sensor measurements. The residuals are then analyzed by the CNN, which is trained to detect and classify faults based on their characteristic features. Linear Fractional Transformation is employed to account for parameter uncertainties (e.g., resistance or capacitance). This approach combines a priori knowledge of the system physics with the capabilities of deep learning. The effectiveness of the approach was evaluated on a simulator of a hydraulic steering control system for autonomous equipment. Gaussian noise was added to the simulation to emulate real-world conditions. The experiments included incipient, abrupt, single, and multiple faults. Tests with varying amounts of training data, using sample sizes of less than 128, demonstrated the higher effectiveness of the proposed hybrid approach compared to classical machine learning methods (such as Random Forest or K-Nearest Neighbors). A solution is proposed for fault detection in hydraulic control systems of autonomous equipment. The developed approach is particularly effective with limited data, making it suitable for field conditions. It allows for timely detection and classification of faults (e.g., valve leaks or solenoid valve failures), which reduces the risk of failures and ensures the safety of autonomous equipment. The results can be adapted and implemented for electrical, mechanical, and other complex technical systems.
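A hedged sketch of the residual-classification stage (assuming PyTorch; the window length, channel count, and fault classes are illustrative, and the residual generation from the bond-graph model is mocked with random data):

# Illustrative 1D CNN that classifies windows of bond-graph residuals into fault classes.
# PyTorch assumed; in the real approach the residuals come from the analytical model,
# here they are random tensors used only to show the tensor shapes involved.
import torch
import torch.nn as nn

n_residuals, window, n_classes = 4, 64, 5   # residual channels, samples per window, fault types

model = nn.Sequential(
    nn.Conv1d(n_residuals, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)

residual_window = torch.randn(8, n_residuals, window)   # batch of residual windows
logits = model(residual_window)
print(logits.shape, logits.argmax(dim=1))                # (8, 5) and predicted fault classes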
Feature extraction methods for metagenome de Bruijn graphs collections based on samples classification information
Artem B. Ivanov, Anatoly A. Shalyto, Vladimir I. Ulyantsev 545
The paper considers the comparative analysis of collections of metagenomic samples using de Bruijn graphs. We propose methods for automatic feature extraction based on the results of comparative sample analysis, expert metadata, and statistical tests to improve the accuracy of classification models. In this paper, features are connected subgraphs of the de Bruijn graph. The first method, named unique_kmers, is used to extract strings of length k (k-mers) that occur only in samples of a certain class. The second method, named stats_kmers, is used to extract k-mers whose frequency of occurrence differs statistically between sample classes. To extract interpretable features, a third method has been developed that extracts subgraphs from de Bruijn graphs based on the selected nodes obtained by applying one of the first two methods. Data analysis consists of two stages: first, the unique_kmers or stats_kmers method is applied for data preprocessing; second, the third method is applied to obtain interpretable features. The methods were tested on four generated datasets that model the properties of real metagenomic communities, such as the presence of similar species (strains) or differences in the relative abundance of bacteria. The developed methods were used to extract features, and a machine learning model was trained on the extracted features to classify samples from the test datasets. For comparison, the results of taxonomic annotation of samples using the Kraken2 program were used as features. It was shown that classification accuracy increased when the models used the features obtained by the proposed methods compared to models trained on taxonomic features. The developed methods are useful for comparative analysis of metagenomic sequencing data and can form the basis of decision support systems, for example, in the diagnostics of human diseases based on gut microbiota sequencing data.
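The unique_kmers idea can be illustrated with a few lines of Python (a set-based toy on short made-up strings, not the graph-based implementation of the paper):

# Toy version of the unique_kmers idea: collect k-mers that occur in every sample of
# one class and in no sample of the other class. Real data are de Bruijn graph nodes;
# the sequences below are invented for illustration.
def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def class_kmers(samples, k):
    return [kmers(s, k) for s in samples]

def unique_kmers(class_a, class_b, k=4):
    in_all_a = set.intersection(*class_kmers(class_a, k))
    in_any_b = set.union(*class_kmers(class_b, k))
    return in_all_a - in_any_b

healthy = ["ACGTACGTGGA", "TTACGTACGTG"]
disease = ["ACGTTTTGGA", "GGAACGTTTT"]
print(sorted(unique_kmers(healthy, disease)))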
Automatic calibration of the receiving line of information and control systems in real time
Nguyen Trong Nhan, Xuan Luong Nguyen, Phung Bao Nguyen 554
In this paper, a novel methodology for real-time automatic calibration of digital transceiver modules in the receiving path of information and control systems is presented. The methodology is grounded in the formation of calibration coefficients through a comparison between the complex signal amplitude at the output of the receiving path of a "virtual" reference module and the complex signal amplitude at the output of each receiving path after signal accumulation. The calibrated value of each receiving path's output complex signal amplitude is determined by multiplying that amplitude by its corresponding calibration coefficient. The gain pattern of the information and control system is synthesized by calculating the weighted sum of the calibrated output complex signal amplitudes across all receiving paths, thereby maximizing the peak gain and minimizing the side lobe levels. Simulations and experimental analyses were performed on an information and control system operating in the L-band to validate the proposed methodology. The results indicated a reduction in amplitude errors to 3.79 dB and a decrease in phase errors to 5°40ʹ12ʺ. The proposed methodology meets the requirements for synthesizing a self-calibrating subsystem model employing a soft configuration approach.
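Numerically, the per-path correction reduces to a complex multiplication followed by a weighted sum (a hedged NumPy sketch with synthetic amplitude and phase errors; the reference amplitudes and weights here are placeholders, not the paper's values):

# Illustrative per-path calibration: coefficients are the ratio of the reference complex
# amplitude to the measured (accumulated) one; calibrated outputs are then combined with
# beamforming weights. All numbers below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_paths = 16
reference = np.ones(n_paths, dtype=complex)               # "virtual" reference module output

# Measured outputs with random gain and phase errors per receiving path.
gain_err = 10 ** (rng.normal(0.0, 0.2, n_paths) / 20.0)
phase_err = np.deg2rad(rng.normal(0.0, 10.0, n_paths))
measured = reference * gain_err * np.exp(1j * phase_err)

coeff = reference / measured                               # calibration coefficients
calibrated = measured * coeff                              # per-path corrected amplitudes

weights = np.ones(n_paths) / n_paths                       # uniform beamforming weights
print(abs(np.sum(weights * measured)), abs(np.sum(weights * calibrated)))  # gain before/after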
Model for storing spatial data of tensor geophysical fields
Gulnara R. Vorobeva, Andrei V. Vorobev, Gleb O. Orlov 565
It is known that geophysical fields (geomagnetic, gravitational, and electromagnetic), when recorded or modeled, represent a set of several vector components characterizing the change in the corresponding parameters in space and time. Geophysical field data are currently stored using known data models which usually have a relational structure. Analysis of known studies has shown the redundancy and inefficiency of this approach, which is reflected in the low speed of obtaining the desired data when complex multi-predicate queries are used. The continuously growing volume and complexity of the data under consideration require new approaches to organizing their storage in order to improve the performance of information systems used to support decision-making based on geophysical field data. This paper proposes and examines a model for representing and storing geophysical field data that ensures increased performance of such information systems. An analysis of the specific features of geophysical fields arising from their tensor nature is presented. The main data components are considered, and promising options for combining known data models are determined to obtain the best result for improving the performance of the corresponding databases. A multi-axis model of geophysical field data is proposed that takes into account the tensor multi-component structure of the fields and combines the features of hierarchical data organization and element-centric information markup. A distinctive feature of the proposed model is the introduction of static and dynamic axes. This approach ensures the representation of metadata, operational data, and archived data, and the interaction between them at the level of background processes with the participation of software triggers with temporal predicates. Using the example of geomagnetic field data and its variations, an increase in the speed of executing single- and multi-predicate queries for data selection and for the insertion of new records into the storage is demonstrated. Computational experiments comparing the proposed and known approaches to the organization and storage of geophysical field data on various sets and volumes of data showed that the implementation of the multi-axis data model increases the speed of executing single-predicate queries by 25.7 %, multi-predicate queries by 20.1 %, and queries for inserting new records by 21.3 %. This allows us to conclude that the proposed solution is appropriate.
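Purely as a hedged illustration of the element-centric, multi-axis idea (the field names and the static/dynamic split below are hypothetical, not the schema proposed in the paper), a record might separate rarely changing metadata axes from streaming observation axes:

# Hypothetical element-centric record separating static axes (metadata that rarely changes)
# from dynamic axes (streaming tensor components); all names and values are illustrative only.
record = {
    "field": "geomagnetic",
    "static_axes": {
        "station": "EXAMPLE-01",
        "location": {"lat": 56.4, "lon": 58.6},
        "components": ["X", "Y", "Z"],
    },
    "dynamic_axes": {
        "timestamp": "2024-01-01T00:00:00Z",
        "values": {"X": 15432.1, "Y": 3120.4, "Z": 52310.7},
        "variation": {"X": -1.2, "Y": 0.4, "Z": 2.1},
    },
}
print(record["dynamic_axes"]["values"]["X"])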
Boundary estimation of the reliability of cluster systems based on the decomposition of the Markov model with limited recovery of nodes with accumulated failures
Vladimir A. Bogatyrev, Stanislav V. Bogatyrev, Anatoly V. Bogatyrev 574
The possibilities of a boundary assessment of the reliability of a cluster consisting of many nodes, each of which can be in a significant number of states differing in the performance of the required functions and in the average time to restore the node to a healthy state, are investigated. Estimating the reliability of such a cluster system based on Markov processes is difficult at the stage of constructing the diagram of states and transitions due to its large dimension. The difficulty of building the model increases especially with limited node recovery, which leads to a queue of nodes requiring recovery. The proposed approach allows us to overcome this difficulty. Its distinctive feature is that it provides for the decomposition of the Markov cluster model and a step-by-step sequential refinement of the upper and lower boundary estimates of cluster reliability, taking into account the impact of the other cluster nodes on slowing down the recovery of each node. The peculiarity of the proposed approach is the decomposition of the model with the selection of an individual cluster node and the construction of its Markov model with the introduction of waiting states for node recovery caused by servicing the queue for the restoration of other previously failed cluster nodes. Having determined the probabilities of all states of the selected node on its Markov model and taking into account the identity of all cluster nodes, the average delays until the restoration of the serviceable state of the remaining cluster nodes with previous failures are determined based on the total probability formula. The calculated average delays are used at the next stage of calculating the Markov node model, refining the delay in starting recovery of the selected node due to the influence of the recovery queue of the remaining nodes in the cluster. Based on the proposed model, the availability coefficient is estimated for a cluster consisting of a significant number of structurally complex nodes characterized by a variety of states of different performance and different recovery times of the node to its initial working condition. As a result of decomposition, the proposed model makes it possible to overcome the problem of an avalanche-like increase in the complexity of the cluster model with an increase in the number of its nodes and the number of their states. The calculations performed have shown the convergence of the proposed boundary estimates of the reliability of a cluster with a significant number of structurally complex nodes. The results obtained can be used to assess the reliability and justify the choice of cluster structure as well as the disciplines of cluster maintenance and recovery when failures accumulate, taking into account limited recovery resources that lead to the formation of queues of failed elements to be restored. The proposed model can be used to analyze the impact of the accumulation of failures in different cluster nodes on the delays in servicing the incoming request stream.
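The node-level computation described above boils down to solving a small continuous-time Markov chain for its stationary distribution (a hedged sketch with a made-up four-state node: healthy, degraded, failed and waiting in the repair queue, under repair; all rates are placeholders, not values from the paper):

# Toy node-level Markov availability calculation: solve pi Q = 0, sum(pi) = 1 for a
# four-state node (0 healthy, 1 degraded, 2 failed and waiting for the repair queue,
# 3 under repair). Rates are illustrative placeholders only.
import numpy as np

lam, lam_d, w, mu = 0.01, 0.05, 0.5, 1.0   # failure, degradation, queue-release, repair rates
Q = np.array([
    [-(lam + lam_d), lam_d,  lam,  0.0],   # healthy -> degraded or failed
    [0.0,           -lam,    lam,  0.0],   # degraded -> failed
    [0.0,            0.0,   -w,    w  ],   # waiting in the repair queue -> repair starts
    [mu,             0.0,    0.0, -mu ],   # repair completes -> healthy
])

A = np.vstack([Q.T, np.ones(4)])           # stationary equations plus normalization
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

availability = pi[0] + pi[1]               # states in which the node still performs its functions
print(round(availability, 4))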