Summaries of the Issue

REVIEW PAPERS

Fluorescence studies of natural photosensitizers in oncology and antimicrobial therapy
Denis O. Evtifeev, Zyubin Andrey Yu., Elizaveta A. Demishkevich, Samusev Ilia G.
223
The article provides an overview of current papers on the use of natural photosensitizers for photodynamic therapy and photodynamic inactivation of microorganisms. The existing photosensitizers with high selectivity, high singlet oxygen quantum yield, and minimal dark toxicity are considered. It has been shown that natural compounds, such as curcumin, hypericin, riboflavin, berberine, chlorophylloids, psoralens, and anthracyclines, are promising candidates for photodynamic therapy due to their biocompatibility and rich spectrum of photochemical and biochemical properties. Also, a review of promising rare and less studied photosensitizers was conducted. A generalized analysis of modern publications to date has been performed as well as an analysis of the authors’ experimental data on steady-state and time-resolved fluorescence and microscopy (confocal and fluorescence-lifetime imaging microscopy) of natural photosensitizers as well as their therapeutic and antimicrobial activity in vitro and in vivo. It has been shown that hypericin and perylene quinones achieve a singlet oxygen quantum yield of ≈ 0.5–0.6 at ε > 4·10⁴ l·mol⁻¹·cm⁻¹, providing effective photodynamic therapy of tumors and a logarithmic reduction (6–7 log CFU) in bacterial load at moderate light doses (below 20 J·cm⁻²). Curcumin and riboflavin combine the therapeutic effect with bright fluorescence, allowing optical monitoring in real time. Psoralens implement an alternative mechanism of DNA crosslinking under long-wavelength ultraviolet radiation, which underlies therapy based on psoralens combined with long-wavelength ultraviolet irradiation as well as blood disinfection. Complexing with lanthanide ions or upconversion nanoparticles expands the excitation spectrum to the near-infrared range and enhances the diagnostic signal. Thus, natural photosensitizers are evolving into a versatile platform for the simultaneous treatment and optical monitoring of oncological and infectious diseases, while their incorporation into nanostructures, including rare-earth-ion-based systems, extends light-penetration depth and enables precise visualization of deep-seated tissues, paving the way for the clinical adoption of next-generation hybrid phototherapeutic technologies.
236
This paper presents a review of contemporary deep learning methods for processing remote photoplethysmography data. Architectures of convolutional neural networks, transformers, recurrent, and generative models are examined for video signal preprocessing and for extracting physiologically significant parameters under conditions involving artifacts caused by motion, illumination changes, or low video quality. An analysis of the prospects for implementing deep learning algorithms in real-world medical scenarios is conducted based on the proposed criteria, considering existing integration challenges, the demand for such solutions, and issues related to result validation. The study includes a review of existing deep learning approaches that utilize video signals to estimate imaging photoplethysmography signals. The methods are evaluated using newly proposed criteria, including the multidimensionality of the photoplethysmography output signal, the availability of open-source code, and the reporting of computational time costs, which is essential for their practical real-time application in medical institutions. It is shown that deep learning methods significantly outperform traditional approaches in physiological parameter estimation, cardiovascular disease diagnosis, and video signal preprocessing. However, most existing deep learning-based solutions are limited to one-dimensional output signals due to the complexity of obtaining multidimensional annotations required for supervised learning. Additional analysis revealed a lack of information regarding temporal and computational costs, which restricts the practical real-time implementation of these methods. The proposed systematization clarifies key terms related to photoplethysmography signal processing: contact photoplethysmography, imaging photoplethysmography, remote photoplethysmography, and photoplethysmographic imaging. Approaches to dataset collection are also described, considering the concepts of multidimensional, multichannel, and multimodal signals. The results may be applied in the development of remote health monitoring systems, including medical and consumer devices. The review will be of interest to specialists in biomedical engineering, medical informatics, and developers of physiological signal analysis solutions.

OPTICAL ENGINEERING

250
The paper examines the influence of heat treatment conditions on the size of quantum dots of CsPbI3 perovskites formed in fluorophosphate glasses and studies their luminescent properties. Fluorophosphate glasses with CsPbI3 quantum dots were obtained by high-temperature synthesis from batch reagents followed by additional heat treatment above the glass transition temperature. The heat treatment temperature was determined on the basis of differential scanning calorimetry data obtained with an STA 449 F1 Jupiter (NETZSCH) instrument. Absorption spectra were obtained using a Perkin Elmer Lambda 650 double-beam spectrophotometer. Photoluminescence spectra were obtained using a Perkin Elmer LS50B spectrofluorimeter. The absolute quantum yield was measured using an absolute photoluminescence (PL) quantum yield measurement system (Hamamatsu) with an integrating sphere unit. Quantum dots of CsPbI3 were formed in fluorophosphate glass. The growth of quantum dots in glass was controlled by heat treatment at temperatures above the glass transition temperature Tg by adjusting the temperature and duration. Optical measurement data confirmed the formation of CsPbI3 nanocrystals of 6–15 nm in size. The PL of CsPbI3 quantum dots varied in the range of 625–705 nm. The quantum yield varies non-monotonically with the heat treatment temperature. The maximum quantum yield of luminescence of pure CsPbI3 was 13 %. It is shown that the quantum yield of PL of CsPbI3 quantum dots with sizes of 6–15 nm weakly depends on the size of the quantum dots and varies in the range of 10–13 %. It is concluded that fluorophosphate glasses with CsPbI3 quantum dots can be used as red phosphors.

MATERIAL SCIENCE AND NANOTECHNOLOGIES

258
There is increasing interest in research on glass pipettes with micro- and nanoscale outlets which are used for non-destructive morphology studies of native biological objects in liquids, biosensors, and 3D printing. The shape and size of pipettes have a decisive influence on their ionic conductivity and mechanical stability, which directly impacts the results of measurements using them. This study examines ionic conductivity with changes in the shape and size of pipettes produced under different formation conditions. The effect of nonlinear ion current conductivity in high-aspect-ratio nanopipettes with outlet sizes of about 100 nm or less was discovered and studied. Glass pipettes are formed by heating and subsequent axial stretching of the capillaries under mechanical load. The shape and size of the formed pipettes are determined using a scanning electron microscope. The pipette surface is coated with a thin layer of Au using magnetron sputtering to improve their visibility in the electron microscope. Ionic conductivity and pipette outlet diameter are measured using voltammetry. The dependence of ionic conductivity changes on the shape and size of glass pipettes was obtained by varying thermal pulling parameters. Thermal pulling parameters were determined that ensure the formation of conical and high-aspect-ratio nanopipettes with 100–200 nm outlets and 3–8° convergence angles at the apex, used in scanning capillary microscopy. Pipettes with 500–1000 nm outlets and 3–5° convergence angles, used in the patch-clamp method, were obtained. Cases of nonlinear conductivity with different ion current rectification coefficients, arising when using high-aspect-ratio nanopipettes with ionic resistances of approximately 50–100 MΩ, were studied. The obtained results will enable the formation of pipettes with a given conductivity, shape, and size as well as the consideration of the effects of nonlinear conductivity of high-aspect-ratio nanopipettes in such areas as scanning capillary microscopy, the patch-clamp method, micro- and nanovolume injection of substances into cells, nanobiopsy, and capillary 3D printing.
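As a rough illustration of how the reported 50–100 MΩ values relate to tip geometry, the Python sketch below estimates a conical pipette resistance from a commonly used cone-plus-access-resistance approximation; the electrolyte conductivity and the formula itself are assumptions made for this sketch, not data taken from the paper.

```python
import numpy as np

def pipette_resistance_mohm(tip_radius_nm, half_angle_deg, kappa_s_per_m=1.3):
    """Cone resistance ~ 1 / (kappa * pi * r_tip * tan(theta)) plus access
    resistance 1 / (4 * kappa * r_tip). kappa = 1.3 S/m roughly corresponds to
    a 0.1 M KCl electrolyte; all numbers are illustrative."""
    r = tip_radius_nm * 1e-9
    theta = np.deg2rad(half_angle_deg)
    r_cone = 1.0 / (kappa_s_per_m * np.pi * r * np.tan(theta))
    r_access = 1.0 / (4.0 * kappa_s_per_m * r)
    return (r_cone + r_access) / 1e6

# e.g. pipette_resistance_mohm(50, 3) gives roughly 100 MOhm, in line with the
# 50-100 MOhm range reported for ~100 nm outlets with small convergence angles.
```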
Thermal conductivity of multilayer hexagonal boron nitride nanoscrolls
Mariya V. Savvateeva, Pilipenko Nikolay V., Igor V. Baranov, Abutrab A. Aliverdiev, Kolodiychuk Pavel A.
266
The article presents a theoretical analysis of the anisotropic thermal conductivity of multilayer hexagonal boron nitride (h-BN) nanoscrolls as promising fillers for thermal interfaces in electronic devices. Traditional thermally conductive composite materials, while possessing high thermal conductivity, are prone to agglomeration within the polymer matrix; their chemical inertness hinders the formation of strong bonds with the polymer, and their high electrical conductivity significantly limits their application in electronics. The h-BN-based material combines high thermal conductivity, excellent electrical insulation properties, and high processability for integration into electronic components. An analytical model is proposed to predict the thermal conductivity values of multilayer h-BN nanoscrolls in both the longitudinal and transverse directions. The analytical model for the anisotropic thermal conductivity of multilayer nanoscrolls (scrolled 2D nanoplates) is developed based on the generalized conductivity theory. Key scientific enhancements to existing models include the capability to increase the number of calculable layers and the dimensions of the nanoscrolls. To more accurately describe size effects, an interlayer scattering parameter is introduced for the first time in such a multilayer structure to correct the effective phonon mean free path within the material. Mathematical dependences of the thermal conductivity of multilayer h-BN nanoscrolls on the number of layers were obtained for the directions longitudinal and transverse to the nanoscroll axis. It is shown that as the number of layers increases, the longitudinal thermal conductivity (along the nanoscroll axis) decreases. The transverse thermal conductivity (perpendicular to the nanoscroll axis) is significantly higher than that of their carbon-based counterparts. Due to the absence of quantitative data (both experimental and numerical) for multilayer boron nitride nanoscrolls in available scientific literature, validation of the simulation results was performed on a similar system reported in open sources — a three-layer carbon nanoscroll. The obtained predictive results allow for assessing the influence of the layer count on the thermal conductivity of h-BN nanoscrolls and for synthesizing multilayer nanoscroll structures with a predetermined thermal conductivity value. It is demonstrated that multilayer h-BN nanoscrolls represent a promising alternative to carbon nanotubes in electronics for applications where it is critically important to eliminate “thermal bottlenecks” and ensure high inter-component electrical insulation.

AUTOMATIC CONTROL AND ROBOTICS

275
This paper addresses the problem of safe control of a six-degree-of-freedom robotic manipulator operating in a constrained workspace containing obstacles and potential singular configurations. The aim of the study is to develop an integrated control algorithm that simultaneously ensures obstacle avoidance and singularity prevention while maintaining high end-effector positioning accuracy. The proposed methodology combines a PID controller in Cartesian space, the Damped Least Squares method, and the projection of secondary tasks into the null space. To prevent collisions, an Artificial Potential Field module is used to generate repulsive velocities at the link level. This structure allows adaptive motion regulation under varying workspace geometries and maintains the system manipulability near singular points. Numerical simulation results for two scenarios demonstrate that the proposed algorithm enables the manipulator to reach the target point with a residual positioning error of less than 0.05 m, while the minimum distance to the nearest obstacle remained above 0.18 m, and the manipulability index stayed higher than 0.8. The manipulator exhibited stable behavior without collisions or singularities, confirming the effectiveness and real-time applicability of the developed approach.
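For readers unfamiliar with combining Damped Least Squares with null-space projection, a minimal Python sketch of one velocity-level control step is given below; the damping value, the manipulability index definition, and the interfaces are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dls_step(J, x_err, q_dot_secondary, damping=0.05):
    """One velocity-level step: the damped least squares (DLS) inverse of the
    6xN Jacobian J is applied to the Cartesian error x_err (e.g. a PID output),
    and a secondary-task velocity (e.g. obstacle repulsion from an artificial
    potential field) is projected into the null space so it does not disturb
    the main positioning task."""
    J_dls = J.T @ np.linalg.inv(J @ J.T + (damping ** 2) * np.eye(J.shape[0]))
    null_proj = np.eye(J.shape[1]) - J_dls @ J
    return J_dls @ x_err + null_proj @ q_dot_secondary

def manipulability(J):
    """Yoshikawa manipulability index; values near zero flag singular configurations."""
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))
```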

COMPUTER SCIENCE

287
An important part of ensuring the continuity of operation of complex systems is information security monitoring which is a continuous process inseparable from the context of the functioning of the protected object. The operational use of monitoring results requires the interpretability of the obtained data and the presentation of key cause-and-effect relationships in a formal and provable form. If the protected object exhibits statistical, behavioral, and process regularities, it becomes possible to form an informative space for identifying information security events. This paper formulates and validates hypotheses regarding the possibility of identifying information security events when the above-mentioned types of regularity are violated, as well as regarding the search for a rational interval over which a device state is formed. The scientific novelty of the results is determined by the adaptation of formal methods for constructing an informative space for identifying information security events, the introduction and experimental confirmation of hypotheses regarding the impact of an information security event on statistical, behavioral, and process regularities, and the search for a rational analysis interval. The goal of this paper is to provide a qualitatively new method for constructing an informative space for the automatic detection of information security events. The object of the study is the process of monitoring the information security status of a corporate computer network. The subject of this study is heuristic methods for forming an informative space for identifying information security events based on the statistical analysis of retrospective data in real time. This paper proposes a method for automatically forming an informative space for identifying information security events in corporate computer networks. This method is based on the dynamics of two adjacent states of end devices determined over discrete time intervals. The set of such state transitions across all devices forms the state matrix of the computer network under study. This study defined an informative space for calculating the dynamics of the obtained state vectors and found a rational interval for forming the device state by studying the dependence of the difference between the vectors of two adjacent states on the analysis interval in various informative spaces. To experimentally confirm the operability of the proposed solution, a set of network data in the PCAP (Packet CAPture) format was analyzed, including legitimate and botnet activity of Internet of Things devices. Graphical interpretation of the obtained result allows one to determine the attack preparation and attack start times, which significantly simplifies the task of information security monitoring at the input data analysis stage and reduces the amount of data analyzed by the information security analyst. Distinguishing features of the proposed method include real-time operation, the absence of a preprocessing stage for input data, and the interpretability of detected information security events. Clearly discernible trends in device state dynamics allow for a reduction in the volume of analyzed information and a focus on irregularities that characterize potential information security events. The scope of application of the proposed method includes monitoring information security events, identifying information security incidents, and detecting intrusions in corporate computer networks.
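A minimal Python sketch of the state-matrix idea described above is given below: per-device features are aggregated over discrete intervals and the distance between adjacent state vectors is tracked. The summation-based aggregation and the Euclidean distance are assumptions made for illustration; the paper studies several informative spaces and interval lengths.

```python
import numpy as np

def state_transition_dynamics(packet_features, timestamps, interval_s):
    """Aggregate per-device packet features over discrete intervals and return
    the distance between every pair of adjacent state vectors; sharp jumps are
    candidate security events.
    packet_features: (n_packets, n_features), timestamps: packet times in seconds."""
    packet_features = np.asarray(packet_features, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    bins = ((timestamps - timestamps.min()) // interval_s).astype(int)
    states = np.zeros((bins.max() + 1, packet_features.shape[1]))
    for b in range(states.shape[0]):
        states[b] = packet_features[bins == b].sum(axis=0)
    return np.linalg.norm(np.diff(states, axis=0), axis=1)  # one value per transition
```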
295
Deep learning approaches have been increasingly adopted for virtual analog modeling, which aims to replicate the sonic characteristics of analog audio devices. In the context of analog dynamic range compressor modeling, many existing methods operate directly on raw audio waveforms which are high-dimensional and contain fine-grained temporal features at high sampling rates. These representations are computationally demanding and limit model efficiency. We propose a feature extraction pipeline that leverages the magnitude component of the Short-Time Fourier Transform in combination with a spectral amplification mechanism which acts similarly to a spectral mask but can both attenuate and amplify selected frequency components. We employ multi-band Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures that split the magnitude spectrum into several frequency bands for independent processing, substantially reducing computational complexity while preserving high modeling accuracy. To evaluate our approach, we created two datasets consisting of recordings of the consumer-grade analog compressor Alesis 3630 and its digital counterpart, discoDSP NightShine. We conducted extensive experiments comparing our method against raw waveform baselines using four objective metrics, theoretical and empirical measurements of computational performance, and a subjective listening test. Results indicate that single-band models based on the proposed feature extraction pipeline outperform raw-audio baselines across all evaluation metrics. Multi-band configurations further improve the balance between modeling accuracy and computational efficiency. In particular, four-band LSTM and GRU architectures achieve higher perceptual fidelity at substantially lower computational cost. Moreover, the subjective listening test yielded results aligned with the objective metrics. All source code and pretrained models are provided for reproducibility.
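The PyTorch sketch below illustrates the multi-band idea under stated assumptions: the STFT magnitude is split into bands, each processed by its own GRU that predicts a non-negative gain, so bins can be attenuated or amplified. The equal-width band split, layer sizes, and softplus gain are illustrative choices, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBandGRUCompressor(nn.Module):
    """Each frequency band of the STFT magnitude is processed by its own GRU,
    which predicts a per-bin gain; softplus allows gains above 1, so the
    mechanism can amplify as well as attenuate spectral components."""

    def __init__(self, n_bins: int = 513, n_bands: int = 4, hidden: int = 64):
        super().__init__()
        # Band boundaries: an equal-width split of the frequency axis (assumption)
        self.splits = [round(i * n_bins / n_bands) for i in range(n_bands + 1)]
        self.grus = nn.ModuleList()
        self.heads = nn.ModuleList()
        for b in range(n_bands):
            width = self.splits[b + 1] - self.splits[b]
            self.grus.append(nn.GRU(width, hidden, batch_first=True))
            self.heads.append(nn.Linear(hidden, width))

    def forward(self, mag: torch.Tensor) -> torch.Tensor:  # mag: (batch, frames, bins)
        outs = []
        for b, (gru, head) in enumerate(zip(self.grus, self.heads)):
            band = mag[:, :, self.splits[b]:self.splits[b + 1]]
            h, _ = gru(band)
            gain = F.softplus(head(h))       # >= 0 and unbounded above
            outs.append(gain * band)
        return torch.cat(outs, dim=-1)       # processed magnitude spectrum
```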
306
The widespread adoption of wearable devices and smart home systems indicates a significant growth in potential use cases for such solutions. The abundance of devices and the need for convenient interaction with them drive the active development of approaches implementing various aspects of this interaction. Currently, speech is one of the most convenient human-machine interfaces. Advances in audio and speech signal processing and analysis technologies enable the successful solution of complex tasks, such as automatic speech recognition, speaker identification and verification, and the detection of emotions, gender, and age of the speaker. Applying such technologies typically requires significant computational resources, often unavailable to wearable devices and smart home systems. Addressing isolated audio/speech analysis tasks significantly limits human-machine interaction scenarios, while attempts to combine various technologies on a single device lead to increased demands on computational resources. Currently, the greatest interest lies in technologies for multi-task audio/speech signal analysis with reduced computational requirements, allowing their application in wearable devices and smart home systems. This paper proposes a method for the automatic construction of hierarchical multi-task models for audio/speech signal analysis. This method determines task compatibility while maintaining overall accuracy for all tasks and significantly reducing the number of trainable parameters in the multi-task model. In the first stage, isolated recognition models are trained for each target task, and the metrics of these models are determined. The second stage involves determining the pairwise compatibility of audio/speech analysis tasks by iterating over the number of shared layers in a deep neural network. In the final stage, the final hierarchical architecture implementing the multi-task recognition model is automatically formed. It is demonstrated that, compared to baseline approaches, the developed method allows for the creation of a compact hierarchical model. Compared to a set of independent single-task models, the proposed architecture shows a 56 % reduction in the number of trainable parameters with an accuracy drop of no more than 1.9 %, whereas a classical (“flat”) multi-task architecture exhibits an accuracy reduction of 2.7 %. Applying existing multi-task model optimization approaches, LT4REC and the Lottery Ticket Hypothesis, leads to accuracy reductions of 9 % and 6.5 %, respectively. The results of this work have practical significance for the smart device industry (smartphones, wearable gadgets, smart speakers). The proposed algorithm enables the creation of efficient audio analysis systems capable of performing multiple functions simultaneously with minimal requirements for computational resources and memory when deployed on resource-constrained devices.
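A hedged PyTorch sketch of the second-stage pairwise-compatibility probe is shown below: the first n_shared backbone layers are shared by two tasks while the remaining layers and heads are task-specific. All module sizes and probing details are assumptions made for illustration.

```python
import copy
import torch.nn as nn

class PairProbe(nn.Module):
    """Two tasks share the first n_shared backbone layers; the remaining layers
    and the heads are task-specific copies. Sweeping n_shared and comparing the
    resulting accuracy with single-task baselines indicates task compatibility."""

    def __init__(self, backbone: nn.Sequential, head_a: nn.Module,
                 head_b: nn.Module, n_shared: int):
        super().__init__()
        layers = list(backbone)
        self.shared = nn.Sequential(*layers[:n_shared])
        # The unshared tail is duplicated so each task can specialize it
        self.branch_a = nn.Sequential(*copy.deepcopy(layers[n_shared:]), head_a)
        self.branch_b = nn.Sequential(*copy.deepcopy(layers[n_shared:]), head_b)

    def forward(self, x):
        h = self.shared(x)
        return self.branch_a(h), self.branch_b(h)
```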
315
The rapid growth of Internet of Things (IoT) devices is accompanied by increasingly sophisticated security threats, including DDoS attacks, brute-force authentication attempts, and large-scale packet flooding. Traditional statistical methods for anomaly detection exhibit low robustness to noise and fail to account for the dynamic nature of IoT traffic. This results in a higher rate of false positives and reduced accuracy in attack identification. This paper proposes a hybrid approach to IoT traffic anomaly detection consisting of three stages: preliminary filtering of suspicious packets using a modified Z-score adjusted for sample size; adaptive probabilistic attack risk assessment based on a Bayesian classifier with a weighting function that amplifies the impact of significant deviations; and final classification using an ensemble of models (Random Forest, SVM, and LSTM), which ensures robustness to noise and enables the identification of nonlinear dependencies in the data. Experimental evaluation on the UNSW-NB15 dataset, which includes both normal traffic and diverse attack scenarios, demonstrated that the proposed method achieved Precision = 89.1 %, Recall = 90.3 %, and F1-score = 89.9 %. The best results were observed in the analysis of message interval anomalies (up to 92 % accuracy), confirming the effectiveness of temporal features. The method outperformed classical algorithms (Rosner Test, Holt-Winters) and achieved accuracy comparable to an autoencoder-based approach while requiring significantly fewer computational resources. The hybrid architecture enables adaptation to diverse attack types and reduces false alarms through the combination of statistical filtering and ensemble classification. Its noise resilience and low computational complexity make the method suitable for deployment in resource-constrained IoT environments. Future research directions include the integration of federated learning for decentralized anomaly detection and the use of self-adaptive neural architectures for predicting complex attack scenarios.
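As a hedged illustration of the first (statistical filtering) stage, the Python sketch below computes a robust median/MAD Z-score with a simple sample-size adjustment; the exact adjustment used in the paper is not reproduced here.

```python
import numpy as np

def modified_zscore_suspects(x, threshold=3.5):
    """Flag suspicious observations with a robust (median/MAD) Z-score.
    The sample-size adjustment below simply tightens the threshold for small
    samples and is illustrative only."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    mad = mad if mad > 0 else 1e-9              # guard against constant samples
    z = 0.6745 * (x - med) / mad
    adjusted = threshold * np.sqrt(len(x) / (len(x) + 1.0))
    return np.abs(z) > adjusted                 # boolean mask of suspicious packets
```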
324
This article addresses the significance of Gang of Four (GoF) design patterns as formal architectural solutions in object-oriented programming and emphasizes the importance of their automated detection in modern software systems. This study examines the challenges of identifying architectural solutions in extensive software systems and the constraints of conventional analytical approaches. The scientific novelty of the suggested method lies in the utilization of contemporary transformer-based language models trained on source code, integrated with conventional machine learning techniques, for identifying structural patterns. The proposed approach employs the DeepSeek-Coder-V2 model to generate multidimensional vector representations (embeddings) of code segments. We employ Principal Component Analysis to reduce dimensionality. The resultant embeddings serve as features for training and testing various classifiers, encompassing both linear and nonlinear models. The objective is to automatically identify design patterns. We developed a custom annotated dataset of 23 GoF patterns and additional architectural patterns derived from actual open-source projects. Experiments demonstrate that transformer-based code embeddings significantly outperform conventional feature extraction techniques, achieving a macro-averaged F1-score of up to 0.82. The evaluation demonstrates that the embeddings accurately represent both the syntactic and semantic characteristics of the source code. The proposed approach is more versatile and capable of handling a broader array of scenarios compared to manual or heuristic-based solutions. It functions effectively for pattern recognition tasks and can be utilized to analyze extensive codebases. Potential applications include refactoring, maintenance, and improving the quality and comprehension of software architecture. This approach establishes a unified framework for subsequent research and advancement in software engineering.
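A minimal sketch of the embeddings-to-classifier part of the pipeline is shown below (Python, scikit-learn), assuming the code embeddings have already been produced by a code language model such as DeepSeek-Coder-V2; the linear classifier choice and the PCA dimensionality are illustrative.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def build_pattern_classifier(n_components: int = 128):
    """PCA-reduced code embeddings feed a (here, linear) multi-class classifier."""
    return make_pipeline(PCA(n_components=n_components),
                         LogisticRegression(max_iter=2000))

# Hypothetical usage: X is an (n_snippets, d) matrix of code embeddings, and
# y holds pattern labels such as "Singleton" or "Observer".
# clf = build_pattern_classifier()
# print(cross_val_score(clf, X, y, scoring="f1_macro", cv=5).mean())
```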
331
The task of link prediction is one of the key challenges in the field of social network analysis. The common way to build such systems is based on the idea of decomposing the task into two levels. At the first level, links within ego-nets are predicted; at the second, the results are aggregated to form the final predictions. The accuracy of such systems depends on the first-level model. Heuristic methods are usually used here. The focus of this work is on developing a new supervised model to improve the quality of link prediction within ego-nets. The heterogeneity of the edge attributes, the absence of node features, and the dynamic nature of ego-nets distinguish this task from others. The proposed method belongs to the class of graph neural networks. Its key feature is the ability to effectively consider the topology of the graph along with the attributes of the edges, without relying on the properties of the nodes. This effect is achieved by modeling the hidden state of node pairs, rather than the state of each node individually. The iterative nature of the model makes it possible to propagate knowledge about the relationships between nodes, increasing the complexity of the structures considered with each step. To measure the accuracy of the model, the Ego-VK dataset was used. This dataset consists of a set of ego-nets from a subsample of users of the VKontakte social network. The model is compared with the classical Adamic-Adar method as well as modern approaches based on graph neural networks. Experiments show that the proposed model is significantly superior to the baselines with respect to the NDCG@5 ranking quality metric. The results demonstrate the high effectiveness of the proposed model, and the possibility of integration into distributed systems makes it widely applicable in industry.
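For reference, the classical Adamic-Adar baseline mentioned in the abstract can be expressed in a few lines of Python (networkx); this is the heuristic the proposed graph neural network is compared against, not the proposed model itself.

```python
import networkx as nx

def adamic_adar_top_k(ego_net: nx.Graph, k: int = 5):
    """Score every non-adjacent node pair by sum(1 / log(deg(w))) over common
    neighbours w and return the k highest-scoring candidate links."""
    scored = nx.adamic_adar_index(ego_net)        # iterator of (u, v, score)
    return sorted(scored, key=lambda t: t[2], reverse=True)[:k]
```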
Multi-task human’s psychological profile analysis based on text data using semi-supervised learning
Darya O. Koryakovskaya, Axyonov Alexandr A., Ryumina Elena V., Ryumin Dmitry A.
337
Multi-task analysis of a human’s psychological profile enables a more holistic representation of the individual, which is particularly valuable in personalization systems, HR technologies, and human–Artificial Intelligence interaction. However, such studies have not been conducted to date due to the lack of datasets jointly annotated for both emotions and personality traits, rendering conventional multi-task learning infeasible. We propose a semi-supervised cross-domain learning method that effectively integrates two separately annotated corpora, CMU-MOSEI (for emotion recognition) and ChaLearn First Impressions v2 (FIv2) (for personality trait assessment), without requiring additional labeling. The experimental setup comprises two stages: first, independent single-task models are trained to extract domain-specific features and generate baseline predictions; second, a joint cross-domain model with cross-attention blocks fuses emotional and personality-related representations. Final predictions are obtained by averaging the outputs of the single-task and joint models, enhancing robustness. We compare pre-trained encoders (Jina-v3 and BGE-en) and contextual decoders (Transformer and Mamba), using a hybrid loss function that combines supervised and semi-supervised components with confidence-based pseudo-labeling. Experiments show that the best performance is achieved with the Jina-v3 encoder and the Mamba contextual model: mWACC = 62.52 % (Mean Weighted Accuracy of Classification) and mMF1 = 61.03 % (Mean Weighted F1-Measure) on the CMU-MOSEI (Multimodal Opinion Sentiment and Emotion Intensity) corpus; mACC = 88.80 % (Mean Accuracy) and mCCC = 25.44 % (Mean Concordance Correlation Coefficient) on FIv2. The model demonstrates stable knowledge transfer across tasks and outperforms current state-of-the-art methods. Attention visualization via Grad-CAM confirms the interpretability of predictions. The proposed method enables the development of scalable text-based psychological profiling systems under realistic annotation scarcity. It is applicable in recruitment, adaptive learning platforms, personalized chatbots, and computational psychometrics where simultaneous consideration of emotional states and stable personality traits is essential.
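A hedged PyTorch sketch of the confidence-based pseudo-labeling component of the hybrid loss is given below; the threshold, weighting, and loss form are assumptions made for illustration, not the exact objective used in the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits_labeled, targets, logits_unlabeled,
                conf_threshold=0.9, unsup_weight=0.5):
    """Supervised cross-entropy on the annotated domain plus a pseudo-label term
    on the other domain; only predictions above the confidence threshold
    contribute. Threshold and weight are illustrative."""
    supervised = F.cross_entropy(logits_labeled, targets)
    probs = logits_unlabeled.softmax(dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)
    mask = confidence > conf_threshold
    if mask.any():
        unsupervised = F.cross_entropy(logits_unlabeled[mask], pseudo_labels[mask])
    else:
        unsupervised = logits_unlabeled.sum() * 0.0  # keeps the graph, adds nothing
    return supervised + unsup_weight * unsupervised
```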
349
The implementation of various electronic document management systems necessitates protective measures against information security threats, which can lead to operational failures, financial losses, disruption of plans, and damage to business reputation. In this regard, the objective of the study is to enhance the security level of information flows in corporate sector electronic document management systems against information blocking threats initiated by internal users. To achieve this objective, a system of interconnected mathematical models is proposed, forming the conceptual foundation of a digital twin for analyzing the current state of information flows in the electronic document management system. The developed approach enables quantitative assessment of the impact of users’ violations of electronic document processing regulations on business processes. Based on the modeling results, optimization problems have been formulated and solved to develop strategies for managing information flow movement under conditions of uncertainty. The obtained results establish an objective foundation for formulating specific recommendations to improve document management processes with respect to information security aspects.
357
To date, several Field-Programmable Gate Array (FPGA) implementable computational architectures have been proposed that can be used for neural network training in real time by the backpropagation algorithm. However, they are intended for small neural networks or suffer a significant reduction in maximum clock frequency as network sizes increase. The novelty of this work lies in addressing the problems of ensuring a predictable maximum clock frequency and minimizing its degradation when scaling the computational architecture. The proposed architecture solves these problems at the level of computational organization. The architecture comprises an array of computational blocks which are based on FPGA digital signal processing blocks and perform most computations in parallel. The architecture also contains a shared block that sequentially processes the computation results received from the array blocks. Equations were derived showing that the latency of computations increases linearly with neural network size. After a computational block instance, the shared block, and neural networks containing various numbers of computational blocks had been implemented on the FPGA, their timing characteristics were assessed. It has been determined that the data path delays of the buses connecting the shared block with the array blocks are the primary factors constraining the maximum clock frequencies of neural networks. When the number of array blocks lies in the range 3–240, the maximum clock frequency ranges from 112 down to 77 MHz. Compared to the closest counterpart, the critical paths in the proposed architecture are shortened because some computations are transferred to a sequential mode; however, this transfer may increase the latency of calculating the local gradients of the hidden-layer neurons. When the number of array computational blocks grows from 3 to 128, the maximum clock frequency decreases by 27 % compared to 52 % for the closest counterpart. Increasing the number of computational blocks in the proposed architecture from 128 to 240 reduces the maximum clock frequency by no more than 5 %. FPGA-based neural networks of the proposed architecture are suitable for object tracking and system identification, which are typical applications of neural networks trained in real-time mode.
367
The article addresses the problem of detecting domains generated by Domain Generation Algorithms (DGA) which are widely used by attackers to build robust botnet control channels and covert communication. Traditional methods are based on manual feature engineering or specialized neural network architectures, which reduces their robustness to evolving DGA families. The scientific novelty of the proposed approach lies in the use of Large Language Models (LLM) by leveraging their contextual adaptation mechanism to identify hidden patterns in domain names and classify them. The developed approach is based on the use of LLMs which receive examples of legitimate and generated domains within the context. To improve efficiency, example selection strategies (TopK, VoteK) and various metrics of data homogeneity and variability are used. Additionally, the influence of domain name length and entropy on the stability of the approach is analyzed. The experimental part is performed on a dataset including 68 DGA families and a subset of legitimate Tranco domains. The training set included 54 families, and testing took place on all 68 families, including 14 previously unseen families. Results showed the efficiency of the approach: precision = 0.93, recall = 0.95, and F1-measure = 0.94. The ability of LLMs to generalize rules to new DGA families is confirmed. Compared with existing methods, the proposed approach does not require additional retraining and provides flexibility due to contextual adaptation. It demonstrated resistance to noise and the capability to detect new DGA families, which makes its application promising in the field of cybersecurity. At the same time, the sensitivity of the model to the length of domain names and the need for context balancing were revealed. Promising areas of development are the integration of additional features (DNS metadata, query time series) and methods for adaptation to stream processing.
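The Python sketch below illustrates the contextual-adaptation idea in simplified form: the k labeled domains most similar to the candidate are packed into a few-shot prompt. The character n-gram similarity is an assumption standing in for the TopK/VoteK strategies studied in the paper, and the prompt format is hypothetical.

```python
def build_icl_prompt(candidate: str, labeled_examples, k: int = 8) -> str:
    """Pack the k labeled domains most similar to the candidate into a few-shot
    prompt. labeled_examples is an iterable of (domain, is_dga) pairs."""
    def ngrams(s: str, n: int = 3):
        return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

    def similarity(a: str, b: str) -> float:
        set_a, set_b = ngrams(a), ngrams(b)
        return len(set_a & set_b) / len(set_a | set_b)

    shots = sorted(labeled_examples,
                   key=lambda ex: similarity(candidate, ex[0]), reverse=True)[:k]
    lines = [f"Domain: {d}\nLabel: {'DGA' if is_dga else 'legitimate'}"
             for d, is_dga in shots]
    lines.append(f"Domain: {candidate}\nLabel:")
    return "\n\n".join(lines)
```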
378
This article discusses the problem of choosing the optimal description level when designing digital circuits in a Hardware Description Language (HDL). The relevance of this research is due to the fact that manual optimization of a design to improve its characteristics often conflicts with maintaining readability, configurability, and tight development deadlines. At the same time, the constructs, idioms, and practices of abstract (behavioral) hardware description offered by modern HDLs are supported by optimizing logic synthesizers in modern CAD systems for Field-Programmable Gate Arrays (FPGA) with varying levels of quality. Existing sets of evaluation tests (benchmarks) often focus on integrated performance indicators, preventing a detailed assessment of the effectiveness of specific code transformation mechanisms. The main goal of this work is to conduct a comparative analysis of modern FPGA CAD systems and create a set of recommendations for the effective use of HDL without compromising the quality of synthesized solutions. The research is conducted in several stages. The first stage involves classification of known optimization methods used in the transformation of Verilog/SystemVerilog designs into a structural representation. Based on the resulting classification, synthetic tests are developed to verify optimizations related to specific classes. These tests consist of pairs of behaviorally equivalent designs, one of which is optimized manually, while the other has redundancy due to the use of abstract or inflated Verilog/SystemVerilog language constructs and/or behavior description patterns. The “gap” in the characteristics of these implementations allows us to draw conclusions about the level of CAD efficiency in the application of specific optimizations. A three-level classification of optimizations of behavioral descriptions of hardware is proposed. Within the framework of this classification, a package of 19 tests has been developed, selectively aimed at evaluating optimizations belonging to different levels of the proposed classification. These tests have been applied to a number of modern FPGA CAD systems (Vivado, Quartus, Yosys). A consistent decrease in the effectiveness of coding patterns is demonstrated as they approach the behavioral level of description, with the differences between CAD systems increasing as the level increases. A significant drop in the quality of results is observed when evaluating behavior optimization at the level of multiple clock cycles. Based on the results obtained, practical recommendations are formulated for developers of digital equipment on the style of writing HDL code, allowing the most effective use of the capabilities of specific synthesizers. The results of the work make it possible to identify the common “core” of HDL constructs and logic description patterns that can be used without compromising the quality of the synthesized equipment as well as to determine promising directions for further improvement of synthesizers and HDLs.

MODELING AND SIMULATION

Spheroidal models of ore deposits in the framework of gravity tomography
Sizikov Valery S., Karmanovskiy Nikolay S., Rushchenko Nina G., Alexander V. Belozubov
385
This paper presents a solution to the gravimetry problem of determining ore deposits in the Earth’s mantle and crust by processing the gravitational field measured at the Earth’s surface. The proposed method addresses this essentially technical problem formally, by creating a mathematical model suitable for computer simulation. Existing gravimetry approaches to locating deposits require the use of technical means, particularly drilling rigs. The proposed method makes it possible to estimate the occurrence of deposits by computer processing of the gravitational field measured on the Earth’s surface. The essence of solving the forward gravimetry problem consists of calculating the model (or measured) gravitational field at the Earth’s surface by dividing each deposit body into a set of vertical rods. When solving the inverse problem of determining the deposit, each body is modeled by a homogeneous spheroid. Known calculation relationships for the gravitational field of a spheroid are transformed into a form convenient for computer implementation using nonlinear programming. The spheroid parameters are determined by minimizing the Tikhonov smoothing functional with parameter constraints. This makes the inverse ill-posed (unstable) problem unambiguous and stable. The proposed method is illustrated by numerical model examples with two- and five-body deposits. The inverse gravimetry problem is treated as gravity tomography, or “inner vision” of the Earth’s mantle and crust, allowing for deposit visualization without drilling into the Earth’s interior. The described algorithm enables mathematical and computational methods to determine the possible presence of a deposit and estimate its parameters (type, size, depth, density, etc.) with minimal technical and financial investment, without the use of expensive technical means. Gravity tomography results can serve as an initial approximation when selecting well locations and depths during subsequent exploration and drilling.
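A minimal Python sketch of the inverse step, assuming a user-supplied forward gravimetry solver, is given below; it only shows the structure of a Tikhonov-type smoothing functional with parameter constraints, not the authors' specific formulation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_spheroid(params0, g_measured, forward_model, alpha=1e-3, bounds=None):
    """Minimize ||forward_model(p) - g_measured||^2 + alpha * ||p - p0||^2 over
    the spheroid parameters p (size, depth, density, ...). forward_model is the
    user-supplied forward gravimetry solver; alpha is the regularization weight."""
    p0 = np.asarray(params0, dtype=float)

    def objective(p):
        residual = forward_model(p) - g_measured
        return residual @ residual + alpha * ((p - p0) @ (p - p0))

    return minimize(objective, p0, method="L-BFGS-B", bounds=bounds)
```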
Prediction of maximum stresses in the shaft–insert system using a neural network
Aleksei I. Borovkov, Anna S. Karchevskaia, Aleksei D. Novokshenov, Anastasia I. Matveeva, Sergei S. Sherbakov, Nikita M. Klimkovich, Daria A. Podgayskaya, Mikhail M. Poleschuk
393
The reliability of machines largely depends on the accuracy of predicting the stress–strain state of components in tribo-fatigue systems, especially under high operating loads. Traditional finite element analysis provides high accuracy but requires significant computational resources and offers limited flexibility for rapid parameter variation. In recent years, machine learning methods have been increasingly applied in engineering practice. Among them, neural networks are of particular interest, as they allow nonlinear relationships between loads and stresses to be captured while significantly reducing computation time compared to traditional models. This work proposes an approach for predicting maximum stresses in the “shaft–insert” system by combining three-dimensional finite element modeling with subsequent neural network training. A database was created containing the results of numerical experiments for different combinations of bending and contact loads. A fully connected neural network with three hidden layers and different activation functions was used for training. The quality of the model was assessed using standard metrics: Mean Squared Error, Mean Absolute Error (MAE), and the coefficient of determination R2. The trained neural network demonstrated high accuracy in predicting maximum stresses both in the shaft and in the insert. For the training set, the R2 value reached 0.99991, and for the test set it was 0.99984, confirming minimal deviations from finite element results. The MAE was less than 0.006, while the maximum relative error in the test set did not exceed 3.2 %. The developed neural network model demonstrated the ability to reproduce the results of finite element analysis for the “shaft–insert” system while providing a substantial reduction in computation time compared to traditional finite element simulations. The model was constructed for a limited range of loads; therefore, further research should focus on expanding the dataset and including additional materials, which will make it possible to evaluate the scalability of the approach and its robustness under more complex conditions.
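A minimal surrogate in the spirit of the described model can be assembled with scikit-learn as sketched below; the hidden-layer sizes, activation, and input scaling are assumptions, since only the use of three hidden layers is stated in the abstract.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_stress_surrogate():
    """Fully connected network with three hidden layers mapping load combinations
    (e.g. bending and contact loads) to the maximum stress from the FE database."""
    return make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64, 32),
                                      activation="relu", max_iter=5000,
                                      random_state=0))

# Hypothetical usage with X_train (loads) and y_train (max stresses from FE runs):
# model = make_stress_surrogate().fit(X_train, y_train)
# y_pred = model.predict(X_test)
```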
402
The problem of optimizing the distribution of pixel density over the viewing area is considered, which ensures a minimum of video image redundancy given limited space for camera installation. A solution is presented to eliminate the redundancy of the informativeness of video signals, which otherwise leads to excessive resource costs for transmitting, storing, processing, and displaying video signals. The proposed approach is based on an integral assessment of the continuous distribution of pixel density over the viewing area in comparison with the value required for solving a given observation task. The surveillance task is formalized through the definition of surveillance spaces and possible camera installation locations. A method for calculating the pixel density distribution over the viewing area is presented, followed by optimization of the installation parameters according to the criteria of the minimum value of the redundancy coefficient when the required pixel density is reached or the maximum of the minimum pixel density under a given limitation on the redundancy coefficient. An integral coefficient and redundancy optimization criteria are proposed, taking into account the nature of the pixel density distribution, together with an optimization method that allows maximizing the minimum pixel density or minimizing the redundancy of the video image. It is shown that the use of normalization in terms of both the minimum required pixel density and the length of the viewing area makes it possible to use the proposed criteria for most practical detection and identification tasks with different camera installation parameters. A practical example of using the method is given. The proposed criteria and method make it possible to increase the efficiency of the video surveillance system by reducing resource redundancy while maintaining the required information content. The results of the work are applicable to the tasks of video monitoring of a zone with one or more cameras as well as for solving various surveillance tasks in one zone. They can be used in the development of surveillance systems and computer-aided design programs for such systems.
Generating spatiotemporal network load series in multi-access edge computing tasks using open data
Filyanin I. V., Kapitonov Alexander A., Alexey P. Martynyuk
410
Research into decision-making systems in multi-access edge computing systems is often based on an abstract representation of a communication network without network load profiles. The aim of this work was to develop tools for generating spatio-temporal network load data depending on the communication network architecture. In our work, we used stochastic geometry methods and statistical data to form a profile of possible load. To evaluate the performance of stochastic geometry methods, we developed a tool for generating and validating spatio-temporal series with pattern search based on the OpenCellID open database of cell towers. An analysis of the literature and public datasets on the location and load of cell towers was conducted. Based on the analysis, it was concluded that the data quality was too low for the purposes of training decision-making systems for the placement of computing services in geographically distributed data processing nodes. A comparative analysis of the basic and calibrated Hard-Core Poisson Process algorithms showed significant differences in the characteristics of the generated distributions. For St. Petersburg, the calibrated model provided a 99-fold increase in station density and a 52-fold reduction in inter-station distances with an effective coverage area of 0.04 km². In the case of Novosibirsk, similar trends were observed with lower intensity: a 12.5-fold increase in density and a 21-fold reduction in distances with a coverage area of 0.32 km². The use of spatio-temporal series obtained with the help of the developed generation tools will improve the quality of training of decision-making systems for the placement of computing services through pre-training on data correlated with the actual location of cell towers. In addition, the generation tool allows the coordinates of the area of the proposed communication network to be specified, which also affects the distribution patterns of towers and, in turn, makes it possible to generate more accurate spatio-temporal series.
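The Python sketch below shows one common way to realize a hard-core Poisson point pattern (Matérn type-II thinning) for synthetic tower locations; the thinning rule, intensity, and hard-core radius are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

def matern_hardcore_towers(intensity, hard_core_r, side_km, rng=None):
    """Matern type-II thinning: generate a Poisson parent pattern in a square of
    side side_km, give each point a random mark, and delete a point whenever a
    lower-marked point lies closer than the hard-core radius.
    intensity: parent points per km^2, hard_core_r: minimum spacing in km."""
    rng = rng if rng is not None else np.random.default_rng()
    n = rng.poisson(intensity * side_km ** 2)
    points = rng.uniform(0.0, side_km, size=(n, 2))
    marks = rng.uniform(size=n)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        distances = np.linalg.norm(points - points[i], axis=1)
        if np.any((distances < hard_core_r) & (marks < marks[i])):
            keep[i] = False
    return points[keep]            # surviving "tower" coordinates in km
```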
420
Modern industrial tasks, such as quality control in laser welding and the localization of geometric features in industrial processes, require the application of innovative machine learning approaches. The scarcity of annotated data and the complexity of geometric annotation are critical barriers to the development of automated inspection systems. The scientific novelty of the proposed approach lies in the comprehensive use of hybrid methods, combining evolutionary optimization, diffusion models, and convolutional neural networks to effectively address practical engineering tasks with limited data resources. The proposed framework consists of two integrated components. The first component implements a hybrid algorithm for synthetic data generation, merging evolutionary optimization for generating diverse geometric variants with diffusion models for synthesizing photorealistic images. The second component involves a specialized deep learning architecture optimized for the precise localization and classification of geometric features in industrial settings. Training is performed using a combined loss function that integrates regression and classification criteria. In the case of laser welding quality control, the synthetic dataset was expanded from 120 original images to 4,537 realistic samples. This augmentation improved weld seam segmentation accuracy, reducing the box loss metric from 2.4 to 0.75. For the task of localizing weld seam coordinates, a prediction error of 31.8 pixels along the Y-axis and 3.3 pixels along the X-axis was achieved at the original resolution of 1,024 × 2,448 pixels. Experimental comparisons showed that convolutional architectures outperformed transformer-based models with a comparable number of parameters, and that regression from a single frame yielded higher accuracy than using a sequence of frames. The proposed methods demonstrate significant superiority over classical data augmentation techniques (e.g., mixup, cutmix) and pure diffusion-based synthesis approaches which require intensive dataset preparation. The integration of evolutionary optimization ensures controlled diversity in geometric variants, while diffusion models guarantee the photorealism of synthesized samples. This hybrid approach holds broad potential for application in other industrial sectors with limited availability of annotated data, owing to its capability to construct a complete pipeline for synthesizing hard-to-obtain industrial data and subsequently using it to train applied Artificial Intelligence methods for solving targeted industrial problems.
Implementation and investigation of a reservoir computer based on a hardware model of three-element spiking neuron
Vladislav S. Kholkin, Vasiliy A. Pchelko, Vladislav L. Klenin, Karimov Timur I., Ekaterina E. Kopets
428
This paper investigates new computer architectures for the hardware implementation of dynamic (spiking) neural networks capable of replacing modern networks built on neurons with a static activation function. We propose for the first time the use of a recently developed compact analog model of a spiking neuron, consisting of only three elements (a volatile memristor, a tunnel diode, and a capacitor), as the basic element of a reservoir computer of the Liquid State Machine (LSM) type. A computer model of the reservoir is proposed, including 7,480 neurons and approximately 254,000 connections, with a topology formed using the biologically motivated LSM stochastic synapse distribution algorithm. The results of the proposed solution are demonstrated on the task of recognizing handwritten digits from the MNIST dataset. A classification accuracy of 93 % is achieved, which is comparable to known LSM implementations. Performance estimates for the future hardware implementation of the proposed reservoir exceed those of existing analogs by an order of magnitude, and in terms of energy efficiency by 3–4 orders of magnitude. Thus, the proposed study demonstrates for the first time the practical applicability of the three-element neuron model for machine learning tasks and confirms its potential as a basic element for constructing scalable and energy-efficient neuromorphic computing systems.
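A hedged Python sketch of the readout stage of such a Liquid State Machine is given below: only a linear classifier on fixed reservoir responses is trained, while the reservoir itself stays unchanged. The spike-count feature definition and the classifier choice are assumptions made for illustration.

```python
from sklearn.linear_model import RidgeClassifier

def train_readout(reservoir_features, labels):
    """Train only the linear readout on fixed reservoir responses.
    reservoir_features: (n_samples, n_neurons) spike counts collected while each
    MNIST digit is presented to the reservoir; labels: digit classes."""
    return RidgeClassifier(alpha=1.0).fit(reservoir_features, labels)
```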
Analysis of a centerless control scheme for profiles of large-sized shells in the process of their shaping
Shilin Alexander N., Ramez G. Atamaniuk, Egor Yu. Besedin, Mikhail R. Pastukhov
436
Control of the geometric parameters of large-sized shells, the basic parts of energy and oil and gas equipment, is a critically important task that determines the quality and productivity of their assembly. Existing methods based on measuring the elements of a circle have significant methodological errors and require precise centering, which is difficult for large parts with deviations from the round shape, primarily ovality. The development and metrological analysis of a centerless method for monitoring diameter and deviation from roundness is proposed; the method is free from methodological errors and improves the accuracy and efficiency of measurements in production conditions. The method is based on the fundamental geometric property of a circle, according to which its diameter is equal to the maximum distance between two points on the inner surface. The method is implemented in the form of an optoelectronic device containing a laser rangefinder mounted on a carriage moving along the contour of the shell. The rangefinder performs an angular scan of the opposite section of the inner surface, and the control unit captures an array of distances and determines the diameter as the maximum value in the section. The design of the device ensures compliance with the Abbe and inversion principles, which makes the measurement scheme invariant to positioning errors. To verify the method, a computer simulation of the measurement process was performed for shells with an oval cross-section. It is established that the instrumental error of the laser rangefinder (± 1 mm) is the main one and does not exceed the established technological tolerance of 1 % of the nominal diameter. Metrological analysis on oval cross-section models has shown that the error in determining the diameter depends functionally on the amount of ovality; however, within the permissible values of ovality, the requirements of the technological process are met. The developed method and device make it possible to directly control, with high accuracy and without the requirement of centering, the diameter and deviation from roundness of the inner surface of large shells. The main advantages of the proposed solution are invariance, autonomy, and simplicity of technical implementation. The device can be used for post-operation and acceptance control in nuclear, energy, and oil and gas engineering as well as in other industries to reduce the complexity of assembly and ensure the quality of parts joining.
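The measurement principle (the diameter taken as the maximum rangefinder reading within the angular scan of a section) can be expressed in a few lines of Python, as sketched below; the data interfaces are hypothetical.

```python
import numpy as np

def diameter_from_scan(angles_rad, distances_mm):
    """Return the diameter estimate and the scan angle at which it occurs.
    A chord from the scan point to the opposite wall is never longer than the
    diameter, so the maximum reading over the angular scan estimates it."""
    distances_mm = np.asarray(distances_mm, dtype=float)
    i = int(np.argmax(distances_mm))
    return distances_mm[i], angles_rad[i]
```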

BRIEF PAPERS

442
This paper presents a novel 1-out-of-n post-quantum oblivious signature scheme based on supersingular elliptic curve isogenies. The proposed scheme is built upon the Commutative Supersingular Isogeny based Fiat-Shamir scheme whose security relies on the hardness assumption of the multiple-target group action inverse problem. This approach ensures resistance against attacks using Shor’s algorithm. The key generation algorithm, the interactive signing protocol, and the verification algorithm are formalized. Experimental evaluation in SageMath demonstrates more than a threefold reduction in communication overhead compared to a lattice-based counterpart.