Summaries of the Issue

OPTICAL ENGINEERING

585
The widespread use of spectral and hyperspectral methods across various scientific and technological fields demands increasingly high optical quality from spectral systems. The challenge of enhancing image quality is particularly significant for hyperspectral systems employed in imaging spectrometry. The reliability of reconstructing the spectral characteristics of research objects depends not only on the dispersing element but also on the chromatic and monochromatic aberrations of the optical system. Insufficient correction of chromatic aberrations necessitates additional software and hardware within the spectral system for reliable reconstruction of the spectral characteristics of research objects. Consequently, a crucial aspect of spectral systems development is finding optimal combinations of glass types and optical scheme architectures to address these issues. The authors examined existing methods for designing optical systems of apochromatic objectives and formulated and solved the problem of designing an optical scheme architecture with the minimum possible set of glass types that is free of chromatic aberrations and provides high image quality. The study employs well-established methods for computing optical schemes based on the dispersion properties of glass and on the composition of optical systems as outlined by M.M. Rusinov. A preliminary theoretical calculation of the optical design provided the initial configuration of the optical scheme and the choice of glass types. Optimization and analysis of the optical system were performed in Zemax CAD. During optimization of the initial configuration without changing the glass types, correction of chromatic aberrations was achieved over a range significantly exceeding the width determined in the theoretical calculation.
An optical scheme of an objective with diffraction-limited correction of chromatic aberrations across a broad wavelength range (0.5–2.3 μm) has been successfully developed. The objective exhibits well-corrected monochromatic aberrations across the entire operational spectral range and qualifies as an apochromat in terms of image quality. The design is simple to manufacture, comprising six lenses (without aspherical surfaces) fabricated from two types of glass (LZOS catalog). The architecture of the developed optical scheme can serve as a foundation for designing imaging devices for spectral analysis applications, including hyperspectral and multispectral cameras.
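For context, the kind of preliminary chromatic calculation mentioned above is classically based on the thin-lens achromatism conditions; a textbook sketch (not the paper's actual system of equations) for two thin lenses in contact with powers φ1, φ2 and Abbe numbers ν1, ν2 is:

```latex
% Two thin lenses in contact: total power and achromatism condition
\varphi = \varphi_1 + \varphi_2, \qquad
\frac{\varphi_1}{\nu_1} + \frac{\varphi_2}{\nu_2} = 0
```

Extending correction from an achromat to an apochromat additionally requires the relative partial dispersions of the chosen glasses to be matched, which is why the search for a minimal set of glass types is the central design constraint.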
591
This study investigates the impact of Cross-Gain Modulation (XGM) in erbium-doped fiber on the effective spectral bandwidth of a fiber-optic sensor interrogation system employing Fiber Bragg Gratings (FBGs) and a Distributed Feedback (DFB) laser diode. DFB-laser-based interrogators offer high scanning speeds (up to 33 pm/ns) and broad wavelength tuning ranges (up to 10 nm). However, wavelength tuning in such lasers often introduces significant fluctuations in the instantaneous power of the probing pulse — up to 20 dB, which can lead to measurement errors when interrogating FBG-based sensors. To mitigate this, only the portion of the pulse with relatively stable power (within 1 dB) is typically used for analysis. This approach, however, reduces the effective spectral bandwidth of the interrogator by up to 20 %. To solve this problem, we propose, for the first time, the use of XGM in erbium-doped fiber to enhance performance. To evaluate the potential of XGM for increasing the effective spectral bandwidth, we conducted a theoretical analysis of the interaction between two optical signals in erbium-doped fiber: the interrogator probe signal and an additional control signal. The influence of XGM on power stability and effective spectral bandwidth was assessed through numerical modeling using the OptiSystem software. We also examined how the shape of the control signal and the timing of its initiation affect the interrogator spectral bandwidth. This approach enables not only the optimization of the temporal profile of sub-microsecond optical pulses but also their amplitude enhancement through fiber amplification. The results show that XGM can effectively modulate the instantaneous power of the probe signal with a modulation depth of up to 30 dB — sufficient to stabilize the output of DFB laser pulses. Simulations confirm that appropriate shaping and timing of the control signal can significantly reduce power fluctuations. 
Specifically, using rectangular control pulses decreases the power variation from 20 dB to 7 dB. Furthermore, the duration of stable pulse modulation (within 1 dB of the peak power) increases from 62 ns to 267 ns, leading to a 4.3-fold expansion of the interrogator effective spectral bandwidth. The application of XGM in erbium-doped fiber offers a promising solution for improving the stability of DFB-laser-based interrogators without relying on high-frequency attenuators. This enhancement extends the operational range of the system and relaxes the reflectivity requirements for the FBGs used in the sensor network.
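The quoted 4.3-fold bandwidth expansion follows directly from the stable-pulse durations and the scan speed given above; a minimal Python check, assuming a simple linear tuning model:

```python
# Effective spectral bandwidth of a scanning DFB-laser interrogator:
# usable (power-stable) pulse duration times the wavelength tuning rate.
# Durations and the scan speed are the figures quoted in the abstract;
# the linear-scan model itself is an illustrative assumption.

TUNING_RATE_PM_PER_NS = 33.0  # maximum DFB scan speed, pm/ns

def effective_bandwidth_pm(stable_ns, rate=TUNING_RATE_PM_PER_NS):
    """Spectral span covered while pulse power stays within 1 dB."""
    return stable_ns * rate

bw_plain = effective_bandwidth_pm(62.0)   # without XGM: 2046 pm (~2.0 nm)
bw_xgm = effective_bandwidth_pm(267.0)    # with XGM:   8811 pm (~8.8 nm)
gain = bw_xgm / bw_plain                  # ~4.3-fold expansion
```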
602
Nonlinear optical properties of fluorophosphate glasses containing quantum dots of cadmium sulfide and selenide (CdS and CdSe) and of lead sulfide and selenide (PbS and PbSe) were studied using a pulsed femtosecond near-IR laser. Fluorophosphate glasses with CdS, CdSe, PbS, and PbSe quantum dots were obtained by high-temperature synthesis with additional heat treatment. Nonlinear absorption was studied under pulsed laser irradiation at a wavelength of 1050 nm with a pulse duration of 100 fs. It is shown that the transmission at 1050 nm of the fluorophosphate glasses with CdS and CdSe quantum dots is 0.78 and 0.88, respectively. Moreover, increasing the average power of the femtosecond laser radiation from 30 to 2000 mW does not change their transmission. At this wavelength, the transmission was 0.1 for the sample with PbS nanocrystals and 0.65 for the sample with PbSe quantum dots. For the glass samples with PbS and PbSe quantum dots, the transmission decreased with increasing laser power, i.e., nonlinear transmission (optical limiting) was observed. The limiting threshold, i.e., the power at which the transmission decreases by more than 20 %, was 1265 mW for the sample with PbS quantum dots, and at an input power of about 1530 mW this sample had a transmission of less than 0.1 %. For the PbSe quantum dot sample, the limiting threshold was 600 mW, and at an input power of about 750 mW its transmittance was less than 0.1 %. Fluorophosphate glasses with lead sulfide and selenide quantum dots can be used as limiting filters to protect photodetectors from pulsed laser radiation in the near-IR range.
609
The vibrations of navigation systems, including fiber optic gyroscopes, affect the intensity of radiation passing through their optical components. This can lead to positioning errors in vehicles. The mechanism by which vibrations influence fiber optic gyroscopes and the reasons for their high vibration sensitivity are still not fully understood. This paper investigates the amplitude modulation of the optical signal caused by the vibration of passive optical components. The sensitivity to vibration is evaluated by recording the optical power passing through components on an experimental stand while they vibrate at frequencies between 20 and 2000 Hz with an amplitude of 5 g. The measurement results are processed using wavelet transform and fast Fourier transform algorithms. The algorithm searches for and estimates vibration-induced modulation of the transmitted radiation. Typical time sweeps of signals passing through optical components are presented. The influence of vibration on transmitted radiation is demonstrated. Modulation of the optical signal passing through Y-splitters from different manufacturers is detected, manifesting as periodic changes in the measured radiation power and changes in the split ratio. An algorithm is presented that enables accelerated analysis by rationally selecting data for subsequent wavelet analysis. The proposed methodology for analyzing modulation based on wavelet analysis makes it possible to estimate the sensitivity of optical components to vibration and to select resonant frequencies for Y-splitters. The methodology enables the identification of modulation levels below 0.1 % of the initial power.
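The detection of sub-0.1 % modulation can be illustrated with a single-bin Fourier projection; the sketch below is a simplified pure-Python stand-in (with synthetic data) for the wavelet/FFT pipeline of the paper:

```python
import math

def modulation_depth(power, fs, f_vib):
    """Estimate relative amplitude modulation of an optical-power
    record at a known vibration frequency via a single-bin DFT
    projection. Returns modulation amplitude / mean power."""
    n = len(power)
    mean = sum(power) / n
    w = 2.0 * math.pi * f_vib / fs
    re = sum((p - mean) * math.cos(w * i) for i, p in enumerate(power))
    im = sum((p - mean) * math.sin(w * i) for i, p in enumerate(power))
    amp = 2.0 * math.sqrt(re * re + im * im) / n  # single-tone amplitude
    return amp / mean

# Synthetic record: 0.05 % modulation at 500 Hz on a constant power level
fs = 20000.0
sig = [1.0 + 5e-4 * math.sin(2 * math.pi * 500.0 * i / fs)
       for i in range(2000)]
depth = modulation_depth(sig, fs, 500.0)  # recovers ~5e-4, i.e. 0.05 %
```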
617
The combination of optical emission spectroscopy with models of plasma light emission represents a non-intrusive and adaptable approach for determining plasma characteristics. This study investigated the electron temperature, electron density, and other plasma parameters in DC magnetron sputtering under various experimental conditions, in the presence of a niobium target and an argon:nitrogen gas mixture. To evaluate electron temperature and electron density, optical emission spectroscopy was employed over a range of discharge voltages (400–800 V) and gas pressures (0.04–3.3 mbar). The measurements were taken during the deposition of a niobium nitride coating in a magnetron sputtering setup, maintaining a gap distance of 0.06 m and a total flow rate of 40 standard cubic centimeters per minute. The electron temperature was assessed using the Boltzmann plot method with several Ar+ ion lines, while the electron density was determined from the intensity ratio of atomic to ionic lines using the Saha–Boltzmann equation. The results demonstrate that, for the plasma under investigation, an increase in the applied voltage leads to an increase in electron temperature, while an increase in the working pressure results in a reduction in electron temperature. Conversely, the electron density decreases with increasing applied voltage and increases with rising working pressure. Additionally, the findings indicate that the introduction of a modest quantity of nitrogen gas into the discharge source improved the electrical characteristics of the glow discharge plasma during the niobium nitride coating deposition process.
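As an illustration of the Boltzmann plot method mentioned above, the sketch below fits ln(Iλ/(gA)) against the upper-level energy and recovers the temperature from the slope; the line data are synthetic, not measured Ar+ values:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(lines):
    """Electron temperature from a Boltzmann plot: for each line,
    y = ln(I*lam/(g*A)) is linear in the upper-level energy E_eV
    with slope -1/(k*T). `lines` holds (I, lam, g, A, E_eV) tuples."""
    xs = [e for (_, _, _, _, e) in lines]
    ys = [math.log(i * lam / (g * a)) for (i, lam, g, a, _) in lines]
    n = len(lines)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / (K_B_EV * slope)  # kelvin

# Synthetic lines generated at 12000 K (unit g, A, lam for simplicity)
T_TRUE = 12000.0
demo = [(math.exp(-e / (K_B_EV * T_TRUE)), 1.0, 1.0, 1.0, e)
        for e in (19.2, 19.9, 21.1, 23.0)]
t_e = boltzmann_plot_temperature(demo)  # recovers ~12000 K
```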
626
In this research, the effect of laser pulse energy on the properties and behavior of plasma produced from an aluminum-nickel alloy was studied using optical emission spectroscopy. The plasma was characterized by exposing the target material (the alloy) to high-energy laser pulses ranging from 500 to 900 mJ from a pulsed Nd:YAG laser with a repetition rate of up to 50 Hz. This ensures a balanced energy distribution and allows the growing effects in the plasma to be monitored without strong thermal effects. The method allows a detailed study of the physical properties of the plasma, including the spectral radiation intensity and the associated emission peaks, as well as plasma properties such as temperature and electron density and other plasma parameters, including the plasma frequency, Debye length, and Debye number. The results show that both temperature and density increase with increasing laser pulse energy, with both peaking at 900 mJ. Calculations of the plasma frequency and Debye number also show a concomitant increase with increasing pulse energy. This work demonstrates how laser pulse energy can increase plasma stability and significantly improve physical processes within the plasma. It also demonstrates how diagnostic techniques can be useful in plasma analysis and have numerous medical, industrial, and technological applications.
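The derived quantities named above follow from standard definitions; a minimal sketch with SI constants (in any real use, the inputs would be the measured temperature and density rather than the example values below):

```python
import math

E_CHARGE = 1.602e-19  # elementary charge, C
EPS0 = 8.854e-12      # vacuum permittivity, F/m
M_E = 9.109e-31       # electron mass, kg

def plasma_frequency(n_e):
    """Electron plasma (angular) frequency in rad/s; n_e in m^-3."""
    return math.sqrt(n_e * E_CHARGE ** 2 / (EPS0 * M_E))

def debye_length(n_e, t_e_ev):
    """Debye length in metres; electron temperature in eV."""
    return math.sqrt(EPS0 * t_e_ev * E_CHARGE / (n_e * E_CHARGE ** 2))

def debye_number(n_e, t_e_ev):
    """Number of electrons inside a Debye sphere."""
    return (4.0 / 3.0) * math.pi * debye_length(n_e, t_e_ev) ** 3 * n_e
```

With, e.g., n_e = 10²³ m⁻³ and T_e = 1 eV (typical laser-plasma orders of magnitude, not the paper's measured values), these give a plasma frequency near 2·10¹³ rad/s and a Debye length of tens of nanometres.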
635
Optical encoders based on imaging optical systems and two-dimensional sensor arrays have shown substantial potential for reducing measurement errors through the use of modern image processing algorithms. However, the use of a two-dimensional photodiode array in optical encoders significantly reduces the update rate of positional data. This limitation is critical for wide industrial application. This study presents an optical encoder design that incorporates a high-speed linear photodiode array and an anamorphic optical system utilizing a cylindrical lens, allowing for increased update rates and reduced positional error. A Renishaw RTLC40 tape, with a grating period of 40 μm, fabrication accuracy of ±5 μm/m, and a length of 300 mm, was employed as the encoding structure. The prototype optical encoder was developed using a GL3504-BVM-NCN-AU1 linear photodetector array, a custom-designed and manufactured objective (linear field in the object plane of 0.84 mm, magnification 10×) with an integrated cylindrical lens, and a 5CEFA9F23 programmable logic device. The calculation of the object displacement is based on determining the energy centroid of the images of the grating scale, which is rigidly mounted on the object. The error of the proposed encoder prototype was determined using an XD6 LS interferometer and an LTS300/M motorized stage. The positional update frequency was measured with an MSO5074 oscilloscope. The use of a cylindrical lens amplified the irradiance projected onto the photodetector array, achieving a maximum gain factor of three. The update frequency and measurement error of the proposed optical encoder were experimentally determined to be 10 kHz and 0.94 μm over a 290 mm range, respectively. The proposed design can serve as a recommendation for the development of encoders that employ cylindrical-lens-based optical systems. A field-programmable gate array is recommended for grating scale displacement computation.
The results obtained may prove valuable for specialists in precision displacement measurement and machine tool construction.
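The energy-centroid computation underlying the displacement calculation can be sketched as an intensity-weighted mean over the linear array (a toy profile, not real sensor data):

```python
def energy_centroid(profile, pixel_pitch_um=1.0):
    """Sub-pixel image position on a linear photodiode array as the
    intensity-weighted mean pixel coordinate, in micrometres."""
    total = sum(profile)
    return pixel_pitch_um * sum(i * v for i, v in enumerate(profile)) / total

# A symmetric spot centred between pixels 2 and 3 yields 2.5
pos = energy_centroid([0, 1, 4, 4, 1, 0])
```

Tracking this centroid across successive frames, scaled by the grating period and the optical magnification, gives the displacement estimate.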

MATERIAL SCIENCE AND NANOTECHNOLOGIES

643
Currently, antitumor drug therapy comprises three directions: chemotherapy, targeted therapy, and immunotherapy. Chemotherapy is a non-specific treatment that uses chemicals which inhibit cell proliferation and affect cellular DNA or RNA and cellular metabolism, contributing to the destruction of all dividing cells. The six-membered heterocyclic ring 1,3,5-triazine and its derivatives are increasingly reported in the literature as DNA alkylating agents. One such triazine derivative, N-(2-(2-(2-(2-azidoethoxy)ethoxy)ethyl)-4,6-di(aziridin-1-yl)-1,3,5-triazin-2-amine, previously obtained in our research group, was characterized, and its structure was optimized using the Density Functional Theory (DFT) method with the B3LYP functional and the 6-31G basis set. The theoretically obtained spectral characteristics were confirmed by experimental results with a high degree of convergence. In this work, quantum chemical calculations were performed at different DFT levels using the ORCA software package. The structure of N-(2-(2-(2-azidoethoxy)ethoxy)ethyl)-4,6-di(aziridin-1-yl)-1,3,5-triazin-2-amine was optimized using the B3LYP functional with the 6-31G basis set. 1H and 13C (DMSO-d6) Nuclear Magnetic Resonance spectra were recorded on a Bruker 300 Avance instrument at frequencies of 400.0 and 100.0 MHz, respectively. At the first stage of computer modeling, the electronic structure of the molecule was calculated using the DFT method and the geometry was optimized. The calculation was performed in the 6-31G basis set with the B3LYP functional, taking into account the polarization of the solvent (water) with a relative permittivity of 78.54. The charges on the atoms were estimated using the Mulliken scheme. The energy values (eV) for the molecule are: HOMO: –6.279, LUMO: –1.147.
The optimized structure was stable, and the charge distribution on the atoms allows us to conclude that there are three possible conformations of N-(2-(2-(2-azidoethoxy)ethoxy)ethyl)-4,6-di(aziridin-1-yl)-1,3,5-triazin-2-amine. In the next step, for calculations with periodic boundary conditions, 20 studied molecules and approximately 1.3·10⁵ water molecules were placed in a cubic box with sides of 16 nm; the distance between N-(2-(2-(2-azidoethoxy)ethoxy)ethyl)-4,6-di(aziridin-1-yl)-1,3,5-triazin-2-amine molecules was at least 3 nm, and the distance from a molecule to the wall was at least 1.5 nm. The OPLS-AA/M force field was used; the simulation time was 200 ns with a step of 1 fs. Then, in the GROMACS 2023 package in the NVT ensemble with a Berendsen thermostat and barostat, solvation of the system, energy minimization, and equilibration were carried out for 400 ps with a time step of 0.1 fs at a temperature T = 298.15 K and pressure P = 100 kPa. It is shown that in the association dynamics these molecules do not form aggregates in aqueous solution. In this work, the synthesis and characterization of N-(2-(2-(2-(2-azidoethoxy)ethoxy)ethyl)-4,6-di(aziridin-1-yl)-1,3,5-triazin-2-amine by spectroscopic methods are described. The results of the molecular docking studies are consistent with the in vitro antitumor activity, which showed that the compound exhibits maximum efficiency, with approximate binding energies in the range from –1.034 to –4.578 kcal·mol–1. N-(2-(2-(2-(2-azidoethoxy)ethoxy)ethyl)-4,6-di(aziridin-1-yl)-1,3,5-triazin-2-amine has been demonstrated to have a high affinity for serum albumin, indicating its potential for serum distribution.

COMPUTER SCIENCE

651
This paper addresses the task of generating animations of a digital avatar that synchronously reproduces speech, facial expressions, and gestures based on a bimodal input, namely a static image and an emotionally colored text. The study explores the integration of acoustic, visual, and affective features into a unified model that enables realistic and expressive avatar behavior aligned with both the semantic content and emotional tone of the utterance. The proposed method includes several stages: extraction of visual landmarks of the face, hands, and body pose; gender recognition for selecting an appropriate voice profile; emotional analysis of the input text; and generation of synthetic speech. All extracted features are integrated within a generative architecture based on a diffusion model enhanced with temporal attention mechanisms and cross-modal alignment strategies. This ensures high-precision synchronization between speech and the avatar's nonverbal behavior. The training process utilized two specialized datasets: one focused on gesture modeling, and the other on facial expression synthesis. Annotation was performed using automated spatial landmark extraction tools. Experimental evaluation was conducted on a multiprocessor computing platform with GPU acceleration. The model performance was assessed using a set of objective metrics. The proposed method demonstrated a high degree of visual and semantic coherence: FID of 50.13, FVD of 601.70, SSIM of 0.752, PSNR of 21.997, E-FID of 2.226, Sync-D of 7.003, Sync-C of 6.398. The model effectively synchronizes speech with facial expressions and gestures, accounts for the emotional context of the text, and incorporates features of Russian Sign Language. The proposed approach has potential applications in emotionally aware human-computer interaction systems, digital assistants, educational platforms, and psychological interfaces.
The method is of interest to researchers in artificial intelligence, multimodal interfaces, computer graphics, and digital psychology.
663
Protecting IoT devices is a relevant and important task in the context of a constantly increasing number of devices connected to the network and a growing threat of cyberattacks. One of the key solutions to this problem is profiling such devices to increase the security level of the systems in which they operate. The application of machine learning methods represents a promising approach to solving this problem. This study presents a method for profiling Internet of Things (IoT) devices aimed at detecting malicious activity. The proposed solution enables the identification of network events that may indicate the presence of cyberattacks. The essence of the method lies in the creation of individualized behavioral profiles for each IoT device using machine learning algorithms. Profiles are constructed based on the analysis of network traffic. The machine learning models are employed to perform classification and anomaly detection tasks. The study provides a detailed description of the main stages of the proposed approach, including data collection and preprocessing, model selection and training, testing, and evaluation of the effectiveness of the developed solution. In the course of the study, 26 device profiles were constructed using the CIC IoT 2022 dataset. An additional 21 new features were incorporated into the original dataset. The augmented dataset was balanced using oversampling and undersampling techniques. For each device, comparative performance evaluations were conducted for the Random Forest, XGBoost, and CatBoost models in the context of attack detection, as well as for Isolation Forest, Elliptic Envelope, and One-Class Support Vector Machine in the context of anomaly detection. It was demonstrated that the newly proposed features are among the most informative. A comparison of the obtained results with related studies confirmed the applicability of the proposed approach for ensuring the security of IoT devices and reducing the risks associated with their operation.
676
The article discusses the role of generative neural networks in the development and optimization of fonts, which play a key role in creating aesthetically appealing and functional designs. Particular attention is paid to licensing restrictions and the insufficient availability of fonts for various world languages, which creates difficulties for designers and typographers when producing text materials. The novelty of the approach lies in the use of a diffusion model as a generative neural network for automatic font creation, including missing glyphs for languages not supported by standard fonts. To address these tasks, a diffusion model has been developed that generates fonts based on analyzing patterns in the structure of symbols and the logic of their construction. The model is integrated into an application that automates the process of creating font layouts, allowing users to generate new glyphs and fonts tailored to specific language needs. The technique includes preliminary data preparation, network training, and subsequent generation of characters that mimic the style and composition of the original fonts. During the experiments, the diffusion model demonstrated a high ability to generate high-quality font characters visually similar to the original samples. Fonts with a limited set of characters were used as source data, which allowed the capability of the model to create missing glyphs for various languages to be evaluated. The results showed that the developed model successfully reproduces the stylistic features of the original font, confirming its potential for application in the development of font solutions for global use. The proposed method of font generation is of interest to specialists working in design, typography, and the creation of text materials for various language audiences.
The results obtained can be useful when creating fonts intended for use in multilingual projects that require the presence of missing characters.
684
Anomaly detection under conditions of limited data volume represents a pressing challenge across numerous applied domains, including medical diagnostics. Machine learning methods typically rely on the availability of annotated anomalous samples for training, which is often impractical. Existing anomaly detection techniques designed for few-shot or zero-shot scenarios suffer from various limitations. In particular, the common assumption of normally distributed data reduces the accuracy of anomaly classification. In this study, the task of improving the accuracy and completeness of anomaly detection in previously unseen images is addressed by leveraging a combination of the Contrastive Language-Image Pretraining (CLIP) model and the domain-specific transformer BeiT (BERT Pre-Training of Image Transformers). The integration of the CLIP and BeiT models enables simultaneous binary segmentation and anomaly classification. Enhanced anomaly detection is achieved through the use of weighted embeddings from each module. Additionally, the automated generation of textual representations based on a Large Language Model significantly enhances the generalization capacity of the system. The performance of the proposed models was evaluated on the Benchmarks for Medical Anomaly Detection test set. For the dermatological domain, a test set was constructed from the ISIC-18, ISIC-19, SD-198, and 7-point criteria databases. The proposed method demonstrated an average improvement in the ROC-AUC metric of 10.95 % at the image level and 0.66 % at the pixel level compared to existing state-of-the-art solutions. Experimental results confirm the high effectiveness of the proposed approach in anomaly classification and segmentation tasks, showing superior average metric values. Inference analysis revealed that incorporating a variational autoencoder within the CLIP+BeiT architecture for centroid generation enhances the model's stability in few-shot scenarios.
The practical significance of the proposed method lies in its adaptability and robustness to changing data distributions, making it a promising solution for automated anomaly analysis in medical diagnostics, industrial monitoring, and other domains characterized by high data uncertainty.
694
Advances in computer vision have led to the development of powerful models capable of accurately recognizing and interpreting visual information in various fields of knowledge. However, these models are increasingly vulnerable to adversarial attacks, i.e., deliberate manipulations of input data designed to mislead the machine-learning model and produce incorrect recognition results. This article presents the results of an investigation into the impact of various types of adversarial attacks on the ResNet50 model in image classification and clustering tasks. The following attack types were investigated: Fast Gradient Sign Method, Basic Iterative Method, Projected Gradient Descent, Carlini & Wagner, Elastic-Net Attacks to Deep Neural Networks, Expectation Over Transformation Projected Gradient Descent, and jitter-based attacks. The Gradient-Weighted Class Activation Mapping (Grad-CAM) method was used to visualize the attention areas of the model. The t-SNE algorithm was applied to visualize clusters in the feature space. Attack robustness was assessed by attack success rate using the k-Nearest Neighbors and Hierarchical Navigable Small World algorithms with different similarity metrics. Significant differences in the effects of attacks on the internal representations of the model and its areas of focus have been identified. It is shown that iterative attack methods cause significant changes in the feature space and strongly affect Grad-CAM visualizations, whereas simple attacks have less impact. The high sensitivity of most clustering algorithms to perturbations has been established. The inner-product metric showed the greatest stability among the studied approaches. The results indicate that the stability of the model depends on the attack parameters and the choice of similarity metrics, which manifests itself in the formation of cluster structures.
The observed feature-space redistributions under targeted attacks suggest avenues for further optimizing clustering algorithms to enhance the resilience of computer-vision systems.
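As an illustration of the simplest of the listed attacks, the Fast Gradient Sign Method perturbs each input coordinate by a fixed step in the direction of the sign of the loss gradient; the toy below applies it to a single logistic neuron rather than ResNet50:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method for one logistic neuron: shift every
    input coordinate by eps in the sign of the cross-entropy loss
    gradient for the true label y (0 or 1)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]  # dL/dx for cross-entropy loss
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy case: a confidently correct input is pushed toward the boundary
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.0], 1               # clean score 2.0, p ~ 0.88
x_adv = fgsm(x, w, b, y, eps=0.5)  # becomes [0.5, 0.5], score drops to 0.5
```

Iterative variants such as the Basic Iterative Method and Projected Gradient Descent repeat this step with projection back into an epsilon-ball, which is what drives the larger feature-space shifts reported above.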
703
The Private Set Intersection (PSI) protocol is one of the fundamental primitives of secure multi-party computation. This primitive allows several mutually distrusting parties to jointly compute the intersection of their secret sets without disclosing additional information about these sets. This allows users to jointly analyze data without revealing confidential information to each other. This paper describes a new private set intersection protocol for three or more participants. The protocol works in a network with a ring topology, which minimizes the number of necessary communication channels between users. The protocol is based on the idea of conditional zero-sharing, which uses a secret sharing scheme to determine whether an element belongs to the sets of all users or not. To evaluate the performance of the proposed solution, a software implementation of the protocol in C++ is provided. The security of the developed protocol for three or more users is shown, provided that users do not collude, under the Honest-But-Curious attacker model. This model implies that the attacker is one of the protocol participants who performs the protocol correctly but may analyze the information obtained during the execution to gain an advantage. The security of the protocol rests only on the assumption that the attacker lacks the information to extract any useful data from the messages received during the protocol execution. Thus, the protocol is information-theoretically secure. The presented protocol can be used for confidential data analysis, for example, when several companies exchange information about common customers. The protocol allows three users to find the intersection of sets of size 10⁶ in about 42 s. The present implementation can be extended with multithreading or by offloading large matrix computations from the CPU to the GPU.
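The zero-sharing building block can be illustrated with additive secret sharing modulo a prime; this toy sketch (hypothetical modulus, no networking) shows only the share generation and reconstruction steps, not the full conditional protocol:

```python
import random

P = 2 ** 61 - 1  # prime modulus (illustrative choice)

def zero_shares(n):
    """Additive sharing of zero among n parties: shares are uniformly
    random subject to summing to 0 modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((-sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Sum the shares around the ring; the total is zero iff the
    shared value was zero."""
    return sum(shares) % P

# In conditional zero-sharing, each party contributes its zero-share
# only for elements it actually holds, so a zero total reveals that
# the element lies in every set while individual shares stay random.
parts = zero_shares(3)
```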
710
Modern industrial search engines typically employ a two-stage pipeline: fast candidate retrieval followed by reranking. This approach inevitably leads to the loss of some relevant documents due to the simplicity of algorithms used in the first stage. This work proposes a single-stage approach that combines the advantages of dense semantic search models with the efficiency of inverted indices. The key component of the solution is a K-sparse encoder used to convert dense vectors into sparse ones compatible with inverted indices of the Lucene library. In contrast to the previously studied identifiable variational autoencoder, the proposed model is based on an autoencoder with a TopK activation function which explicitly enforces a fixed number of non-zero coordinates during training. This activation function makes the sparse vector generation process differentiable, eliminates the need for post-processing, and simplifies the loss function to a sum of reconstruction error and a component preserving relative distances between dense and sparse representations. The model was trained on a 300,000-document subset of the MS MARCO dataset using PyTorch and an NVIDIA L4 GPU. The proposed model achieves 96.6 % of the quality of the original dense model in terms of the NDCG@10 metric (0.57 vs. 0.59) on the SciFact dataset with 80 % sparsity. It is also shown that further increasing sparsity reduces index size and improves retrieval speed while maintaining acceptable search quality. In terms of memory usage, the approach outperforms the Hierarchical Navigable Small World (HNSW) graph-based algorithm, and at high sparsity levels, its speed approaches that of HNSW. The results confirm the applicability of the proposed approach to unstructured data retrieval. Direct control over sparsity enables balancing between search quality, latency, and memory requirements. 
Thanks to the use of an inverted index based on the Lucene library, the proposed solution is well suited for industrial-scale search systems. Future research directions include interpretability of the extracted features and improving retrieval quality under high sparsity conditions.
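The TopK activation at the heart of the encoder can be sketched in a few lines; this is an illustrative pure-Python version, not the trained model's implementation:

```python
def topk_activation(v, k):
    """TopK activation: keep the k largest-magnitude coordinates of a
    dense vector and zero the rest, giving a fixed sparsity level that
    maps directly onto inverted-index postings."""
    if k >= len(v):
        return list(v)
    cut = sorted((abs(x) for x in v), reverse=True)[k - 1]
    out, kept = [], 0
    for x in v:
        if abs(x) >= cut and kept < k:
            out.append(x)
            kept += 1
        else:
            out.append(0.0)
    return out

sparse = topk_activation([0.3, -1.2, 0.05, 0.9, -0.4], k=2)
# only the two largest-magnitude coordinates survive
```

Because k is fixed at training time, sparsity (and hence index size and latency) is controlled directly rather than through a penalty term.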
718
The paper considers the comparative analysis of string datasets represented as time series of samples. We propose a method to increase the accuracy of determining differences between two samples. Based on this method, a method for analyzing time series of three samples has been developed, which allows changes between samples to be investigated more accurately. The use of three samples in the analysis is motivated by the specific nature of the practical task of processing metagenomic sample sequencing data, for which obtaining a larger number of samples is very resource-intensive. To classify strings from one sample into those detected and those undetected in another sample, a method of comparing two samples using k-mers and the de Bruijn graph is proposed. It implements decision rules based on statistics of k-mer occurrence frequencies, different values of the parameter k, and information about possible errors in the strings. To analyze time series of three samples (the original and final samples for one object and the modifying sample for another object), a method based on pairwise comparison of samples is developed. It is used to divide the strings of each sample into groups depending on whether the strings are detected in other samples. The developed method for analyzing time series has been tested on two types of generated metagenomic data represented as sets of strings. It was shown that the method can distinguish organisms whose genomes differ in at least one symbol per 10,000 symbols. High (more than 80 %) recall and precision of string classification were demonstrated when analyzing simulated complex data with properties comparable to real data. The developed method allows comparing metagenomic samples represented as sets of strings using only the data itself, without requiring additional information.
This allows for a more accurate analysis compared to existing methods that compare samples based on the results of string classification against taxonomic annotation databases. The developed methods can also be used in other areas of string data processing, such as analyzing changes in an author's style across a series of texts.
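The two-sample decision rule can be sketched as follows. This is a deliberately simplified illustration: the threshold `min_frac`, the index structure, and the function names are assumptions, and the de Bruijn graph and the statistics over several values of k used in the full method are omitted.

```python
from collections import Counter

def kmers(s, k):
    # all overlapping substrings of length k
    return [s[i:i + k] for i in range(len(s) - k + 1)]

def build_index(sample, k):
    # occurrence frequencies of k-mers over all strings of a sample
    idx = Counter()
    for s in sample:
        idx.update(kmers(s, k))
    return idx

def detected(string, idx, k, min_frac=0.9):
    # decision rule: a string is "detected" in the other sample if at
    # least min_frac of its k-mers occur there (tolerating rare errors)
    km = kmers(string, k)
    hits = sum(1 for m in km if idx[m] > 0)
    return hits / len(km) >= min_frac
```

Pairwise application of such a rule in both directions splits each sample's strings into detected/undetected groups, which is then repeated across the three-sample time series.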
727
The practice of assessing IT-security risks of Critical Information Infrastructure (CII) facilities is considered. The methods of Event Tree Analysis (ETA) and Fault Tree Analysis (FTA) and the international standard ISO/IEC 27005:2022, which establishes the principles of risk management, were compared. Ways of supplementing the existing methodological requirements of the Russian Federation in the field of IT-security of CII facilities with modern methods of assessing IT-security risks are shown. A comparison of modern methods for assessing IT-security risks is carried out using the example of a water supply management system. The application of a necessary set of protection measures that provides a given level of residual IT-security risk is justified. The possibility of using modern methods for assessing the IT-security risks of CII facilities in addition to the existing methodological requirements of the Russian Federation is demonstrated.
737
The problem of optimizing large neural networks is discussed using the example of language models. The size of large language models is an obstacle to their practical application under limited computing resources and memory. One developing area of compression for large neural network models is knowledge distillation: the transfer of knowledge from a large teacher model to a smaller student model without significant loss of accuracy. Currently known knowledge distillation methods have certain disadvantages: inaccurate knowledge transfer, a long learning process, and accumulation of errors on long sequences. Methods that improve the quality of knowledge distillation for language models are proposed: selective teacher intervention in the student's learning process and low-rank adaptation. In the first approach, teacher tokens are transferred to the student during training for those neural network layers where an exponentially decreasing threshold on the discrepancy between the teacher's and student's probability distributions is reached. The second approach reduces the number of parameters in a neural network by replacing fully connected layers with low-rank ones, which reduces the risk of overfitting and speeds up the learning process. The limitations of each method when working with long sequences are shown. It is proposed to combine the methods to obtain an improved model of classical knowledge distillation for long sequences. The combined approach to knowledge distillation on long sequences made it possible to significantly compress the resulting model with a slight loss of quality as well as significantly reduce GPU memory consumption and response output time. 
The complementary approaches to optimizing the knowledge transfer process and model compression showed better results than selective teacher intervention in the student learning process and low-rank adaptation applied separately. Thus, the answer quality of the improved classical knowledge distillation model on long sequences reached 97 % of the quality of full fine-tuning and 98 % of the quality of the low-rank adaptation method in terms of ROUGE-L and Perplexity, while the number of trainable parameters is reduced by 99 % compared to full fine-tuning and by 49 % compared to low-rank adaptation. In addition, GPU memory usage is reduced by 75 % and 30 %, respectively, and inference time by 30 %. The proposed combination of knowledge distillation methods can find application in problems with limited computational resources.
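The low-rank replacement in the second approach can be illustrated with a minimal NumPy sketch. The dimensions, rank, and variable names are illustrative assumptions, not taken from the paper; a real implementation would wrap framework layers (e.g., PyTorch `nn.Linear`) rather than raw matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8        # illustrative sizes; r << min(d_in, d_out)

# frozen pretrained weight plus a trainable low-rank update: W_eff = W + A @ B
W = rng.standard_normal((d_in, d_out)) * 0.02   # frozen
A = rng.standard_normal((d_in, r)) * 0.02       # trainable factor
B = np.zeros((r, d_out))                        # trainable; zero init keeps W_eff == W

def forward(x):
    # adapter path adds a rank-r correction to the frozen layer's output
    return x @ W + (x @ A) @ B

# trainable-parameter saving of the low-rank factors vs. a full matrix
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
reduction = 1 - lora_params / full_params
```

Here only `A` and `B` (about 3 % of the full matrix's parameters) would be updated during distillation, which is the mechanism behind the parameter and memory reductions reported above.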
744
The article proposes a Brain–Computer Interface (BCI) algorithm for implementing interaction between a human and a model of an industrial cyberphysical system. The interface enables selection of an intended tool based on classifying the evoked responses in a subject's electroencephalogram to visual stimuli (tool images). To conduct the study, a software system comprising a web server, a controller, and a user BCI was designed. The subject's cerebral bioelectrical activity was continuously registered with an encephalograph produced by LLC MITSAR, followed by online signal processing in the designed original software system. The stored evoked responses to stimuli were classified in several ways: peak-based selection, a support vector machine, and a neural network. It was shown that the classification accuracies of evoked potentials achieved by the neural network and the support vector machine are approximately equal, and that both algorithms can be run in online mode. Analysis of the experiments performed showed that the proposed algorithm makes it possible to classify presented visual stimuli in neural interfaces in online mode. The results show how a 'deeply integrated' interaction between a human and equipment can be organized by applying commands, derived from the processed signals of human brain bioelectrical activity, to a 3D model of a production site.
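Of the three classifiers compared, peak-based selection is the simplest and can be sketched directly. The window, sampling rate, and function names below are assumptions for illustration and do not reproduce the authors' processing pipeline.

```python
import numpy as np

def pick_target(epochs, fs=250, window=(0.25, 0.45)):
    # epochs: (n_stimuli, n_samples) array of per-stimulus averaged EEG responses.
    # Choose the stimulus whose mean amplitude inside the assumed
    # evoked-response window (seconds after stimulus onset) is largest.
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    scores = epochs[:, lo:hi].mean(axis=1)
    return int(np.argmax(scores))

# toy data: a response bump present only for stimulus 2
epochs = np.zeros((4, 250))
epochs[2, 62:112] = 1.0
chosen = pick_target(epochs)
```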
755
Authentication is a critical challenge in autonomous vehicles, particularly within Controller Area Networks which are prone to various cyber threats. Existing protocols often fall short in balancing strong security guarantees with computational efficiency and privacy preservation. In this paper, we propose a lightweight authentication protocol based on the Decisional Diffie–Hellman problem, specifically designed for Controller Area Network environments. The protocol employs lightweight cryptographic operations to verify vehicle authenticity and validate data messages, while also maintaining anonymity by regularly updating login identities. It also supports password changes without requiring a trusted third party. The protocol security is formally verified using Burrows-Abadi-Needham logic. Performance evaluation shows that our approach significantly reduces computational overhead, achieving an execution time of 0.90908 ms, outperforming existing solutions in the literature. By combining formal verification with practical efficiency, the proposed protocol offers a robust solution for secure and efficient authentication in resource-constrained vehicular networks. Its lightweight design and anonymity-preserving mechanisms make it particularly suitable for real-time autonomous vehicle applications.
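The identity-refresh idea (anonymity through regularly updated login identities) can be illustrated with a generic keyed-hash pseudonym update. This is not the paper's DDH-based protocol; the derivation rule, the shared secret, and all names are hypothetical.

```python
import hashlib

def next_login_id(current_id: str, nonce: bytes, secret: bytes) -> str:
    # Both the vehicle and the verifier derive the next pseudonym from the
    # previous one, a fresh nonce, and a shared long-term secret, so an
    # eavesdropper without the secret cannot link successive logins.
    h = hashlib.sha256()
    h.update(current_id.encode())
    h.update(nonce)
    h.update(secret)
    return h.hexdigest()
```

After each session both sides apply the same rule, so the transmitted identity changes every time while remaining verifiable by the legitimate peer.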

MODELING AND SIMULATION

762
Methods for ensuring and evaluating the reliability of wireless reconfigurable multipath networks are considered. The fault tolerance of multipath wireless networks is maintained using a limited number of interconnected switching nodes which allow reconfiguration when traffic is redistributed through the connected path segments. The aim of the work is to increase the reliability of transmissions in multipath wireless networks by justifying the choice of route switches, taking into account the impact on packet delivery probability of both switching-node failures and combinations of various obstacles to signal propagation along the set of communication paths to the addressed node. To compare network construction solutions, a model is proposed that reflects the influence of the location of path-switch nodes on transmission reliability. The study of the reliability of a wireless multipath network is based on a combination of analytical and simulation modeling. The assessment of network reliability involves its decomposition, taking into account possible combinations of failures of path-switch nodes and their impact on reconfiguration capabilities through switching of path segments whose constituent nodes remain connected after failures. The probability of packet loss during transmission over wireless channels is estimated by simulation using OMNeT++ tools. The proposed approach makes it possible to combine estimates of the reliability of the network structure and of the data transmission (packet delivery) process through it, taking into account node failures as well as constant and changing obstacles (conditions) of signal propagation between nodes. Possibilities for increasing the reliability of wireless networks with multipath routing by optimizing the placement of a limited number of inter-path switching nodes are analyzed. 
A simulation-analytical model of the reliability of wireless reconfigurable multipath networks is proposed which, when assessing the probability of packet delivery, takes into account failures of communication nodes and packet losses under varying locations of physical obstacles to signal transmission between nodes. It is shown that the choice of location of the inter-path switching nodes significantly affects the reliability of packet delivery, and that there is an optimal placement of switches which ensures maximum transmission reliability depending on the distribution of signal transmission obstacles along the paths. The results of the study can be applied in predicting reliability and substantiating design solutions for building fault-tolerant multipath reconfigurable wireless networks. Future work will consider more complex network topologies, taking into account the impact of reconfiguration on both reliability and packet delivery delays.
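In the simplest case, the decomposition idea reduces to combining per-path success probabilities. The sketch below assumes independent paths and ignores inter-path switching, so it is only the baseline of the model described above; the function names and numbers are illustrative.

```python
def path_success(link_ps, node_avail):
    # a path delivers a packet only if every node on it is up
    # and every wireless hop succeeds
    p = 1.0
    for q in link_ps:
        p *= q
    for a in node_avail:
        p *= a
    return p

def multipath_delivery(paths):
    # with independent paths, delivery fails only if all paths fail
    fail = 1.0
    for links, nodes in paths:
        fail *= 1.0 - path_success(links, nodes)
    return 1.0 - fail
```

Inter-path switching raises these figures further, because a packet blocked on one path can be rerouted through surviving segments of another; capturing that effect is what requires the combined analytical and simulation model.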
771
An approach is proposed for evaluating the accuracy of navigation systems using data from technical vision sensors and a digital map. The digital map is defined as arc-linear splines approximating the centerline of the railway track. The approach does not rely on satellite navigation data and is relevant for assessing the quality of navigation solutions for mobile transport vehicles operating in urban environments. It is based on comparing segmented images containing railway tracks with digital map data. The study examines two comparison methods: the first compares areas using the IoU metric, and the second compares lines and calculates residuals between them. In the first method, the arc-linear spline of the route is projected onto the image frame, creating a road area based on navigation system readings and digital map data. In the second method, the centerline is extracted from the railway track area in the segmented image and compared with the route spline. Since the residuals generated in both cases are nonlinear, navigation system errors are evaluated using a particle filter where each particle defines the coordinates and orientation of a "probable" tram location. The tram location and orientation are estimated by weighted summation of particles, with higher weights assigned to particles that better align the measured data with the synthesized areas or lines. The proposed methodology was tested on simulated and real data collected from tram routes in Saint Petersburg. Experiments demonstrated that the first method provides higher accuracy than the second, attributable to the post-processing of segmented image data required to extract the railway track centerline, which loses useful information. 
The study established a relationship between the accuracy of navigation parameter determination and the road curvature radius, showing a decrease in accuracy on curves with larger radii. The applicability of the approach for assessing navigation errors and its robustness to varying weather conditions and road surface quality were experimentally confirmed. The proposed approach stands out from known methods due to its simplicity and data accessibility. Compared to methods based on lidar data, it does not require expensive sensors or the labor-intensive process of aligning lidar point clouds with high-precision maps. Unlike methods using technical vision, it eliminates the need for creating landmark maps and developing complex identification or matching procedures.
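The first (area-based) method can be reduced to a one-dimensional toy example: particles hypothesize only a lateral offset, and each particle is weighted by the IoU between its synthesized road area and the "segmented" one. The mask shapes, particle count, and names are assumptions; the real method estimates coordinates and orientation from projected splines.

```python
import numpy as np

def iou(a, b):
    # intersection-over-union of two boolean masks
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def road_mask(offset, half_width=10, H=50, W=100):
    # synthesized road area: a vertical band at the given lateral offset
    m = np.zeros((H, W), dtype=bool)
    c = W // 2 + int(round(offset))
    m[:, max(0, c - half_width):c + half_width] = True
    return m

rng = np.random.default_rng(0)
true_offset = 7.0
observed = road_mask(true_offset)       # stands in for the segmented image

# particles: hypothesized lateral offsets; weight = area agreement (IoU)
particles = rng.uniform(-20.0, 20.0, 200)
weights = np.array([iou(road_mask(p), observed) for p in particles])
weights /= weights.sum()
estimate = float(np.sum(weights * particles))   # weighted summation of particles
```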
780
The paper presents the results of a study of approaches to solving the problem of combinatorial conditional optimization of the refueling plan along a fixed automobile route, taking into account restrictions on tank volume, initial and final fuel volumes, and constant fuel consumption. Existing methods for solving such problems are based on shortest-path algorithms and linear programming. Their disadvantages are the lack of granularity of states, non-integer solutions, and high computational complexity. The novelty of the solution lies in the use of an expanded state space and the development of an exact algorithm that guarantees integer-valued plans and lower asymptotic complexity. The proposed algorithm applies two-dimensional dynamic programming in which, for each route node and amount of remaining fuel, the minimum cost of reaching the state is recalculated by choosing between a transition without refueling and a transition with refueling by one tank division. The algorithm solves the problem optimally in polynomial time with quadratic complexity relative to the number of nodes on the route. The method was tested by comparing the proposed algorithm with alternative approaches based on graph representations of the route and on linear programming. Algorithms were constructed for each approach, after which a comparative analysis of their asymptotic complexity as well as the accuracy and integrality of the solutions obtained was carried out. Unlike the alternatives, the proposed algorithm simultaneously ensures the integrality of the components of the optimal solution and has lower asymptotic complexity. The developed algorithms are applicable to reducing fuel costs during cargo transportation as well as to increasing the economic efficiency of tourist trips in Russia. 
A further direction of this study involves considering additional factors affecting fuel consumption, which will require a transition to higher-dimensional problems and the development of heuristic methods for their effective solution.
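The two-dimensional dynamic programming scheme described above can be sketched as follows. Function and variable names are assumptions; fuel is measured in tank divisions, and at each node the plan chooses between moving on without refueling and adding one division at the local price.

```python
import math

def min_refuel_cost(dist, price, cap, f0, f_end):
    # dist[i]: fuel divisions consumed from node i to i+1
    # price[i]: cost of one tank division at node i
    # cap: tank capacity; f0 / f_end: initial / required final fuel
    n = len(price)
    INF = math.inf
    # dp[i][f] = minimum cost of reaching node i with f divisions in the tank
    dp = [[INF] * (cap + 1) for _ in range(n)]
    dp[0][f0] = 0.0
    for i in range(n):
        # refuel at node i one division at a time (ascending f accumulates)
        for f in range(cap):
            if dp[i][f] + price[i] < dp[i][f + 1]:
                dp[i][f + 1] = dp[i][f] + price[i]
        # transition to the next node without refueling
        if i + 1 < n:
            d = dist[i]
            for f in range(d, cap + 1):
                if dp[i][f] < dp[i + 1][f - d]:
                    dp[i + 1][f - d] = dp[i][f]
    return min(c for f, c in enumerate(dp[n - 1]) if f >= f_end and c < INF)
```

The table has O(n · cap) states with O(1) transitions each, which matches the polynomial complexity claimed above, and every plan component (number of divisions bought per node) is an integer by construction.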
789
New results of studies of eigenvectors and eigenfunctions of the discrete and continuous Fourier transforms are considered. It is known that such eigenvectors are products of the Gaussian function and Hermite polynomials; a name is proposed for the functions obtained from this product: Hermite-Gauss wavelets. Using methods of mathematical analysis of continuous functions and numerical methods, the paper investigates the properties and synthesis methods of eigenvectors and eigenfunctions of the discrete and continuous Fourier transforms. Expressions for calculating the scale parameter and the normalizing factor for discrete forms of Hermite-Gauss wavelets are obtained. The studies performed show that the scale parameter of the discrete form of Hermite-Gauss wavelets depends on the number of samples, while the norm depends on the number of samples and the number of the wavelet. A form of the Fourier transform matrices is obtained which is well conditioned when calculating eigenvectors in the form of Hermite-Gauss wavelets. Hermite-Gauss wavelets form a basis and can therefore be used in tasks of signal decomposition and synthesis. When choosing a mother wavelet for decomposition and synthesis, one should first be guided by the features and properties of the shapes it forms. For some signals, Morlet or Daubechies wavelets give a compact decomposition; for others, Haar wavelets; and there are also signals for which Hermite-Gauss wavelets are most effective for spectral decomposition.
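The eigenvector property behind these wavelets is easy to check numerically. The sketch below is an illustration, not the paper's synthesis method: the sampling grid and normalization are common textbook choices rather than the article's expressions. It builds discrete Hermite-Gauss vectors and verifies that the centered unitary DFT maps the n-th one to (-i)^n times itself.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_gauss(n, N):
    # samples of H_n(x) * exp(-x^2/2) on the grid x_k = k * sqrt(2*pi/N),
    # k = -N/2 .. N/2-1; this scale makes the vector an (approximate)
    # eigenvector of the centered unitary DFT
    k = np.arange(N) - N // 2
    x = k * np.sqrt(2 * np.pi / N)
    c = np.zeros(n + 1)
    c[n] = 1.0                      # select the n-th Hermite polynomial
    f = hermval(x, c) * np.exp(-x**2 / 2)
    return f / np.linalg.norm(f)    # normalize the discrete wavelet

def centered_dft(f):
    # unitary DFT with the zero index in the middle of the array
    N = len(f)
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) / np.sqrt(N)

# the n-th Hermite-Gauss vector should be reproduced with eigenvalue (-i)^n
N = 256
max_err = max(
    np.max(np.abs(centered_dft(hermite_gauss(n, N)) - (-1j)**n * hermite_gauss(n, N)))
    for n in range(4)
)
```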
Copyright 2001-2025 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.
