Summaries of the Issue


Optical properties of borate family nonlinear crystals and their application in sources of intense terahertz radiation
Lubenko Dmitry M., Ezhov Dmitry M., Svetlichnyi Valery A., Andreev Yury M., Nikolaev Nazar A.
Nonlinear crystals of the borate family are efficient harmonic generators for intense laser sources because of their high laser-induced damage threshold at near-infrared wavelengths. Recent studies have shown that they exhibit relatively low absorption coefficients at sub-terahertz frequencies, which could enable them to generate terahertz radiation. Based on this assumption, we compare terahertz sources based on the frequency down-conversion of the radiation from a titanium-sapphire amplifier in crystals of barium beta-borate (β-BaB2O4), lithium triborate (LiB3O5), and lithium tetraborate (Li2B4O7). The calculation of collinear three-wave interactions, which provide the generation of the sub-terahertz difference frequency, is carried out considering the previously studied dispersion of the main components of the terahertz refractive index of these crystals. The phase-matching conditions and the corresponding coherence lengths are determined for each of the crystals. Taking into account the quadratic susceptibility tensors, the coefficients of the effective nonlinearity are calculated, and the terahertz generation efficiency in crystals with different cuts is evaluated and compared. The down-conversion in the β-BaB2O4 crystal is numerically shown to be three and five orders of magnitude more efficient than in the LiB3O5 and Li2B4O7 crystals, respectively. Accordingly, terahertz generation in a sample of β-BaB2O4 crystal with a cut that provides phase-matching at 0.3 THz (θ = 5°) has been studied experimentally using radiation from a titanium-sapphire amplifier. The comparison of the experimental data and the numerical results leads to the conclusion that the main contribution to the generation process comes from the o – e → e, e – e → o, and o – o → o types of interaction. The peak terahertz power reaches 20 kW.
The data obtained in this work will be useful for the development of intense sub-terahertz radiation sources based on the energy conversion of high-power laser sources. It is estimated that tens of GW of peak terahertz power can be achieved by increasing the intensity of the optical fields to pre-threshold values for the β-BaB2O4 crystal. A source of this intensity can be used in systems for sounding the atmosphere as well as in charged particle accelerators.
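As a rough illustration of the phase-matching quantities mentioned above, the coherence length of a collinear difference-frequency process can be estimated from the wave-vector mismatch. The refractive-index values in the sketch below are assumed placeholders for illustration only, not the measured crystal data of the study.

```python
# Toy estimate of the coherence length for collinear difference-frequency
# generation (DFG). Indices n1, n2, n_thz are assumed values, not data
# for beta-BaB2O4.
import math

C = 299_792_458.0  # speed of light, m/s

def coherence_length(lam1, lam2, n1, n2, n_thz):
    """Coherence length l_c = pi / |dk| for collinear DFG,
    where dk = k1 - k2 - k_THz and k = 2*pi*n/lambda."""
    k1 = 2 * math.pi * n1 / lam1
    k2 = 2 * math.pi * n2 / lam2
    lam_thz = 1.0 / (1.0 / lam1 - 1.0 / lam2)  # energy conservation
    k_thz = 2 * math.pi * n_thz / lam_thz
    return math.pi / abs(k1 - k2 - k_thz)

lam1 = 800e-9                            # pump wavelength, m
lam2 = 1.0 / (1.0 / lam1 - 0.3e12 / C)   # second wave, shifted by 0.3 THz
lc = coherence_length(lam1, lam2, n1=1.66, n2=1.66, n_thz=2.5)
print(f"coherence length: {lc * 1e3:.2f} mm")
```

For the assumed indices, the mismatch between optical and terahertz dispersion limits the coherence length to a fraction of a millimeter, which motivates the search for crystal cuts that satisfy phase-matching.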
A model of a refractive fiber optic sensor sensing element based on MMF-SMF-MMF structure using surface plasmon resonance
Ivoilov Kirill A., Gagarinova Diana O., Zykina Adeliia A., Meshkovsky Igor K., Plyastsov Semyon A.
This paper presents a mathematical model of the sensing element of a refractometric fiber-optic sensor whose principle of operation is based on the phenomenon of surface plasmon resonance. The sensing element is a sequential connection of a multimode fiber (MMF), a single-mode fiber (SMF), and a multimode fiber, forming an MMF-SMF-MMF structure. The SMF segment is coated with a thin film of gold. To model the element, the approach used in calculating the classical Kretschmann configuration for volumetric optical structures was applied. The refractive index of the fiber is calculated using the Sellmeier equation, and the refractive index of the gold is determined using the Drude model. The simulation results are compared with experimentally obtained transmission spectra of fabricated samples of sensing elements. To validate the model, sensing elements of fiber-optic sensors with the following parameters were made: a multimode-fiber core diameter of 62.5 μm, a single-mode-fiber core diameter of 9 μm, and an SMF segment coated with a 50 nm gold film. Transmission spectra of the sensing elements in aqueous glucose solutions of various concentrations were obtained. It is demonstrated that the proposed model describes well the experimentally obtained transmission spectra of sensing elements based on MMF-SMF-MMF structures in the region of surface plasmon resonance. The proposed model can be used to optimize the design of the sensing element of refractometric fiber-optic sensors in order to increase their sensitivity, and it can also support the development of an algorithm for interrogating sensing elements based on fiber MMF-SMF-MMF structures.
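The two dispersion models named in the abstract can be sketched as follows. The Sellmeier coefficients are the well-known fused-silica set, and the Drude parameters for gold are approximate literature values; both are used here only for illustration, not taken from the paper.

```python
# Sketch of the two dispersion models: the Sellmeier equation for the
# silica fiber and the Drude model for the gold film. Coefficients are
# standard literature values, assumed for illustration.
import cmath
import math

def n_silica_sellmeier(lam_um):
    """Refractive index of fused silica via the 3-term Sellmeier equation."""
    B = (0.6961663, 0.4079426, 0.8974794)
    C = (0.0684043**2, 0.1162414**2, 9.896161**2)
    lam2 = lam_um**2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

def eps_gold_drude(lam_um, omega_p=1.37e16, gamma=1.05e14):
    """Complex permittivity of gold via the Drude model."""
    omega = 2 * math.pi * 299_792_458.0 / (lam_um * 1e-6)  # rad/s
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

lam = 0.633  # wavelength, um
n_fiber = n_silica_sellmeier(lam)
eps_au = eps_gold_drude(lam)
n_au = cmath.sqrt(eps_au)
print(f"n(silica) = {n_fiber:.4f}")
print(f"n(gold) = {n_au:.3f}")
```

The negative real part of the gold permittivity in the visible range is what allows a surface plasmon to be excited at the gold-analyte interface.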


Analysis of frequency-robust multivariable dynamical systems
Roman O. Omorov, Akunova Akylai, Akunov Taalaybek A.
We consider the problem of studying the sensitivity of ellipsoidal frequency estimates of the quality of multivariable dynamical systems to parameter variations. To solve the problem, we use the apparatus of sensitivity functions of the extreme elements of the singular value decomposition of real-valued transfer matrices. Combining the apparatus of frequency sensitivity with the state-space method allowed us to construct sensitivity models. On the basis of the obtained models, the ellipsoidal estimates of the frequency sensitivity functions for the state, output, and error of linear multivariable continuous systems have been determined in the form of the majorant and minorant of these functions. The singular value decomposition of matrices composed of frequency parametric sensitivity functions has been applied to the calculations. The obtained ellipsoidal estimates have the property of minimum sufficiency due to the substantial capabilities of the singular value decomposition of matrices. This approach made it possible to use the elements of the left singular basis corresponding to the extreme singular values to select, in the state, output, and error spaces, the subspaces characterized, for each frequency value, by the largest and smallest normal variation of the amplitude-frequency response. Using the right singular basis made it possible to identify the subspaces in the parameter space which produce the largest and the smallest normal variation of the amplitude-frequency response. The proposed approach has solved the “optimal nominal” problem: choosing the nominal value of the vector of primary physical parameters of the control object aggregates that delivers the smallest value of the ellipsoidal estimates of the frequency sensitivity functions for the multivariable controlled process.
Such parameters include the dimensions of various parts and the characteristics of their manufacturing accuracy, the physical properties of materials, as well as various values determining their design. The approach made it possible to compare the course of multidimensional controlled processes by ellipsoidal estimates of the frequency parametric sensitivity.
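As a schematic illustration of the role of extreme singular values (not the authors' sensitivity models), the majorant and minorant of a frequency response can be computed as the largest and smallest singular values of the transfer matrix at each frequency. The system below is an arbitrary stable toy example.

```python
# Illustrative sketch: extreme singular values of the frequency response
# H(jw) = C (jwI - A)^-1 B of a toy 2x2 state-space system give the
# majorant (sigma_max) and minorant (sigma_min) over frequency.
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # stable system matrix
B = np.eye(2)
C = np.eye(2)

def sv_bounds(omega):
    """Extreme singular values of H(j*omega)."""
    H = C @ np.linalg.inv(1j * omega * np.eye(2) - A) @ B
    s = np.linalg.svd(H, compute_uv=False)
    return s.max(), s.min()  # majorant, minorant

for w in (0.1, 1.0, 10.0):
    hi, lo = sv_bounds(w)
    print(f"w = {w:5.1f}: sigma_max = {hi:.4f}, sigma_min = {lo:.4f}")
```

The left singular vectors at each frequency identify the output directions attaining these extreme gains, which is the geometric idea behind the ellipsoidal estimates described above.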


Fractal micro- and nanodendrites of silver, copper and their compounds for photocatalytic water splitting
Sidorov Alexander Ivanovich, Pavel A. Bezrukov, Alexey V. Nashchekin, Nikonorov Nikolay V.
The results of an investigation of the morphology and photocatalytic properties of thin films in the form of dendrites of silver and copper, and of their compounds, synthesized by the substitution reaction are presented. The morphology and composition of the synthesized layers were examined by scanning electron microscopy. It was shown that already 2–3 s after the beginning of the reaction, nanoporous metal layers up to 1 μm thick form on the substrates. Silver layers consist of microcrystalline hexagonal plates and micro- and nano-dendrites. As the duration of the reaction increases, the layers become more compact, and the minimum pore size reaches 20 nm. In the case of the reaction with the copper salt, the formation of copper microdendrites takes place immediately. The internal quantum yield of water photocatalysis for silver and copper layers, as well as for metal-semiconductor layers, is 0.4–0.45 %. The obtained results can be used for the creation of large-surface photocathodes for photocatalytic water splitting in order to obtain hydrogen fuel.
Organic thin film transistors (OTFTs) are significant because their design processes are less complicated than those of conventional silicon technology, which requires complex photolithographic patterning techniques and high-temperature, high-vacuum deposition processes. The more complex procedures used in traditional Si technology can be replaced by low-temperature deposition and solution processing. OTFTs based on a single-layer dielectric medium are poor at reducing the leakage current between the source and drain channel due to the incompatible resistance of the dielectric medium. The paper presents a model of a tri-layer dielectric medium based on the organic semiconductor pentacene. In this tri-layer OTFT, three different dielectric media are used, namely SiO2, POM-H (PolyOxyMethylene-Homopolymer), and PEI-EP (PolyEthyleneImine–Epoxy resin), to reduce the leakage current and enhance the mobility between the source and drain channel. The parameter values, such as the drain current IDS, threshold voltage Vt, and mobility, for the designed tri-layer dielectric OTFT are evaluated and compared with the single-layer and bi-layer OTFT models. The attained values for the proposed OTFT model are a mobility of 0.0215 cm2/(V·s), a drain current of –4.44 mA at a gate voltage VG = –10 V and a drain voltage VDS = –2.5 V, and a threshold voltage Vt = 0.2445 V at VG = –10 V. These values exceed those of the single- and bi-layer dielectric OTFT models. Thus, the mathematical modeling of the designed tri-layer dielectric OTFT enhances the electrical characteristics relative to the other OTFT models.
The IR spectra of thin films of a mixture of carbon dioxide and water, obtained by the physical vapor deposition method, were studied in the temperature range of 11–180 K. Based on the results of the research, the formation of hydrates and clathrates was investigated. Several methods were used in the course of this research: mass spectrometry, IR spectroscopy, and optical analysis of the thin films formed. Not only the molecular composition but also the state of the structure of molecular mixtures can be determined via Fourier transform infrared spectroscopy (FTIR). Additional data were needed to confirm the emergence of certain structures of carbon dioxide and water mixtures; mass spectrometry and interference pattern analysis were utilized to obtain those data. Hydrate and gas hydrate structures of CO2 do form in the mixture of carbon dioxide and water; this was confirmed in the course of the experiments. The CO2 molecules are contained in their structures by the hydrate compounds formed, which prevents CO2 from sublimating at the sublimation temperature of free CO2 (93 K) at a pressure of P = 0.5 μTorr. Meanwhile, the sublimation temperature of CO2 molecules bound in hydrate structures becomes equal to 147–150 K. The ratio of CO2 and H2O concentrations was chosen to be 25 % and 75 %, respectively. For this ratio, the changes in the spectra and the results obtained via mass spectrometry indicate incomplete hydration of the mixture: some CO2 molecules remain free and sublimate at a lower temperature. It was found that the concurrent increase in the refractive index and decrease in the concentration of H2O from 100 % to 25 % indicate the growth of formations that are less dense than the amorphous structures of CO2 and H2O condensates.
The results obtained in the course of this research broaden the knowledge of the processes of clathrate and hydrate formation in mixtures of CO2 and H2O, the physical characteristics of their structures, and the changes in those characteristics depending on the way they are formed.


The use of modern video surveillance systems is associated with monitoring the activities of personnel and compliance with the technological process based on the analysis and processing of large amounts of video data. This leads to an increase in the cost of information storage and in the staff time spent searching for key events over long time periods. The problem of increasing the information value of stored data from video surveillance cameras based on frame filtering and entropy estimation is considered. An implementation of algorithms for processing and compressing information aimed at reducing the volume of stored video data is proposed. This implementation contributes to increasing the overall information value and the efficiency of video surveillance systems by optimizing the volume of stored information and increasing the share of useful information. To increase the information value of video data, a method is proposed that includes the use of modern video compression technologies, a frame filtering algorithm, and an evaluation of the processed video by the Shannon entropy metric. An analysis and comparison of existing video data compression algorithms are performed. An experiment was carried out which demonstrated the correlation between high entropy values and the information value of a frame; the frame filtering algorithm was successfully tested, increasing entropy by 5.4 times and reducing the duration of the video by 8 times. The use of video data compression methods and efficient codecs, for example H.265/HEVC, reduced the file size by 14.57 times compared to the original. The proposed method was tested on the problems of filtering, transmitting, and storing video data to increase the information value of the data and the productivity of analysis and information retrieval by reducing redundant, useless data fragments.
The advantage of the presented method is the removal of redundant frames based on motion analysis and entropy estimation of video data, combined with various approaches to reducing the volume of transmitted and stored information. The application of the method will increase the efficiency of data storage in various video surveillance systems (for logistics centers, warehouse complexes, and retail premises).
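A minimal sketch of the entropy-estimation step is given below. The abstract does not specify implementation details, so it is assumed here that frame informativeness is scored by the Shannon entropy of the gray-level histogram and that low-entropy frames are dropped.

```python
# Assumed sketch: score frames by the Shannon entropy of their 8-bit
# gray-level histogram and keep only frames above a threshold.
import numpy as np

def shannon_entropy(frame):
    """Shannon entropy (bits) of an 8-bit grayscale frame's histogram."""
    hist = np.bincount(frame.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def filter_frames(frames, threshold):
    """Keep only frames whose entropy exceeds the threshold."""
    return [f for f in frames if shannon_entropy(f) > threshold]

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128, dtype=np.uint8)            # uniform frame
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # detailed frame
kept = filter_frames([flat, noisy], threshold=4.0)
print(len(kept))  # the uniform, low-entropy frame is filtered out
```

In a real system the threshold would be tuned jointly with motion analysis, since a static but detailed scene also yields high histogram entropy.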
Attacks on web applications are a frequent vector of attack on information resources by attackers of various skill levels. Such attacks can be investigated through analysis of the HTTP requests made by the attackers. The possibility of identifying groups of attackers based on the analysis of the payload of HTTP requests marked by an IDS as attack events has been studied. The identification of groups of attackers improves the work of security analysts investigating and responding to incidents, reduces the impact of alert fatigue in the analysis of security events, and also helps in identifying attack patterns and the resources of intruders. Within the proposed method, groups of attackers are identified in a sequence of stages. At the first stage, requests are split into tokens by a regular expression based on the features of the HTTP protocol and of attacks that are often encountered and detected by intrusion detection systems. Then the tokens are weighted using the TF-IDF method, which gives matches of rare tokens a greater contribution when comparing requests. At the next stage, the main core of requests is separated based on their distance from the origin; thus, requests containing no rare tokens, whose coincidence would indicate the connectedness of events, are filtered out. Manhattan distance is used to measure distance. Finally, clustering is carried out using the DBSCAN method. It is shown that HTTP request payload data can be used to identify groups of attackers. An efficient method of tokenization, weighting, and clustering of the considered data is proposed, with DBSCAN used for the clustering step. The homogeneity, completeness, and V-measure of the clusterings obtained by various methods on the CPTC-2018 dataset were evaluated. The proposed method yields a clustering of events with high homogeneity and sufficient completeness.
It is proposed to combine the resulting clustering with clusters obtained by other highly homogeneous methods to obtain high completeness and V-measure while maintaining high homogeneity. The proposed method can be used in the work of security analysts in SOCs, CERTs, and CSIRTs, both in defending against intrusions, including APTs, and in collecting data on attackers’ techniques and tactics. The method makes it possible to identify patterns in the traces of tools used by attackers, which allows attribution of attacks.
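The pipeline stages named above (tokenization, TF-IDF weighting, Manhattan distance, DBSCAN) can be sketched as follows. The token pattern, the toy requests, and the DBSCAN parameters are illustrative guesses, not those used in the study.

```python
# Hedged sketch of the described pipeline: tokenize HTTP payloads,
# weight tokens with TF-IDF, cluster with DBSCAN under Manhattan
# distance. Parameters here are illustrative, not the authors' values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

requests = [
    "GET /index.php?id=1 UNION SELECT user,pass FROM users",
    "GET /index.php?id=2 UNION SELECT name,pwd FROM accounts",
    "GET /search?q=<script>alert(1)</script>",
    "GET /search?q=<script>document.cookie</script>",
]

# Simplified stand-in for the protocol-aware regular expression
# described in the abstract.
vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z0-9_]+", lowercase=True)
X = vectorizer.fit_transform(requests).toarray()

labels = DBSCAN(eps=3.0, min_samples=2, metric="manhattan").fit_predict(X)
print(labels)  # requests sharing rare tokens fall into the same cluster
```

Here the two SQL-injection payloads and the two XSS payloads form separate clusters because the rare tokens they share (UNION/SELECT vs. script) dominate the TF-IDF weights.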
Facial keypoints detection using capsule neural networks
Anton A. Boitsev, Volchek Dmitry G., Magazenkov Egor N., Nevaev Maxim K., Romanov Aleksei A.
The problem of detecting key points of the face is investigated. Existing approaches to solving this problem, usually divided into parametric and nonparametric methods, are considered. The study concludes that the best results today are demonstrated by approaches based on deep learning methods. Two solutions are proposed: a capsule network with dynamic routing and a deep capsule network. The data for the experiments are 10,000 generated faces taken from Kaggle, marked up using MediaPipe. A method of using capsule architectures in neural networks to solve the problem of detecting key points of the face is proposed. The method includes the use of segmentation based on the key points of the face recognized using MediaPipe. Delaunay triangulation was used to build the face mesh. The architecture of a deep capsule network incorporating semantic segmentation was proposed. Based on the marked-up data, experiments on the detection of key points using the developed capsule neural networks were performed. According to the test results, the loss function reached values in the range 2.50–2.90, and the accuracy reached values in the range 0.87–0.90. The proposed architecture can be used in technologies for comparing the face mesh geometry of a real person with that of a three-dimensional model as well as in further studies of capsule neural networks by researchers in the field of image processing and analysis.
Ensuring the security of critical information infrastructure facilities is a relevant and developing area of information security at both the national and global levels. Categorization of critical infrastructure objects is an integral part of a common and holistic security process. With a dynamically changing threat level, the process of determining the category of an object is still not optimal enough. Under the existing requirements of both Russian and international standards, the assessment of critical infrastructure facilities cannot always be carried out promptly and correctly; in addition, numerical estimates are not formed, and the objectivity of the assessment and of subsequent reassessment by independent experts is not ensured. This article presents an analysis of the current requirements in the field of categorization of critical infrastructure objects used in the Russian Federation. A comparative analysis of the national regulatory legal acts of the Russian Federation and the system of international standards in the field of IT security is presented. Regulation of the categorization processes of critical infrastructure objects is considered. The necessity of forming numerical values of significance criteria for the correct determination and subsequent independent evaluation (reassessment) of the category of critical infrastructure objects is substantiated. Recommendations for improving the process of categorizing critical infrastructure objects and forming numerical estimates are presented. The implementation of these recommendations will improve the accuracy, objectivity, and reliability of the process of creating modern information security systems.
The problem of assessing the security of a network infrastructure is considered. The aim of the work is to formalize a quickly computable network security metric intended for use in optimization problems aimed at rebuilding the network according to security requirements. Three metrics with varying degrees of detail are proposed to achieve this goal. To do this, a set of essential features of the network infrastructure has been formed. The level of detail of the metric allows taking into account terminal access as well as the actual structure of the network path from the subject to the access object. The proposed base metric was compared with metrics previously published by other authors. It is shown that the metric is sensitive to changes in essential network parameters, and the results of its calculation are consistent with the results of calculating other metrics. Using the metric, a network segmentation method based on the grouping of subjects and objects was evaluated. It is shown that this method can significantly increase the security of the network by combining similar subjects and objects into groups even in the absence of firewall rules. The proposed metrics can be used as a basis for methods of segmenting the network infrastructure and rebuilding the existing network according to security requirements. They do not depend on subjective assessment, and they also do not take into account the presence of known vulnerabilities, the closing of which affects security in general but does not reflect the security of the network interaction. The most significant advantage is a much faster calculation in comparison with analogues.
Kubernetes is a widely adopted open-source platform for managing containerized workloads and deploying applications in a microservices architecture. Despite its popularity, Kubernetes has faced numerous security challenges, and deployments using Kubernetes are vulnerable to security risks. The current solutions for detecting anomalous behavior within a Kubernetes cluster lack real-time detection capabilities, allowing hackers to exploit vulnerabilities and cause damage to production assets. This study aims to address these security concerns by proposing a new approach and a novel agent for feature collection for anomaly detection in the Kubernetes environment. It is proposed to use metrics (related to disk usage, CPU, and network) collected by the node exporter (Prometheus) directly from Kubernetes nodes. The simulation was conducted in a real-world production Kubernetes environment hosted on Microsoft Azure, with results indicating the agent's success in collecting 24 security metrics in a short amount of time. These metrics can be used to create a labeled time-series dataset of anomalies produced by microservices, enabling real-time detection of attacks based on the behavior of compromised nodes within the Kubernetes cluster. The proposed approach and the developed monitoring agent can be used to generate datasets for training anomaly detection models in the Kubernetes environment, based on artificial intelligence technologies, in real-time mode. The obtained results will be useful for researchers and specialists in the field of Kubernetes cybersecurity.
In modern elastic systems, an important task is to predict changes in load processes. Estimating the load change rate helps to adapt the system structure in advance to maintain the quality of user experience. Modern solutions pay little attention to the analysis of the load change rate, which directly affects how far in advance it is necessary to add nodes to or remove them from the computing process. In most cases, these trigger intervals are set to static, pre-set values. To determine the load process change rate, it is sufficient to solve the linear approximation problem over the interval of increase or decrease in the load function over time. The existing methods of linear approximation do not satisfy all the requirements of elastic system environments, which necessitates the development of a custom approximation method. The simplified linear approximation method ZFLAM is based on the calculation of the center of mass of the initial data set as well as the average relative deviation of the ordered points along the ordinate axis from each other. The novelty of the proposed method lies in its constant memory consumption combined with the absence of operations with quadratic dependencies, which makes it possible to satisfy all the requirements for methods operating in elastic system environments. A two-dimensional point generator has been developed which makes it possible to obtain a set of ordered points scattered relative to a given line. The developed generator makes it possible to evaluate the accuracy of the proposed approximation method relative to other methods by calculating the average resulting deviation of the generated points from a given straight line. It was revealed that, with a confidence probability of 0.95 and a maximum of 10,000 points in the original data set, the reduction in the approximation execution time due to the developed method reaches 23 %.
It was determined that, with a confidence probability of 0.95, the value of the mean deviation for both methods within the framework of the experiments is the same. The obtained results can be applied in the automatic scaling services of elastic systems in order to reduce the execution time of load process change rate forecasts. The developed method, in contrast to the least squares method, is free from the disadvantage associated with operations with quadratic dependencies, which makes it possible to use it more widely under the limited bit grid of some architectures.
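The abstract describes ZFLAM only at a high level; the sketch below is one plausible reading of it (an assumption, not the authors' algorithm), with the slope taken from the average deviation of successive ordered points along the ordinate axis and the line anchored at the centroid of the data.

```python
# Assumed reconstruction of a ZFLAM-style fit: O(n) time, O(1) extra
# memory, and no squared terms (unlike least squares). This is an
# illustrative guess at the method, not the published algorithm.
def zflam_like_fit(xs, ys):
    """Fit y = a*x + b from successive deviations and the centroid."""
    n = len(xs)
    sum_x = sum_y = 0.0
    dy = dx = 0.0
    for i in range(n):
        sum_x += xs[i]
        sum_y += ys[i]
        if i > 0:  # accumulate steps between consecutive ordered points
            dy += ys[i] - ys[i - 1]
            dx += xs[i] - xs[i - 1]
    a = dy / dx                     # slope from mean successive deviation
    b = sum_y / n - a * sum_x / n   # line passes through the centroid
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]     # roughly y = 2x + 1
a, b = zflam_like_fit(xs, ys)
print(f"a = {a:.3f}, b = {b:.3f}")
```

The absence of squares avoids overflow on narrow fixed-point bit grids, which is the design constraint the abstract emphasizes.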


Role discovery in node-attributed public transportation networks: the study of Saint Petersburg city open data
Yuri V. Lytkin, Petr V. Chunaev, Timofey A. Gradov, Anton A. Boytsov, Irek A. Saitov
The work presents results of modeling the Public Transportation Networks (PTNs) of Saint Petersburg (Russia) and highlights the roles of stations (stops) in these networks. PTNs are modeled using a new approach, previously proposed by the authors, based on weighted networks with node attributes. The nodes correspond to stations (stops) of public transport, grouped according to their geospatial location, while the node attributes contain information about the social infrastructure around the stations. Weighted links integrate information about the distance and the number of transfers in the routes between the stations. The role discovery is carried out by clustering the stations according to their topological and semantic attributes. The paper proposes a software framework for solving the problem of discovering roles in a PTN. The results of its application are demonstrated on a new set of data about the PTNs of Saint Petersburg. The significant roles of the nodes of the specified PTNs were discovered in terms of both topological and infrastructural features. The overall effectiveness of the PTNs was assessed. The revealed transportation and infrastructural shortcomings of the PTNs of Saint Petersburg can be considered by the city administration to improve the functioning of these networks in the future.
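Role discovery by clustering stations on combined topological and semantic features can be sketched schematically as follows. The feature set, the clustering algorithm (k-means), and the toy data are assumptions for illustration, not the authors' framework.

```python
# Schematic sketch (assumed, not the authors' framework): cluster
# stations on standardized topological + infrastructural features to
# assign roles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: stations; columns: [weighted degree, betweenness, POIs nearby]
features = np.array([
    [12.0, 0.30, 45],   # hub near dense infrastructure
    [11.0, 0.28, 50],
    [3.0,  0.02, 5],    # peripheral stop
    [2.0,  0.01, 8],
])
X = StandardScaler().fit_transform(features)
roles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(roles)  # stations with similar roles share a label
```

Standardizing the features first keeps the infrastructure counts from dominating the purely topological attributes in the distance computation.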
The possibility of using the digital traces of online social network users, taking community themes as an example, to support decision-making in career guidance diagnostics is investigated. Statistical analysis was performed: descriptive statistics, the z-test for comparing two groups, and regression analysis. The themes of users’ subscriptions to various communities available in the social network, as well as the gender of the respondent and the number of friends indicated in the profile, were analyzed as digital user traces. The socio-professional orientation of the personality was assessed based on the results of the Holland test (as edited by G.V. Rezapkina). The correlation between users’ digital traces, expressed by the themes of subscriptions, and key indicators of socio-professional orientation reflected in the results of the Holland test was analyzed based on a pilot study conducted through an online social networking application. The statistical analysis confirmed the hypothesis that user interests, in the form of community themes, are related to the results of the Holland test. The hypotheses of existing differences between the groups of men and women in the studied attributes (the results of the Holland test and the leading themes of community subscriptions) were confirmed. By means of regression analysis, among the group of women a correlation was found between the prevalence of the community theme “Education” and the key indicators A (Artistic), E (Enterprising), and I (Intellectual); between the prevalence of the theme “Lifestyle” and the indicators C (Conventional), I, A, and E; and between “Mass Media” and indicator C. Among the group of men, a correlation was found between the prevalence of the “Sports” theme and indicator E. The results of the work expand the space of potential predictors of users’ vocational orientation.
A foundation has been obtained for large-scale research in quantifying and constructing predictive models of key occupational indicators based on users’ subscription topics. The results are useful for developing an integrated approach to creating a recommendation system for user career guidance.
Blindness detection in diabetic retinopathy using Bayesian variant-based connected component algorithm in Keras and TensorFlow
Anantha Babu Shanmugavel, Murali Subramanian, Vijayan Ellappan, Anand Mahendran, Ramanathan Lakshmanan
The neurodegenerative eye disease glaucoma is caused by an increase in eye pressure inside the retina. As the second-leading cause of blindness in the world, it can cause total blindness if an early diagnosis is not obtained. Regarding this fundamental problem, there is a huge need to create a system that can function well without a lot of equipment or highly qualified medical personnel and that takes less time. The proposed modeling consists of three stages: pre-training, fine-tuning, and inference. The probabilistic pixel identification (Bayesian variant) predicts the severity of Diabetic Retinopathy (DR), which is diagnosed by the presence of visual cues such as abnormal blood vessels, hard exudates, and cotton wool spots. The article combines machine learning, deep learning, and image processing methods to predict the diagnosis from images. The input picture is validated using the Bayesian variant-based connected component architecture, and the brightest spot algorithm is applied to detect the Region of Interest (ROI). Moreover, the optic disc and optic cup calculated from the training sample are segmented, with fundus photography grades ranging from 0 to 4, using the VGGNet16 architecture and the SMOTE algorithm to detect the DR stages of images. The proposed ensemble-based model combining ResNet with EfficientNet produces an excellent accuracy score of 93 % and a predicted-image Kappa coefficient (p < 0.01) of 0.755 on the fundus retina image dataset.
In this paper, we evaluated the Document Attention Network (DAN), the first end-to-end segmentation-free architecture, on historical Russian documents. The DAN model jointly recognizes both text and layout from whole documents: it takes whole documents of any size as input and outputs the text as well as logical layout tokens. For comparison purposes, we conduct our experiments on the Digital Peter dataset, as it has previously been recognized at line level. The dataset consists of documents from the manuscripts of Peter the Great; ground truths are represented according to a sophisticated XML schema which enables an accurate, detailed definition of layout and text regions. We achieved good results at page level: 18.71 % Character Error Rate (CER), 39.7 % Word Error Rate (WER), 14.11 % Layout Ordering Error Rate (LOER), and 66.67 % mean Average Precision (mAP).
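The first metric reported above, the Character Error Rate, is the Levenshtein edit distance between the predicted and ground-truth texts divided by the ground-truth length. A minimal sketch for checking such scores:

```python
# Character Error Rate: Levenshtein edit distance between hypothesis and
# reference, normalized by the reference length. Two-row dynamic
# programming keeps memory at O(len(hypothesis)).
def cer(reference, hypothesis):
    """CER = edit_distance(ref, hyp) / len(ref)."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / m

print(f"{cer('terahertz', 'terahenz'):.3f}")
```

WER is computed the same way over word tokens instead of characters, which is why it is typically the larger of the two numbers.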
Intelligent clinical decision support for small patient datasets
Alexandra S. Vatyan, Golubev Alexander A. , Gusarova Natalya Fedorovna, Dobrenko Natalia V. , Zubanenko Aleksei A. , Kustova Ekaterina S. , Anna A. Tatarinova, Ivan V. Tomilov, Grigory F. Shovkoplias
Ways of substantiating doctors' clinical decisions in the absence of clinical treatment protocols are considered. A comparative evaluation of various statistical methods for ranking clinical symptoms by their significance for predicting the outcome of the disease was performed on a small sample of patients with COVID-19 and a history of cardiovascular diseases. The data set (141 patients, 81 factors) was formed from the electronic medical records of patients of the Federal State Budgetary Institution “National Medical Research Center named after V.A. Almazov”. A subset of controllable risk factors (51 factors) was identified. Descriptive statistics methods (one-way ANOVA, Mann-Whitney and χ² tests) and dimensionality reduction methods (univariate linear regression combined with multiple logistic regression, generalized discriminant analysis, and various decision tree algorithms) were used to rank the factors. To compare the ranking results and evaluate their statistical stability, Kendall's correlation was used, visualized as a heat map and a positional graph. It was established that the use of descriptive statistics methods is justified when ranking on small patient samples. It is shown that an ensemble of ranking results may be statistically inconsistent. It is concluded that the positions of the same features obtained by ranking them as part of the complete set and as part of a subset of features do not match; therefore, when choosing a statistical processing method for expert evaluation, the meaningful formulation of the problem should be taken into account. It is also shown that the statistical stability of ranking on small samples depends on the number of features taken into account, and this dependence differs significantly between ranking methods.
The proposed method of intellectual support and verification of clinical decisions, in terms of choosing the most significant clinical signs, can be used to select and justify patient management tactics in the absence of clinical protocols.
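The Kendall correlation used above to compare ranking results can be sketched as follows; the feature names are hypothetical and ties are assumed absent:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings of the same items (no ties).

    rank_a and rank_b map each item to its rank position."""
    items = list(rank_a)
    concordant = discordant = 0
    for x, y in combinations(items, 2):
        s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
        if s > 0:
            concordant += 1
        else:
            discordant += 1
    n_pairs = len(items) * (len(items) - 1) // 2
    return (concordant - discordant) / n_pairs

# Two methods ranking four hypothetical clinical features
m1 = {"age": 1, "crp": 2, "ldh": 3, "bmi": 4}
m2 = {"age": 2, "crp": 1, "ldh": 3, "bmi": 4}
print(kendall_tau(m1, m2))  # one swapped pair out of six -> 4/6
```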


The possibilities of increasing the readiness of a redundant computer system for the timely execution of requests critical to service delays are investigated. A fault-tolerant computer cluster is considered in which the nodes are duplicated computing systems combining computer nodes and memory nodes. Two-stage recovery of memory nodes is assumed: first physical, then informational, carried out using the resources of the computing nodes. The novelty of the approach is that, for systems with a limit on the allowable service time of functional requests, the impact of recovery disciplines on system readiness is evaluated for various options of dividing computing resources between information recovery after memory failures and the performance of the required functions. The reliability of the computer systems under study is assessed not only by the probability of their readiness to perform functional tasks (the availability factor) but also by the probability of the system performing tasks in a timely manner. The choice of disciplines for recovery and for servicing the flow of functional requests is justified on the basis of Markov models. The proposed models take into account the impact of dividing computing resources between the joint performance of the required functions and the information recovery of memory implemented after its physical recovery. The choice of computer system maintenance disciplines based on the proposed Markov model aims at a compromise between increasing the availability factor and increasing the probability of timely execution of the incoming flow of functional requests.
Options for the distribution (division) of the computing resources preserved after failures between solving functional requests (the required functions) and the information recovery of memory, implemented after its physical recovery, are justified. Based on the proposed Markov models, the dependence of system readiness for the timely execution of requests on these distribution options is investigated as a function of the allowable waiting time for functional requests and the intensity of their traffic. The influence of balancing the traffic of functional tasks between computing nodes on system readiness is analyzed, taking into account the options for their possible joint use for the information recovery of memory nodes after their physical recovery. The existence of an optimal share of traffic distribution between computing nodes is shown, taking into account the options for dividing their resources between servicing functional requests and restoring information in memory nodes. The results obtained can be used to justify the choice of disciplines for servicing functional requests and for recovery after failures in fault-tolerant cluster systems critical to delays in the execution of functional requests.
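As an illustration of the Markov modeling idea (a deliberately reduced sketch, not the paper's full model, which also accounts for request traffic and resource division), consider a cyclic three-state chain with two-stage recovery:

```python
def availability(lam, mu_phys, mu_info):
    """Steady-state availability of a cyclic three-state Markov chain:
    up --lam--> physically failed --mu_phys--> information recovery --mu_info--> up.

    In steady state the probability flow around the cycle is equal,
    so pi_up = 1 / (1 + lam/mu_phys + lam/mu_info)."""
    return 1.0 / (1.0 + lam / mu_phys + lam / mu_info)

# Hypothetical rates (per hour): one failure per 1000 h, physical repair
# in 2 h on average, information recovery in 8 h on average.
print(availability(1e-3, 1 / 2, 1 / 8))  # ~0.99
```

Extending information recovery (smaller mu_info, e.g. because computing nodes keep serving functional requests) visibly lowers the availability factor, which is exactly the trade-off the disciplines above negotiate.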
Long-span shell structures are widely used in various industries. To ensure safe modes of operation, it becomes necessary to develop calculation methods and to study the buckling of shell structures under applied load. Traditionally, these data are obtained using analytical and semi-analytical methods. This paper describes the process of determining the critical buckling loads and obtaining the “load-deflection” dependences taking large deformations into account. For this purpose, a method for analyzing the buckling of orthotropic shell structures based on the functionality of finite element software systems is proposed. A computational model of a cylindrical shell structure based on the finite element method is presented in the ANSYS Mechanical APDL 2020 software package. Computational experiments were carried out and the buckling of structures made of various materials was compared: steel S345, plexiglass (PMMA), CFRP M60J/Epoxy, and GFRP T-10/UPE22-27. It is shown that the ANSYS Mechanical APDL 2020 software package makes it possible to obtain the data needed for the “load-deflection” dependences; for the analysis of large deformations, it can be used only with a sufficiently detailed description of the calculation parameters and of the assumptions made for different materials. The values of the critical uniformly distributed load are obtained, and graphs of the dependence of the deflection on the load are presented. The deformation process is studied taking into account the geometric nonlinearity and the self-weight of the shell structures. The calculation results can be used to automate the calculation of shell structures as an alternative to analytical methods.
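For context, analytical estimates of this kind are often used to verify finite element results. A classical formula for the critical axial buckling stress of a thin isotropic cylindrical shell (the isotropic case only, not the orthotropic problem studied in the paper; the material values are illustrative) is sketched below:

```python
import math

def axial_buckling_stress(E, t, R, nu):
    """Classical critical axial buckling stress of a thin isotropic
    cylindrical shell: sigma_cr = E * t / (R * sqrt(3 * (1 - nu^2)))."""
    return E * t / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))

# Steel shell: E = 206 GPa, nu = 0.3, wall t = 10 mm, radius R = 1 m
sigma = axial_buckling_stress(206e9, 0.01, 1.0, 0.3)
print(f"{sigma / 1e6:.1f} MPa")
```

Real shells buckle well below this theoretical value because of imperfections, which is one reason nonlinear finite element analysis with large deformations is needed.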
An analysis of handover quality in radio communication networks of high-speed railway transport is given. The model parameters that affect the probability of successful handover are considered, and the possibility of using public LTE networks and private p-LTE networks in railway radio communication networks is analyzed. The analysis is based on an analytical method that determines the dependence of handover quality on the selected frequency range and the number of subcarriers of the OFDM signal. Possible parameters of public communication networks applicable to railway transport are considered and analyzed. It is shown that the current frequency ranges and channel parameters of the public communication networks of Russian operators give unsatisfactory results for high-speed trains. It is demonstrated that at train speeds up to 50 m/s (180 km/h), the bandwidth of the LTE signal should be at least 20 MHz for the 800 MHz frequency range and at least 5 MHz for the 450 MHz frequency range. The parameters of the LTE 1800 MHz and 350 MHz bands, which are allocated for use in railway transport, are considered and analyzed. It is shown that for high-speed trains with speeds up to 70 m/s (252 km/h), it is necessary to use a range no higher than 350 MHz. The obtained results can be used to substantiate the technical characteristics of the railway radio communication network for trains with different speeds.
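The sensitivity of OFDM to train speed can be illustrated by comparing the maximum Doppler shift with the 15 kHz LTE subcarrier spacing; this toy calculation is my own illustration, not the analytical method of the paper:

```python
def doppler_shift(speed_mps, carrier_hz, c=3.0e8):
    """Maximum Doppler shift f_d = v * f / c for a receiver moving at speed v."""
    return speed_mps * carrier_hz / c

# Train at 70 m/s; the LTE OFDM subcarrier spacing is 15 kHz.
SUBCARRIER_HZ = 15e3
for f_carrier in (350e6, 800e6, 1800e6):
    fd = doppler_shift(70, f_carrier)
    print(f"{f_carrier / 1e6:.0f} MHz: Doppler {fd:.0f} Hz "
          f"({100 * fd / SUBCARRIER_HZ:.2f} % of subcarrier spacing)")
```

The Doppler shift, and hence the inter-carrier interference, grows linearly with carrier frequency, which is consistent with the conclusion that lower bands suit higher train speeds.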
Comparative performance analysis of DVR & DSTATCOM for distributed generation with gravitational search algorithm
Bhavya Kanchanapalli , Rama Rao Pokanati Veera Venkata , Ravi Srinivas Lanka
Progress in power electronic converters has led to the development of various protection devices for the distribution system, as well as to an assortment of flexible transmission devices aimed at enhancing the stability of the system throughout a variety of power quality issues and, furthermore, at enabling flexible, uninterrupted power transmission during disturbances. This paper examines the employment of two Custom Power Devices, namely the Dynamic Voltage Restorer and the Distribution Static Compensator, for dealing with various power quality issues associated with distributed generation systems. The paper also analyzes the performance of the proposed Custom Power Devices with various algorithms, such as the gravitational search algorithm, the BAT algorithm, and ant colony optimization, for improving the stability of the power system. The proposed system has been tested with various distributed systems and fault conditions, and an assessment has been performed among the different algorithms in terms of supply voltage, supply current, active power, reactive power, and power factor. The design and analysis of the entire system were executed using MATLAB/Simulink.
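The gravitational search algorithm mentioned above can be sketched as follows: agents attract each other with a force proportional to their fitness-derived masses, under a gravitational constant that decays over iterations. This is a minimal minimization example on the sphere function; the parameter values are illustrative, not those used in the paper:

```python
import math, random

def gsa(objective, dim, bounds, n_agents=20, iters=200, g0=100.0, alpha=20.0):
    """Minimal gravitational search algorithm minimizing `objective`
    over the box [lo, hi]^dim."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fits = [objective(x) for x in X]
        best, worst = min(fits), max(fits)
        if best < best_f:
            best_f, best_x = best, list(X[fits.index(best)])
        # Masses: better (lower) fitness -> heavier agent
        if best == worst:
            m = [1.0] * n_agents
        else:
            m = [(worst - f) / (worst - best) for f in fits]
        total = sum(m) or 1.0
        M = [mi / total for mi in m]
        G = g0 * math.exp(-alpha * t / iters)   # gravitational constant decays
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                R = math.dist(X[i], X[j]) + 1e-9
                for d in range(dim):
                    # acceleration a_i = sum_j rand * G * M_j * (x_j - x_i) / R
                    acc[d] += random.random() * G * M[j] * (X[j][d] - X[i][d]) / R
            for d in range(dim):
                V[i][d] = random.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f

random.seed(1)
x, f = gsa(lambda p: sum(v * v for v in p), dim=2, bounds=(-5.0, 5.0))
print(f)  # close to 0 for the sphere function
```

In the paper the objective would instead score controller parameters of the DVR or DSTATCOM against the measured power quality indices.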
Estimation of the moments of a quantized random variable
Lomakin Mikhail I., Dokukin Alexander V.
A significant part of the research on the quantization of random variables is devoted to practical aspects of quantization that is optimal in the sense of information filling. For these purposes, certain quantitative characteristics of quantized random variables are used, such as the mathematical expectation, variance, and mean square deviation. To determine these characteristics, well-known parametric distributions are, as a rule, used: uniform, exponential, normal, and others. In real situations, however, it is often not possible to identify the initial parametric distribution from the available statistical information. In this paper, a nonparametric model is proposed for determining such numerical characteristics of a quantized random variable as its higher-order initial moments. The problem of estimating the higher-order initial moments of a quantized random variable under incomplete data, represented by small samples, is mathematically formalized as an optimization model for a definite integral of a piecewise continuous function satisfying certain conditions. The final estimates of the higher-order initial moments are found as the extreme (lower and upper) estimates of this integral over the set of distribution functions whose given moments equal the sample moments of the quantized random variable. A model of the higher-order initial moments of a quantized random variable is presented in the form of a definite integral of a piecewise continuous function, and, in the general case, the problem of finding the extreme (lower and upper) estimates of these moments over a set of distribution functions with given moments is solved. Examples of finding higher-order initial moments and of optimal quantization of a random variable are given.
The obtained results can be used by specialists in evaluating and optimizing the quantization of various information represented by random signals.
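The sample initial (raw) moments that serve as constraints in the model above can be computed directly from a quantized sample; the uniform quantizer and the data below are illustrative:

```python
def quantize(x, step):
    """Uniform quantization: round to the nearest multiple of `step`."""
    return step * round(x / step)

def raw_moment(sample, k):
    """k-th sample initial (raw) moment: the mean of x^k."""
    return sum(x ** k for x in sample) / len(sample)

data = [0.12, 0.47, 0.83, 1.31, 1.64]
q = [quantize(x, 0.5) for x in data]   # -> [0.0, 0.5, 1.0, 1.5, 1.5]
print(raw_moment(q, 1), raw_moment(q, 2))  # 0.9 1.15
```

In the paper these sample moments pin down the set of admissible distribution functions over which the extreme moment estimates are sought.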
Existing methods and equipment for determining the dynamic characteristics of devices and systems are considered, and a new method for estimating the dynamic error of navigation devices is proposed. It makes it possible to simplify the experimental assessment of the dynamic error of serial products and to evaluate this error, using test equipment, under real disturbing influences corresponding to operating conditions. The method is based on the device under test measuring pseudo-random test excitations reproduced by the stand in a given frequency spectrum corresponding to the operating conditions of the instrument. The variance of the dynamic error of the device under study is then determined as the area under the graph of its power spectral density. To implement the method, it is proposed to use a specialized stand that can reproduce oscillations in a given frequency spectrum. The results of applying the developed method to estimating the dynamic errors of an electronic inclinometer are presented; the results of the experimental studies are consistent with the results of field tests obtained earlier. The developed method reduces the time for estimating the dynamic error of sensors and devices to 15–20 minutes, because it does not require measurements at each frequency separately, and also makes it possible to evaluate the error of devices in real operating modes.
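The identity behind taking the variance as the area under the power spectral density can be illustrated with the discrete Parseval theorem; this is a toy pure-Python sketch, not the stand's processing software:

```python
import cmath, random

def periodogram_variance(x):
    """Variance of a signal recovered from its spectrum: by Parseval's
    theorem, var(x) = (1/N^2) * sum over k != 0 of |X_k|^2, where X is
    the DFT of x (the k = 0 bin carries the mean and is excluded)."""
    n = len(x)
    var = 0.0
    for k in range(1, n):
        Xk = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        var += abs(Xk) ** 2
    return var / n ** 2

random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(64)]
mean = sum(signal) / len(signal)
direct = sum((s - mean) ** 2 for s in signal) / len(signal)
print(abs(periodogram_variance(signal) - direct) < 1e-8)  # True
```

Summing the spectral bins is the discrete counterpart of integrating the area under the PSD graph.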
Common methods of designing tanks for the transportation of liquefied natural gas do not take into account the specifics of gas carrier operation under conditions of partial filling of cryogenic tanks. A new method for designing type-C tanks is proposed, based on solving the problem of increasing the volume of liquefied natural gas transported by small-scale inland carriers. The method relies on a number of limiting parameters: the minimum allowable ventless operation time, the allowable values of the ship's draft, and the actual duration of voyages between neighboring consumers. It allows optimizing the type, shape, wall thickness, and heat insulation thickness of a cryogenic tank and is aimed at fuller use of the ship's hull dimensions. This is achieved by varying the diameter, the distance between the centers of the bi-lobe tank, the insulation thickness, and the maximum allowable working pressure. An increase in tank volume is achieved by coordinating such parameters as the maximum allowable draft of the vessel, the minimum time of ventless storage, and the time of ventless operation under partial filling. The calculation of the ventless operation time is determined by the operating conditions of type-C tanks. The calculation of the heat ingress into the tank takes into account the contact area of the liquefied gas and its vapors with the metal wall of the tank. The calculations drop the assumption of thermal equilibrium between the liquid and vapor fractions, which makes it necessary to account for heat transfer from vapor to liquid. The implementation of the method is demonstrated on a model of a two-way river-sea type vessel. It is shown that optimizing tank parameters according to the proposed criteria can increase the volume of transported natural gas by more than 4 %.
The method can be used in the development of new, and the modernization of existing, vessel projects for the transportation of liquefied natural gas operating in the basins of the Lena and Yenisei rivers in Eastern Siberia. The described method can also be used in the design of road and rail tanks as well as small-scale bullet tanks for liquefied natural gas.
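The geometry behind varying the lobe diameter and the distance between the centers of the bi-lobe tank can be illustrated with an idealized cross-section of two equal overlapping circles; this simplified model is my own sketch, not the paper's design procedure:

```python
import math

def bilobe_cross_section(r, d):
    """Cross-section area of an idealized bi-lobe tank: two circles of
    radius r whose centers are d apart (0 < d <= 2r). The area is two
    full circles minus the lens-shaped overlap, counted once."""
    lens = (2 * r * r * math.acos(d / (2 * r))
            - (d / 2) * math.sqrt(4 * r * r - d * d))
    return 2 * math.pi * r * r - lens

def bilobe_volume(r, d, length):
    """Volume of the prismatic (cylindrical) part: cross-section times length."""
    return bilobe_cross_section(r, d) * length

# As d -> 2r the lobes separate and the area tends to two full circles
print(round(bilobe_cross_section(1.0, 2.0), 6))  # 2*pi ~ 6.283185
print(round(bilobe_volume(3.0, 4.0, 20.0), 1))
```

Increasing the center distance d enlarges the cross-section toward two full circles, while the hull breadth and draft limits cap how far r and d can grow, which is the trade-off the proposed method optimizes.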
© 2001-2024 Scientific and Technical Journal of Information Technologies, Mechanics and Optics. All rights reserved.