Summaries of the Issue


Polymer composition with phenanthrenequinone for recording relief holographic gratings
Uladzimir V. Mahilny, Edhar A. Khramtsou, Alexei P. Shkadarevich
The formation of periodic thickness reliefs in photosensitive polymer layers after recording holographic gratings in them, with the material deformations stimulated by reversible plasticization in a non-solvent liquid, is considered. The phenomenon was studied for a composition of a copolymer with side anthracene groups and phenanthrenequinone. Phenanthrenequinone transfers the energy of electronic excitation to oxygen molecules entering through the open surface of the polymer layer, which then cause the oxidation of the anthracene fragments. Holographic gratings with a period of 2–5 μm were recorded by laser radiation at a wavelength of 532 nm in layers about 1 μm thick. The photoreliefs were formed during the subsequent swelling of the layer in a hydrocarbon developer medium. The photosensitized oxidation of the anthracene groups of the new polymer under optical radiation in the spectral range 408–532 nm was studied using electronic absorption spectra. It is shown that moving the excitation wavelength closer to the long-wavelength absorption maximum of phenanthrenequinone (410 nm) increases the sensitivity of the material layer by a factor of 15 compared to layers with methylene blue as a photosensitizer. It has been experimentally established that the amplitude of the weak periodic deformation reliefs (height less than 0.01 μm) that appear immediately after the recording of holographic gratings increases many times during treatment of the layer with liquid hydrocarbon. Their maximum amplitude reaches 25 % of the thickness of the recording layer. Presumably, the deformation of the inhomogeneously irradiated layer is stimulated by the transfer of the polymer material into a highly elastic state during its swelling. The photoreliefs are stable after drying.
Their strength can be increased by photocrosslinking the material through photodimerization of residual anthracene groups under uniform irradiation with light at a wavelength of 365 nm. The non-sinusoidality of the photorelief limits the diffraction efficiency achievable in total reflection to values below 0.20. The studied polymer composition can be used to form relief-phase diffractive optical elements with radiation in the blue-green region of the spectrum, available from a number of high-power laser sources.
Modern approaches to the application of mathematical modeling methods in biomedical research
I. V. Krasnikov, Alexey Yu. Seteikin, Bernhard Roth
This paper presents a brief overview of the main approaches to mathematical modeling of the interaction of optical radiation with biological tissues. For light propagation in tissue, the Monte Carlo method approximates the solution of the radiative transfer equation. This is done by sampling the set of all possible trajectories of light quanta (photon packets) as they pass through the tissue. Such a stochastic model makes it possible to simulate the propagation of light in a turbid (scattering) medium. The main types of interaction between photons and tissue are considered: scattering, absorption, and reflection/refraction at the boundary of the medium. The algorithm of the method is based on the statistical approximation of the estimated parameters instead of using non-linear functional transformations. Efficient methods for modeling the problem of Raman spectroscopy in turbid media are shown, taking into account the parameters of the detector and the sample size. Two fundamental approaches to the numerical simulation of Raman scattering are considered. Based on data from open literature sources, a variant of modeling Raman scattering in normal multilayer human skin in the near-infrared wavelength range is shown. The Raman spectra of ex vivo normal skin tissue sections are presented to quantify various intrinsic microspectral properties of different skin layers. The reconstructed Raman spectrum of the skin is compared with clinically measured skin spectra in vivo. Overall good agreement between the simulated process and experimental data is shown. The possibility of using the sequential Monte Carlo method for data processing in correlation wide-field optical coherence tomography for the study of biological objects is shown.
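The photon-packet random walk at the core of such Monte Carlo simulations can be sketched in a few lines. This is a deliberately minimal 1D illustration with assumed absorption and scattering coefficients, not the multilayer skin model discussed in the paper:

```python
import math
import random

def transmittance_mc(mu_a, mu_s, thickness, n_packets=5000, seed=1):
    """Crude 1D Monte Carlo estimate of slab transmittance.

    mu_a, mu_s: assumed absorption and scattering coefficients (1/mm).
    thickness: slab thickness (mm).
    """
    mu_t = mu_a + mu_s            # total interaction coefficient
    albedo = mu_s / mu_t          # fraction of packet weight kept per event
    rng = random.Random(seed)
    transmitted = 0.0
    for _ in range(n_packets):
        z, weight, direction = 0.0, 1.0, 1.0
        while weight > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t   # sampled free path
            z += direction * step
            if z > thickness:
                transmitted += weight     # packet leaves through the far side
                break
            if z < 0.0:
                break                     # back-scattered out of the slab
            weight *= albedo              # partial absorption at the event
            direction = rng.choice((-1.0, 1.0))   # isotropic 1D scattering
    return transmitted / n_packets
```

As expected physically, a thicker slab transmits less; the full 3D method adds angular scattering phase functions and layered boundary refraction on top of this skeleton.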
The results of measuring the surface depth of a test object using digital holography are presented. The resulting image was compared with a model based on the calibration slide documentation. In the presented holographic microscope, a lens with a geometric phase effect is used instead of an eyepiece; it converts a beam with linear polarization into a pair of beams with circular polarizations (diverging and converging). The parallel phase-shift method was used to obtain the phase distribution. Using a polarization camera, four interferograms corresponding to four different linear projections of the interfering waves with right and left circular polarizations were recorded in one exposure. Holograms of a phase object (a stage micrometer) were obtained, from which the distribution of the phase delay introduced by the object was reconstructed by the parallel phase-shift method. To correct aberrations, the recorded phase incursion of the illuminating wave, i.e., the experimentally obtained phase of the wavefront without the object, is subtracted. The developed digital holographic phase microscope based on a geometric phase lens and a polarization camera makes it possible to correctly visualize the surface relief profile. The microscope can be used as a tool for monitoring the state of biological objects exposed to external effects.
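Per pixel, the four-interferogram reconstruction reduces to the standard four-step phase-shifting formula. A minimal sketch with synthetic intensities (assumed background A, contrast B, and phase phi; not the camera pipeline described above):

```python
import math

def four_step_phase(i0, i90, i180, i270):
    """Recover the interference phase from four pi/2-shifted intensity samples.

    Standard four-step phase-shifting formula:
        phi = atan2(I270 - I90, I0 - I180)
    """
    return math.atan2(i270 - i90, i0 - i180)

# Synthetic check: I_k = A + B*cos(phi + k*pi/2) for an assumed phase phi.
A, B, phi = 2.0, 1.0, 0.7
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*samples)
```

The polarization-camera scheme records all four shifted samples in a single exposure, which is what makes the method "parallel".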


The paper presents studies of a color separation system based on a developed color triangle for scientific research in microscopy, which will allow identifying genetic or chemical deviations of the samples under study by an accurate change in color. The color triangle covers the entire visible range and is referenced to the physiological RGB and XYZ colorimetric systems. Based on the method of converting color spaces, the color-matching (addition) curves of the developed systems were found. Based on the curves, sets of color-separating light filters were selected to fit the shapes of these curves for the selected monochrome camera. Three sets are presented. An analytical study of these sets was carried out, and one optimal set was selected. An analytical study of this system is presented in the form of mathematical modeling with 14 control colors from the Munsell atlas. The selected set was studied experimentally on a developed optoelectronic setup placed in a black box to exclude stray light and color flare. An important part of the setup is the reflective screen: its location follows the lighting/observation recommendations of the International Commission on Illumination for colorimetric measurement of samples. For an objective analysis of the measurements, reference test objects were selected: standardized colored optical glasses. The study was based on the evaluation of glass groups (yellow, yellow-green, green, blue-green) since the work has expanded the color space in the direction of the selected colors to obtain color accuracy. Previously, in an analytical study of modern color separation systems, the author found that the best color-change value, 0.009, was obtained with a wide color triangle and the worst, 0.04, with a small one. Thus, it has been shown that the larger the coverage of the color triangle, the smaller the change in color. The values obtained for the developed color separation system are better than those of modern systems: 0.0088 on average.
During the mathematical modeling of the experiment, the change in color averaged 0.016; the practical result averaged 0.027. The obtained parameters and characteristics will be taken into account when introducing the developed color separation system into a monochrome digital microscope to improve color rendering in microscopy.
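Color-space conversion of the kind used when deriving the addition curves can be illustrated with the textbook linear-sRGB-to-XYZ (D65) matrix. This is the standard transform, shown only as an example of the conversion step; it is not the authors' custom RGB system:

```python
# Standard linear sRGB -> CIE XYZ (D65) matrix; rows yield X, Y, Z.
SRGB_TO_XYZ = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)

def rgb_to_xyz(r, g, b):
    """Convert linear sRGB components (0..1) to CIE XYZ tristimulus values."""
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in SRGB_TO_XYZ)

# Reference white (1, 1, 1) maps to the D65 white point.
x, y, z = rgb_to_xyz(1.0, 1.0, 1.0)
```

A custom color triangle would replace this matrix with one derived from its own primaries, but the mechanics of the conversion are the same.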
The article proposes a concept of obtaining images on the basis of which three-dimensional models of objects can be created. In particular, the images contain the data necessary to recreate the three-dimensional form of an object. Such data include the length and width of the modeled object as well as the elongation of the shadow cast on the base surface. In accordance with the proposed concept, an important condition for obtaining images suitable for interpretation is the use of spatially separated equipment generating and recording optical radiation. The orientation parameters of the illumination source and the camera are selected taking into account the requirements for the photogrammetric quality of images. A characteristic of the geometric distortions that occur when changing the shooting mode from plan to perspective is presented, which demonstrates the change in the aspect ratio of the image depending on the angle of the camera. A characteristic of the shadow elongation depending on the spatial position of the optical radiation source is presented, which shows the influence of the orientation parameters of the illumination means on the length of the shadow cast on the base surface. On the basis of these characteristics, the choice of the parameters of relative orientation in space of the illumination source and the equipment that detects optical radiation is substantiated. Predicting the value of geometric image distortions at the stage of choosing the relative orientation parameters of the equipment of a two-element active optoelectronic complex makes it possible to preserve the photogrammetric quality of images and, as a result, to measure the length and width of an object. Predicting the magnitude of the shadow elongation under artificial optical illumination makes it possible to transfer features in the image for calculating the applicate (height coordinate) and, as a result, to recreate the three-dimensional shape of the object.
The proposed concept of image registration finds application in topographic, geodetic, and engineering work under insufficient natural light. For example, the use of a two-element active optoelectronic complex makes it possible to obtain three-dimensional photo plans of the terrain of geographic regions with short daylight hours.
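The shadow-elongation characteristic rests on a simple geometric relation: with the source at elevation angle α above the horizon, an object of height h casts a shadow of length h / tan α on a flat base surface. A sketch under that idealized flat-surface assumption:

```python
import math

def height_from_shadow(shadow_length, source_elevation_deg):
    """Estimate object height from its cast shadow on a flat base surface.

    source_elevation_deg: elevation angle of the illumination source above
    the horizon (an assumed, known orientation parameter of the source).
    """
    alpha = math.radians(source_elevation_deg)
    return shadow_length * math.tan(alpha)
```

At 45° elevation the shadow length equals the height; lower sources stretch the shadow, which is why the relative orientation of the source must be chosen (and recorded) before the applicate can be computed from the image.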


Variational problem of adaptive optimal control. Theoretical and applied computer analysis
Alexei A. Vedyakov, Ekaterina V. Milovanovich, Olga V. Slita, Vladimir Yu. Tertychny-Dauri
The problem of adaptive optimal control of a dynamical system, belonging to the class of conditional variational problems with moving boundaries, is considered. A variational and computer study of the controlled adaptive motion of a material point is carried out for the problem of minimizing the energy quality functional with a moving, not predetermined right boundary and in the case when the mass of the point changes depending on the unfixed final time. The problem is solved using the schemes and procedures of the classical calculus of variations as well as adaptive estimation techniques, including the derivation of the variation of the auxiliary quality functional, the corresponding Euler equations, and the adaptive estimation algorithm. When solving the general conditional variational problem, the obtained closed system of differential equations was studied with a view to forming an adaptive optimal control system for a dynamic plant with a given performance functional. The results of the unconditional formulation of the problem are generalized to the case of additional differential (nonholonomic) and holonomic constraints. In the variational adaptive optimal control problem, the transversality condition is formulated in terms of the local programming condition. The developed variational scheme of adaptive optimal synthesis can be used in the calculation and design of controlled dynamic systems. This optimization scheme is also promising for use in systems where the operating time is not fixed in advance. The results achieved in this paper concern obtaining specific equations, expressions, and formulas for the model example under study and finding graphs of the main time functions that determine the nature of the motion of the control object and the quality of the corresponding transients.
The proposed adaptive optimal control algorithms for purposeful motion of the studied material point were tested numerically and showed their effectiveness, which makes them promising for further use in more complex nonlinear adaptive systems of dynamic optimal control.
The development issues of the theories of robustness, roughness, and bifurcations of dynamic systems are considered. In the modern theory of dynamic systems and automatic control systems, research into the properties of roughness and robustness of systems is becoming increasingly important. The work considers methods for investigating and ensuring robust stability of interval dynamic systems in both the algebraic and the frequency directions of robust stability. The main results of the original algebraic method of robust stability for continuous and discrete time are given. In the frequency direction of robust stability, the issues of a frequency-robust approach to the analysis and synthesis of robust multidimensional control systems based on the frequency condition number of the input-output transfer matrix are considered. The main provisions of the theory and method of topological roughness of dynamic systems, based on the concept of roughness according to Andronov-Pontryagin, are presented, with a measure of system roughness introduced as the condition number of the matrices of reduction to a diagonal (quasi-diagonal) basis at the singular points of the phase space. Criteria for bifurcations of dynamic systems are formulated. The topological roughness method has been applied to synergetic systems and chaos to investigate many systems, such as the Lorenz and Rössler attractors, the Belousov-Zhabotinsky and Chua systems, the "predator-prey" and "predator-prey-food" systems, the Hopf bifurcation, the Schumpeter and Kaldor economic systems, the Hénon map, and others. For the study of weakly formalized and non-formalized systems, the use of analogies of set-theoretic topology and the abstract method is proposed. Further research suggests the development of roughness and bifurcation theories for complex nonlinear dynamical systems.


Multiple context-free path querying by matrix multiplication
Ilya V. Epelbaum, Rustam Sh. Azimov, Semyon V. Grigorev
Many graph analysis problems can be formulated as formal language-constrained path querying problems where formal languages are used as constraints for navigational path queries. Recently, the context-free language (CFL) reachability formulation has become very popular and can be used in many areas, for example, querying graph databases and Resource Description Framework (RDF) analysis. However, the generative capacity of context-free grammars (CFGs) is too weak to express some complex queries, for example, from natural languages, and various extensions of CFGs have been proposed. Multiple context-free grammar (MCFG) is one such extension of CFGs. Although, to the best of our knowledge, there was no algorithm for MCFL-reachability, this problem is known to be decidable. This paper is devoted to developing the first such algorithm for the MCFL-reachability problem. The essence of the proposed algorithm is to use a set of Boolean matrices and operations on them to find paths in a graph that satisfy the given constraints. The main operation here is Boolean matrix multiplication. As a result, the algorithm returns a set of matrices containing all the information needed to solve the MCFL-reachability problem. The presented algorithm is implemented in Python using the GraphBLAS API. An analysis of real RDF data and synthetic graphs for some MCFLs is performed. The study showed that, using a sparse format for matrix storage and parallel computing, the analysis time for graphs with tens of thousands of edges can be 10–20 minutes, yielding tens of millions of reachable vertex pairs. The proposed algorithm can be applied in problems of static code analysis, bioinformatics, and network analysis as well as in graph databases when a path query cannot be expressed using context-free grammars. The provided algorithm is linear algebra-based; hence, it allows one to use high-performance libraries and utilize modern parallel hardware.
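The Boolean-matrix fixpoint idea can be sketched for the simpler CFL-reachability case, which MCFL-reachability generalizes. This toy pure-Python version uses dense lists of booleans in place of GraphBLAS sparse matrices and assumes a grammar in Chomsky normal form:

```python
def cfl_reachability(n, edges, term_prods, binary_prods):
    """Boolean-matrix fixpoint for CFL-reachability on an n-node graph.

    edges: (u, label, v) triples of a directed labeled graph.
    term_prods: (A, a) pairs for productions A -> a.
    binary_prods: (A, B, C) triples for productions A -> B C.
    Returns one Boolean reachability matrix per nonterminal.
    """
    def empty():
        return [[False] * n for _ in range(n)]

    M = {}
    for A, _ in term_prods:
        M.setdefault(A, empty())
    for prod in binary_prods:
        for X in prod:
            M.setdefault(X, empty())
    # Initialize matrices from labeled edges via terminal productions.
    for u, label, v in edges:
        for A, a in term_prods:
            if a == label:
                M[A][u][v] = True
    # Iterate M[A] |= M[B] x M[C] (Boolean product) until a fixpoint.
    changed = True
    while changed:
        changed = False
        for A, B, C in binary_prods:
            MA, MB, MC = M[A], M[B], M[C]
            for i in range(n):
                for k in range(n):
                    if MB[i][k]:
                        for j in range(n):
                            if MC[k][j] and not MA[i][j]:
                                MA[i][j] = True
                                changed = True
    return M
```

For the grammar S -> A B, A -> a, B -> b on a path 0 -a-> 1 -b-> 2, the fixpoint marks (0, 2) as S-reachable. The MCFL algorithm of the paper extends this scheme to nonterminals deriving tuples of strings.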
We investigated the possibility of automating the prediction of R. Cattell's 16 personality factors from text posts of social media users. The proposed new method of automating the evaluation of the traits of R. Cattell's 16-factor personality test includes language models and neural networks. Implementation of the method involves several steps. In the first step, text posts are extracted from social media user accounts, pre-processed with the RuBERT language model, and passed through a previously trained fully connected neural network. The result of this step is a normalized empirical distribution of the posts over the previously introduced classes for each user. Subsequently, based on the distribution of user posts, the expression of the psychological features of the user is evaluated with the help of a support vector machine, random forest, and naive Bayes classifier. The final data set for building the models and testing their performance was made up of 183 respondents who took the R. Cattell test, with links to their public social media accounts. Classifiers predicting results for six factors (A, B, F, I, N, Q1) of R. Cattell's 16-factor personality test were constructed. The results can be used to create a prototype of an automated system for predicting the severity of psychological features of social media users. The results of this work are useful in applied and research systems connected with marketing, psychology, and sociology as well as in the field of protecting users from social engineering attacks.
An electric power system is a complex organizational structure that provides working interaction for its constituent intelligent electronic devices by defining their roles, communication channels, and responsibilities. The control system of a modern electric power system must ensure the coordinated operation of intelligent electronic devices at the technological stages of power generation, transport, distribution, and consumption. The disadvantage of existing process control systems in electric power systems is the use of a hierarchical control structure in relation to the network topology. This leads to conflicts of resources and processes at the stages of generation, transport, distribution, and consumption of electricity. Uncoordinated operation of control devices reduces the efficiency of power facilities, which negatively affects the quality of electricity in the power supply network. To synchronize the work of intelligent electronic devices distributed over the network, it is proposed to provide their joint work through a single information center in a digital environment. At the same time, it is proposed to control the modes of operation of the power supply network using digital twins of its components. Digital twins of electric power system objects monitor power quality indicators, simulate the modes of interacting devices in a digital environment, and control power supply network components to ensure a rational mode of their operation. To achieve the universality and speed of the control system, it is proposed to use the apparatus of fuzzy artificial neural networks and, for better prediction of power quality indicators in the network, ensembles of artificial neural networks.
A methodology for controlling the quality of electricity at sections of the electricity distribution network was developed using digital twins that ensure the relationship between the monitored indicators of electricity quality and regulated values of the actuators of intelligent electronic devices.
In the modern educational process, there is a need to automate answer-assessment systems. The task of the reviewer becomes more difficult when analyzing theoretical answers because online assessment is available only for multiple-choice questions. The teacher carefully examines an answer before giving the appropriate mark, and this existing approach requires additional staff and time to study the responses. This article introduces an application based on natural language processing and machine learning that assesses answers and includes a voice prompt for visually impaired students. The application automates the process of checking subjective responses through text extraction, feature extraction, and score classification. Evaluation measures, such as Term Frequency-Inverse Document Frequency (TF-IDF) similarity, vector similarity, keyword similarity, and grammar similarity, are considered to determine the overall similarity between the teacher's assessment and the system's evaluation. The conducted experiments showed that the system evaluates the answers with an accuracy of 95 %. The proposed methodology is designed to assess exam results for students who cannot write but can speak. The developed application reduces the teacher's labor costs and time by reducing manual work.
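The TF-IDF similarity measure mentioned above can be sketched in a few lines. This is a bag-of-words toy version for illustration, not the application's full pipeline (which combines it with keyword and grammar similarity):

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one sparse TF-IDF vector (dict: term -> weight) per document."""
    tfs = [Counter(doc.lower().split()) for doc in docs]
    n = len(docs)
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())                      # document frequency
    idf = {t: math.log(n / d) + 1.0 for t, d in df.items()}
    return [{t: c * idf[t] for t, c in tf.items()} for tf in tfs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A student answer that paraphrases the reference answer scores close to 1, while an off-topic answer with no shared terms scores 0, which is the signal a score classifier can then threshold.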
Natural language based malicious domain detection using machine learning and deep learning 
Abdul Samad Saleem Raja, Ganesan Pradeepa, Somasundaram Mahalakshmi, Manickam Sam Jayakumar
Cyberattacks remain challenging since they are increasing day by day. Cybercriminals employ a variety of strategies to manipulate and exploit their targets' vulnerabilities. Malicious URLs are one such strategy, used to target large groups on various social media platforms. To draw internet users, these web addresses are disguised as safe. Deliberate or inadvertent use of such URLs exposes the user or the organization in cyberspace and opens the way for further attacks. Systems that use rule-based or machine learning algorithms to find malicious URLs usually rely on feature engineering, which requires domain expertise and experience. Sometimes, even after extracting features from a dataset, such systems may not completely leverage the potential of the dataset. The proposed method employs Natural Language Processing (NLP) approaches to vectorize the words in the URLs and applies machine learning and deep learning models for classification. Vectorization in NLP reduces the effort of feature engineering and maximizes the use of the dataset. To vectorize the URL text, three different vectorization methods are used. To evaluate the performance of the proposed method, two different datasets (D1 and D2) that are regularly utilized in the research domain were used. The results demonstrate that the best accuracy of 92.4 % with the D1 dataset is achieved by the Decision Tree (DT) with the count vectorizer and the Random Forest (RF) with the Term Frequency-Inverse Document Frequency (TF-IDF) vectorizer. With the D2 dataset, DT with the TF-IDF vectorizer obtains a higher accuracy of 99.5 %. The Artificial Neural Network (ANN) model achieves 89.6 % accuracy with the D1 dataset and 99.2 % accuracy with the D2 dataset.
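The word-vectorization step can be illustrated with a simple count vectorizer over URL tokens. This is a toy sketch; real pipelines use library vectorizers with much larger, data-driven vocabularies:

```python
import re
from collections import Counter

def url_tokens(url):
    """Split a URL into word-like tokens on punctuation and delimiters."""
    return [t for t in re.split(r"[\W_]+", url.lower()) if t]

def count_vector(url, vocab):
    """Map a URL to a fixed-length count vector over a given vocabulary."""
    counts = Counter(url_tokens(url))
    return [counts.get(term, 0) for term in vocab]
```

Tokens such as "login" or "verify" in unusual positions are exactly the kind of lexical signal a downstream classifier can pick up without hand-crafted features.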
Hybrid JAYA algorithm for workflow scheduling in cloud
Sandeep Kumar Bothra, Sunita Singhal, Hemlata Goyal
Workflow scheduling and resource provisioning are two of the most critical issues in cloud computing. Developing an optimal workflow scheduling strategy in a heterogeneous cloud environment is extremely difficult due to its NP-complete nature. Various optimization algorithms have been used to schedule workflows so that users receive Quality of Service (QoS) from cloud service providers and service providers achieve maximum gain, but there is no model that can simultaneously minimize execution time and cost while balancing the load among virtual machines in a heterogeneous environment using the JAYA approach. In this article, we employ a hybrid JAYA algorithm to minimize the computation cost and completion time during workflow scheduling. We considered a heterogeneous cloud computing environment and made an effort to evenly distribute the load among the virtual machines. To achieve our goals, we used Task Duplication Heterogeneous Earliest Finish Time (HEFT-TD) and Predict Earliest Finish Time (PEFT). The makespan is greatly shortened by HEFT-TD, which is based on the Optimistic Cost Table. We used a greedy technique to distribute the workload among Virtual Machines (VMs) in a heterogeneous environment: the greedy approach assigns each upcoming task to the VM with the lowest load. In addition, we also considered the performance variation, termination delay, and booting time of virtual machines in our proposed model. We used the Montage, LIGO, Cybershake, and Epigenomics datasets to experimentally analyze the suggested model in order to validate the concept. Our experiments show that our hybrid approach outperforms other recent algorithms, such as the Cost Effective Genetic Algorithm (CEGA), Cost-effective Load-balanced Genetic Algorithm (CLGA), Cost-effective Hybrid Genetic Algorithm (CHGA), and Artificial Bee Colony Algorithm (ABC), in minimizing the execution cost and makespan.
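The greedy load-balancing step can be sketched with a min-heap of VM loads. This is illustrative only; the paper's model additionally accounts for booting time, termination delay, and performance variation of the VMs:

```python
import heapq

def greedy_assign(task_costs, n_vms):
    """Assign each task to the currently least-loaded VM.

    task_costs: estimated execution cost of each task, in arrival order.
    Returns (assignment, makespan): the chosen VM per task and the
    maximum resulting VM load.
    """
    heap = [(0.0, vm) for vm in range(n_vms)]   # (load, vm id)
    heapq.heapify(heap)
    assignment = []
    for cost in task_costs:
        load, vm = heapq.heappop(heap)          # least-loaded VM
        assignment.append(vm)
        heapq.heappush(heap, (load + cost, vm))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

The heap makes each assignment O(log n_vms); in the hybrid scheme this balancing runs alongside the HEFT-TD/PEFT ranking of tasks.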


Information model of the essential goods purchase duration
Yuliya M. Khlyupina, Denis A. Kuznetsov, Andrey A. Laptev
The task of reducing the time for the purchase of essential goods is especially relevant when buyers have little free time. To do this, it is necessary to predict and estimate the time required to purchase goods. Traditional approaches based on cartographic systems do not provide such estimates and forecasts but only allow building a route to the right place based on an assessment of the traffic situation. For this reason, the problem of developing a more modern model that takes into account such factors as the infrastructural location of the store, user evaluations, and the workload of the store is relevant. The paper proposes an information model that includes such time costs of the buyer as the search for goods, the route to the place of sale, and the purchase of goods. The time spent on the purchase of goods is described using elements of queuing theory. Statistical and direct methods for assessing the workload and queues in the store are highlighted. The developed generalized model contains the parameters necessary to estimate the required time using statistical methods, which include traffic forecasting based on user ratings and reviews, analysis of the infrastructure location, public video surveillance cameras, public Application Programming Interfaces of stores, and Internet services. Correction coefficients have been introduced to adjust the estimates of the model parameters depending on the infrastructure location of the store and user ratings. A new information model has been formulated that takes into account the dependence of the time required to purchase essential goods on the workload of the store, its infrastructure location, and user ratings and reviews. The simulation model is developed in the AnyLogic environment. An example of using the model to estimate the average time spent on the purchase of essential goods is demonstrated.
The simulation results are consistent with the conducted experiment in which purchases of essential goods were made in various stores in Saint Petersburg. The developed model can be used when searching for the optimal route to the place of sale of essential goods, when planning the construction of stores, as well as in the areas of marketing and delivery of goods.
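The queuing-theory component can be illustrated with the classical M/M/1 formulas for the mean time a buyer spends at the checkout. These are standard textbook results with assumed arrival and service rates; the paper's model layers its correction coefficients on top of such estimates:

```python
def mm1_times(arrival_rate, service_rate):
    """Mean waiting time in queue and total time in an M/M/1 system.

    Requires arrival_rate < service_rate (a stable queue).
    Wq = lambda / (mu * (mu - lambda)),  W = 1 / (mu - lambda).
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    wq = arrival_rate / (service_rate * (service_rate - arrival_rate))
    w = 1.0 / (service_rate - arrival_rate)   # includes own service time
    return wq, w

# Example: 2 buyers/min arriving, the cashier serves 3 buyers/min on average.
wq, w = mm1_times(2.0, 3.0)
```

Here the expected total checkout time is 1 minute, of which 40 seconds is queueing; a store-workload correction coefficient would rescale the arrival rate before this computation.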
Existing solutions for patient support in mobile apps do not allow customization of the user interface to the needs of a particular user. This reduces the involvement of patients in the process of using the system, and the lack of information leads to a decrease in the quality of treatment and the emergence of potential complications. The paper proposes a variant of a new interactive mobile patient support system. This technology allows patients to enter data about their health into a mobile application and track its dynamics over time, while doctors can monitor the course of treatment remotely. Models for tracking user engagement, such as the Cox proportional hazards model and the random effects model, are considered and demonstrated. The use of A/B testing to improve the user experience is analyzed. The architecture of the mobile application, the web application, and their interaction was developed and implemented. Risk assessment models for patients with chronic diseases have been built. The operation of the interactive user support technology within a single interactive system is shown. The developed approaches can be used to build a wide range of telemedicine solutions that support interaction with both medical specialists and patients within the framework of the 4P approach in medicine.
Role discovery in node-attributed public transportation networks: the model description
Yuri V. Lytkin, Petr V. Chunaev, Timofey A. Gradov, Anton A. Boytsov, Irek A. Saitov
Modeling public transport systems from the standpoint of the theory of complex networks is of great importance for improving their efficiency and reliability. An important task here is to analyze the roles of nodes and weighted links in the network, which respectively model groups of public transport stops and the routes linking them. In previous works, this problem was solved based only on topological and geospatial information about the presence of routes between stops and their geographical location, which led to the problem of uninterpretability of the discovered roles. In this article, to solve the problem, the model additionally considers information about the social infrastructure around the stops and discovers topological, geospatial, and infrastructural roles jointly. The public transport system is modeled using a special weighted node-attributed network, where nodes are non-overlapping groups of stops united by geospatial location, node attributes are vectors containing information about the social infrastructure around the stops, and weighted links integrate information about the distance and number of transfers in routes between stops. To identify the model, it is sufficient to use only open urban data on the public transport system. Role discovery for stops is carried out by clustering the network nodes in accordance with their topological and attributive features. An extended model of the public transport system and a new approach to discovering the roles of stops, providing interpretability from the topological, geospatial, and infrastructural points of view, are proposed. The model was identified on open data of Saint Petersburg about metro stations, trolleybus and bus stops as well as organizations and enterprises around the stations and stops. Based on the data, balanced parameters for grouping stops, assigning link weights, and constructing attribute vectors are found for further use in the role discovery task.
The results of the study can be used to identify transport and infrastructure shortcomings of real public transport systems which should be considered to improve the functioning of these systems in the future.
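Role discovery by clustering node feature vectors can be sketched with a minimal k-means. This is a toy version with hypothetical 2D features; the actual study clusters combined topological and infrastructure attribute vectors of the transport network:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over tuples of floats; returns a cluster index per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for idx, p in enumerate(points):
            labels[idx] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: move each center to the mean of its cluster.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels
```

Each resulting cluster corresponds to a candidate "role"; interpretability comes from inspecting which topological and infrastructural features dominate within a cluster.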
Currently, most IT organizations are inclined towards a cloud computing environment because of its distributed and scalable nature. However, its flexible and open architecture is receiving a lot of attention from potential intruders for cyber threats. Here, an Intrusion Detection System (IDS) plays a significant role in monitoring malicious activities in cloud-based systems. The aim of this paper is to systematically review the existing methods for detecting intrusions based on various techniques, such as data mining, machine learning, and deep learning methods. Recently, deep learning techniques have gained momentum in the intrusion detection domain, and several IDS approaches using various deep learning techniques are provided in the literature to deal with privacy concerns and security threats. For this purpose, the article focuses on deep IDS approaches and investigates how deep learning networks are employed by different approaches in various steps of the intrusion detection process to achieve better results. Then, it provides a comparison of the deep learning approaches and the shallow machine learning methods. It also describes the datasets most used in IDS research.
Monitoring the health status of the population by age groups
Nikolay A. Ignatev, Mekhrbonu A. Rakhimova
A multi-criteria method for selecting informative sets of different features for a quantitative assessment of the population's health status in 14 age groups is considered. To compare samples from two classes (groups), it is proposed to form a unified description of objects according to two gradations of nominal features. The unified description is used to synthesize latent features and to calculate the values of the compactness measure of class objects on the numerical axis. The transformation of quantitative features into nominal gradations is implemented according to a search criterion for the minimum coverage of their values by non-overlapping intervals. The values of the interval boundaries and their number are determined by a recursive algorithm that takes into account the objects' membership in classes. An important property of the transformation is invariance to measurement scales. A formula is proposed for calculating the membership function of class objects for each feature gradation. The function values are used to unify object descriptions and to calculate the stability index of a feature regardless of its measurement scale. The unification of descriptions by two gradations does not change the stability index but increases the contribution of each gradation to the separation of class objects. The ranking of features according to their stability was used both for individual samples and for a set of defined samples. The results of ranking over a set of samples were used to search for patterns in individual features and to form sets from them for calculating the values of latent features of objects. A set of thirteen data samples from representatives of two classes was formed as follows. The first class was represented by objects of the younger age group, and the second class by objects of the other age groups. A set of seven different types of features has been identified.
For each of the 13 samples, the values of latent features on this set and the measures of compactness of class objects on the numerical axis were calculated. A monotonically non-decreasing sequence of compactness measure values, invariant to the order of precedence of the age groups, is obtained for the data samples. The monotonicity of the sequence values is consistent with empirical estimates of health status in the process of population aging.
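The covering of feature values by non-overlapping class-pure intervals can be illustrated by a much simplified greedy sketch: after sorting, consecutive values of the same class are merged into one interval. This is only an illustration of the idea, not the paper's recursive minimum-coverage criterion or its membership-function formula:

```python
def pure_intervals(values, labels):
    """Cover sorted feature values by non-overlapping intervals,
    each containing objects of a single class (greedy sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    intervals = []  # list of (low, high, class) tuples
    for i in order:
        v, c = values[i], labels[i]
        if intervals and intervals[-1][2] == c:
            lo, hi, _ = intervals[-1]
            intervals[-1] = (lo, max(hi, v), c)  # extend the current interval
        else:
            intervals.append((v, v, c))          # start a new interval
    return intervals

# Toy feature: the two classes separate into two intervals
intervals = pure_intervals([1, 2, 3, 10, 11], [0, 0, 0, 1, 1])
```

Note the result depends only on the ordering of values, which is one way to see the claimed invariance to measurement scales.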
An intelligent shell game optimization based energy consumption analytics model for smart metering data
Ramalingam Saravanan, Arulnanthisivam Swaminathan, Sankaralingam Balaji
Smart metering is a hot research topic and has gained significant attention since electromechanical metering is not reliable and requires more energy and time. Most existing methods focus only on how to deal with the data rather than how to do so efficiently. Prediction of electricity consumption is essential for bringing intelligence to the smart grid. Precise electricity prediction assists a service provider in resource planning as well as in control actions for demand and supply balancing. Users benefit from the smart metering solution through effective interpretation of their energy utilization, enabling them to efficiently manage utilization cost. With this motivation, the paper presents an intelligent energy consumption analytics using smart metering data (ECA-SMD) model to determine the utilization of energy. The presented ECA-SMD model involves four major processes, namely data pre-processing, feature extraction, classification, and parameter optimization. The model uses Extreme Learning Machine (ELM) based classification to determine the optimum class labels. Besides, the shell game optimization (SGO) algorithm is applied for tuning the parameters involved in the ELM, boosting the classification efficiency. The efficacy of the ECA-SMD model is validated using an extensive set of smart metering data, and the results are investigated based on accuracy and mean square error (MSE). The proposed model exhibited superior performance with a maximum accuracy of 65.917 % and a minimum MSE of 0.096.
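The ELM core of such a model can be sketched in a few lines: a random hidden layer whose output weights are solved analytically by least squares. This is a generic regression-flavoured sketch, not the paper's classifier; hyperparameters such as the hidden layer size are exactly what an outer optimizer like SGO would tune:

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random input weights and
    biases, output weights from a pseudo-inverse least-squares fit."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.seed = n_hidden, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)                       # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y                      # analytic output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy regression demo on a smooth target
rng = np.random.default_rng(1)
X = rng.random((100, 2))
y = np.sin(2.0 * X[:, 0]) + X[:, 1]
pred = ELM(n_hidden=50).fit(X, y).predict(X)
```

The absence of iterative training is what makes ELM attractive for metering-scale data: the only costly step is one pseudo-inverse.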


Active voltage damping method with negative DC link current feedback in electric and hybrid electric transmissions
Evegniy O. Stolyarov, Maria A. Gulyaeva, Alecksey S. Anuchin, Alexandr A. Zharkov, Maxim M. Lashkevich, Dmitry I. Aliamkin
Electric and hybrid electric transmissions in traction drives have a power source of limited capacity. Since the traction drive operates in the torque source mode, the DC link voltage becomes unstable and goes into an oscillatory mode. This triggers the software protection which prevents overvoltage breakdown of the traction inverter. The boundary of the transition to the oscillatory mode is determined by the power and by the value of the capacitance installed in the DC link of the electric transmission. To increase the reliability of traction inverters, large-capacity electrolytic capacitors are replaced with small-capacity film capacitors, which makes the system more prone to oscillations. To solve this problem, active damping methods are used, allowing the control system to change the dynamic characteristics of the drive. The software methods with proportional power and proportional torque control are most widely used. Proportional power control is the simplest method, in which the traction drive emulates an RL load. The proportional torque control method adjusts the torque reference according to the change in the DC link voltage. This paper proposes a new method with negative DC link current feedback. In this case, the torque is adjusted dynamically depending on the current consumed by the traction inverter from the common DC link of the electric transmission. Mathematical modeling methods were used to compare the known and proposed methods of DC link voltage active damping. Mathematical models have been developed in the MATLAB Simulink environment which make it possible to investigate the damping capacity at various values of the power consumed by the traction inverter. It is shown that the proposed method with negative DC link current feedback demonstrates simplicity of tuning.
In comparison with the proportional power and torque control methods, the proposed option is robust to parameter settings, provides a large damping coefficient over the entire range of traction drive power, and has a short transient process. The proposed method can be used to suppress DC link voltage oscillations in traction inverters of any type of hybrid electric and all-electric vehicles and ensures stable and reliable equipment operation.
Comparative analysis of switched reluctance motor control algorithms
Galina L. Demidova, Yan D. Derbikov, Fedor S. Petrikov, Dmitry V. Lukichev, Strzelecki Ryszard, Alecksey S. Anuchin
Nowadays, the development of microprocessor technology and power electronics has made it possible to build inexpensive modern control systems for complex nonlinear electromechanical objects. Switched reluctance electric machines are among these devices. This makes it possible to widely use such electric machines in various practical implementations, in particular, in traction drives, electric drives of oil and gas drilling rigs, and in other applications. The switched reluctance electric machine is a nonlinear object, and its control methods require formalization and grouping. The manuscript considers the design and functional features of switched reluctance electrical machines. The main methods of controlling these types of electrical machines are given, and a comparative analysis of the best-known methods is carried out. The main classical methods of switched reluctance electric machine control are considered, such as a relay current controller with a limitation, control of the turn-on/turn-off angles, and control of the DC link voltage. Transient responses in the electric drive system are demonstrated for the considered methods. It is shown that by adjusting the turn-on/turn-off angles it is possible to reduce the torque oscillation coefficient. The identified features of the presented methods will make it possible to simplify and shorten the development of an effective control system for switched reluctance electrical machines as well as to reduce the torque ripple.
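A relay (hysteresis) current controller of the kind compared in the paper can be sketched on a linear RL phase model. A real switched reluctance phase has position-dependent inductance, and all values below (hard chopping, band width, machine parameters) are illustrative assumptions:

```python
def relay_step(i_meas, i_ref, band, on):
    """Relay current controller with a hysteresis band: switch the
    phase on below the lower band edge, off above the upper edge,
    otherwise keep the previous switching state."""
    if i_meas <= i_ref - band / 2:
        return True
    if i_meas >= i_ref + band / 2:
        return False
    return on

# Toy phase model: di/dt = (u - R*i)/L, u = +Vdc (on) or -Vdc (off)
R, L, Vdc = 1.0, 0.05, 100.0
i_ref, band, dt = 5.0, 0.5, 1e-5
i, on = 0.0, True
trace = []
for _ in range(2000):                # 20 ms of simulated time
    on = relay_step(i, i_ref, band, on)
    u = Vdc if on else -Vdc          # hard chopping
    i += dt * (u - R * i) / L
    trace.append(i)
```

After the initial current rise, the bang-bang action keeps the phase current confined to the hysteresis band around the reference, which is exactly why the relay controller needs an explicit limitation of the switching frequency in practice.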
Gas dynamics of stationary supersonic gas jets with inert particles exhausting into a medium with low pressure
Daniil O. Bogdanuk, Konstantin N. Volkov, Vladislav N. Emelyanov, Alexander V. Pustovalov
Issues related to the development of tools for mathematical modeling of stationary supersonic flows of an ideal compressible gas with inert particles are considered. A mathematical model is constructed that describes the flow of an inviscid compressible gas with inert particles in a jet exhausting from an axisymmetric nozzle into a flooded space. Provided that the flow is supersonic along one of the spatial coordinates, the Euler equations are hyperbolic along this coordinate. For numerical calculations of the gas flow field, the finite volume method and the marching method are used. For integration along the marching direction, the three-step Runge–Kutta scheme is used. The flux calculation procedure includes the reconstruction of the values of the desired functions on the faces of the control volumes from their averages over the control volumes and the solution of the problem of the decay of an arbitrary discontinuity (the Riemann problem). The Lagrangian method of test particles is used to describe the dispersed phase. The reverse influence of the particles on the carrier gas flow is not taken into account. The effects of viscosity and rarefaction of the gas flow are taken into account only in the interaction of the gas with the particles. The trajectories of the inert particles are calculated in a known flow field of the carrier gas. The motion trajectories of discrete inclusions in jet flows with strong underexpansion are presented. The influence of the particle size and of the coordinates of the particle entry point into the flow on the features of their transfer by the jet stream is discussed. Efficient means of numerical simulation of stationary supersonic flows of an ideal compressible gas with particles in nozzles and jets have been developed. The calculation results are of interest for studying supersonic gas suspension flows around bodies and for calculating oblique shock waves.
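The time-stepping idea can be illustrated on the model advection equation u_t + a u_x = 0: a first-order upwind finite-volume right-hand side advanced by a three-step strong-stability-preserving Runge-Kutta scheme (Shu-Osher form). The paper applies this kind of scheme to the Euler equations with reconstruction and a Riemann solver; the sketch below is deliberately minimal:

```python
import numpy as np

def rhs(u, a, dx):
    """First-order upwind finite-volume flux balance for
    u_t + a u_x = 0 on a periodic grid (a > 0)."""
    return -a * (u - np.roll(u, 1)) / dx

def ssp_rk3_step(u, dt, a, dx):
    """Three-step SSP Runge-Kutta step (Shu-Osher form)."""
    u1 = u + dt * rhs(u, a, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, a, dx))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2, a, dx))

n, a = 200, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)  # Gaussian pulse
dt = 0.5 * dx / a                    # CFL number 0.5
mass0 = u.sum() * dx
for _ in range(200):                 # advance to t = 0.5
    u = ssp_rk3_step(u, dt, a, dx)
```

The combination is conservative (the periodic flux balance telescopes exactly) and, at this CFL number, non-oscillatory: the pulse is advected by a*t = 0.5 while being smeared by the first-order numerical diffusion.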
Mixed forms of free oscillations of a rectangular CFCF-plate
Mikhail V. Sukhoterin, Elena I. Rasputina, Natalya F. Pizhurina
Mixed (symmetric/antisymmetric) forms of natural oscillations of a thin rectangular plate of constant thickness, in which two parallel sides are rigidly clamped and the other two are free (CFCF plate, C — clamped, F — free), are studied. When all the conditions of the boundary value problem are satisfied, a resolving infinite homogeneous system of linear algebraic equations with respect to the unknown coefficients of the series is obtained using two hyperbolic-trigonometric series for the deflection function of the coordinates. Functions even in one coordinate and odd in the other were used to obtain symmetric-antisymmetric waveforms. As a parameter, the resulting system contains the relative frequency of free oscillations. Nontrivial solutions of the reduced system were found by the method of successive approximations in combination with a search over the frequency parameter. Numerical results are obtained for the spectrum of the first six mixed (symmetric/antisymmetric — S-A and A-S) forms of free oscillations of a thin square CFCF plate of constant thickness. The natural frequencies were compared with the results of other authors and with known experimental values. The influence of the number of terms retained in the series (the size of the reduced system) and of the number of iterations on the accuracy of the results is investigated. 3-D images of the found waveforms are presented. The results obtained can be used in the design of various sensors and transducers employing the resonance phenomenon.
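The search for nontrivial solutions of a truncated homogeneous system can be illustrated generically: scan the frequency parameter for a sign change of the system determinant and refine by bisection. The 2x2 matrix below is an arbitrary stand-in for the reduced system, not the plate equations themselves:

```python
import numpy as np

def det_A(lam):
    """Determinant of a small truncated homogeneous system A(lam)x = 0;
    a nontrivial solution exists where the determinant vanishes."""
    A = np.array([[1.0 - lam, 0.3],
                  [0.3, 2.0 - lam]])
    return float(np.linalg.det(A))

def find_root(f, lo, hi, iters=60):
    """Bisection on a bracketed sign change of f."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Scan found a sign change of det_A on [0.5, 1.2]; refine it
lam1 = find_root(det_A, 0.5, 1.2)
```

In the paper the counterpart of `det_A` is evaluated through successive approximations on the reduced system, and the accuracy depends on how many series terms are retained.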
The introduction of new types of heat exchangers with phase transitions and the solution of problems of optimizing their design and operational characteristics is a priority within the framework of the energy saving program. Known methods for calculating the thermal-hydrodynamic parameters of refrigerant flow often do not take into account the specifics of boiling processes at low temperatures as well as in channels with a small flow area. This paper presents the results of modeling heat transfer during the boiling of refrigerants in the channels of evaporators of heat and cold energy complexes, taking into account the true flow parameters. The proposed mathematical model of the boiling of the working substance in channels of various shapes is based on the true flow parameters which imply knowledge of the channel cross-sectional areas occupied by each of the phases. The value of the true volumetric vapour content provides the most correct modeling of two-phase flows in a wide range of regime and geometric parameters. The paper uses the equations of material and heat balance in combination with the equation of heat transfer from the environment to the boiling refrigerant. A map of flow regimes is used as an empirical component. A program has been developed for calculating the proposed system of equations which is solved iteratively at each time step using the finite volume method. Comparison of the calculation results with experimental data on models of round and rectangular channels with in-channel boiling of refrigerants at positive and negative saturation temperatures is performed. It is shown that the calculation error does not exceed 10 % for a round and 20 % for a rectangular flow section. The verification results showed the possibility of using the model in the framework of engineering calculations.
The proposed mathematical model can be used as the basis for calculation programs for existing evaporators and for the creation of new types of heat exchangers with in-tube boiling of the working substance. The proposed method allows optimizing both the geometric and the thermal-hydrodynamic parameters.
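The "true volumetric vapour content" (void fraction) can be related to the mass quality and phase densities; the simplest such closure is the homogeneous-model relation shown below. The paper itself uses a flow-regime map rather than this closure, and the R134a saturation densities used in the example are approximate assumed values:

```python
def homogeneous_void_fraction(x, rho_liquid, rho_vapour):
    """Void fraction (volumetric vapour content) of a two-phase flow
    in the homogeneous model, from mass quality x and phase densities."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_vapour / rho_liquid))

# Approximate saturation densities for R134a near 0 degrees C (assumed)
alpha = homogeneous_void_fraction(0.1, rho_liquid=1295.0, rho_vapour=14.4)
```

The large liquid-to-vapour density ratio is why even a small mass quality corresponds to a vapour phase occupying most of the channel cross-section.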
Computer modeling is one of the most common approaches to the analysis of the stress-strain state of thin-walled shell structures. It requires considerable time and high-performance hardware, especially when it is necessary to conduct a comparative analysis of various shell configurations. In this paper, we propose the use of deep learning methods to improve the performance of this process. The purpose of this work is to develop methods for high-performance computer simulation of thin-walled shell structures using deep neural networks that take into account the geometric and physical properties of the structure as well as the load applied to it. A training approach and a deep neural network architecture were developed to perform computer modeling of the stress-strain state of a shell. To form a training dataset, a computational experiment was carried out to simulate 3904 different configurations of doubly curved shallow shells that differ in linear dimensions, curvature radii, and materials used. Based on this dataset, 30 deep neural networks with different architectures were trained. To determine the optimal architecture in terms of modeling accuracy, the mean absolute percentage error with clipping of near-zero samples was calculated for each of the neural networks on the test dataset. A network has been developed that allows calculating the stress-strain state of different shell configurations under an arbitrary uniformly distributed load. This is the first solution in the field of neural network modeling of shells that allows varying the applied load and the geometric and physical parameters of the shell and obtaining calculation results at an arbitrary point of its middle surface. Performance measurements show that the developed neural network simulates the stress-strain state of a shell structure 2117 times faster than solving the same problem by classical simulation.
The modeling error of the network is at an acceptable level. An original neural network architecture for modeling the stress-strain state of shells was proposed which, through minor modifications, can be adapted for high-performance modeling of other types of building structures. In accordance with the described architecture, a deep neural network was trained which reduces the computation time by several orders of magnitude. The results obtained are of high practical importance for researchers in the field of thin-walled shell modeling since they significantly reduce the time costs associated with conducting computational experiments. One possible application of the developed solution is prototyping of various shell configurations. Once prototyping is complete, the most efficient shell configurations can be explored in detail using classical computer simulation techniques.
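The model-selection metric, mean absolute percentage error with clipping of near-zero samples, can be read as skipping targets whose magnitude falls below a threshold so that tiny denominators do not dominate the error. This is one plausible interpretation of the clipping; the paper's exact rule and threshold may differ:

```python
import numpy as np

def mape_clipped(y_true, y_pred, eps=1e-3):
    """Mean absolute percentage error over samples whose target
    magnitude exceeds eps (near-zero targets are clipped out)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = np.abs(y_true) > eps
    err = np.abs((y_pred[mask] - y_true[mask]) / y_true[mask])
    return 100.0 * float(err.mean())

# The zero target is excluded; only the 10 % errors remain
mape = mape_clipped([1.0, 2.0, 0.0], [1.1, 1.8, 5.0])
```

Without the mask, the third sample would make the metric undefined (division by zero), which is exactly the failure mode clipping avoids.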


The paper presents a new view on the quality assurance of state machine programs. Instead of the term “verification of state machine programs”, it is proposed to use the terms “verification of state machine models” and “validation of state machine specifications”. The first is applicable in the presence of a formal specification, and the second in its absence, which is more typical in practice. This allows a more meaningful approach to understanding how to ensure the quality of state machine programs.
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.