Summaries of the Issue


Characterization of the holographic photopolymer Bayfol HX in the IR region
Vladimir N. Borisov, Andrey D. Zverev, Vladimir A. Kamynin, Maria S. Kopyeva, Roman A. Okun, Vladimir B. Tsvetkov
The possibility of creating holographic optical elements operating in the near-infrared spectral range based on the Bayfol HX holographic photopolymer has been considered. The dynamic range of the refractive index of the photopolymer and the amplitude-phase nature of the holograms in the infrared range have been studied, as has the influence of the recording parameters (power density of the recording radiation and recording time) on the distribution of the dynamic range of the refractive index between grating harmonics. The amplitude-phase nature of the holograms was analyzed by measuring the transmission spectra of the studied photopolymer after the photopolymerization reaction. The dynamic range of the refractive index of the photopolymer was evaluated in the spectral range from 405 nm to 2099 nm. For this purpose, the angular selectivity contours of holograms with periods from 414 nm to 2100 nm, optimized for different parts of the specified spectral range, were measured and analyzed. The influence of the recording parameters on the distribution of the dynamic range of the refractive index between the grating harmonics was analyzed by calculating the amplitudes of the first and second harmonics of the refractive index modulation from the experimentally measured angular selectivity contours of holograms recorded with different recording times at a constant irradiation dose. It was shown that the dynamic range of the refractive index of the photopolymer in the near-infrared spectral range differs from that in the long-wavelength part of the visible spectrum by a value that does not exceed the measurement accuracy. A pronounced violation of reciprocity was demonstrated when the interference pattern was scaled or when the power density of the recording radiation was changed. The optimal conditions for recording holograms designed for the infrared spectral range were found for the studied photopolymer.
The possibility of using the studied holographic material in telecommunication optics has been demonstrated.
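As background for the refractive-index dynamic range discussed above, Kogelnik's coupled-wave formula relates the index modulation amplitude n1 of a lossless phase transmission grating to its peak (Bragg-matched) diffraction efficiency. The sketch below is not the authors' procedure; the film thickness, readout wavelength and incidence angle are hypothetical values chosen for illustration.

```python
import math

def kogelnik_efficiency(n1, d, wavelength, theta):
    """Peak diffraction efficiency of a lossless phase transmission
    grating at Bragg incidence (Kogelnik's coupled-wave theory)."""
    nu = math.pi * n1 * d / (wavelength * math.cos(theta))
    return math.sin(nu) ** 2

# hypothetical parameters: 16 um film, 1550 nm readout, 20 deg incidence
d, lam, theta = 16e-6, 1550e-9, math.radians(20)
for n1 in (0.005, 0.015, 0.03):
    eff = kogelnik_efficiency(n1, d, lam, theta)
    print(f"n1 = {n1:.3f} -> efficiency = {eff:.3f}")
```

For small modulation the efficiency grows roughly quadratically with n1, which is why the achievable dynamic range of the refractive index directly limits hologram brightness in the IR.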
Study of blood vessels reaction to local heating by imaging photoplethysmography
Anzhelika V. Belaventseva, Natalia P. Podolyan, Maxim A. Volynsky, Valeriy V. Zaytsev, Anastasiia V. Sakovskaia, Oleg V. Mamontov, Roman V. Romashko, Alexey A. Kamshilin
The possibility of using a new contactless method, imaging photoplethysmography, to assess thermoregulatory vasodilatation of blood vessels was studied. The perfusion reaction in a region of the outer forearm in response to local heating up to 41 ± 1 °C was monitored in six volunteers aged 39–52 years using a video recording of the study area synchronized with an electrocardiogram and subsequent correlation processing of the data obtained. It was shown that the change in perfusion during local heating is biphasic: the first phase of vasodilation is due to the response of the nervous system mediated by the axon reflex, and the second phase is due to the synthesis of nitric oxide in endothelial cells. It was revealed that the multifold increase in perfusion in the first phase of heating depends both on the initial temperature of the skin and on the difference between it and the heating temperature. It was found that for a significant vascular response to hyperthermia associated with the activation of endothelial function, tissues must be heated for more than 15 minutes. It was shown that imaging photoplethysmography reliably reflects the operation of the mechanisms regulating peripheral vascular resistance, which is of great prognostic value for detecting primary signs of cardiovascular diseases.
The results of research on the possibility of transmitting holographic information over a 40 MHz Wi-Fi radio channel are presented. It is shown that using two main 3D image modalities, a depth map of the holographic object and the texture of its surface, is sufficient to synthesize a full-fledged hologram at the receiving end of the communication channel, reconstructing the holographic object with continuous vertical and horizontal parallax. The method of transmitting 3D holographic information is similar to single-sideband modulation (SSB), a method well known in radio engineering for transmitting information on one sideband. The essential difference of the proposed method is that the spatial frequencies forming the hologram are the result of simultaneous amplitude and phase modulation of the reference signal, which complicates their theoretical analysis. Experimental confirmation of the possibility of such a transfer was performed using the free open-source FTP client FileZilla; a communication protocol was applied to transmit the information over a wireless Wi-Fi channel. It is shown that the transmitted information stream is sufficient to synthesize a hologram reconstructing 3D images at the receiving end of the communication channel. At the same time, the holographic image of a dynamically changing object at a television frame rate has continuous horizontal and vertical parallax, and the spatial resolution of the reconstructed image was no worse than a Full HD high-definition television image. The possibility of transmitting over the radio channel all the information necessary to reproduce a holographic 3D video stream at the receiving end with a resolution not lower than high-definition television standards and with continuous parallax has been experimentally confirmed.


Influence of anodization parameters on anodic aluminum oxide formed above a silver island film
Igor Yu. Nikitin, Rezida D. Nabiullina, Alexey V. Nashchekin, Anton Starovoytov, Igor A. Gladskikh
Optical properties of a hybrid plasmonic thin-film structure, a porous anodic alumina matrix above a silver island film on a quartz substrate, have been investigated. The silver nanoparticle film at the bottom of the structure was obtained by physical vapor deposition in a vacuum chamber. A silver island film with an average island diameter of 100 nm was formed after annealing in air. An aluminum film was then deposited above the silver nanoparticle film by e-beam evaporation. As a result of one-step direct anodization, a nanoporous alumina thin film was formed. The obtained structures were investigated using spectroscopy and electron microscopy. The reflectance and optical density spectra of the structure were obtained and analyzed for different anodization times and currents. For comparison, the reflectance and optical density spectra were also obtained for silver nanoparticles and anodic alumina separately. As the anodization time increases, the reflectance spectra of the structure become more similar to those of anodic aluminum oxide, which can be explained by film oxidation. At the same time, a red shift of the reflectance spectra is observed in structures with higher maximum anodization currents. This effect has been observed in other works and can be explained by the increasing distance between the pores. Numerical modeling of the optical properties with a Mie calculator for a structure with a nanoparticle size of 100 nm showed that the modeling results are comparable to the experimentally obtained optical density spectra. The modeling was performed in the spherical approximation. To obtain more precise results for the alumina film thickness and the nanoparticle optical properties, the silver nanoparticle form factor has to be taken into account. The results of this work can be used in the fabrication of sensors, optical coatings and photon sources.
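Even a quasi-static (dipole) estimate, much simpler than the full Mie calculation used in the work, locates the plasmon resonance of a small metal sphere where Re(ε) ≈ -2εm. The Drude parameters below are illustrative round numbers, not fitted silver data.

```python
import numpy as np

def drude_eps(wl_nm, eps_inf=5.0, wp_ev=9.0, gamma_ev=0.1):
    """Toy Drude permittivity (the parameters are illustrative, not
    tabulated silver data)."""
    ev = 1239.84 / wl_nm                      # photon energy, eV
    return eps_inf - wp_ev**2 / (ev**2 + 1j * gamma_ev * ev)

def extinction_qs(wl_nm, radius_nm, eps_m=1.0):
    """Extinction cross-section (nm^2) of a small sphere in the
    quasi-static dipole approximation: sigma = k * Im(alpha)."""
    eps = drude_eps(wl_nm)
    k = 2 * np.pi * np.sqrt(eps_m) / wl_nm
    alpha = 4 * np.pi * radius_nm**3 * (eps - eps_m) / (eps + 2 * eps_m)
    return k * alpha.imag

wl = np.linspace(300, 800, 501)
sigma = extinction_qs(wl, radius_nm=50)
print(f"resonance near {wl[np.argmax(sigma)]:.0f} nm")
```

Raising the matrix permittivity eps_m (e.g. toward alumina) red-shifts the resonance, which is the qualitative trend the paper's spectra show as the oxide forms around the islands.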
They can also be used in displays, optical systems and many other plasmonic devices.


In complex electromechanical objects containing electric drives with induction motors, it is often difficult or impossible to install sensors of the output variables. In this case, to determine the output coordinates of the motor, state observers have to be introduced into the control system of the electric drive. The main problem in creating observers is the presence of noise and interference in the measuring circuits of the control system, which affect the accuracy of estimating the unmeasurable state variables. The paper compares the accuracy of the estimates produced by observer algorithms based on the Kalman filter and on the Luenberger observer in a vector-controlled induction electric drive, given the noise level of the current measurement channels in the stator windings of the induction motor. To synthesize the state observer algorithms, methods of identification theory and quasi-linearization of nonlinear models of the control object were used. The simulation model of the induction motor is based on a classical field-oriented vector control system where an estimate of the angular speed of the motor shaft is used as the feedback signal. The model implements the following blocks: a mathematical model of the induction motor in the two-phase fixed coordinate system α–β; the structure of the observer algorithm; the procedure for converting the basis of the current vector and the control voltage from the stationary frame to the rotating frame and vice versa; and proportional-integral regulators of current, flux linkage and angular speed. An S-shaped ramp generator forms the speed reference curve. The input signals for the observers are the stator voltages and currents of the reference model of the induction motor. The adaptation coefficients for the Luenberger observer were selected experimentally from the condition of obtaining the minimum average value of the modulus of the difference of the estimated values.
The covariance matrices for the Kalman-filter-based observer are tuned experimentally to ensure a minimum of the average value of the absolute error. The time dependences of the transients of the angular speed of the shaft, the modulus of the rotor flux linkage vector and the stator currents are obtained. The dependences were evaluated when starting the induction motor at nominal values of frequency and voltage and at 10 % of the nominal values. The operation of the estimation algorithms in the presence of a noise component, as well as with the parameters of the induction motor equivalent circuit varied by ± 10 %, is investigated. Simulation results are obtained for the electric drive in starting modes with a mechanical load equal to the nominal value, at a supply voltage frequency of 50 Hz and, for 10 % of the nominal voltage, at a frequency of 1 Hz. It is shown that the largest relative estimation errors occur in the starting mode of the electric drive, and that the highest accuracy is achieved when using the nonlinear Kalman filter. The results of the work can be used in the development of automatic control systems for sensorless electric drives and frequency-controlled electric drives of centrifugal pumping units for oil production.
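As a minimal illustration of why Kalman-type estimation suppresses measurement noise (a scalar toy problem, not the nonlinear motor observer of the paper; state, dynamics and noise levels are invented):

```python
import random

def scalar_kalman(measurements, q=1e-5, r=0.25):
    """1-D Kalman filter for a (nearly) constant state x observed as
    z = x + noise.  q: process-noise variance, r: measurement-noise variance."""
    x, p = 0.0, 1.0               # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the innovation
        p *= (1 - k)
        estimates.append(x)
    return estimates

random.seed(0)
true_x = 1.0
zs = [true_x + random.gauss(0, 0.5) for _ in range(200)]
est = scalar_kalman(zs)
print(f"last raw measurement: {zs[-1]:+.3f}, filtered estimate: {est[-1]:+.3f}")
```

The ratio q/r plays the same role as the covariance matrices tuned in the paper: a small q trusts the model and averages out noisy current measurements, at the price of slower tracking of true state changes.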
A method of optimizing the structure of hierarchical distributed control systems
Andrey Yu. Onufrey, Alexander V. Razumov, Vitaliy V. Kakaev
A description of the control system represented as an oriented graph and a method for formalizing the problem of choosing a variant of the control system structure are proposed. Research results on hierarchical distributed control systems, based on analytical and statistical models for evaluating performance indicators and optimizing their structure, are presented. A model and a method for optimizing control systems with a hierarchical structure have been developed which make it possible to optimize a control system with an arbitrary structure based on the synthesis of a reference variant of a homogeneous hierarchical structure. In comparison with known methods, the assumption of uniformity of the control system structure makes it possible to use an analytical solution when choosing a reference plan in accordance with the proposed criterion in the vector space “efficiency-cost”. The proposed optimization method is based on proving that homogeneous hierarchical distributed control systems can be investigated analytically and on establishing how the cost and the task-solving time in the control system depend on the structure parameters. The method for optimizing the structure of hierarchical distributed control systems consists of two main stages. At the first stage, within the framework of the analytical model, the direct optimization problem is solved by minimizing the processing time under a cost constraint, and the inverse optimization problem is solved by minimizing the cost under a processing time constraint. The result is the choice of the best reference plan for a homogeneous control system structure. At the second stage, based on simulation modeling, the problem of determining the “critical” areas (control points) that limit the effectiveness of the control system is solved.
The found “critical” areas are then improved by changing their structure and introducing new technical solutions that provide the specified performance indicators of the entire control system. A model of the hierarchical structure of the control system is given. The procedure for selecting the reference variant and the algorithm for modifying the structure of the control system are shown. An example of optimizing the structure of the control system according to the criterion of the required throughput is given. The example showed that the application of the proposed method allows choosing a variant of the control system structure that satisfies the selected criterion and the specified constraints. It is advisable to apply the proposed method at the early stages of designing distributed information control systems when choosing variants of their construction and substantiating requirements for the technical characteristics of structural elements.
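The first-stage choice of a reference plan (minimize processing time under a cost constraint over homogeneous structures) can be illustrated by a toy enumeration; the time and cost model below is entirely hypothetical and far simpler than the paper's analytical model.

```python
def best_reference_plan(n_sources, cost_budget, t0=1.0, node_cost=1.0):
    """Enumerate homogeneous tree structures (branching b, depth h) whose
    leaves cover n_sources information sources and pick the fastest plan
    within the cost budget.  Hypothetical model: per-level time = b * t0
    (each node merges b inputs), total time = h * b * t0, cost = number
    of aggregation nodes * node_cost."""
    best = None
    for b in range(2, 17):
        h = 1
        while b ** h < n_sources:          # smallest depth covering all sources
            h += 1
        nodes = sum(b ** level for level in range(h))   # internal nodes
        t, cost = h * b * t0, nodes * node_cost
        if cost <= cost_budget and (best is None or t < best[0]):
            best = (t, cost, b, h)
    return best

t, cost, b, h = best_reference_plan(n_sources=256, cost_budget=120)
print(f"branching={b}, depth={h}, time={t:.1f}, cost={cost:.0f}")
```

With 256 sources and a budget of 120 nodes the binary tree is too expensive, while a branching factor of 4 gives the fastest admissible plan; the second, simulation-based stage of the method would then probe such a plan for critical areas.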
A new method is proposed for identifying the parameters of a sinusoidal signal with unknown variable amplitude. The problem of estimating the parameters of sinusoidal signals is relevant in dynamic positioning and disturbance compensation, and for the synthesis of control laws that take external disturbances into account. In the proposed method, the restriction on the signal amplitude is removed: in contrast to known approaches, where the amplitude must be constant, here the signal amplitude can be variable. To implement the proposed identification algorithm, the Jordan matrix form and delay operators are used. During parameterization, a regression model is formed containing unknown stationary parameters. To find the unknown parameters, the method of dynamic regressor extension and mixing is used. The results of computer simulation demonstrate the efficiency of the proposed algorithm and confirm the convergence of the parameter estimates to the true values. The proposed approach can be applied to a wide class of applied problems related to disturbance compensation in vibration protection systems, to monitoring systems that determine the parameters of high-rise or large-span building structures, and to robotic object control systems.
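For the simpler constant-amplitude case (the restriction that the paper removes), the frequency of a sampled sinusoid admits an exact linear regression, which a least-squares estimate recovers; this sketch only illustrates the parameterization idea, not the paper's DREM algorithm for variable amplitudes.

```python
import math

def estimate_frequency(samples, dt):
    """For y[k] = A*sin(w*k*dt + phi), the samples satisfy the linear
    regression y[k] + y[k-2] = theta * y[k-1] with theta = 2*cos(w*dt).
    A scalar least-squares estimate of theta then yields w."""
    num = den = 0.0
    for k in range(2, len(samples)):
        regressor = samples[k - 1]
        target = samples[k] + samples[k - 2]
        num += regressor * target
        den += regressor * regressor
    theta = num / den
    return math.acos(theta / 2) / dt

dt = 0.01
ys = [2.5 * math.sin(3.0 * k * dt + 0.7) for k in range(500)]
print(f"estimated frequency: {estimate_frequency(ys, dt):.4f} rad/s")
```

On noisy data the same regression would be fed to a gradient or DREM-type estimator instead of batch least squares, which is where regressor extension and mixing earn their keep.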


This study concerns temperature stabilization in the units used to research the properties of molecules at low and ultra-low temperatures. The research is relevant due to the need to increase the speed and accuracy of the data obtained. Using the LabVIEW graphical programming environment, a control program was created for the LakeShore 325 temperature controller which reacts when the current temperature is close to the control point temperature set by the researcher. The added controls for the heating element power and the PID controller turn-on times allow them to be used more flexibly. The method was verified for the temperature control points of 40 K, 100 K, 150 K and 200 K. A comparison of the proposed temperature stabilization program with the standard PID controller solution demonstrates the advantages of the former: the speed of reaching the control points was doubled. The digitalization of the LakeShore 325 controller makes it possible to work further on improving temperature stabilization. The resulting increase in the accuracy-to-time stabilization ratio makes it possible for those who conduct low-temperature experiments to improve the quality of their measurements dramatically. The introduction of a digital version of the temperature control device opens up possibilities for further automation of cryovacuum units by linking the thermal control program with other programs, for example, for recording spectra at specific temperature values.
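The stabilization strategy described above, reacting only when the temperature approaches the control point, can be sketched as a two-stage controller: full heater power far from the setpoint, then a PI loop near it. The plant model, gains and handoff threshold below are invented for illustration and do not reproduce the LakeShore 325 firmware.

```python
def run_controller(setpoint, t0=20.0, steps=3000, dt=0.1):
    """Two-stage heating strategy on a hypothetical first-order thermal
    plant: boost at full power until within 5% of the setpoint, then
    hand over to a PI loop (gains kp, ki are illustrative)."""
    temp, integral = t0, 0.0
    kp, ki = 2.0, 0.05
    for _ in range(steps):
        err = setpoint - temp
        if err > 0.05 * setpoint:
            power = 1.0                       # boost only while well below setpoint
        else:
            integral += err * dt              # PI stage near the control point
            power = min(1.0, max(0.0, kp * err + ki * integral))
        # plant: heating vs. Newtonian cooling back toward ambient t0
        temp += dt * (8.0 * power - 0.05 * (temp - t0))
    return temp

print(f"final temperature: {run_controller(100.0):.2f}")
```

The boost stage is what shortens the time to reach the control point, while the PI stage removes the residual offset; the paper's program additionally exposes the heater power and PID turn-on moments to the experimenter.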
Investigation on impact and wear behavior of Al6061 (SiC+Al2O3) and Al7075 (SiC+Al2O3) hybrid composites 
Rathinavelu Ravichandaran, Saminathan Selvarasu, Senthilkumar Gopal, Ravisankar Ramachandran
The current study focuses on the dry sliding characteristics and impact strength of two different aluminum alloys reinforced with 100 nm silicon carbide (SiC) and aluminum oxide (Al2O3) ceramic particles, aimed at improving the mechanical properties of the final alloy through the characteristics of the admixed materials. The stir casting method is adopted for fabricating the composites, with Al6061 and Al7075 as matrices, using three distinct reinforcement ratios. SiC and Al2O3 are utilized as reinforcing elements in order to improve the mechanical properties and increase resistance to wear, tear and shear. Following the creation of the composite matrices, their physical and mechanical behaviors are examined in accordance with ASTM standards, and the hybrid composites based on Al6061 and Al7075 are compared. The comparison of the obtained samples showed that the Al7075 (12 % SiC + 6 % Al2O3) alloy exhibits exceptional tribological and mechanical characteristics. The studied alloy can be used in the automotive industry, for example in the production of pistons and connecting rods, due to its minimal wear and variable thermal expansion coefficient.


The article considers the computational methods and features of constructing a complex functional block implementing the Daubechies 9/7 discrete wavelet transform (DWT) in FPGA-based digital image processing systems. We propose a mathematical model and algorithms for implementing parallel and serial-pipelined signal processing to calculate the coefficients of the discrete biorthogonal Daubechies 9/7 wavelet, taking into account the architecture of the FPGA used. The model is based on wavelet transform factorization methods using lifting schemes. In contrast to conventional lifting schemes, the proposed method and algorithms can increase the speed of FPGA calculations with a simplified hardware implementation. The Quartus II CAD and ModelSim are used as the development environment. The behavioral model is written in Verilog HDL. An Altera Cyclone® IV 4CE115 was used as the FPGA. On the basis of the obtained behavioral model, a testing module was developed and the digital circuit was simulated in the ModelSim environment. A formula for estimating the number of clock cycles of the forward and inverse DWT has been proposed; on its basis, an estimate of the number of parallel computations was obtained depending on the number of input elements and the characteristics of the FPGA. Experiments yielded the dependences of the number of cycles for the DWT computation on the side length of a square image for different numbers of parallel processing blocks. It is shown that the parallel operation of several independent modules makes it possible to process several input columns (rows) of the input 2D array concurrently, and that the unification of the multiply-accumulate module increases the efficiency of calculations and reduces the occupied hardware resources.
The pipelined DWT structure is characterized by lower hardware costs in terms of the implementation of the calculator unit and memory allocation. Testing of the digital circuit showed that the developed block structure can significantly increase the DWT speed as well as reduce the cost of the system on a chip. The proposed implementation of the block of two-dimensional forward and inverse wavelet transforms for the Daubechies 9/7 filter bank forms a complete module and can be used as a ready-made complex functional block for the further development of high-quality real-time image transmission systems.
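For reference, the lifting factorization underlying such implementations can be sketched in a few lines of plain Python. This is a software sketch only, not the paper's Verilog design; the coefficients are the standard CDF 9/7 (Daubechies 9/7) lifting constants from JPEG 2000.

```python
import numpy as np

# CDF 9/7 lifting coefficients (JPEG 2000 irreversible transform)
ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA = 0.882911075, 0.443506852
K = 1.230174105

def _mirror(i, n):
    """Symmetric boundary extension."""
    return -i if i < 0 else (2 * n - 2 - i if i >= n else i)

def dwt97_forward(signal):
    """In-place lifting: each pass updates one sample parity from the
    other, so odd samples become detail and even samples approximation
    coefficients (left interleaved for brevity)."""
    x = np.asarray(signal, float).copy()
    n = len(x)
    for coeff, start in ((ALPHA, 1), (BETA, 0), (GAMMA, 1), (DELTA, 0)):
        for i in range(start, n, 2):
            x[i] += coeff * (x[_mirror(i - 1, n)] + x[_mirror(i + 1, n)])
    x[0::2] /= K
    x[1::2] *= K
    return x

def dwt97_inverse(coeffs):
    """Exactly undo the lifting steps in reverse order."""
    x = np.asarray(coeffs, float).copy()
    n = len(x)
    x[0::2] *= K
    x[1::2] /= K
    for coeff, start in ((DELTA, 0), (GAMMA, 1), (BETA, 0), (ALPHA, 1)):
        for i in range(start, n, 2):
            x[i] -= coeff * (x[_mirror(i - 1, n)] + x[_mirror(i + 1, n)])
    return x

sig = np.sin(np.linspace(0, 4 * np.pi, 64))
err = np.max(np.abs(dwt97_inverse(dwt97_forward(sig)) - sig))
print(f"perfect-reconstruction error: {err:.2e}")
```

Because every lifting pass modifies only one parity using the other, each step is trivially invertible, which is exactly the property that makes the hardware multiply-accumulate modules reusable between the forward and inverse transforms.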
This paper describes an approach to constructing a task-oriented dialog system (a conversational agent) with unstructured knowledge access based on spoken conversations, including: written speech augmentation that simulates speech recognition results; a combination of classifiers; and retrieval-augmented text generation. The proposed approach augments the training data in two ways: by converting the original texts into sound waves with a text-to-speech model and then transforming them back into texts with an automated speech recognition model; and by injecting artificially generated errors based on phonetic similarity. A dialogue system with access to an unstructured knowledge base solves the task of detecting a turn which requires searching for additional information in the unstructured knowledge base. For this purpose, Support Vector Machine, Convolutional Neural Network, Bidirectional Encoder Representations from Transformers, and Generative Pre-trained Transformer 2 models were trained. The best of the presented models are used in a weighted combination. Next, a suitable text fragment is selected from the knowledge base and a reasonable answer is generated. These tasks are solved by adapting the Retrieval Augmented Generation text generation model. The proposed method was tested on the data from the 10th Dialogue System Technology Challenge. In all metrics except Precision, the new approach significantly outperformed the baseline models proposed by the organizers of the competition. The results of the work can be used to create chatbot systems that automatically process user requests in natural language based on unstructured knowledge access, such as a database of answers to frequently asked questions.
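The phonetic-similarity error injection mentioned above can be illustrated by a toy character-level confuser; the confusion sets and rate here are invented for illustration, whereas the paper also uses a full TTS-to-ASR round-trip.

```python
import random

# toy confusion sets for ASR-like errors (illustrative English graphemes,
# not the paper's phonetic-similarity tables)
CONFUSABLE = [set("bp"), set("dt"), set("szc"), set("mn"), set("aeo")]

def add_phonetic_noise(text, rate=0.15, seed=7):
    """Simulate speech-recognition noise by swapping characters within
    phonetically confusable groups at a given per-character rate."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        group = next((g for g in CONFUSABLE if ch in g), None)
        if group and rng.random() < rate:
            out.append(rng.choice(sorted(group - {ch})))
        else:
            out.append(ch)
    return "".join(out)

print(add_phonetic_noise("does the hotel provide breakfast"))
```

Training the turn-detection classifiers on both clean and noised variants makes them less brittle to transcripts produced by a real ASR front end.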


In this article, an approach to modeling dynamical systems whose governing physical laws are unknown is introduced. Systems of differential equations obtained by a data-driven algorithm are taken as the desired models; the problem of predicting the state of the process is then solved by integrating the resulting differential equations. In contrast to classical data-driven approaches to representing dynamical systems, based on general machine learning methods, the proposed approach follows principles comparable to analytical equation-based modeling. Models in the form of systems of differential equations, composed as combinations of elementary functions and operations, are determined, together with their structure, by an adapted multi-objective evolutionary optimization algorithm. Time series describing the state of each element of the dynamical system are used as input data for the algorithm. To ensure the correct operation of the algorithm on data characterizing real-world processes, noise reduction mechanisms are introduced into the algorithm. The use of multicriteria optimization, performed in the space of complexity and quality criteria for the individual equations of the differential equation system, improves the diversity of the proposed candidate solutions and, therefore, the convergence of the algorithm to a model that best represents the dynamics of the process. The output of the algorithm is a set of Pareto-optimal solutions of the optimization problem where each individual of the set corresponds to one system of differential equations. In the course of the work, a library for data-driven modeling of dynamical systems based on differential equation systems was created. The behavior of the algorithm was studied on a synthetic validation dataset describing the state of the predator-prey dynamical system given by the Lotka-Volterra equations.
Finally, a toolset based on the solution of the generated equations was integrated into the algorithm for predicting future system states. The method is applicable to data-driven modeling of arbitrary dynamical systems (e.g. hydrometeorological systems) in cases where the processes can be described using differential equations. Models generated by the algorithm can be used as components of more complex composite models, or in an ensemble of methods as an interpretable component.
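The Lotka-Volterra validation system mentioned above, and the prediction-by-integration step, can be reproduced with a classical fixed-step integrator; the parameter values and initial state below are illustrative, not the paper's.

```python
import numpy as np

def lotka_volterra(state, a=1.0, b=0.4, c=1.0, d=0.1):
    """Predator-prey right-hand side: x' = ax - bxy, y' = -cy + dxy."""
    x, y = state
    return np.array([a * x - b * x * y, -c * y + d * x * y])

def rk4(f, state, dt, steps):
    """Classic fourth-order Runge-Kutta integration of x' = f(x)."""
    traj = [np.asarray(state, float)]
    for _ in range(steps):
        s = traj[-1]
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        traj.append(s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

traj = rk4(lotka_volterra, [10.0, 5.0], dt=0.01, steps=2000)
print(f"prey range: {traj[:, 0].min():.2f}..{traj[:, 0].max():.2f}")
```

A data-driven algorithm of the kind described would be handed only a trajectory like `traj` (possibly noised) and asked to rediscover the right-hand side; integrating the recovered equations then yields the state forecast.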
Assessing time series predictability is necessary for validating forecasting models, for classifying series to optimize the choice of the model and its parameters, and for analyzing the results. The difficulty in assessing predictability stems from the large heteroscedasticity of the errors obtained when predicting series of different nature and characteristics. In this work, the internal predictability of predictive modeling objects is investigated. Using time series forecasting as an example, we explore the possibility of quantifying internal predictability in terms of the probability (frequency) of obtaining a forecast with an error greater than a certain level, and we try to relate such a measure to the characteristics of the time series themselves. The idea of the proposed method is to estimate the internal predictability by the probability of an error exceeding a predetermined threshold value. The studies were carried out on data from open sources containing more than seven thousand time series of stock market prices. We compare the probabilities of errors that exceed the allowable value (miss probabilities) for the same series on different forecasting models. We show that these probabilities differ insignificantly between forecasting models for the same series, and hence the probability can serve as a measure of predictability. We also show the relationship of the miss probability values with entropy, the Hurst exponent, and other characteristics of the series by which predictability can be estimated. It has been established that the resulting measure makes it possible to compare the predictability of time series with pronounced heteroscedasticity of forecast errors and across different models. The measure is related to the characteristics of the time series and is interpretable. The results can be generalized to any objects of predictive modeling and forecasting quality scores.
They can be useful to developers of predictive modeling algorithms and to machine learning specialists solving practical forecasting problems.
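The miss-probability measure can be sketched as follows; the naive forecasters and the synthetic random-walk series are stand-ins for the paper's models and stock data, and the 1 % threshold is illustrative.

```python
import random

def miss_probability(series, forecast_fn, threshold=0.01):
    """Fraction of one-step forecasts whose relative error exceeds
    `threshold`: the predictability measure discussed above."""
    misses = total = 0
    for k in range(2, len(series)):
        pred = forecast_fn(series[:k])
        rel_err = abs(series[k] - pred) / abs(series[k])
        misses += rel_err > threshold
        total += 1
    return misses / total

random.seed(1)
prices = [100.0]
for _ in range(400):                        # synthetic random-walk "price"
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

last_value = lambda h: h[-1]                # naive model 1: last value
drift = lambda h: h[-1] + (h[-1] - h[-2])   # naive model 2: last value + trend

p1 = miss_probability(prices, last_value)
p2 = miss_probability(prices, drift)
print(f"miss probability: last-value {p1:.3f}, drift {p2:.3f}")
```

Because the measure is a frequency rather than a mean error, it is insensitive to a few huge residuals, which is what makes it usable on series with strongly heteroscedastic forecast errors.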
Software framework for hyperparameters optimization of models with additive regularization
Maria A. Khodorchenko, Nikolay A. Butakov, Denis A. Nasonov, Mikhail Yu. Firulik
The processing of unstructured data, such as natural language texts, is one of the urgent tasks in the development of intelligent products. In turn, topic modeling, as a method of working with unlabeled and partially labeled text data, is a natural choice for analyzing document corpora and creating vector representations. It is therefore especially important to train high-quality topic models in a short time, which is possible with the help of the proposed framework. The developed framework implements an evolutionary approach to optimizing the hyperparameters of models with additive regularization and achieves high results on quality metrics (coherence, NPMI). To reduce the computational time, a mode of working with surrogate models is presented which accelerates the calculations by up to 1.8 times without loss of quality. The effectiveness of the framework is demonstrated on three datasets with different statistical characteristics. The results obtained exceed similar solutions by an average of 20 % in coherence and 5 % in classification quality for two of the three datasets. A distributed version of the framework has been developed for conducting experimental studies of topic models. Thanks to the default data processing pipeline, the framework can be used by users without special knowledge in the field of topic modeling. The results of the work can be used by researchers to analyze topic models and to extend the functionality.
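The NPMI statistic named among the quality metrics can be computed directly from document co-occurrence counts; a minimal sketch on toy documents (real coherence scores average NPMI over the top words of each topic):

```python
import math

def npmi(w1, w2, docs):
    """Normalized pointwise mutual information of two words over a set
    of documents: log(p12 / (p1*p2)) / (-log p12), in [-1, 1]."""
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum(w1 in d and w2 in d for d in docs) / n
    if p12 == 0:
        return -1.0                     # the words never co-occur
    return math.log(p12 / (p1 * p2)) / (-math.log(p12))

docs = [set(d.split()) for d in (
    "topic model text corpus",
    "topic model inference",
    "neural text generation",
    "stock market prices",
)]
print(f"NPMI(topic, model)  = {npmi('topic', 'model', docs):.3f}")
print(f"NPMI(topic, market) = {npmi('topic', 'market', docs):.3f}")
```

Words that always appear together score +1 and words that never co-occur score -1, so topics whose top words co-occur often in the corpus receive high coherence.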
Value-based modeling of economic decision making in conditions of unsteady environment
Valentina Yu. Guleva, Anton N. Kovantsev, A. G. Surkov, Petr V. Chunaev, Galina V. Gornova, Alexander V. Boukhanovsky
A changing environment creates the conditions for changes in people’s behavior which together can lead to crisis situations. In the case of economic decision-making, the emerging non-stationarity of the dynamics of different components of the system can represent an economic crisis. The possibility of taking universal human values into account when modeling decision-making under conditions of a changing environment has been studied. The internal processes of an agent prior to making a decision are reflected in the Beliefs-Desires-Intentions-Actions-Reactions (BDIAR) concept. The article reviews the regularities and existing approaches to modeling economic decisions and proposes a new approach developed by the authors. The mechanisms of the influence of stress on decision-making, factors limiting rationality, and risk assessment in the context of behavioral economics are revealed. Well-known theories of the influence of values on the change in the structure of consumption in a crisis situation are considered, and ways of taking emotions into account in the agent architecture are shown. In the proposed model, values are considered as a social factor in decision-making; due to their subjectivity, they are represented mathematically as the basis for assessing environmental objects. The subjectivity of the assessments of the objects of choice is reflected in the attractiveness functions of the objects, in the functions of the agent’s state dynamics, and in the subjectivity of the decision’s influence on the agent’s satisfaction. A possible modification of the components of the agent model is shown, taking into account the influence of values on the dynamics of consumption. A method is proposed for taking values into account in the BDIAR architecture when modeling an agent’s decision-making, where the levels of the architecture correspond to values, preference functions, and functions of the agent’s state dynamics.
The pseudonymized transactional data on debit cards of the partner bank customers were analyzed separately for 2017–2019 and 2020. The subjectivity of the environment influence in a crisis situation on the dynamics of changes in values and needs for different consumer groups is demonstrated, taking into account the clustering of their behavior types. Differences in the dynamics of values at the individual level and the level of groups are shown; an increase in the priority of survival values and a different rate of return to the pre-crisis state for different behavioral groups are also shown. The results obtained are useful for developing methods for modeling economic behavior in a non-stationary external environment, in particular, in the case of crises.
Methodology for organizing and conducting a study to assess consumer ability
S. A. Aleinikov, Olga O. Gofman, Oleg O. Basov
A technique is proposed for determining the characteristics of user behavior (needs, purchasing activity and cost structure) depending on the change and catastrophization of the situation. To perform semi-natural studies (surveys in game form) in the field of behavioral economics, a test stand was created based on the information technology of digital avatar assistants. The stand makes it possible to simulate real-life situations during which, every in-game week, participants need to make purchases for the next week, taking into account their own ideas about the situation, which is modeled by presenting messages to participants in the format of a mobile application news feed. After receiving the game information (stimulus material), participants must answer questions about their subjective attitude to the stability of the current game situation: about well-being, interest in each specific news item, the tendency toward impulsive purchases in a stressful situation, consumer activity, and possible in-game plans for the future. The use of a mobile application based on the information technology of digital avatar assistants is proposed for collecting and analyzing user data. A methodology for organizing and conducting a study to assess consumer ability has been developed. The methodology includes the following steps: preparation and collection of preliminary (pre-game) information about the target audience; conducting the research with the provision of stimulus material (information content) describing the situation in the environment (world, country, city, etc.); and providing options for the user’s further actions as a consumer. The developed methodology makes it possible to identify the features of consumer behavior and the structure of consumption in the general sample of users in the context of event dynamics.
It becomes possible to determine the strategy of consumer behavior and the mistakes that participants make during the game.
Automated cluster analysis of communication strategies of educational telegram channels
Boris A. Nizomutdinov, Anna B. Uglova, I. M. Bogdanovskaya
The issues of monitoring educational communication and analyzing communication strategies and tactics in the presentation of educational materials have been little studied. The study of the main content topics of publications on channels with different popularity ratings among users can be considered one of the stages of developing tools for analyzing educational communication in Telegram. In this paper, the didactic design of a virtual educational channel is studied using Telegram as an example. Its communicative orientation and the strategies and tactics of interaction used by teachers to achieve high results for their students and to increase audience engagement are studied. Using machine learning methods on the existing set of publications of educational Telegram channels, the text array was divided into clusters for further expert analysis and determination of approximate topics. For this purpose, the PolyAnalyst data analysis platform developed by Megaputer Intelligence was used. The platform clusters documents using the k-means method and supports the stages of the data analysis process from data loading and processing to advanced text and data analysis, as well as the creation of custom reports. The thematic structure of the content of educational Telegram channels with high and low ratings and statistical information on the didactic content of educational resources are presented. It is shown that highly rated educational Telegram channels implement a semantic strategy of integrating educational and career routes. Educational Telegram channels with a low rating implement a communicative strategy aimed at providing highly specialized, logically disconnected reference, commercial, or entertainment information. One of the signs of communicative tactics on low-rated channels is the use of manipulative techniques that influence the opinion of the audience.
These include the tactic of indirect persuasion, the tactic of actualizing the motive of financial gain, and the tactic of filling information “gaps”. The results obtained can be used in the development of tools for the analysis and monitoring of educational communication on the Internet. The methodology of automated cluster analysis of the communicative strategies of educational Telegram channels may be of use to a wide range of specialists in the field of education management, content developers of educational Internet channels, marketers, and teachers working in a virtual environment.
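The clustering step described above uses k-means over document vectors. PolyAnalyst is a proprietary platform, so as an illustration only, here is a minimal pure-Python sketch of bag-of-words k-means on toy posts (the posts, vocabulary handling, and the deterministic evenly spaced initialization are all assumptions for the example, not the platform's algorithm):

```python
def vectorize(texts):
    """Bag-of-words vectors over the joint vocabulary."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    return [[t.lower().split().count(w) for w in vocab] for t in texts]

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic, evenly spaced initial centroids."""
    step = max(1, len(points) // k)
    centroids = [list(points[i * step]) for i in range(k)]
    labels = []
    for _ in range(iters):
        labels, clusters = [], [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            j = dists.index(min(dists))
            labels.append(j)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # recompute centroid as the cluster mean
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return labels

posts = [
    "exam preparation tips",           # invented toy posts
    "exam schedule and preparation",
    "discount promo code sale",
    "sale discount offer",
]
labels = kmeans(vectorize(posts), k=2)
```

On this toy data the two "educational" posts and the two "commercial" posts fall into separate clusters, mimicking the thematic split the study performs at scale.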


A significant part of the research on the effectiveness of various systems is devoted to the study of their functioning in a stationary mode. However, from the point of view of practical application, it is of interest to study the functioning of such systems under varying workload intensity in transient, non-stationary modes of operation. Unlike models for studying non-stationary systems that are essentially based on static values of distributions, this paper proposes a model using arbitrary probability distributions over time. The mathematical formalization of the model is based not on the classical differential model in the time domain, but on a formal representation of the probabilities of the system states in the Laplace transform, i.e., in the complex domain. Determining the values of the probabilities of the system states is based on the principle of balance of “complex probabilities” which allows developing models of non-stationary queuing systems with arbitrary probability distributions of request arrival and service times, taking into account random or deterministic time delays. For the operational calculation of such systems, it is proposed to use the developed application with a graphical user interface. The architecture of this application is presented in the form of a package diagram, and its algorithm is shown. The operation of the application was compared with the MATLAB and Mathcad technical computing programs when modeling the functioning of a standard unit of quantity and a robot control system. The advantages of using the developed application are given. The presented results can be applied by specialists involved in research on the effectiveness of various systems.
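For contrast with the Laplace-domain approach proposed in the paper, the classical time-domain treatment it departs from can be sketched by directly integrating the Kolmogorov forward equations of a simple queue; a minimal sketch for an M/M/1 queue with finite capacity, starting empty (all numeric parameters are illustrative assumptions, and this is the baseline method, not the paper's):

```python
def mm1k_transient(lam, mu, K, t_end, dt=1e-3):
    """Kolmogorov forward equations of an M/M/1 queue with capacity K,
    integrated by explicit Euler; the system starts empty."""
    p = [1.0] + [0.0] * K          # p[n] = probability of n requests
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (K + 1)
        for n in range(K + 1):
            if n < K:              # arrival: n -> n + 1
                dp[n] -= lam * p[n]
                dp[n + 1] += lam * p[n]
            if n > 0:              # service completion: n -> n - 1
                dp[n] -= mu * p[n]
                dp[n - 1] += mu * p[n]
        p = [pi + dt * d for pi, d in zip(p, dp)]
    return p

# by t = 60 the transient has essentially decayed for these rates
p = mm1k_transient(lam=0.5, mu=1.0, K=10, t_end=60.0)
```

For these rates the distribution relaxes toward the well-known near-geometric steady state with an empty-system probability close to 0.5, which makes the transient behavior easy to check against theory.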
The contradiction in the concept of “solution uniqueness” arises when determining weight coefficients in multicriteria problems for the same initial data in order to assess the criteria importance based on existing qualitative and quantitative approaches. This leads to a decrease in the degree of confidence in the decisions made. Thus, it is required to determine the objectivity degree of the weighting coefficients used. The objectification of the weight coefficients for decision-making problems is the purpose of the study. The article proposes a combination of qualitative and quantitative approaches to determine weight coefficients with a given consistency. The weight coefficients matrix is formed (quantitative approach). This matrix is correlated with the rank matrix (qualitative approach). The optimization problem is solved to obtain a given consistency coefficient using the rank matrix. The application of the proposed method is demonstrated by the example of solving the problem of choosing the best alternative in a multicriteria problem. The calculation of the weight coefficients is carried out using software developed in Python. The solution is reduced to a single-objective problem based on the maximin approach using the found weight coefficients. Thus, solving the problem with a given consistency ensures the objectivity of the result and increases confidence in the decision. The proposed method can be used in assessing the criteria importance without the need for the participation of the decision maker.
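Once the weight coefficients are fixed, the final maximin selection step mentioned above is straightforward; a minimal sketch (the alternatives, normalized criterion values, and weights are invented for illustration and are not from the article):

```python
def maximin_choice(alternatives, weights):
    """Select the alternative whose worst weighted criterion is best.
    Criterion values are assumed normalized to [0, 1], larger = better."""
    scores = [min(w * c for w, c in zip(weights, alt)) for alt in alternatives]
    best = scores.index(max(scores))
    return best, scores[best]

alternatives = [           # invented normalized criterion values
    [0.9, 0.4, 0.7],
    [0.6, 0.6, 0.6],
    [0.8, 0.5, 0.3],
]
weights = [0.2, 0.5, 0.3]  # e.g. produced by the consistency optimization
best, score = maximin_choice(alternatives, weights)
```

The maximin rule is deliberately conservative: it ranks each alternative by its weakest weighted criterion, so the chosen alternative has no badly failing criterion.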
The development of near-field magnetic systems and means of transmitting messages through media that significantly absorb the electromagnetic field is one of the topical areas of research in the field of wireless communication. Such lossy media include water, soil, and buildings. The attenuation of the magnetic field in conducting media increases with increasing frequency. To organize communication channels through a conductive medium such as sea water, electromagnetic radiation of extremely low and ultra-low frequencies, from 3 Hz to 300 Hz, is used. Traditional communication methods based on electromagnetic radiation in these frequency ranges require large transmitting and receiving antennas. The near-field communication method makes it possible to significantly reduce both the dimensions of the receiving and emitting antennas and the power consumption of the transmitter. A significant limitation of near-field long-wave communication is the low bandwidth and the small communication range of up to tens of meters. The operating principle of the proposed communication system is based on the use of the magnetic component of the electromagnetic field. The transmitting element in the proposed system is a solenoid with a magnetic core. The receiving magnetic field sensor is a magnet fixed on a torsion suspension. The magnet is combined with a mirror reflecting a laser beam. Rotation of the magnet under the action of an external magnetic field changes the angle of reflection of the laser beam from the mirror surface of the magnet. The reflected signal is recorded by a linear photodetector. The attenuation of the magnetic field during the transmission of radiation from a dielectric to a conducting medium was evaluated by solving Maxwell’s equations. Three-position binary phase shift keying and a modified three-position binary phase shift keying were developed and substantiated.
The proposed solutions provide an opposite arrangement of signal symbols, high message information density, localization of the emitted signal energy in the low-frequency region, and an increase in communication range. Experiments have shown that the modified keying increases the communication range by 10 % at the same message delivery reliability compared with standard three-position binary phase shift keying. The estimates of the attenuation of the magnetic field during propagation in layered media obtained from the simulation are confirmed by experimental measurements. The results of the research can be used in solving problems of local deployment of secure near-field communication systems through media that absorb the electromagnetic field.
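The strong frequency dependence of attenuation noted above follows from the skin depth of a conducting medium; a small sketch for sea water over the 3–300 Hz range discussed in the abstract (the conductivity value is a typical textbook assumption, not a figure from the paper):

```python
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)) of a good conductor."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

SIGMA_SEAWATER = 4.0              # S/m, a commonly assumed value
depths = {f: skin_depth(f, SIGMA_SEAWATER) for f in (3, 30, 300)}
```

The field amplitude falls by a factor of e over each skin depth, so the roughly 145 m depth at 3 Hz versus roughly 15 m at 300 Hz illustrates why extremely low frequencies are preferred for through-water links.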
The problem of creating propulsive airfoils is considered. Such airfoils have a slit through which the boundary layer is sucked out. A specially profiled section of the airfoil located just behind this slit creates propulsive thrust. The thrust is created due to an abrupt change in the pressure profile at the slit through which the boundary layer is sucked. Over the last 15–20 years, the concept of a so-called propulsive wing with reduced or zero aerodynamic drag due to suction of the boundary layer from its upper surface has been actively studied worldwide. Such a wing makes it possible to reduce the aerodynamic drag of an aircraft by several times through boundary layer laminarization and by minimizing the velocity defect, associated with viscous friction in the boundary layer, in the wake of the aircraft. The paper proposes a method for numerical modeling of airfoils for a propulsive wing constructed by solving the inverse problem of aerodynamics. The designed airfoils have a maximum construction height and an optimal combination of the lift coefficient Cl and the thrust coefficient CT created by air suction from the wing surface. The developed technique correctly predicts the point of the laminar-turbulent transition, since the characteristics of the airfoils directly depend on the length of the laminar section. The layout of an aircraft built according to the scheme of a propulsive flying wing of ultra-small aspect ratio using the developed airfoils has been studied. The design of the aerodynamic profiles was carried out by solving the inverse problem of aerodynamics with subsequent refinement of the geometry using global optimization algorithms. Calculations were carried out using the Langtry−Menter γ-ReΘ Transition Shear Stress Transport turbulence model which includes relations for the intermittency criterion and makes it possible to simulate the laminar-turbulent transition.
Calculations have shown that the developed airfoils make it possible to create an aircraft airframe with a maximum lift coefficient Clmax which exceeds the Clmax of a mechanized wing with a flap extended during takeoff and landing. In horizontal flight, the Cl is three times larger than that of a typical wing. The wing with the developed profiles has a high propulsive efficiency due to the proximity of pressure and velocity in the thrust section of the airfoils and the external flow. At the same time, the thrust surface of the propulsive wing exceeds the nozzle area or the total disk area of aircraft propellers by several times. The developed airfoils and the integrated aerodynamic layout of the aircraft combine well with the principles of building a distributed power plant, and make it possible to combine immunity to increased atmospheric turbulence during vertical takeoff and landing with economical horizontal flight. The airfoils have an important advantage over traditional wing mechanization because they have no moving parts, and the increase or decrease in lift is regulated by changing the flow rate of the sucked air.
Using variable-precision feedback to improve operational speed of the current loop in GaN-inverters
Alecksey S. Anuchin, Maria A. Gulyaeva, A. E. Lashkevich, Alexandr A. Zharkov
With the advent of wide band-gap semiconductors like SiC and GaN, the frequency of pulse-width modulation has increased. In modern electric drives, the switching frequency can reach 100 kHz or more. In this case, the performance of the drive is limited by the delay of the current feedback measurement. This delay can be changed by using delta-sigma modulators. This type of current sensor allows setting the measurement time. However, as the measurement time decreases, the accuracy of the feedback decreases. This paper proposes an algorithm in which the current controller uses variable-precision feedback. When the error between the reference and the feedback is large, the controller uses faster but less accurate current feedback. When the error is small, it uses slower but more accurate feedback. Changing the feedback measurement time requires changing the current controller gains. The algorithm was investigated on a virtual servo drive model. To evaluate the performance of the proposed regulator, the results were compared with standard regulators with different settings. It was proved that this approach allows increasing the speed of the current loop without loss of transient performance. Besides, the algorithm increases the cut-off frequency in comparison with the standard slow and accurate controller.
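The gain-switching idea can be illustrated with a toy PI current loop on an RL load, where a high gain stands in for the fast, low-precision feedback path while the error is large (all plant and controller parameters are assumptions for the sketch; feedback delay, measurement noise, and switching hysteresis are omitted for brevity, so this is not the paper's drive model):

```python
def run(switch=None, ref=10.0, kp_slow=2.0, kp_fast=20.0, ki=5000.0,
        dt=1e-6, t_end=5e-3):
    """PI current loop on an RL load (R = 1 Ohm, L = 1 mH), explicit Euler.
    With `switch` set, the high gain is used while |error| > switch."""
    R, L = 1.0, 1e-3
    i, integ, t90 = 0.0, 0.0, None
    for k in range(int(t_end / dt)):
        e = ref - i
        kp = kp_fast if (switch is not None and abs(e) > switch) else kp_slow
        integ += ki * e * dt          # integral term
        u = kp * e + integ            # PI control voltage
        i += dt * (u - R * i) / L     # RL load dynamics
        if t90 is None and i >= 0.9 * ref:
            t90 = k * dt              # time to reach 90 % of the reference
    return t90, i

t90_slow, i_slow = run(switch=None)  # fixed slow, accurate gain
t90_var, i_var = run(switch=1.0)     # high gain while error > 10 % of ref
```

Both loops settle at the reference thanks to the integral term, but the gain-switching loop reaches 90 % of the reference considerably sooner, mirroring the speed-up the abstract reports for the variable-precision controller.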
Simulation of diffusion processes during electrothermal treatment of reaction crucibles of the Fe-Sn system
Vladislav E. Fomin, Anastasiia S. Tukmakova, Gennady A. Bolkunov, Anna V. Novotelnova, Fedor Yu. Bochkanov, Dmitry Yu. Karpenkov
The regularities of diffusion processes in reaction crucibles of the iron-tin system during electrothermal treatment were studied by numerical simulation. The effect of current density and temperature on the processes of heat and mass transfer in the reaction zone has been studied. Numerical simulation was performed by the finite element method. The developed model includes the mechanical, thermal, electrical, and chemical processes during electrothermal treatment of the iron-tin system in the reaction crucible, taking into account the distribution of components under various processing conditions of the reaction crucible. A comparative analysis of the calculated data on the diffusion of tin into iron under conditions of long-term exposure to high temperatures without an applied electric voltage and when the reaction zone is heated by passing a high-density electric current is performed. A picture of the distribution of mass fractions of the components depending on the type of impact is obtained. The penetration depth of the interacting components was determined, and the intensity of the mass transfer processes was assessed. The regularities of heat and mass transfer in the iron-tin system with changes in the initial process parameters were established. The model was verified by comparing the simulation results with data from full-scale experiments on control samples. The research results can be used to predict the conditions for obtaining new functional materials.
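In its simplest one-dimensional form, the mass-transfer part of such a model reduces to Fick's second law; a minimal explicit finite-difference sketch of tin penetrating into iron (the diffusion coefficient, grid, and boundary conditions are illustrative assumptions, not the paper's coupled multiphysics model):

```python
def diffuse_1d(D, dx, dt, steps, n=60):
    """Explicit finite differences for Fick's second law, c_t = D * c_xx.
    Left boundary: fixed c = 1 (the tin source); right boundary: zero flux."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit-scheme stability limit violated"
    c = [1.0] + [0.0] * (n - 1)
    for _ in range(steps):
        new = c[:]                 # c[0] stays pinned at 1
        for j in range(1, n - 1):
            new[j] = c[j] + r * (c[j + 1] - 2 * c[j] + c[j - 1])
        new[-1] = new[-2]          # zero-flux boundary
        c = new
    return c

# D ~ 1e-14 m^2/s, grid step 0.1 um, 100 time steps of 0.25 s (all assumed)
profile = diffuse_1d(D=1e-14, dx=1e-7, dt=0.25, steps=100)
```

The resulting concentration profile decays monotonically from the source into the bulk, with a penetration depth on the order of the diffusion length 2·sqrt(D·t), which is the quantity such simulations compare against measured depths.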
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.