Summaries of the Issue


The paper presents an analysis and comparison of various methods and algorithms for restoration of the fine structure of spectra smoothed by the instrumental function of a spectrometer and/or containing overlapping close spectral lines. Continuous and discrete spectra are considered. Successful spectrum restoration mathematically enhances the resolution of spectrometers. In the case of a continuous spectrum smoothed by the instrumental function, the restoration problem is reduced to solving integral equations of the first kind. This problem is ill-posed (essentially unstable). Therefore, to obtain a stable solution of the integral equations, Tikhonov regularization, Wiener filtering, the Kalman–Bucy method and other methods are used. However, in the case of close lines overlapping in the spectrum, these methods make it possible to restore only the total spectrum, but not the profiles of each line. To separate line profiles, the desired lines are modeled by Gaussians or Lorentzians; the total spectrum is differentiated using smoothing splines; and the number and parameters of the lines are estimated from the results of differentiation. To refine the line parameters, the discrepancy functional is minimized by the coordinate descent method and, for comparison, by the Nelder–Mead method. A comparison is also made with the Fourier self-deconvolution method, in which the line widths are artificially reduced by apodization (truncation of the interferogram) and, as a result, the true line profiles are distorted for the sake of their resolution. In the original convolution method, the parameters of lines (peaks) are determined from convolutions of the experimental spectrum with derivatives of a model spectrum.
If a discrete spectrum is smoothed by the instrumental function, the problem of spectrum restoration is described by a system of linear–nonlinear equations (SLNE) and solved by the integral approximation algorithm, which is more efficient than the Prony method, the Golub–Mullen–Hegland variable projection method, and other methods. Based on the results of this review of mathematical methods, it is proposed to create a new complex algorithm for the restoration of distorted spectra, which makes it possible to remove the effects of the instrumental function, noise, line overlapping and other distortions. Software in MATLAB has been developed, and a number of spectra have been processed. The stated technique can be used to enhance spectrometer resolution via mathematical and computer processing of spectra.
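The Tikhonov-regularized solution of the first-kind integral equation described above can be sketched numerically. In this illustrative example (the Gaussian kernel width, line positions and regularization parameter are assumptions, not the paper's data), a spectrum of two close lines is smoothed by the instrumental function and then restored:

```python
import numpy as np

# Discretized first-kind integral equation y = K x: the instrumental
# function (a Gaussian here, an assumption) smooths the true spectrum.
n = 200
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.02 ** 2)) * dt

# True spectrum: two close lines that merge after smoothing.
x_true = (np.exp(-((t - 0.47) / 0.012) ** 2)
          + np.exp(-((t - 0.53) / 0.012) ** 2))
y = K @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

# Direct inversion of K is unstable (ill-posed problem). Tikhonov
# regularization instead minimizes ||K x - y||^2 + alpha ||x||^2:
alpha = 1e-6
x_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)
```

The regularized solution tracks the two-line fine structure more closely than the smoothed measurement does, which is the sense in which restoration mathematically enhances resolution.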


COMPUTER SIMULATION OF GAMMA-RAY DETECTOR BASED ON SCINTILLATION CRYSTALS AND SILICON PHOTOMULTIPLIERS Ilya O. Bokatyi, Galina E. Romanova, Victor M. Denisov, Alexander B. Titov, Victoria A. Ryzhova, Andrei V. Radilov
Subject of Research. The paper considers the principles of realization of a gamma-radiation detector based on a silicon photomultiplier and a scintillation crystal with the use of an optical matching scheme. Method. To study possible variants of detector design, computer models were developed in the ZEMAX software environment, describing the propagation of scintillation radiation in the crystal volume with regard to the main processes taking place in the scintillation detector. The model has the same optical characteristics as cesium iodide (CsI). Main Results. Quantitative parameters of the signal and radiation losses in the modeled systems were obtained. Information on the radiation distribution in the photodetector plane was obtained as well. The optimal scheme for detector design from the point of view of registration efficiency was established, and its geometric parameters were determined. Practical Relevance. The developed approach makes it possible to solve the problem of creating highly efficient and miniature scintillation detectors on the basis of a new class of photodetectors, silicon photomultipliers. The results of the research will be useful in the development of scintillation gamma spectrometers and other devices with operating principles based on the methods of scintillation spectrometry and radiometry.
Subject of Research. The paper describes the research results of an acoustic signal recorded by a hydrophone during exposure of a liquid to microsecond pulses of laser radiation with a wavelength of 1.54 μm and different time substructures. We discuss the influence of the energy and time substructure of the laser pulse on the magnitude of the generated pressure drops in the liquid and on the removal efficiency of cataract eye lens tissues. Method. Microsecond pulses of ytterbium-erbium glass laser radiation with different peak powers of the "leading" spike and equivalent energy were delivered into a volume of distilled water through an optical fiber. The acoustic signal was registered with an "NP 10-1" needle hydrophone (Dapco Inc., USA). In vitro hydroacoustic treatment of a cataract human eye lens was performed. Main Results. We obtained the dependences of the amplitude of the first (thermo-optical) and the second (associated with the "collapse-rebound" process of a steam-gas cavity) components of the acoustic signal on the pulse energy for laser pulses with different time substructures. It was established that with an increase in the peak power of the "leading" spike of the microsecond pulse, the threshold for the appearance of the second component decreases, and the maximum amplitude of both components increases. The angular distributions of the amplitude of the acoustic signal components were obtained. It was found that the first component has a pronounced maximum amplitude in the direction perpendicular to the optical axis of the fiber, whereas the angular distribution of the second component is more uniform. In the in vitro experiment, it was shown that an increase in the peak power of the "leading" spike results in a significant increase in the removed volume and removal efficiency of the human cataract eye lens. Practical Relevance.
The obtained results can be used to optimize the parameters of laser radiation for processing of tissue surrounded by a liquid, for example, during laser cataract extraction.
Subject of Research. The paper presents the study of methods for control of rapid prototyping processes with the use of a technical vision hardware and software system. Product monitoring is a crucial function of any manufacturing process, and it becomes even more important when the monitoring is performed during product manufacturing. Three-dimensional printing technology requires this kind of monitoring system in order to improve the visual quality and durability of the product, optimize material costs and increase the speed of manufacturing. Method. The parameters of the optical system for capturing images during the printing process are defined theoretically. Optical systems providing the necessary image quality are selected. An analysis of the camera placement configuration has been carried out to match the task optimally. The analysis was based on the overall dimensions of the 3D printer, its working area and the free space in the printer case. Ways of solving the software problems were analyzed. Main Results. A mathematical apparatus was developed for calculation of the optical system parameters of a technical vision complex. Different variants of optical systems were selected for efficiency verification of the hardware and software system. Different methods for development of programs and algorithms for processing data from video cameras were considered. Practical Relevance. The development of a hardware and software system that controls the rapid prototyping process has a significant benefit in expanding the possibilities of automating rapid prototyping processes. The results of the work can be useful in quality control of the product during its manufacturing, in detection of deviations from the virtual three-dimensional model, and in development of recommendations for updating control commands in order to improve the quality and increase the speed of product manufacturing.
The paper presents a solution for the problem of restoring the spatial coordinates of a point in a three-dimensional base coordinate system from its stereo images obtained by cameras with independent locations and orientations in space; the camera settings can be different. In analytical photogrammetry such a problem is called direct photogrammetric intersection for the general photographing case. The solution presented in the paper is based on an approach different from the one adopted in analytical photogrammetry. Methodologically, this approach is based on a vector-matrix apparatus applied from the formulation of the problem up to the final solution. Structurally, the result presented in the paper provides the equivalence of the cameras, which have an equal effect on the final result. Similarity transformations are not used in the solution; the result is obtained directly in the base coordinate system. The number of stereo cameras can easily be increased without changing the solution algorithm. These facts distinguish the solution presented in the paper from direct photogrammetric intersection, in which one of the cameras (usually the left one) is the main camera, and the center of the model coordinate system, with a special direction of the axes, is placed in its projection center. Similarity transforms have to be used repeatedly in the calculations to move from one coordinate system to another before the result in the base coordinate system is obtained. In the presence of errors in determining the coordinates of the corresponding points on the images, the method presented in the paper has greater average accuracy than direct photogrammetric intersection. In the performed experiments, this advantage in the accuracy of determining the spatial coordinates of points turned out to be more than 20 %.
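A common vector-matrix formulation of such a multi-camera intersection (shown here only as an illustrative sketch with made-up camera data, not the paper's exact derivation) treats each camera ray as two linear constraints and solves the joint system by least squares, so all cameras enter symmetrically and adding a camera just appends rows:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Least-squares intersection of camera rays in the base frame.

    proj_mats: list of 3x4 projection matrices P (one per camera).
    pixels: list of (u, v) image coordinates of the same point.
    No camera is singled out as the 'main' one; each contributes
    two rows to a single homogeneous system A X = 0.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    # Homogeneous least-squares solution: right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras observing a known point (all numbers made up).
X_true = np.array([0.3, -0.2, 5.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted camera

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

With noise-free pixel coordinates the point is recovered exactly; with noisy correspondences the same system returns the least-squares compromise between all rays.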
The paper considers the problem of designing infrared athermalized lenses from a restricted list of materials. We analyzed the thermo-optical properties of materials working in the long-wave infrared range, and the analysis results are presented. The thermo-optical properties of diffractive optical elements (DOE) are analyzed. It is established that the usage of diffractive elements can be a possible solution to the problem of infrared lens athermalization under the conditions of a restricted list of materials. A set of equations is developed for the dimensional calculation of infrared lenses with passive athermalization by DOE application. We studied the effect of a secondary spectrum on the modulation transfer function of an optical system for combinations of optical materials with a DOE. The research results are given. An example of the calculation of an infrared athermalized lens with a DOE is shown.
Maksim V. Mikheev, Ivan G. Deyneka, Mikhail Yu. Plotnikov, Artem S. Aleynik, Philipp A. Shuklin
Subject of Research. The problem of synchronization in arrays of distributed fiber-optic hydroacoustic sensors is considered. It is shown that the noise floor level is one of the most important factors affecting the operation of the sensors. The maximum allowable level of phase noise arising from the operation of the synchronization system is determined. The main existing synchronization methods are considered, and their influence on the phase noise level is estimated. Method. The signal resampling method was used as the approach to the signal synchronization task. Mathematical modeling of this method was performed in the MATLAB environment. It was shown that the addition of samples to the studied signal leads to a significant increase in phase distortion. Main Results. The impact of clock frequency instability on the signal skew in the absence of a synchronization system is numerically estimated. In the case of a ±20 ppm generator clock frequency deviation, the skew reaches one second after 7 hours of work. It is shown that when 8 samples per second are added to the synchronized signal, spectral distortions reach the order of 100 μrad/√Hz. A hardware synchronization method is proposed that makes it possible to increase the synchronization accuracy without distortion of the spectral and phase characteristics of the signal. The method is realized by adjusting the local clock frequency generator using a feedback signal. Practical Relevance. The paper proposes two synchronization methods that allow application of the Ethernet interface according to the IEEE 802.3 standard for the implementation of distributed sensor system synchronization. The paper presents an analytical and experimental evaluation of the phase jitter value between different channels of the measuring system. These methods can be used in other distributed systems where synchronization of nodes while maintaining scalability and flexibility of the entire system is an urgent task.
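The quoted skew figure follows from simple arithmetic. One reading of it (an assumption on our part: two free-running nodes drifting in opposite directions, so their ±20 ppm tolerances add) gives roughly one second over 7 hours:

```python
# Worst-case relative drift between two free-running clocks, each with
# a +/-20 ppm frequency tolerance and no synchronization system.
ppm = 20e-6
relative_drift = 2 * ppm          # opposite-sign deviations add up
seconds = 7 * 3600                # 7 hours of operation
skew = relative_drift * seconds   # accumulated skew, in seconds
# skew = 1.008 s, consistent with the ~1 second figure after 7 hours
```

The same arithmetic shows why hardware frequency steering is attractive: keeping the residual frequency error below ~0.1 ppm bounds the skew to milliseconds per hour without inserting samples.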


The paper considers the problem of frequency identification for a biased sinusoidal signal in the absence of measurement noise. It is assumed that the bias and amplitude of the sinusoidal signal are unknown functions of time. It is accepted that the frequency of the sinusoidal signal is an unknown number, and the bias and amplitude of the sinusoidal signal can be represented as piecewise linear on the time interval. To estimate the frequency of the sinusoidal signal, an original parametrization procedure was proposed, reducing the original nonlinear equation to the form of a standard linear regression model. After a number of special transformations, the simplest equation was obtained, containing one unknown parameter (the square of the sinusoidal signal frequency) multiplied by a known time function. To find this parameter, we used the standard integral identification algorithm, which makes it possible to guarantee the robustness of the estimates to external disturbances and also to improve the quality of transients by means of the tuning coefficient. The proposed frequency identification algorithm is technically attractive and can be used in problems of compensation or suppression of disturbances and/or measurement errors described by harmonic or polyharmonic signals, including the compensation of vertical inertial accelerations in estimating gravity anomalies on a mobile object. To illustrate the efficiency of the proposed identification algorithm, the paper presents the results of computer modeling demonstrating the achievement of the target goals.
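A toy illustration (not the paper's algorithm) shows how such a parametrization can reduce frequency estimation to linear regression: for y(t) = b + A sin(ωt) with constant bias, the identity ÿ = −ω²(y − b) makes ω² a single regression coefficient:

```python
import numpy as np

# Toy biased sinusoid with unknown frequency (all values are assumptions).
omega, b, A = 3.0, 1.5, 0.8
t = np.linspace(0.0, 10.0, 5000)
y = b + A * np.sin(omega * t)

# For constant bias: y'' = -omega^2 * y + omega^2 * b,
# i.e. a linear regression of y'' on y with an intercept.
dt = t[1] - t[0]
ypp = np.gradient(np.gradient(y, dt), dt)   # finite-difference y''
X = np.column_stack([y, np.ones_like(y)])
coef, *_ = np.linalg.lstsq(X, ypp, rcond=None)
omega_est = np.sqrt(-coef[0])               # slope equals -omega^2
```

In the paper's setting the differentiation is replaced by integral filtering, which is what makes the estimate robust to disturbances; this sketch only demonstrates the linear-in-ω² structure.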


The paper describes the preparation of polyurethane composites infused with nano- and macro-sized carbonaceous fillers with a different surface nature (with a hydrophobic surface: fullerene C60 and fullerene soot; with a hydrophilic surface: nano-diamonds and nano-diamond charge), with loading varying from 0.1 to 0.5 wt. %, by in situ polymerization. The obtained nano-composites were measured by the method of dielectric spectroscopy to determine the nature of the influence of the surface origin and particle size on the structure and properties of the finished material. It was found that loading of fillers leads to a decrease in the activation energy of the α-relaxation process compared to neat polyurethane (PU). It was revealed that for nanosized fillers the non-specific π-π interaction dominates over specific H-bonding, which can be related to the oxygen groups on the shells of nano-diamonds. Dielectric spectroscopy demonstrated that the glass transition temperature values of the nano-composites increase in comparison with neat PU, manifesting the so-called "antiplasticizing phenomenon", while composites with the macro-sized filler exhibit the plasticizing effect typical of traditional fillers. The greatest value of the D parameter (fragility) corresponds to the sample with fullerene soot. The coincidence of the activation energies of Maxwell-Wagner-Sillars polarization for the different fillers means that the dimensions of the hard domains in the polymer have not changed.
A.V. Baranenko, Pavel A. Kuznetsov, Victoria Yu. Zakharova, Alexandr P. Tsoy
This publication is devoted to the creation of energy-efficient systems for cold and heat supply using thermal energy accumulators. Thermal energy accumulation increases the efficiency of heat power systems, including cooling and air conditioning systems, and reduces peak power consumption and the capacities of thermal installations at variable loads. It is shown that substances with phase transition (SPT) are widely used for thermal energy accumulation. They are mainly of the solid-liquid type, providing a volume and mass density of heat and cold storage that is 5–14 times higher in comparison with accumulating liquids. Requirements to SPT with regard to thermal energy accumulators are formulated. We have given an overview of the SPT recommended for application, to which organic compounds (paraffins, fatty acids), salt hydrates and eutectics (which may include organic and inorganic compounds in their structure) belong. The advantages and disadvantages of each group of substances are shown. Information on the properties of certain SPT in relation to air conditioning systems is presented. It is shown that SPT having industrial applications are concealed under trademarks. It is noted that the creation of Russian-made heat energy accumulation systems requires a complex of fundamental and applied research. We have presented application examples of thermal energy accumulation using SPT in air conditioning systems. The designs of thermal energy accumulators are described, and their advantages and disadvantages are noted. We have carried out an analysis of the calculation methods available in the literature for systems with thermal energy accumulators, including solutions of the Stefan problem on non-stationary heat exchange at phase transitions in relation to thermal energy accumulation. The conclusion is drawn on the advantages of the numerical method for the solution of this problem.
The research directions are formulated whose implementation will allow for developing Russian systems of heat and cold supply with heat energy storage devices.
The paper presents research results of a fire-resistant composite material of the liquid glass–graphite microparticles type. The production technology is considered for samples with the necessary ratios of mass fractions of the mixture components. The method of applying the composite material as a fire-retardant protective coating is chosen. Fire-resistant coatings are made by the encapsulation method, and studies of the adhesive capability of the produced coatings are performed. The values of limit loads that lead to the destruction of the composite material are revealed. The maximum fixed load value for the wooden surface was 1.22 MPa, which meets the requirements of regulatory documents. The strength of the adhesion bond with iron is much less and is equal to 0.2 MPa. Fire-resistant coatings are also manufactured by a second, alternative method: shotcreting. The composition is adjusted in connection with the change of the application method for the fire-retardant composition. Studies of the adhesion ability of these coatings are carried out. The lower boundary value of the adhesion bond of the fire-resistant composite material for wood was 0.8 MPa; the strength of the adhesion bond with iron is much less and is equal to 0.1 MPa. Based on the research results, it is concluded that the composite material with the obtained characteristics can be used as a fire-retardant coating for building structures in order to increase fire resistance and reduce fire danger, as an equipment lining in the heat power and metallurgical industries, as well as in equipment used in emergency situations.


Subject of Research. This paper presents an application of wavelet transformation and bent functions in the creation of non-linear robust codes. The usage of wavelet decompositions gives the possibility to create a large number of different designs of robust codes. Method. To improve the non-linear properties of robust codes, bent functions were used in the construction. Thereby the maximum non-linearity of the functions is ensured, increasing the probability of detecting an error in the data channel. Different designs of codes based on the wavelet transform and bent functions are developed. The difference between the constructions consists in the usage of different grids for the wavelet transformation: a grid with static values, or a grid based on an incoming information word. The existing linear and non-linear codes were analyzed, and their comparison with the developed codes was performed. Main Results. The developed designs are robust codes and have higher characteristics compared to existing designs of robust codes. The maximum probability of error masking for the developed designs is 0.46875. This result is better than that of the existing reliable Kerdock code and enables better protection against side-channel attacks. Practical Relevance. These code designs can be used in tasks of ensuring the security of transmitted information.
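The role of bent functions here can be checked directly: a bent function on n (even) variables has a flat Walsh spectrum of magnitude 2^(n/2), which is exactly what maximizes nonlinearity. A minimal verification for the classic 4-variable bent function f = x1x2 ⊕ x3x4 (a textbook example, not a construction from the paper):

```python
from itertools import product

# Classic bent function on 4 variables: f(x) = x1*x2 XOR x3*x4.
def f(x):
    return (x[0] & x[1]) ^ (x[2] & x[3])

n = 4
points = list(product([0, 1], repeat=n))

# Walsh-Hadamard coefficient W_f(a) = sum over x of (-1)^(f(x) XOR a.x).
def walsh(a):
    return sum(
        (-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) % 2))
        for x in points
    )

spectrum = [abs(walsh(a)) for a in points]
# Bent <=> every |W_f(a)| equals 2^(n/2) = 4;
# nonlinearity = 2^(n-1) - max|W_f|/2 = 8 - 2 = 6, the maximum for n = 4.
nonlinearity = 2 ** (n - 1) - max(spectrum) // 2
```

A flat spectrum means the function is equally far from all affine functions, which is what bounds the error-masking probability of codes built on it.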
Kseniya I. Salakhutdinova, Ilya S. Lebedev, Irina E. Krivtsova
Subject of Research. The paper proposes an approach to the use of the gradient boosted decision trees algorithm. For this purpose, the CatBoost algorithm developed by Yandex is proposed. Its implementation is aimed at solving the problem of OS Linux software identification in order to reduce the number of system vulnerabilities which occur due to the installation of unauthorized software by automated system users. We consider an approach to the formation of program signatures and further training of a CatBoostClassifier classifier model. The subsequent recognition task is set for identified programs that were not previously involved in the model training process. Method. The free CatBoost software was used for implementation of the gradient boosted decision trees algorithm. A CatBoostClassifier multi-classification model was created on its basis. The use of this model allows identifying test-sample ELF files. Main Results. The training parameters of the classification model are selected. An experiment is carried out to identify ELF files with the use of ten different features of the formed program signatures. The results obtained with the new approach are compared with the results of the previously developed identification method based on the application of the statistical chi-square homogeneity criterion at the significance level p = 0.01. Practical Relevance. The results of the study can be recommended to information security specialists for data media audit. The developed approach gives the possibility to identify violations of the established security policy in the processing of confidential information.
The paper considers open computing systems, which provide the necessary growth of performance and memory by mechanical addition of new units without affecting the existing software environment. Such computing systems are based on the application of a special functionally complete element base (planner, functor, communicator) that implements parallel processing using dataflow control, in which the necessary program fragments are transferred along with the data. To this end, when a certain procedure is found ready to start (the planner has received all the data it needs), the corresponding part of the program, the operator, is opened in the planner and then transferred along with the data to a free execution device, the functor. The result is always returned along the same route by which the procedure was activated. The layouts of open systems using two units of design are considered: cells on the reduced element base, and servers assembled from these cells. A two-level distributed switching environment is used. At the cell level, it is provided by the transit properties of the planners and functors, and at the server level by communicators that are part of the cells. Three types of cells are identified, which enable the growth of computing system functions: cells for increasing the number of gateways used to exchange with the external environment, cells for increasing control and working memory, and cells for increasing performance. The rejection of a concentrated switching environment allows these types of computing system extensions to be performed independently and without any restrictions on their size. A three-dimensional structure of open systems is described, which can be used to build supercomputers.
Yuriy V. Kiselev, Anna I. Motienko, Oleg O. Basov, Igor A. Saitov
Subject of Research. The paper considers the distributed terminal system (DTS) as the basis for an intelligent infocommunication system oriented to analyzing the behavior of information space users in the current situation and to their comfortable service. Variants of realization of infocommunication technologies on the basis of various combinations of modalities are presented. Method. To solve the problem of DTS structure synthesis, an alternative graph formalization is used, in which various variants of system element formation are given in the form of the vertices of an alternative graph, and the arcs present the nature of the interrelations between them. To determine the options for the formation of system nodes and their interrelations, and to minimize the instrumentation cost of the distributed terminal system nodes, it is proposed to use an algorithm based on the branch-and-bound scheme and local optimization. Main Results. The structural-functional model of the distributed terminal system presented in the paper is a theoretical construction necessary for the development of scientific and methodological tools for the synthesis of intelligent infocommunication systems. Practical Relevance. The implementation of intelligent infocommunication systems is a promising scientific and technical direction for the development of the national information infrastructure.
Subject of Research. A computer network simulation model with random access to channels and redundant transfer is developed and researched. The efficiency of applying this model to configurations with different redundancy coefficients is defined. The efficiency of redundant transfer in computer networks based on a common bus topology is studied. Method. The efficiency analysis of redundant packet transmissions is carried out on the basis of computer network simulation modeling. The performance index is determined on the basis of a multiplicative criterion, which takes into account error-free transmission and the average time margin relative to the maximum permissible transmission delay. Main Results. A computer network model with a common bus topology is developed. This model gives the possibility to transmit packets via several channels and provides redundant transfer of data. The intensity and the redundancy coefficient were varied while the experiments were carried out. A simulation model of a computer network with a redundant transfer capability is developed. On the basis of the results obtained in the simulation experiments, the domain of application efficiency is defined for redundant transmissions in networks based on random access and limited in average delivery time. Practical Relevance. The presented results can be used in the design of highly reliable computer systems, including computer systems providing real-time services.
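The basic benefit of redundant transfer can be sketched with elementary probability (a simplified model with independent channels, not the paper's simulation, which additionally weighs the delivery-time margin): with per-copy error probability p and k redundant copies, at least one copy arrives error-free with probability 1 − p^k:

```python
# Error-free delivery probability with k redundant copies sent over
# independent channels, each corrupting/losing a packet with probability p.
# Toy model: the paper's multiplicative criterion also accounts for the
# average time margin relative to the maximum permissible delay.
def delivery_prob(p, k):
    return 1 - p ** k

p = 0.1  # assumed per-channel packet error probability
probs = {k: delivery_prob(p, k) for k in (1, 2, 3)}
# A redundancy coefficient of 2 cuts residual loss from 10% to 1%,
# at the cost of extra channel load, which raises delays at high intensity.
```

This trade-off (reliability gain versus added load and delay) is exactly why the efficiency domain for redundant transmissions has to be found by simulation rather than assumed.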
Ivan A. Avdonin, Marina B. Budko, Michail Yu. Budko, Alexei V. Girik, Vladimir A. Grozov, Dmitry S. Iaroshevskii
Nowadays cyber-physical systems are widely used for many purposes. We consider the provision of information security of data channels in such systems. A cryptographic data security approach based on random sequences is commonly used to solve this task. Its reliability depends on the quality of the random data being used; thus, truly random sequences are preferable for application. Truly random data generation is a time-consuming process, and it requires entropy sources of a physical nature. The goal of the paper is to research methods and approaches for collecting random numbers using an inertial measurement unit as a part of a cyber-physical system. Method. Quality assessment of a binary sequence was carried out during the research by determination of random sequence statistical characteristics. Main Results. Research results have shown that raw data collected from onboard inertial sensors lack entropy under non-disturbed conditions; therefore, additional post-processing is required. Practical Relevance. The results of the research can be used to obtain random sequences for on-board cyber-physical systems equipped with inertial measurement units without the use of additional devices. It is planned to collect data from a flying unmanned aerial system in the future, to apply extractors, and to utilize other methods in order to improve the quality of the binary sequence.
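A minimal version of such a statistical quality check (illustrative only; full suites such as NIST SP 800-22 apply many more tests) is the monobit frequency test together with a Shannon entropy estimate, which is enough to expose the kind of bias raw sensor data exhibits:

```python
import math
import random

def monobit_ok(bits, z_limit=3.0):
    """Frequency (monobit) test: the number of ones should be near n/2."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return abs(s) / math.sqrt(n) <= z_limit

def shannon_entropy(bits):
    """Entropy per bit; 1.0 for an ideal random source."""
    n = len(bits)
    p1 = sum(bits) / n
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))

# Synthetic stand-ins: a good source and a biased one (raw IMU noise under
# non-disturbed conditions behaves like the latter).
rng = random.Random(42)
good = [rng.getrandbits(1) for _ in range(10000)]
biased = [1 if rng.random() < 0.7 else 0 for _ in range(10000)]
# The biased source fails the monobit test and shows entropy well below
# 1 bit per sample, signaling that post-processing (an extractor) is needed.
```

An extractor (e.g. von Neumann debiasing or a hash-based conditioner) would then be applied to concentrate the available entropy into a shorter, higher-quality sequence.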


Subject of Research. The paper presents the study of a scheme with customizable dissipative properties for compressible multicomponent flows in the case of an interaction between a shock wave and a helium bubble. Method. We chose a two-step TVD Runge-Kutta time-marching scheme. The spatial difference operator is split by physical processes at each time step, using an adaptive artificial viscosity of the Christensen type and TVD reconstruction of fluxes by a weighted linear combination of upwind and central approximations of convective terms with a flux limiter. To suppress oscillations at the gas interface, we used the nonconservative advection equation of Abgrall. Main Results. Numerical convergence in the L1 norm is shown on the example of the one-dimensional Karni and Quirk test problem. We have performed a comparison of the proposed scheme with the finite-volume WENO-type method of Coralic and Colonius on grids of the same resolution and for the same Courant number. The presented scheme requires significantly lower computational costs for the resolution of the shock-wave pattern and the details of vortex formation. Practical Relevance. The scheme with customizable dissipative properties can be recommended for practical calculations of the interaction between shock waves and gas interfaces of different physical properties, wave interference and vortex formation.
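The flux-limited TVD machinery named above can be illustrated on the simplest possible case: linear advection with a minmod-limited upwind reconstruction and a two-step (Heun-type) Runge-Kutta update. This is a generic textbook sketch, not the paper's multicomponent scheme, and all parameters are assumptions:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude one-sided slope when
    both agree in sign, zero otherwise (prevents new extrema: TVD)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, c):
    """One two-step Runge-Kutta update of u_t + a u_x = 0 (a > 0),
    periodic grid, Courant number c, limited linear reconstruction."""
    def rhs(u):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        flux = u + 0.5 * du            # upwind value at cell face i+1/2
        return -c * (flux - np.roll(flux, 1))
    u1 = u + rhs(u)
    return 0.5 * (u + u1 + rhs(u1))

# Square pulse: the limited scheme transports it without over/undershoots.
u = np.zeros(100)
u[40:60] = 1.0
u_new = advect_step(u, 0.4)
```

Dropping the limiter (using the raw central slope) produces oscillations at the discontinuity; tightening it toward zero recovers diffusive first-order upwind. That dial is the one-dimensional analogue of the "customizable dissipative properties" in the abstract.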
The paper describes methods of QTL analysis in the research of genotype influence. QTL analysis is a statistical method that connects phenotypic data with genotype and enables determination of the exact localization and power of gene influence. The idea of QTL mapping is to observe the phenotype and identify the genome region in which the genotype is associated with the phenotype. With the help of molecular-genetic markers, molecular maps of individual chromosomes and genomes are made, and genes and QTLs are mapped onto them. Thus, genes with the greatest connectivity to the phenotype are identified. The correlation between genotype and phenotype is studied across the full genome of the individual. The data provided by the Laboratory of Molecular Genetics of Innate Immunity of Petrozavodsk State University were the initial data in this research. The essence of the project, carried out jointly with the laboratory, is to study arrays of genetic information to identify and model the relationships between the genotype and the phenotype of biological organisms. Second-generation mouse hybrids of the lines C57BL/6 and MOLF were involved in the experiment. Genotyping and phenotyping were conducted based on sequencing data (determination of the amino acid and nucleotide sequence) of messenger RNA. The practical result of the work is the identification of chains of activated genes, under the influence of which the cells of the tissues and organs under research die (apoptosis). The result of the study is a technique for analyzing the relationship between a phenotype and a genotype, through which groups of significant phenotypes were identified.
Subject of Research. A synthesis method for alphabets of orthogonal signal broadband messages is developed. An example of software implementation is given. An analysis of some particular and general properties of the synthesized mutually orthogonal signal broadband symbols and of the alphabets formed on their basis is carried out. Method. The method is based on the generation of pseudo-random N-dimensional vectors, their orthogonalization by the Gram-Schmidt method, and the subsequent transformation of the resulting orthonormal vectors into the corresponding frequency spectra according to predetermined rules. The spectra represent images of the desired orthonormal broadband signals in the frequency space. The final synthesis result is obtained by the inverse Fourier transform. Main Results. The paper presents an example of a possible program implementation of the method in the MATLAB language of a computer mathematics system. It is shown that the obtained alphabets of orthogonal signal broadband messages have high correlation properties and can be applied in communication systems in which stable recognition of signal messages at low signal-to-noise ratios is necessary. Synthesis of signal symbols in the frequency domain makes it possible to monitor the signal spectrum effectively, carry out frequency transfer, and create systems with frequency tuning, including pseudo-random modification, used to organize protected communication channels. Practical Relevance. The high correlation properties of the received signal symbols provide the possibility of application in multiple-access systems with parallel use of a common frequency-time resource, in steganographic communication channels realized by concealment of the useful signal in radio noise, and under limited power of transmitting devices.
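The synthesis pipeline in the Method section can be sketched compactly (the dimensions are illustrative assumptions, and QR factorization is used as a numerically equivalent stand-in for explicit Gram-Schmidt orthogonalization):

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 256, 8   # signal length and alphabet size (illustrative values)

# 1. Pseudo-random N-dimensional vectors, orthonormalized.
#    QR factorization performs the Gram-Schmidt orthogonalization.
V = rng.standard_normal((N, M))
Q, _ = np.linalg.qr(V)          # columns of Q are orthonormal

# 2. Treat each orthonormal vector as a frequency spectrum and move to
#    the time domain by the inverse Fourier transform.
signals = np.fft.ifft(Q, axis=0)

# 3. Orthogonality check: the IFFT is unitary up to a 1/N scale, so the
#    synthesized time-domain symbols stay mutually orthogonal, i.e. the
#    Gram matrix of inner products is (1/N) * identity.
G = signals.conj().T @ signals
```

Because the spectra are specified before the inverse transform, shaping rules (band placement, frequency hopping, pseudo-random retuning) can be applied at step 2 without losing the mutual orthogonality that gives the alphabet its correlation properties.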
Subject of Research. Modern deep learning models for generation of target small organic molecules are studied. The studies were carried out on two datasets: 250,000 drug-like molecular compounds from the ZINC database and 23,000 kinase molecular structures collected manually from the open-access ChEMBL database. Method. We propose a deep neural network model based on the concepts of adversarial learning and reinforcement learning. The model controls the molecular validity of the generated structures through the use of a recurrent seq2seq autoencoder and an external generator. The presence of an external generator gives the model flexibility in the choice of architecture and also allows conditioning the generation on input conditions. Main Results. Comparative experiments have shown that the proposed model surpasses its closest competitors in experiments with pre- and post-training in terms of generating valid and unique molecular structures. Additional chemical analysis of the generated structures demonstrates the higher quality of the introduced model in comparison with the competing models. Practical Relevance. The proposed model can be used by medicinal chemists as an intelligent assistant for the development of new drugs.
Ekaterina A. Deputatova, Dmitriy S. Gnusarev, Dmitriy M. Kalikhman
Subject of Research. The paper presents research on a compensation-type quartz pendulum accelerometer with a digital feedback amplifier. Noise components of the accelerometer output signal are studied. Method. Based on a series of experimental data, the noise components and errors of the studied device are analyzed in accordance with the method adopted at a number of domestic industrial enterprises in compliance with the Russian standards, and also in accordance with the Allan variance method, which corresponds to the international standards. Main Results. We have estimated the level of the noise components using the noise power spectral density method. The problem of creating a discrete filter for the output signal is solved; the filter is realized in a digital feedback amplifier based on an embedded microcontroller. The filter has been selected in accordance with two quality criteria. According to the first criterion, the root-mean-square error tends to a minimum. The second one is a complex quality criterion for which the studied device is viewed as a closed automatic control system, wherein the system bandwidth is expected to tend to the required value and the control time tends to a minimum. Mathematical simulation of the operation of the accelerometer with a digital feedback amplifier and a filter is performed in the MATLAB environment in order to determine the parameters that correspond to the complex quality criterion. Practical Relevance. It is shown that the use of a second-order Butterworth filter makes it possible to reduce the noise component of the accelerometer output signal by approximately 2.5 times and corresponds to both quality criteria outlined in the paper.
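A discrete second-order Butterworth low-pass filter of the kind selected above can be sketched from first principles via the bilinear transform. This is a hedged Python illustration (the cutoff and sample rate below are arbitrary assumptions, not the paper's design values), showing only the generic mechanism: the achieved noise reduction depends entirely on the chosen cutoff.

```python
import numpy as np

def butter2_lowpass(fc, fs):
    """Biquad coefficients of a 2nd-order Butterworth low-pass filter
    obtained by the bilinear transform (pre-warped cutoff fc, rate fs)."""
    k = np.tan(np.pi * fc / fs)
    norm = 1.0 / (1.0 + np.sqrt(2.0) * k + k * k)
    b = np.array([k * k, 2.0 * k * k, k * k]) * norm
    a = np.array([1.0,
                  2.0 * (k * k - 1.0) * norm,
                  (1.0 - np.sqrt(2.0) * k + k * k) * norm])
    return b, a

def filt(b, a, x):
    """Direct-form difference equation y[n] = sum b*x - sum a*y."""
    y = np.zeros(len(x))
    xp = np.concatenate(([0.0, 0.0], x))      # zero initial conditions
    for n in range(len(x)):
        y[n] = (b[0] * xp[n + 2] + b[1] * xp[n + 1] + b[2] * xp[n]
                - a[1] * (y[n - 1] if n >= 1 else 0.0)
                - a[2] * (y[n - 2] if n >= 2 else 0.0))
    return y

# Toy accelerometer-like signal: constant bias plus white measurement noise.
rng = np.random.default_rng(1)
x = 1.0 + 0.1 * rng.standard_normal(4000)
b, a = butter2_lowpass(fc=5.0, fs=200.0)
y = filt(b, a, x)
```

The unity DC gain (sum of `b` over sum of `a`) guarantees that the useful constant acceleration signal passes unattenuated while the wideband noise is suppressed.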


Bekbol B. Mukhambedyarov, Dmitry V. Lukichev, Nikolay L. Polyuga
Subject of Research. The paper considers a simulation model of an electric power generating installation based on photovoltaic converters. Photovoltaic cells are known to have rather low energy conversion efficiency; therefore, the performance of the designed energy system can be partially improved by means of controlled intermediate converters. The main goal of this paper is the implementation of a solar power system model, together with a comparative analysis of the different maximum power point tracking algorithms used to control the energy system with the purpose of increasing the power efficiency of the whole system. Method. All algorithms considered in the paper are based on the search for an extremum on the volt-power characteristic of a photovoltaic converter. Implementation of the most popular maximum power point tracking methods is considered: "Perturb and observe" and "Incremental conductance". An algorithm based on fuzzy logic theory is proposed as an alternative to the traditional algorithms, aimed at increasing photovoltaic cell efficiency. Main Results. The model of the solar panel control system is implemented in MATLAB/Simulink. Three methods of maximum power point tracking within this photovoltaic system are considered and implemented. A comparative analysis of the operation of the different control algorithms is carried out for different levels of solar radiation intensity. Practical Relevance. The algorithms can be implemented in real power systems to improve their overall performance.
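The simplest of the tracking algorithms named above, "Perturb and observe", can be sketched in a few lines: step the operating voltage, and reverse direction whenever the measured power falls. This is an illustrative Python sketch (the paper's model is in MATLAB/Simulink); the volt-power curve below is a crude toy shape, not a physical diode model, and all parameter values are assumptions.

```python
def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy volt-power characteristic with a single maximum
    (illustrative knee shape, not a physical PV model)."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - (v / v_oc) ** 8)
    return v * i

def perturb_and_observe(p_of_v, v0=20.0, step=0.5, iters=200):
    """Classic P&O: keep stepping in the direction that increased power;
    reverse the perturbation when power drops."""
    v, p = v0, p_of_v(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = p_of_v(v_new)
        if p_new < p:
            direction = -direction   # power fell: reverse the step
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(pv_power)
```

The characteristic drawback visible even in this sketch is the steady-state oscillation around the maximum power point with amplitude set by `step`; incremental conductance and fuzzy-logic controllers are motivated precisely by the wish to shrink that oscillation while keeping fast tracking.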
Subject of Research. When working with algebraic Bayesian networks, it is necessary to ensure their correctness in terms of the consistency of the probability estimates of their constituent elements. There are several approaches to automating the maintenance of consistency, which differ in their computational complexity (execution time). This complexity depends on the network structure and the chosen type of consistency. The time for internal consistency maintenance in algebraic Bayesian networks with linear and stellate structures is compared with the time for consistency maintenance of a knowledge pattern covering such networks. The comparison is based on statistical estimates. Method. The essence of the method lies in reducing the number of variables and constraints in the linear programming problems whose solution ensures the maintenance of internal consistency. An experiment was carried out demonstrating the differences in consistency maintenance time for algebraic Bayesian networks with different global structures. Main Results. An improved version of the algorithm for internal consistency maintenance is presented. The linear programming problems to be solved are simplified in comparison with the previous version of the algorithm. Two theorems are formulated and proved, refining the estimates of the number of variables and constraints in the linear programming problems to be solved, as well as the number of the problems themselves. An experiment showed that the proposed software implementation of internal consistency maintenance is superior in running time to the software implementation of consistency maintenance for a complete knowledge pattern. Practical Relevance. The results obtained can be applied in machine learning of algebraic Bayesian networks (including the synthesis of their global structures). The proposed method provides optimal synthesis of global network structures for which it is sufficient to use internal consistency maintenance during learning and further network processing. Owing to the application of the method, these processes have acceptable computational complexity.
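The kind of linear programming problem underlying consistency maintenance can be illustrated on a minimal knowledge pattern over two propositional atoms: interval estimates of several conjunct probabilities are tightened by minimizing and maximizing one of them over all distributions compatible with the rest. This is a generic illustration of the LP formulation, not the paper's improved algorithm, and all interval values are made up; it assumes SciPy's `linprog` is available.

```python
from scipy.optimize import linprog

# Distribution over two atoms x, y:
# variables p = [p(~x~y), p(~x y), p(x ~y), p(x y)].
A_eq = [[1, 1, 1, 1]]                    # probabilities sum to one
b_eq = [1.0]
# Hypothetical interval estimates: P(x) in [0.4, 0.6], P(y) in [0.5, 0.7].
A_ub = [[0, 0, 1, 1], [0, 0, -1, -1],    #  P(x) <= 0.6, -P(x) <= -0.4
        [0, 1, 0, 1], [0, -1, 0, -1]]    #  P(y) <= 0.7, -P(y) <= -0.5
b_ub = [0.6, -0.4, 0.7, -0.5]
bounds = [(0, None)] * 3 + [(0.1, 0.9)]  # prior interval on P(x & y)

def tighten(sign):
    """Minimize (sign=+1) or maximize (sign=-1) P(x & y) over the
    feasible distributions; return the optimal value of P(x & y)."""
    c = [0, 0, 0, sign]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return sign * res.fun

lo, hi = tighten(1.0), tighten(-1.0)     # tightened interval for P(x & y)
```

Here the prior interval [0.1, 0.9] for P(x & y) contracts to [0.1, 0.6], since P(x & y) can never exceed min(P(x), P(y)). The paper's contribution, in these terms, is shrinking the number of such variables, constraints, and LP instances that must be solved for a whole network.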
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.