Summaries of the Issue


Subject of Research. Traditionally, afocal compensators, located in parallel or converging beams, are used to correct the aberrations of mirror systems. The additional property of afocality makes it possible to largely disregard material selection in the design, since achromatic correction is then achieved automatically. A change in the ambient temperature alters the shape of the mirrors and their mutual arrangement; in addition, the optical characteristics of the compensator material change, which leads to defocusing. Main Results. Based on the analysis of paraxial formulas valid for the correction of chromatic aberrations and thermal defocusing, we have obtained formulas that make it possible to evaluate the material characteristics necessary for passive athermalization, that is, for maintaining image quality when the ambient temperature changes without mechanically offsetting the sensor. It is shown that in a two-lens compensator used to correct the aberrations of a two-mirror lens in a converging beam of rays, a combination of optical glasses and polymer materials must be used for passive athermalization. Practical Relevance. Based on the theoretical relations obtained, a two-mirror system with an afocal compensator is calculated that maintains high image quality over a wide temperature range. The practical use of the obtained formulas demonstrated the possibility of creating an athermalized catadioptric lens combining conventional glasses with modern polymeric materials. The presented method is not universal, but it makes it possible to select materials when calculating afocal two-lens systems that compensate thermal defocusing of the image without active correction methods (mechanical shifts).
ATMOSPHERE PRESSURE EFFECT ON THE FIBER OPTIC GYROSCOPE OUTPUT SIGNAL Sharkov Ilya A., Vinogradov Andrey V., Kozlov Vitaly N., Strigalev Vladimir E., Kikilich Nikita E.
The paper describes research results of the atmospheric pressure effect on the output signal of a fiber optic gyroscope (FOG). In the course of the experiments, the FOG was placed into a hermetic chamber, and the atmospheric pressure was varied in the range from 0.8 to 1.5 atm. All the data, including the FOG output signal, temperature, and readings from the pressure sensor installed inside the FOG, were registered synchronously by computer software. The scale factor change was separated from the zero offset in the experiment by setting the sensitive FOG axis at 0°, 90° and 270° relative to the East (the FOG was set perpendicular to the horizon). After data processing it was concluded that the pressure-related FOG signal error mainly affects the additive component. The pressure effect on the multiplicative component appeared to be negligible at the rotational velocities used in the experiment (0 - 130/h). At the same time, the FOG signal has a high linear correlation coefficient with the derivative of pressure over time (in some cases, more than 0.9). The experiment was repeated several times, and a high degree of drift repeatability was shown, which makes it possible to implement a compensation algorithm. Application of the simplest algorithmic compensation based on a first-degree polynomial (ax + b) made it possible to reduce the root-mean-square (RMS) error and drift of the signal by 2-9 times.
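The first-degree compensation mentioned above can be sketched as a least-squares fit of the FOG signal against the pressure derivative, with the fitted term then subtracted. This is a minimal illustration under stated assumptions: the data, sampling step and function names are placeholders, not the authors' implementation.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def compensate(fog_raw, pressure, dt):
    """Remove the drift component correlated with dP/dt (central differences)."""
    dp = [(pressure[i + 1] - pressure[i - 1]) / (2 * dt)
          for i in range(1, len(pressure) - 1)]
    sig = fog_raw[1:-1]               # align signal with the derivative samples
    a, b = linear_fit(dp, sig)
    return [s - (a * d + b) for s, d in zip(sig, dp)]
```

With synthetic data whose drift is exactly linear in dP/dt, the residual after compensation vanishes, which mirrors the 2-9 times RMS reduction reported for real data.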
CONTROL OF SCATTERING IN OPTICAL FIBER BY FIBER TWIST Vereyutina Ksenia D., Kon'kova Elena Petrovna, Panyukov Andrey A., Shangareev Rinat H., Shmakov Genadiy S., Yudin Vladislav A.
Subject of Study. The paper deals with the possibility of controlling the interference pattern of light scattering in an optical fiber by varying its spatial geometry. The paper considers optical vortices propagating in a quartz fiber with periodic inhomogeneous inserts. Method. The experimental installation includes an optical fiber, a laser, a collecting lens and a LiNbO3 modulator with voltage varied across its crystal at a predetermined spacing. The radiation was recorded with a Nikon COOLPIX S32 camera. We showed, using radial, triangular and random piling of a fiber as examples, that the light scattering intensity distribution at the fiber output can be changed by changing the piling geometry. Main Results. We studied experimentally the evolution of the interference pattern of light interacting with optical inhomogeneities of the fiber when the light polarization is altered near the input end of the fiber. Using frame analysis, time representations of the interference patterns were obtained for radial, triangular and random piling of the fiber. The dependence of the pattern formation time on the piling parameters was calculated. Regular optical fields and speckles were registered within this work, and the main properties of those interference patterns were determined. It was found that for all polarizations the main properties of the interference patterns are preserved during propagation of the laser beam in a twisted fiber. Optical vortices were identified; the main evidence for this identification was the light circulation. Practical Relevance. The obtained results can find application in optical telecommunication elements and in the fabrication of optical sensors.


ALGORITHM FOR MOBILE ROBOT CROSS COUNTRY MOTION Evstigneev Maksim I., Litvinov Yury V., Mazulina Veronika V., Chashchina Maria M.
We have proposed a control algorithm for wheeled robot cross-country movement along a given route. We have carried out a functionality test of the proposed algorithms using mathematical modeling and experimental studies on a wheeled platform by the "Odyssey" company with a control unit based on Arduino UNO. The robot operates in autonomous mode. Environmental analysis is carried out by means of ultrasonic sensors, a gyroscope, vision systems and a GPS module. Data received from the sensors is used by the robot control unit to calculate trajectories for obstacle avoidance and return to the set route, as well as to correct local maps. The robot intelligence lies in its ability to determine the nature of its actions depending on environment changes. An experimental study based on the "Odyssey" platform has confirmed the correctness of the chosen approach.


The paper deals with the features of existing inductive methods and facilities for monitoring the magnetic susceptibility of a medium. We conclude that these facilities have a common disadvantage: low measurement accuracy. Ways to improve their sensitivity and the measuring accuracy of the controlled parameters through inductive measuring transducers are revealed. An algorithm implementing resonance control of the magnetic susceptibility is developed. Its characteristic feature is that the calculations use the specific distance from the measuring probe to the controlled medium, obtained by an ultrasonic distance sensor, in order to reduce the influence of unevenness of the ore-bearing rock on the measuring accuracy of its magnetic properties. An appropriate calibration algorithm has been developed for determining the scaling factors. It is shown that the proposed algorithm makes it possible to increase the sensitivity of facilities for operative control of magnetite ores by automatic signal processing and to provide a measurement error of less than 1.5% in an extended range of distances from the probe to the medium in question, which is approximately 10 times larger than the measuring range of similar devices.


Subject of Research. The paper deals with analysis of the reasons for EA+RL method inefficiency on the XdivK optimization problem with switching auxiliary objectives, and a modification of the EA+RL method is proposed. The EA+RL method increases the efficiency of an evolutionary algorithm by introducing auxiliary objectives. The XdivK problem is characterized by a large number of local optima. Switching objectives help to escape from local optima at some stages of optimization while being obstructive at other stages. Method. To perform theoretical analysis of the EA+RL method and its proposed modification, the corresponding optimization process was modeled by Markov chains. The number of fitness function evaluations needed to reach the optimum was estimated based on the analysis of transition probabilities. Main Results. The EA+RL method and its proposed modification were theoretically analyzed on the XdivK problem with switching auxiliary objectives. It was proved that the proposed modification ignores obstructive objectives, contrary to the EA+RL method. Lower and upper bounds on the running time of the proposed modification were obtained. Practical Relevance. The proposed modification increases the efficiency of the EA+RL method, successfully used to solve NP-hard optimization problems, such as the test case generation problem.
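The XdivK objective mentioned above has a standard definition that can be stated in a few lines: the fitness is the number of one-bits divided (integer division) by a parameter k, so bit strings whose one-counts differ by less than k receive the same fitness, producing the plateaus and local optima the abstract refers to. OneMax below is shown as one possible auxiliary objective; the paper's exact EA+RL setup is not reproduced.

```python
def onemax(bits):
    """Auxiliary objective: number of one-bits in the string."""
    return sum(bits)

def xdivk(bits, k):
    """Target objective: OneMax divided by k (integer division).
    Strings whose number of ones differs by less than k get the same
    fitness value, which creates plateaus of size k."""
    return onemax(bits) // k
```

For example, with k = 3 a string with four ones and a string with three ones have identical XdivK fitness, even though the auxiliary OneMax objective distinguishes them.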
Subject of Research. The paper deals with the problem of "usual" (not fast) and fast restoration of color smeared images based on solving the Fredholm integral equation of the first kind (an ill-posed problem). Method. The equation is solved by the quadrature method with Tikhonov's regularization. Two methods for processing color images are considered: component-wise and vector processing. Main Results. If a model image is processed and the algorithm is usual (not fast), then the regularization parameter α is chosen from the condition of restoration error minimum. If a real image is processed and the algorithm is fast, then, to choose α and the smear value Δ, we propose the fast method of a "prepared matrix", executed within 1 second. But if a real image is processed and the algorithm is not fast, then we propose a method for estimating Δ (and the smear angle θ) based on the spectrum of the smeared image, and α is selected by known methods. Practical Relevance. The presented algorithms can be used to restore color smeared images, e.g., images of fast moving objects (a car, an airplane), by mathematical and computer processing of smeared (and noisy) images.
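The quadrature-plus-regularization step can be illustrated on a tiny discretized system: after quadrature, the smearing operator becomes a matrix A, and a Tikhonov-regularized solution minimizes ||Ax − b||² + α||x||², i.e. solves (AᵀA + αI)x = Aᵀb. This is a generic sketch of Tikhonov's method on a toy matrix under that textbook formulation, not the authors' image-restoration code; matrix, data and α are placeholders.

```python
def solve(M, v):
    """Gauss-Jordan elimination with partial pivoting for M x = v."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov_restore(A, b, alpha):
    """Regularized least squares: minimizes ||Ax - b||^2 + alpha*||x||^2
    by solving the normal equations (A^T A + alpha*I) x = A^T b."""
    rows, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(rows))
            + (alpha if i == j else 0.0) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(rows)) for i in range(n)]
    return solve(AtA, Atb)
```

Increasing α trades fidelity for stability, which is why the abstract's algorithms select α either by error minimization (model images) or by known heuristics (real images).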
Subject of Research. We consider a method of image quality enhancement when an image is registered with a long exposure time under uncontrollable camera shake. Blur compensation is implemented by a deconvolution computational algorithm with the point-spread function determining the image blur. Method. The main step of the deblurring algorithm consists in evaluation of the point-spread function, involving a second underexposed noisy image frame as an initial approximation in the iterative algorithm of point-spread function estimation. The subsequent deconvolution of the blurred image with the estimated point-spread function yields the enhanced image. Main Results. We have proposed new procedures for refinement of point-spread function estimates based on separation of values under hysteresis threshold application with the use of an adapted Canny algorithm, as well as a modification in scale space. Practical Relevance. The obtained results can be used to enhance image quality in cases when the camera moves during the image capture process, including scientific research and computer vision systems.
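The hysteresis-threshold refinement can be sketched as the Canny-style two-threshold rule applied to a point-spread function estimate: samples above a high threshold are kept, samples above a low threshold survive only if connected to a kept sample, everything else is zeroed. The thresholds, 8-connectivity and array layout below are assumptions; the paper's adapted procedure may differ in detail.

```python
def hysteresis_refine(psf, low, high):
    """Zero out PSF samples that are neither strong (>= high) nor
    connected (8-neighbourhood) to a strong sample while >= low."""
    rows, cols = len(psf), len(psf[0])
    keep = [[psf[r][c] >= high for c in range(cols)] for r in range(rows)]
    stack = [(r, c) for r in range(rows) for c in range(cols) if keep[r][c]]
    while stack:                      # grow strong regions into weak samples
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and not keep[rr][cc] and psf[rr][cc] >= low):
                    keep[rr][cc] = True
                    stack.append((rr, cc))
    return [[psf[r][c] if keep[r][c] else 0.0 for c in range(cols)]
            for r in range(rows)]
```

This suppresses isolated noise in the PSF estimate while preserving weak samples that belong to the connected motion trace.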
Subject of Research. The paper considers a method for security analysis of information systems. The method makes it possible to evaluate the security state of the information system under research in terms of the presence of unpatched vulnerabilities that could be exploited with publicly available tools. The proposed method allows for state analysis of the information system under research with no need to compose any formal specifications. The validation is carried out upon the live system in automatic mode, and the system reaction to the attacking influences, performed with the Metasploit penetration testing platform, is observed. Method. The attack tree for the system under research is constructed on the basis of input data matching, followed by tree traversal. This provides the possibility of multi-stage attack validation. A decrease in the total security analysis time is achieved by marking the constructed tree with probabilities of successful triggering of its nodes and accounting for these probabilities during tree traversal. This probabilistic elaboration is performed with the help of a radial-basis artificial neural network. Reliability of the performed analysis is provided by actual validation of presumptive vulnerabilities during tree traversal. Main Results. A program system is implemented on the basis of the proposed method. Experiments on processing rate and effectiveness were carried out: the security state of a set of information systems was analyzed with the developed program and its analog. The developed system outperforms the analog by a factor of 1.5 to 6 by the introduced quantitative index of effectiveness, which proves the efficiency of the proposed method. Practical Relevance. Organizations and security analysts can apply the program system implemented on the basis of the proposed method as a standalone penetration testing and security analysis instrument.
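The probability-marked traversal can be sketched as follows: each tree node carries an estimated success probability, children of a successfully validated node are tried in descending probability order, and a branch is abandoned as soon as validation fails, which models multi-stage attack validation. The node structure and the `validate` callback are illustrative assumptions; in the described system, validation would launch an actual attacking influence via Metasploit.

```python
class Node:
    """Attack-tree node marked with an estimated success probability."""
    def __init__(self, name, prob, children=()):
        self.name, self.prob, self.children = name, prob, list(children)

def traverse(node, validate, confirmed):
    """Depth-first traversal trying high-probability branches first.
    A child is validated only if its parent attack succeeded."""
    if not validate(node):
        return
    confirmed.append(node.name)
    for child in sorted(node.children, key=lambda n: n.prob, reverse=True):
        traverse(child, validate, confirmed)
```

Ordering siblings by probability is what shortens the total analysis time: likely-successful attack stages are validated before unlikely ones.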
SECURITY MODEL OF MOBILE MULTI-AGENT ROBOTIC SYSTEMS WITH COLLECTIVE MANAGEMENT Zikratov Igor A., Viksnin Ilya I., Zikratova Tatyana Viktorovna, Shlykov Andrey A., Medvedkov Dmitriy I.
The paper deals with the problem of creating protection mechanisms for multi-agent robotic systems against attacks by introduced robot saboteurs. We consider a class of so-called "soft" attacks that involve intercepting communications, forming and transmitting misinformation to the robot group, as well as performing other actions that have no identifiable signs of a robot-saboteur invasion. We propose a theoretical security model for multi-agent robotic systems based on the zone security model and the model of police stations for distributed computing systems. The basic idea of the proposed subject-object model of access control is that a logically self-contained entity, the police station, is introduced into the information system in addition to the entities "subject" and "object". In accordance with the concept of security monitoring of appeals, it performs the functions of checking access legitimacy and/or integrity of transactions of subjects and objects spatially distributed within a region. Thus, an initially homogeneous multi-agent system is proposed to be designed as heterogeneous, containing not only agent-executors but also agents intended solely for solving security problems: identification and authentication, access control, key generation and distribution, and analysis of agents' positions. To solve the latter problem, the region is divided into several zones, with zonal and interzonal security procedures introduced. The performance of the model is illustrated by an example of its usage in creating a protection mechanism for the classical iterative task of distributing robot forces among several targets. We show the order of agents' interaction with the police stations of their zone, as well as the implementation of the interzonal security policy.
Subject of Research. The paper reviews the problem of anomaly detection in home automation systems. The authors describe the specific features of present-day security networks and highlight the need to detect informational and physical impacts on sensors for the sake of information security. Method. An artificial neural network is proposed for anomaly detection. The method processes data on the characteristics of security network devices to detect anomalous behavior; the artificial neural network should be preliminarily trained on data of that type. The implementation tools for the proposed method of anomaly detection are described. Main Results. A scenario has been created for the experiment so that the model of the "Smart home" system produces data on network information streams and the artificial neural network makes decisions based on this data. As a result, training and testing sets have been created. A state has been considered anomalous when the artificial neural network output is less than 0.9. Based on the test results, the artificial neural network determines the network node state with 91% precision. Practical Relevance. The proposed method can be used in information and security systems where connected devices should be monitored. The anomaly detection technology excludes inconspicuous violation of information confidentiality and integrity.
EFFECTIVENESS OF STEGANALYSIS BASED ON MACHINE LEARNING METHODS Sivachev Aleksey V., Prokhozhev Nikolai N., Mikhailichenko Olga V., Bashmakov Daniil A.
Subject of Study. The paper presents a comparative accuracy estimation of modern machine learning-based steganalytic methods. The paper deals with the most promising methods for passive counteraction to hidden information transfer channels that use the discrete wavelet domain of static digital images. Methods. We have studied the methods proposed by Gireesh Kumar, Hany Farid, Changxin Liu, Yun Q. Shi, and the SPAM method. The methods basically apply statistical moments obtained from the wavelet bands LL, HL, LH and HH, as well as additional image features forming a support vector. The BOWS2 image collection was used to estimate the effectiveness of the methods. The steganographic impact was modeled by changing the least significant bits of coefficients in each DWT band (LL, LH, HL and HH) with 5% and 20% payload. The effectiveness of the explored methods is estimated in view of the obtained true positive, true negative, false positive and false negative image classification values. Main Results. The study has shown that all explored methods except SPAM are effective in the task of detecting embedding in the HH band. For detecting embedding in the LH band, Yun Q. Shi's is the most effective algorithm. For detection in the HL band, all explored methods except SPAM have appeared to be comparably effective under the condition of a big payload. When detecting embedding in the LL band, all methods have shown an effectiveness of about 50% regardless of the payload rate. It is established that the considered methods are not able to counteract effectively a hidden data channel using the LH and HL regions, due to the fact that they use the Haar wavelet transform. It is concluded that the application of the optimal wavelet transform makes it possible to reduce the intersection area of the value histograms of the first statistical moment for the original images and stego images. Practical Relevance.
The work results are useful to specialists in the field of information security in the tasks of detecting and combating hidden data channels. The obtained results can be used in the development of steganalysis systems and improved methods of steganalysis as well.
The paper presents post-incident internal audit procedure of computer equipment. It enables to study computer incidents in various computer equipment (including several ones simultaneously) in the conditions of a constant increasing number of computer incidents, the volume of stored and processed information. Information about computer incidents is obtained by analyzing data in volatile and non-volatile memory, and network traffic. The problem is solved by analyzing the attributes and their values obtained from the post-incident computer equipment and resources. The technique of complex internal data audit is presented. This approach (analysis of attributes and their values) reduces the time costs. This technique includes data processing, description of the interrelationships, the usage of intelligent methods and algorithms. The descriptions of these elements, their notations and functional purposes are presented. Calculation of the proposed technique computational complexity is given. The technique can be used to examine computer incidents. It reduces time costs for study, improves accuracy and increases information content of the post-incident internal audit of computer equipment. The proposed solutions can be used to develop proactive protection systems against computer incidents.
The paper deals with application possibility of visual odometry algorithm for sparse three-dimensional reconstruction and earth surface mapping. Photography is taken by camera mounted on unmanned aerial vehicle during its flying along the specified trajectory. The sparse three-dimensional reconstruction and mapping are based on ability of visual odometry algorithm that retrieves information about geometry of specially selected landmarks on the basis of data received from inertial navigation system, and information retrieved from earth surface photographs. Simultaneously with earth surface reconstruction we define more precisely spatial position and orientation of aircraft that is important for acquisition of qualitative earth surface reconstruction with high resolution by means of stereophotogrammetry methods or by means of points clouds alignment methods in case of laser scanner usage. We have also proposed a method for quality improvement of visual odometry algorithm for precision increase of aircraft spatial position and orientation estimation, and also for earth surface reconstruction quality improvement. For visual odometry algorithm quality improvement we have proposed an original algorithm for detection of earth surface landmarks. Proposed modified visual odometry algorithm can find wide application for different autonomous vehicle navigation, and also as a part of informational system proposed for the Earth remote sensing data processing.
The paper deals with the fast motion estimation algorithms for interframe encoding in the video data H.265 / HEVC standard. A new adaptive algorithm has been offered based on the analysis of the advantages and disadvantages of existing algorithms. The algorithm is called fast test zone search algorithm and includes the traditional test zone search algorithm (TZS) and the hierarchical search MP (Hierarchical Search or Mean Pyramid). The considered and proposed motion estimation algorithms have been tested in several video sequences using Microsoft Visual Studio software.  The terms for algorithms evaluating were: the video sequence quality criterion (by PSNR), bitrate and encoding time. The proposed method showed that it works about 4 times faster. The average loss of the RD curve value (PSNR versus bitrate) is up to 4% in all. Application of this algorithm in modern codec H.265/HEVC instead of the standard one can significantly reduce compression time, and can be recommended for further study of the JCT-VC (Joint Collaborative Team on Video Coding).
The paper presents an algorithm for similarity estimation of hierarchical data based on the pq-gram distance calculation. The dependence of the algorithm sensitivity on the selected parameters p andq is analyzed. We  show how much the result of the algorithm will change at comparing of two trees that have difference in one random node when one of the nodes of the source tree is deleted, renamed, or an extra node is added. It is demonstrated that such analysis enables to select the parameters p and q in relation with the solving problem. The problem of a tree preliminary evaluation is substantiated - an approximate analysis of the initial level of node differences in the selected pq-grams of the compared trees. The basic terms and definitions relating to the  tree-based data structuring algorithms are described. Examples of the algorithm practical application and the details of its implementation on a real problem are shown
Subject of Study. We present a method for generating instances of the binary classification task by (according to, based on) their characteristic descriptions in the form of a meta-feature vector. We propose a naïve method for the same problem solution to be used as a referral one. We study the characteristic space of the binary classification task instances, as well as the methods for this space traversal. Method. The proposed method is based on genetic algorithm, where the distance in the characteristic space from the description vector of the generated instance for the binary classification task to the specified one is used as the minimized objective function. We developed the crossover and mutation operators for the genetic algorithm. These operators are based on such transformations as addition or removal of features and objects from datasets. Main Results. In order to validate the proposed method, we chose several non-trivial two-dimensional meta-feature spaces that were generated from statistical, information-theoretical and structural characteristics of classification task instance. We used the baseline method to evaluate the relative error of the proposed method. Both methods used the same number of classification tasks instances. The proposed method outperformed the naïve method and reduced average error by 30 times. Practical Relevance. The proposed method for generating instances for classification task based on their characteristic description allows obtaining unknown instances that are required to evaluate the performance of classifiers in certain areas of the meta-features space for design of automatic algorithm selection systems
Subject of Research.The paper deals with the problems of digital remote control of continuous technical object, followed by the possibility of system intervality of this plant discrete model representation. The specified system intervality is generated by a channel area, operating in the error detection mode. System intervality consists of such system parameter intervality as discreteness interval. The exchange of information between the control plant and a digital remote control occurs with this discreteness interval. It is shown that the causal factor of system intervality is a retransmission procedure of the code parcels in the case of their distortion detection. Method. The quantitative assessment of relative intervality of such system parameter as discreteness interval is a virtual translation of used noise-immune codes from error detection mode into error correction mode; multiplicity of corrected errors is equal to the multiplicity of detectable ones. The method is based on the generic C. Shannon's position about the dependence of information transmission speed on the characteristics of the noise environment in the communication channel for specified information reliability, characterized by an acceptable probability of false acceptance. Main Results. We have shown that the need is eliminated to enter quantitative control hardware of repetitions of transmissions of code packages into the system of digital remote control in order to assess intervality of such system parameter as discreteness interval. We have obtained the solution of this problem analytically. Practical Relevance. The proposed method for quantitative assessment of relative intervality of such system parameter as discreteness interval can be applied to all interfaces that use CRC-technology of digital information noise-protection. 


HEAT TRANSFER IN A CAVITY WITH ROTATING DISK IN TURBULENT REGIME Konstantin N. Volkov , Bulat Pavel V, Volobuev Igor A., Pronin V.A.
Subject of Research.The paper considers turbulent flow and heat exchange in a closed axisymmetric cavity with a rotating disk, which is a model of two-way axial thrust bearing, as well as the other important elements of turbomachines, for example, blade ring labyrinth seals of axial compressor stage. Method. The flow and heat transfer characteristics are studied depending on the relative gap between the fixed housing and the rotating disc and the Reynolds number. Comparison of the local and integral flow characteristics obtained on the basis of various models of turbulence with the data of physical experiment is given. Main Results. The flow structure and heat transfer characteristics are studied depending on the relative gap between the fixed body and the rotating disc and the Reynolds number. Comparison of the local and integral characteristics of the flow with the data of the physical experiment shows that the best matching is given by the application of the k-ε model with Kato-Launder corrections for the turbulence production term and the corrections to the curvature of the streamlines, as well as the two-layer k-ε / k-1 turbulence model. The application of the Spalart-Allmares turbulence model and the Reynolds stress transfer model leads to significant errors in calculating the heat flux distribution over the stator surface. Practical Relevance. The considered problem is a model problem and it gives the possibility to make a conclusion about the applicability of various flow models and models of turbulence in such units of compressors and gas turbines as seals of the blade ring, axial and radial gas and liquid bearings, rotating heat exchangers.
Subject of Research.The paper deals with the study of a self-regulating radial gas-dynamic bearing. Methodology for its calculation and design is presented. We have developed the modeling methods for the bearing surface rotational segments stable in angle of rotation, load and rotor speed. We have also developed a numerical method for the segment position determining when the zero moments are acting on it and the method of the segment stability analyzing in this position. Main Results. A technique for determining the stable equilibrium position of a segment was described. For different values of the lubricating layer average thickness and the speed of the shaft, the values and direction of the torque on the segment and the resultant forces acting on the segment were determined. The pressure plots in the lubricating layer of the segment were obtained. Parametric dependences of the design characteristics of the bearing on the load on the segment and on the rotational speed of the shaft were defined. Practical Relevance. The developed calculation technique can be used in the design of hybrid air bearings during the selection of the segment rotation axis position. The rotation of the segments enables to extend the range of self-regulation of air bearings and, within certain limits, to parry the overloads that occur on the shaft.
Subject of Research.A nonstationary software testing model is studied. Numerical analysis methods of software testing efficiency based on this model are developed. Modeling of software testing efficiency enables to plan comprehensively the final quality, resources and time required at various project implementation stages. Methods. The technique is based on the proposed improved numerical model for software testing. The process of errors detecting is approximated by the exponential law and the process of elimination by generalized two-phase Cox distribution. The software debugging process after approximation is described by Markovian queue with a discrete set of states and continuous time. The possibility is provided to use the probability of errors detection for each module during their testing. The paper presents the modified marked graph and the system of differential equations; its numerical solution gives the possibility to calculate specific indicators for target effect of software debugging process: probability of certain system states, the time distribution function for errors detection and elimination, the mathematical expectation of random variables and the number of detected or corrected errors. The probability of operating goal achievement (testing) is used as an overall index for the integrated effectiveness evaluating of these processes (including required resources). Main Results. The developed methodology is applied for effectiveness research of the actual project. The private indicators of target effect and integrated efficiency indicator for testing are calculated. The required testing time for specified software quality indicators achievement is identified. The analysis of target effect and time influence on testing effectiveness is performed (on the probability of operation goal achieving). Practical Relevance. The suggested methodology enables to take into account the reliability assessment for each module separately. 
The Cox approximation removes restrictions on using an arbitrary time distribution for the fault resolution duration. This generalizes well-known models, simplifies the preparation of initial data, improves the accuracy of software test process modeling, and helps to take into account the viability (power) of the tests. With these models we can search for ways to improve software reliability by generating tests that detect errors with high probability. The methodology makes it possible to calculate not only individual software reliability indicators but also the integrated indicator of software testing process effectiveness, and to develop practical recommendations for the effective organization of these processes.
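The two-phase Cox distribution used above for error-elimination times is easy to sample: an exponential first phase, followed with some probability by an exponential second phase. The rates and continuation probability below are illustrative values, not parameters from the paper; the point is only to show why the Cox form is flexible while staying Markovian.

```python
# Hedged sketch: Monte Carlo sampling from a two-phase Cox distribution,
# the form the abstract uses to approximate error-elimination times.
# mu1, mu2 and the continuation probability a are illustrative, not the
# paper's fitted values.
import random

def sample_cox2(mu1, mu2, a, rng):
    """Two-phase Cox: exponential phase 1 (rate mu1); with probability a,
    an additional exponential phase 2 (rate mu2)."""
    t = rng.expovariate(mu1)
    if rng.random() < a:
        t += rng.expovariate(mu2)
    return t

rng = random.Random(42)
mu1, mu2, a = 2.0, 1.0, 0.3
n = 200_000
est_mean = sum(sample_cox2(mu1, mu2, a, rng) for _ in range(n)) / n
true_mean = 1 / mu1 + a / mu2   # E[T] = 1/mu1 + a/mu2
print(est_mean, true_mean)
```

Because each phase is exponential, the debugging process built from these stages remains a continuous-time Markov chain, which is what allows the abstract's system of differential equations to be written down and solved numerically.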
ON THE SIMULATION PARADIGM ANALYSIS Kutuzov Oleg I., Tatarnikova Tatiana M.
Subject of Study. We discuss how the advancement of system time is implemented in the existing simulation paradigms: discrete-event, dynamic, system dynamics, and the multi-agent approach. For models with continuous processes, we propose choosing the time advancement step according to the Nyquist-Kotelnikov theorem. Methods. The system time advancement step is assigned on the basis of cyclic sampling with either a constant step Δt or a random step. A fixed step is used in dynamic modeling and system dynamics; in discrete-event and agent-based modeling, both fixed and random steps are used. When constructing the "mover" of system time, two main schemes for building modeling algorithms are used: the event scheme and the process scheme; the first is used in discrete-event modeling and the second in multi-agent modeling. In both cases, system time is advanced according to the principle of "special" moments. To determine the next "special" moment, a calendar is used in which the nearest occurrence time is recorded for each event type. Main Results. We have shown the unity of the four versions of simulation modeling: discrete-event, dynamic, system dynamics and multi-agent. We have substantiated a formalized approach to choosing the system time advancement step. The event and process schemes, which realize different approaches to building modeling algorithms, are compared. Practical Relevance. The unity of the paradigms contributes to the implementation of an integrated simulation environment. The recommendations given in the paper for choosing the system time advancement step make it possible to speed up modeling and save computing resources.
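The two time-advancement ideas above can be sketched side by side: a fixed step bounded by the sampling theorem (Δt ≤ 1/(2·f_max)) for continuous processes, and a calendar of "special" moments for discrete-event models, where the clock jumps straight to the earliest scheduled event. The event names and numbers below are illustrative assumptions.

```python
# Hedged sketch of the two step-selection schemes from the abstract.
# All frequencies, times and event types are illustrative values.
import heapq

def nyquist_step(f_max):
    """Largest admissible fixed step for a continuous process whose highest
    significant frequency is f_max (Nyquist-Kotelnikov: dt <= 1/(2*f_max))."""
    return 1.0 / (2.0 * f_max)

# Event calendar: a min-heap of (time, event_type). Instead of ticking
# uniformly, the simulation clock jumps to the nearest "special" moment.
calendar = []
heapq.heappush(calendar, (0.7, "arrival"))
heapq.heappush(calendar, (0.2, "departure"))
heapq.heappush(calendar, (1.5, "arrival"))

order = []
clock = 0.0
while calendar:
    clock, event = heapq.heappop(calendar)   # advance to next special moment
    order.append((clock, event))

print(nyquist_step(50.0))   # fixed step for a 50 Hz process: 0.01
print(order)                # events processed in time order
```

A real discrete-event kernel would also push new future events onto the heap while handling the current one, but the min-heap calendar is the essential mechanism behind the "special moments" principle.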


SMART LASER HEAD Fedosov Yury V., Afanasiev Maksim Ya.
We consider the issues of creating a smart laser head intended to operate as part of automated industrial equipment with computer numerical control. The paper deals with the main tasks solved by such devices and the methods for solving them. A number of similar devices are examined, and a comparative analysis of their performance and drawbacks is carried out. The main types of possible distortion of the contact spot during laser processing of a free-form surface are considered, together with the main causes of their appearance and the ways of interpreting them in mathematical processing. We have also studied the optical-mechanical layout of the smart head, designed to compensate for such distortions when it operates as part of the technological equipment.
The paper deals with a design method for a robust output controller, known as the consecutive compensator, for output stabilization of uncertain plants. The idea is based on the transition from the operator form of the regulator to its matrix representation, which permits the use of auxiliary tools such as linear matrix inequalities. The new design method extends the known result and yields effective solutions for such tasks as discretization, optimization and adaptation. The paper provides simulation results confirming the efficiency of the proposed regulator; they illustrate the stabilization of an uncertain plant with one stable zero and two unstable poles with a settling time less than the specified one.
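The benchmark class mentioned above (one stable zero, two unstable poles) can be illustrated with a toy example. For a relative-degree-one plant such as G(s) = (s+1)/((s−1)(s−2)), even plain high-gain static output feedback u = −k·y stabilizes the closed loop for sufficiently large k; this captures the flavor of the stabilization problem but is not the authors' consecutive-compensator design, and the plant, gain and simulation parameters are assumptions.

```python
# Hedged sketch: stabilizing G(s) = (s+1)/((s-1)(s-2)) -- one stable zero,
# two unstable poles -- with static output feedback u = -k*y. The closed-loop
# characteristic polynomial is s^2 + (k-3)s + (k+2), stable for k > 3.
# This is a toy stand-in, NOT the paper's consecutive compensator.

def simulate(k=10.0, dt=1e-3, t_end=5.0, x0=(1.0, 1.0)):
    """Forward-Euler simulation of the plant in controllable canonical form:
       x1' = x2,  x2' = -2*x1 + 3*x2 + u,  y = x1 + x2."""
    x1, x2 = x0
    for _ in range(int(t_end / dt)):
        y = x1 + x2
        u = -k * y                       # static output feedback
        dx1 = x2
        dx2 = -2.0 * x1 + 3.0 * x2 + u
        x1 += dt * dx1
        x2 += dt * dx2
    return x1 + x2                       # final output

y_final = simulate()
print(abs(y_final))                      # decays towards zero for k > 3
```

With k = 10 the closed-loop poles are at −3 and −4, so the output decays quickly from the initial condition; the consecutive compensator of the paper additionally handles plant uncertainty and higher relative degree, which this sketch does not attempt.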
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.