Summaries of the Issue


In the modern world, wearing masks, respirators, and other face coverings has become widespread. The coronavirus pandemic that began in 2019 has further increased the use of masks in public places. The most effective person recognition methods are identification by face image and by voice recording. However, person recognition systems face new challenges because masks cover most of the subject's face. The emergence of these problems for intelligent systems determines the relevance of research into masked person recognition; the subject of this study is therefore the systems and datasets for masked person recognition. The article analyzes the main approaches to masked person identity recognition: masked face recognition, masked voice recognition, and audiovisual methods. In addition, it includes a comparative analysis of the image and recording datasets required for person recognition systems. The results of the study showed that, among the methods using face images, the most effective are those based on convolutional neural networks and on feature extraction from the mask area. Methods based on x-vector analysis showed only a slight drop in efficiency, which allows us to conclude that they are applicable to recognizing the identity of a masked speaker. The results of this study help formulate requirements for prospective masked person recognition systems and determine directions for further research.


The results of research focused on the effect of the reference and object beam intensity ratio on the transmittance of computer-generated and analog holograms are presented. Particular attention is paid to the hologram synthesis mode in which the intensity of the object beam exceeds the intensity of the reference beam (the overmodulation mode). The study is relevant to cases where computer-generated holograms are used in extreme ultraviolet projection photolithography. Mathematical modeling of the physical processes of recording and reconstructing holograms has been performed. The characteristic size of the binary test object was 80 × 80 nm, the radiation wavelength was 13.5 nm, the hologram pixel size was 20 × 20 nm, the distance between the object and hologram planes was 20.4 μm, and the incidence angle of the plane reference wave was 14°42′. Synthesis and reconstruction of holograms were carried out in the overmodulation mode with different configurations of the object beam. It is shown that computer-generated holograms, unless binarized, are always recorded and reconstructed as quantized holograms with a quantization interval depending on the parameters of the synthesis scheme. It has been established that the influence of the overmodulation mode on the quality of the reconstructed image when using computer-generated holograms is much less than in the case of analog holograms, but it is also determined by the dynamic range of the object beam intensity in the hologram synthesis plane. It is noted that the influence of the overmodulation mode is minimal if an object beam converging at the center of the hologram is used during synthesis. The choice of an adequate quantization interval and of the ratio of the intensities of the reference and object beams will ensure high quality of the reconstructed image when using computer-generated Fresnel holograms in extreme ultraviolet projection photolithography.
High-precision fiber-optic temperature sensor based on Fabry-Perot interferometer with reflective thin-film multilayer structures
Ianina D. Moor, Kirill A. Konnov, Mikhail Yu. Plotnikov, Anton V. Volkov, Sergey V. Varzhel, Dmitriy A. Konnov, Vladimir E. Strigalev
A design of a fiber-optic temperature sensor based on a Fabry-Perot interferometer and a scheme for interrogating an experimental sample of the sensor are proposed. The proposed solution makes it possible to avoid using expensive spectral measuring devices (a spectrum analyzer or an interrogator). The free spectral range and the phase sensitivity of the developed Fabry-Perot interferometer were determined in the temperature range from 20 °C to 590 °C. The accuracy of measuring the ambient temperature is calculated. The long-term stability of the measuring setup at room temperature has been evaluated. The phase shift of the Fabry-Perot interferometer with temperature change was registered. The design of the Fabry-Perot interferometer is implemented using reflective thin-film multilayer structures obtained by stage-by-stage electron-beam deposition in vacuum on polished end faces of an optical fiber. The interferometer interrogation method is based on the use of a vertical-cavity surface-emitting laser (VCSEL) operating in a pulsed mode. The principle of registering the phase shift of the interferometer with a change in temperature is based on auxiliary wavelength modulation of the laser radiation achieved by modulating (periodically changing) the duration of the optical pulses. The auxiliary modulation makes it possible to obtain additional harmonic components in the interferometer signal, which are then used in homodyne demodulation to restore the interferometer phase shift signal proportional to the change in the optical path difference between the interferometer mirrors. The design of the high-temperature sensor is based on a Fabry-Perot interferometer whose reflecting mirrors are five alternating layers of thin films of TiO2 and Al2O3. Based on the results of the temperature experiment, it was concluded that an increase in the ambient temperature leads to a decrease in the free spectral range of the Fabry-Perot interferometer.
The conclusion is consistent with the theoretical data. According to the results of the experiment, the phase sensitivity of the interferometer to temperature changes is 0.94 rad/K. The accuracy of temperature measurements at the 3σ level was 0.017 K. The results of the study may be of great importance for creating systems for monitoring temperatures above 300 °C. The use of such an interferometer makes it possible to carry out high-precision relative temperature measurements.
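As a simple illustration of the relative measurement principle, the reported phase sensitivity can be used to convert a demodulated phase shift into a temperature change. This is a minimal sketch using only the figure quoted above, not part of the authors' signal processing chain:

```python
# Convert a demodulated Fabry-Perot phase shift into a relative temperature
# change, using the phase sensitivity reported in the abstract (0.94 rad/K).
PHASE_SENSITIVITY_RAD_PER_K = 0.94

def temperature_change(phase_shift_rad):
    """Relative temperature change inferred from the interferometer phase shift."""
    return phase_shift_rad / PHASE_SENSITIVITY_RAD_PER_K

# e.g. a measured phase shift of 9.4 rad corresponds to a 10 K change
delta_t = temperature_change(9.4)
```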
An optical system is considered that concentrates the radiation of an LED emitting into a hemisphere onto a near-field illuminated area. It is proposed to treat the system as a composition of two zones: a central zone, which is a lens, and a zone responsible for capturing the LED radiation within angles from 40 to 90 degrees. Variants with a central zone in the form of bi-aspherical and sphero-elliptical lenses of finite thickness are analyzed. An alternative variant of the concentrating system, composed of a collimating TIR lens and an additional focusing lens, is also analyzed. Expressions are given that allow estimating the achievable concentration efficiency and the light spot size, and examples of systems designed with these theoretical results taken into account are presented. Factors that define the choice of the required configuration are discussed. The results show good agreement between the theoretical approach and the practical design results. The optical elements designed as examples showed high optical efficiency (near 90 %), so this approach can be used for designing LED optical systems for efficient light flux concentration, for example, systems operating with a fiber bundle as needed in some optical-electronic devices.
An overview of methods for obtaining 2D and 3D models of pavement defects is given. The integrity of the pavement can be affected by factors such as temperature, humidity, weathering, and loads. Potholes are one of the most common types of pavement failure; these defects are signs of structural failure in an asphalt road. The process of collecting and analyzing data is critical to pavement maintenance. Finding and quantifying pothole geometry is essential for road maintenance forecasting and for choosing the right asphalt maintenance strategies. Visual detection of road defects is costly and time-consuming. Today, there are many studies in the scientific literature on methods for automatic detection and recognition of potholes. In our work, we consider methods for automatic detection and classification of potholes using sensors integrated with a positioning system. Processing two-dimensional (2D) images with various machine classification methods makes it possible to determine the precise geometry of a pothole. Algorithmic methods such as artificial neural networks, decision trees, support vector machines, and fuzzy classification are used to improve the accuracy of image processing and to highlight the edges of potholes. A three-dimensional (3D) model of a pothole can be obtained from laser scanning data and photogrammetry methods. The paper summarizes various methods and proposed techniques for extracting a 3D pothole model. The results of the work can be used to improve road surface maintenance infrastructure.


Adaptive control of nonlinear plant with unmatched parametric uncertainties and input saturation
Artem V. Pashenko, Dmitriy N. Gerasimov, Alexey V. Paramonov, Vladimir O. Nikiforov
The problem of adaptive control of a parametrically uncertain nonlinear plant in state-feedback form with unmatched uncertainties and an input constraint is considered. The solution is based on an adaptive backstepping procedure in which the virtual controls include high-order time derivatives of the adjustable parameters calculated using an adaptation algorithm with improved parametric convergence. Analytical design of the control with compensation for the influence of input constraints is performed using a special filter. The approach presented in this paper allows one to design an adaptive controller that ensures the boundedness of all signals in the closed-loop system and provides tracking of the reference signal. Simulation results obtained in the MATLAB/Simulink environment illustrate the performance of the presented approach and the acceleration of parametric convergence with an increase of the adaptation coefficient. The plant considered in this work describes a wide class of systems, such as manipulation robots, technological processes, electromagnetic levitation systems, chemical processes, etc. The proposed control algorithm takes into account input constraints that are natural in practical applications and significantly reduces the influence of the regulator parameter tuning speed on the transient processes of the control system.
Ensuring the security of control systems is an important and urgent problem. It consists in eliminating the impact of failures and attacks on control objects and on the environment. Prevention of critical failures is especially important. The purpose of this study is to analyze the similarities between the consequences of attacks on complex technical systems and failures of these systems. In the course of the work, a hypothesis about the similarity of the impact of failures and information attacks on a complex technical system is presented. Both information attacks and failures cause anomalous dynamics of the control object. Analysis of the deviation of the dynamics of the control object from the normal mode of operation makes it possible to detect and isolate information attacks and failures. The paper examines the influence of information attacks on the dynamics of automatic control systems. A comparison of the abnormal dynamics of control objects during attacks and during device failures is carried out. The similarity of the consequences of information attacks and control system failures is analyzed, and a method is developed for identifying attacks based on methods developed for detecting failures. Computer modeling of the influence of information attacks and failures on the control system of a DC motor has been carried out. The simulation results allow a conclusion about the applicability of failure detection algorithms for detecting attacks. It is shown that failures and information attacks can lead to dangerous consequences for the control system. Studying the intersection of information security and failure detection therefore seems relevant.
The subject of this research is the online estimation of DC motor characteristics under various loads. The paper is devoted to a modern approach to the problem of detecting DC motor failures. The proposed detection method is based on a set of full-state Luenberger observers. The isolation scheme uses a directional residual set and the relationships between the fault direction and the residual vector. A procedure for synthesizing the fault detection and isolation algorithm for a DC motor is designed. The performance of this scheme is demonstrated by computer modeling of a typical RK 370CA DC motor with faults caused by an unaccounted force moment acting on the rotor, an input voltage disturbance, and velocity and current sensor failures. The algorithm correctly determines the motor state (fault presence or absence) and also properly isolates the fault cause. The advantage of the proposed method, compared to other solutions based on hardware and timing redundancy, identification, and observers, lies in the opportunity to detect and isolate faults of input and output signals with a simple synthesis and without the need to expand the system hardware. The proposed method is applicable to any second-order system, and it can also be used for higher-order systems with corresponding changes in the equation systems solved for observer synthesis. The algorithm allows online fault isolation and does not require additional measurements, which reduces diagnostic costs, saves repair and servicing time, and enables timely accident detection. The results can be applied in DC motor control to increase reliability and to develop DC motor control systems.
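The observer-based residual idea can be sketched with a generic two-state DC motor model and a single full-state Luenberger observer; a current-sensor bias fault injected mid-run makes the residual jump from zero. All numeric parameters, gains, and the fault scenario below are illustrative assumptions, not the RK 370CA data or the authors' full directional-residual scheme:

```python
# Minimal fault-detection sketch: a DC motor (current i, speed w) simulated
# alongside a full-state Luenberger observer; the residual |y_i - i_hat|
# stays at zero until a current-sensor bias fault is injected.
R, L_IND, KE = 1.0, 0.01, 0.05   # resistance [Ohm], inductance [H], back-EMF const.
KT, J, B = 0.05, 1e-4, 1e-5      # torque const., rotor inertia, viscous friction

def simulate(steps=2000, dt=1e-4, fault_at=1000, bias=0.5):
    i = w = 0.0          # true plant state
    ih = wh = 0.0        # observer estimate
    l1 = l2 = 200.0      # observer gains (assumed; both states are measured)
    u = 6.0              # constant input voltage
    residuals = []
    for k in range(steps):
        # measured outputs; a current-sensor bias fault appears at step fault_at
        y_i = i + (bias if k >= fault_at else 0.0)
        y_w = w
        residuals.append(abs(y_i - ih))
        # Luenberger observer: copy of the model driven by the output error
        dih = (-R * ih - KE * wh + u) / L_IND + l1 * (y_i - ih)
        dwh = (KT * ih - B * wh) / J + l2 * (y_w - wh)
        # true plant dynamics (explicit Euler step)
        di = (-R * i - KE * w + u) / L_IND
        dw = (KT * i - B * w) / J
        i += dt * di
        w += dt * dw
        ih += dt * dih
        wh += dt * dwh
    return residuals

residuals = simulate()
# residual is zero before the fault and settles at a nonzero value after it
```

The same pattern extends to a bank of observers, each sensitized to one fault direction, which is what enables isolation in addition to detection.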
Synthesis and implementation of the λ-approach to sliding mode control in a heat-consumption system
Aleksandr A. Shilin, Victor G. Bukreev, Filipp V. Perevoshchikov
The paper proposes a fundamentally new approach to the synthesis and implementation of control for dynamic objects with three-position relay control. The approach consists in organizing a differentiation procedure on the relay element involved in the feedback. We consider synthesis of the relay element feedback in problems of robust and time-optimal control of heat-consumption systems. To demonstrate the effectiveness of the proposed approach, a comparative assessment of the results of modeling heat-consumption systems with three-position relay control and with a traditional linear-quadratic regulator is presented. Plots of the transient processes of active heat-consumption systems are provided, which confirm the effectiveness of the synthesized relay control.


The photocatalytic properties of Ag–AgBr nanostructures formed by the low-temperature ion exchange method followed by heat treatment in bromide-containing sodium-zinc-aluminosilicate glass have been investigated. Glasses based on the Na2O–ZnO–Al2O3–SiO2 system and doped with Sb2O3, Ce2O3, and Br were synthesized. Layers containing silver ions were formed on the surface of the sodium-zinc-aluminosilicate glass by the ion exchange method. The glass samples were immersed in a bath containing a melt of the nitrate mixture 5AgNO3/95NaNO3 (mol%) at 320 °C for 2 hours. Subsequent heat treatment at 500 °C resulted in the formation of Ag–AgBr nanostructures in the surface layer. The photocatalytic properties of the Ag–AgBr nanostructures on the glass surface were measured by the decomposition of methyl orange dye. A comprehensive study of the spectral and photocatalytic properties of the Ag–AgBr nanostructures has been carried out. It was shown that, after ion exchange and heat treatment, AgBr crystal shells with a size of 6 nm were formed around silver nanoparticles. It has been established that the presence of a photocatalyst with Ag–AgBr nanostructures in the surface layer of the glass under ultraviolet irradiation leads to degradation of the methyl orange dye by 77 %. Reducing the thickness of the ion-exchange layer to 5 μm by chemical etching decreased the degradation efficiency of the methyl orange dye to 15 %. The results of the work can be applied in devices for the photocatalytic decomposition of water to produce hydrogen.


Mechanization of pomset languages in the Coq proof assistant for the specification of weak memory models
Evgenii A. Moiseenko, Vladimir P. Gladstein, Anton V. Podkopaev, Dmitry V. Koznov
Memory models define the semantics of concurrent programs operating on shared memory. The theory of these models is an active research topic. As new models emerge, the problem of providing a rigorous formal specification for them becomes relevant. In this paper, we consider the problem of formalizing memory models in interactive theorem proving systems. We use the semantic domain of pomset languages to formalize memory models. We propose an encoding of pomset languages using quotient types and discuss the advantages and shortcomings of this approach. We present a library developed in the Coq proof assistant that implements the proposed approach. As a part of this library, we establish a connection between pomset languages and operational semantics defined by labeled transition systems. With the help of this theory, it becomes possible to define in Coq memory models parameterized by the operational semantics of an abstract datatype, and thus to decouple the definition of a memory model from the definition of the datatype. The proposed library can be used to develop formal machine-checked specifications of a wide class of memory models. To demonstrate its applicability, we present specifications of several basic memory models: sequential, causal, and pipelined consistency.
Cloud-based intelligent monitoring system to implement mask violation detection and alert simulation
Vattumilli Komal Venugopal, Lalith Movva, Arun Kumar Thangavelu, Jayashree Jayaraman, Vijayashree Jayaraman
The importance of wearing a mask in public places came to light when the COVID-19 pandemic started. To strictly control the spread of the virus, wearing a mask is mandatory in order to avoid catching the virus from others or spreading it to others if we are carrying it. Since it is not possible to check whether each individual in a public place is wearing a mask, this paper proposes face mask detection using Deep Learning (DL) and Convolutional Neural Network (CNN) techniques. A cloud-based approach adopting DL is used to identify persons violating the rules. The dataset used in the work is collected from several sources, such as Prajnasb/observations and Kaggle's Face Mask Detection Dataset, which contain images of people wearing and not wearing masks. The faces in the images are detected and cropped with the help of a trained face detector and then checked for whether the face is wearing a mask. Face mask detection is performed with a CNN: the input image is fed into the network, and the output is binary, indicating whether the person is wearing a mask or not. The work uses the Max Pooling and Average Pooling layers of the CNN. The outcome of the work shows that the proposed method achieves 98 % accuracy using Max Pooling, which is better than currently available works.
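The difference between the two pooling layers mentioned above can be shown on a toy feature map. This is a pure-Python stand-in for the CNN layers, not the authors' network:

```python
# Max pooling vs. average pooling over a 2D feature map,
# with a 2x2 window and stride 2.

def pool2d(fmap, op, size=2):
    """Apply `op` (e.g. max or mean) to non-overlapping size x size windows."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for r in range(0, h - size + 1, size):
        row = []
        for c in range(0, w - size + 1, size):
            window = [fmap[r + i][c + j] for i in range(size) for j in range(size)]
            row.append(op(window))
        out.append(row)
    return out

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]
max_pooled = pool2d(feature_map, max)                        # [[4, 2], [2, 8]]
avg_pooled = pool2d(feature_map, lambda w: sum(w) / len(w))  # [[2.5, 1.0], [1.25, 6.5]]
```

Max pooling keeps the strongest activation in each window (sharp edges, mask boundaries), while average pooling smooths the response, which is one reason the two layers can yield different accuracies.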
In blockchain, ensuring the integrity of data when updating distributed ledgers is a challenging and fundamental process. Most blockchain networks use a Merkle tree to verify the authenticity of data received from other peers on the network. However, creating a Merkle tree for each block in the network and composing a Merkle branch for every transaction verification request are time-consuming processes requiring heavy computation. Moreover, sending these data through the network generates a lot of traffic. Therefore, we propose an updated mechanism that uses an incremental hash chain with a probabilistic filter to verify block data, provide a proof of data integrity, and efficiently update blockchain light nodes. In this article, we prove that our model provides better performance and requires less computation than a Merkle tree while maintaining the same security level.
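The two ingredients of such a mechanism, an incremental hash chain and a probabilistic membership filter, can be sketched as follows. This is an illustrative stand-in, not the authors' exact construction; the filter size and hash count are assumptions:

```python
# Incremental hash chain over block transactions plus a simple Bloom filter
# for probabilistic membership queries by light nodes.
import hashlib

def chain_hash(prev_digest, tx):
    """Extend the incremental hash chain with one transaction."""
    return hashlib.sha256(prev_digest + tx).digest()

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # bit array packed into one integer

    def _positions(self, item):
        # derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # no false negatives; small false-positive probability
        return all((self.bits >> p) & 1 for p in self._positions(item))

# A light node maintains a running digest and a filter instead of a Merkle tree
digest = b"\x00" * 32
bf = BloomFilter()
for tx in [b"tx1", b"tx2", b"tx3"]:
    digest = chain_hash(digest, tx)
    bf.add(tx)
```

Updating the chain digest is a single hash per transaction, versus rebuilding or traversing a Merkle tree, which is the source of the claimed savings.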
Method for generating masks on face images and systems for their recognition
Georgy A. Kukharev, Elena V. Ryumina, Nikita A. Shulgin
The problem of masked face recognition is investigated. It is shown that real masks of various shapes, textures, and colors have become a problem for state-of-the-art face recognition systems. A reason for this is the lack of the necessary real training datasets. Creating new data based on simple methods of forming masks on face images could solve this problem. An original method is proposed that generates masks of various types, shapes, and colors directly on the original texture of face images. The formation of masks on the faces of individuals, on faces in group photos, and in scenes with streams of people was taken into account. Based on 100 original face images from the CUHK Face Sketch Database, a test database was created that includes more than 20,000 masked face images which are available for use. Experiments were carried out to recognize faces from the test database with four implemented systems, of which three are state-of-the-art systems based on deep learning and one is a deterministic system based on the cosine transform. The performance of these systems was evaluated, the obtained results of masked face recognition were interpreted, and the masks that posed a problem for the four selected systems were noted. The proposed mask generation method can be used to create corpora and test databases of images with masks. The obtained results will be useful to researchers and specialists in the field of image processing and analysis.
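The core idea of painting a mask directly onto the face texture can be illustrated with a toy grayscale example. The shape, position, and intensity below are assumptions for the sketch; the actual method generates masks of various types, shapes, and colors:

```python
# Toy mask generation: paint a flat "mask" region over the lower part of a
# face image, here represented as a 2D list of grayscale intensities.

def apply_mask(image, mask_value=200, coverage=0.45):
    """Return a copy of `image` with the lower `coverage` fraction of rows
    overwritten by the mask intensity; the original image is left intact."""
    h = len(image)
    start = int(h * (1 - coverage))
    masked = [row[:] for row in image]
    for r in range(start, h):
        masked[r] = [mask_value] * len(masked[r])
    return masked

face = [[row_intensity] * 4 for row_intensity in range(10)]  # toy 10x4 "face"
masked = apply_mask(face)  # lower ~45 % of rows covered by the mask
```

Generating many such variants per source image is what turns a small set of originals into a large masked-face training or test database.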
Improving sign language processing via few-shot machine learning
Grigory F. Shovkoplias, Dmitriy A. Strokov, Daniil V. Kasantsev, Alexandra S. Vatyan, Arip A. Asadulaev, Ivan V. Tomilov, Anatoly A. Shalyto, Natalya F. Gusarova
Improving the efficiency of communication for deaf and hard of hearing people by processing sign language with artificial intelligence is an important task both socially and technologically. One way to solve this problem is the fairly cheap and accessible marker method, which is based on registering electromyographic (EMG) muscle signals using bracelets worn on the arm. To improve the quality of recognition of gestures recorded by the marker method, a modification of the method is proposed: duplication of EMG sensors in combination with a few-shot machine learning approach. We experimentally study the possibilities of improving the quality of sign language processing by duplicating EMG sensors as well as by reducing the volume of the dataset required for training machine learning tools. In the latter case, we compare several few-shot technologies. Our experiments show that training few-shot neural networks on 56k samples achieves better results than training a random forest on 160k samples. The use of a minimum number of sensors in combination with few-shot signal processing techniques makes it possible to organize quick and cost-effective interaction with people with hearing and speech disabilities.
The paper reports a method for compressed representation of matrix data on the principles of quantum theory. The method is formalized as complex-valued matrix factorization based on standard singular value decomposition. The developed approach establishes a bridge between standard methods of semantic data analysis and quantum models of cognition and decision. According to quantum theory, real-valued observable quantities are generated by wavefunctions, which are complex-valued vectors in a multidimensional Hilbert space. Wavefunctions are defined as superpositions of basis vectors encoding compositions of semantic factors. Basis vectors are found by singular value decomposition of the initial data matrix transformed to a real-valued amplitude form. Phase-dependent superposition amplitudes are found to optimize the approximation of the source data. The resulting model represents the observed real-valued data as generated from a small number of basis wavefunctions superposed with complex-valued coefficients. The method is tested on random matrices of sizes from 3 × 3 to 12 × 12 with the dimensionality of the latent Hilbert space from 2 to 4. The best approximation is achieved by encoding latent factors in normalized complex-valued amplitude vectors interpreted as wavefunctions generating the data. In terms of approximation fitness, the developed method surpasses standard truncated SVD of the same dimensionality; the mean advantage over the considered range of parameters is 22 %. The method permits cognitive interpretation in accord with the existing quantum models of cognition and decision. The method can be integrated into algorithms of semantic data analysis, including natural language processing. In these tasks, the obtained improvement of approximation translates into increased precision of similarity measures, principal component analysis, and an advantage in classification and document ranking methods. Integration with quantum models of cognition and decision is expected to boost methods of artificial intelligence and machine learning, improving the imitation of natural thinking.
Modelling of basic Indonesian Sign Language translator based on Raspberry Pi technology
Umi Fadlilah, Raden A.R. Prasetyo, Abd K. Mahamad, Bana Handaga, Sharifah Saon, Endah Sudarmilah
Deaf people have hearing loss ranging from mild to very severe. Such people have difficulty processing language information both with and without hearing aids. Deaf people who do not use hearing aids use sign language in their everyday conversations. At the same time, it is difficult for hearing people to communicate with the deaf: in order to do so, they must know sign language. There are two sign languages in Indonesia, namely SIBI (Indonesian Sign Language System) and BISINDO (Indonesian Sign Language). To help with communication between deaf and hearing people, we developed a model using the one-handed SIBI method as an example, and then further developed it using one-handed and two-handed BISINDO. The main function of the method is the recognition of basic letters, words, sentences, and numbers using a Raspberry Pi single-board computer and a camera designed to detect the movements of language gestures. With the help of a special program, images are translated into text on the monitor screen. The method uses image processing and machine learning with the Python programming language and Convolutional Neural Network techniques. The device prototype issues a warning to repeat the sign language if the translation fails, and deletes the translation if it does not match the database. The prototype requires further development to make it more flexible: to read dynamic movements and facial expressions, and to translate words not included in the existing database. A database other than SIBI needs to be added, such as BISINDO or sign languages from other regions or countries.
This paper investigates the possibility of enhancing the robustness of an automatic system for recognizing isolated signs and sign languages through the use of the most informative spatiotemporal visual features. The authors present a method for the automatic recognition of gestural information based on an integrated neural network model which analyses spatiotemporal visual features: 2D and 3D distances between the palm and the face; the area of the hand and face intersection; hand configuration; and the gender and age of signers. A 3D ResNet-18-based neural network model for hand configuration data extraction was developed. Neural network models from the Deepface software platform were embedded in the method in order to extract gender- and age-related data. The proposed method was tested on data from the multimodal corpus of sign language elements TheRuSLan, with an accuracy of 91.14 %. The results of this investigation not only improve the accuracy and robustness of machine sign language translation, but also enhance the naturalness of human-machine interaction in general. Besides that, the results have applications in various fields of social services, medicine, education, and robotics, as well as in different public service centers.
In the current digital climate, the education sector is evolving as computer technology advances. Education is being digitized: online classes are held, online examination methods are used, and so on. During examination, students are assessed by the answers they have given to questions set by a teacher. Today many tools are available to assess the performance of a student using multiple-choice questions, providing instant evaluation, but very few operational tools are available for evaluating students' subjective-type answers. This paper presents a web-based application to address this challenge. It automates the process of checking subjective answers and generates results using natural language processing methods such as keyword matching, semantic and lexical analysis, and cosine similarity. Experiments show little difference between the teacher's assessment and the system's estimation, which signifies that the system evaluates answers with 97 % accuracy. The presented system not only reduces manpower but also eliminates the traditional method of conducting exclusively subjective exams using paper documents. It also eliminates delays in paper checking and result generation. Cases of information leaks are reduced and the objectivity of the assessment is increased.
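The cosine-similarity step of such a pipeline can be sketched with simple bag-of-words vectors. This is illustrative only; the actual system combines it with keyword matching and semantic and lexical analysis:

```python
# Cosine similarity between a reference answer and a student answer,
# using bag-of-words term-frequency vectors.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between the two texts' word-count vectors."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

reference = "photosynthesis converts light energy into chemical energy"
answer = "photosynthesis converts light into chemical energy"
score = cosine_similarity(reference, answer)  # close to 1 for similar answers
```

A score near 1 indicates strong word overlap with the reference answer and can be mapped onto a mark scale; unrelated answers score near 0.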


Based on the principles of structural information analysis of the situation, ways are justified to build a system of spatially distributed information sensors with functions of reconfiguring the structure and changing the composition in accordance with monitoring tasks and conditions. Using the method of indeterminate Lagrange multipliers, a procedure has been developed for rationally selecting the configuration of a sensor system to achieve stability and reliability of control over a given area of space. Experimental estimates of the accuracy of determining the location of objects in spatially distributed passive radar systems are obtained when azimuth-angular direction finders and signal detectors are used as information sensors. Regularities of accuracy improvement through the selection of the number and positions of information sensors are revealed.
The article proposes a new method of monitoring the infiltration processes developing inside the body of hydraulic structures. The method is based on DAS (distributed acoustic sensing) fiber-optic technology, which provides high spatial continuity of the analysis of the structure's seismoacoustic field; on a digital twin of the infiltration dynamics; and on efficient signal processing methods based on machine learning. As a distributed sensor of the object's seismoacoustic field, a DAS system is used whose fiber-optic sensor is installed inside the body of the structure according to the principle of maximum coverage. The infiltration activity inside the structure body is estimated based on the analysis of an ensemble of infiltration flows which are detected and classified by machine learning (ML) methods. These infiltration flows are sources of seismoacoustic emission and are therefore confidently detected by the DAS system. A digital twin of the infiltration dynamics based on the equations of mathematical physics is used as the baseline for estimating the current state of fluid activity in the body of the structure. The risk of structure failure under the influence of the observed infiltration flow is estimated within the framework of the proposed formal method based on the digital twin data. Based on the analysis of a dataset consisting of real signals of infiltration processes, the high efficiency of detection and classification of this type of signals with the special ML classifier included in the monitoring system is demonstrated. A digital twin model of the infiltration process dynamics in the body of a hydraulic structure is proposed. On the basis of the digital twin model, a method is proposed for estimating the risk of damage to the body of a hydraulic structure which may occur as a result of the observed infiltration activity.
The proposed method of monitoring infiltration processes can be used to monitor the operational condition of almost any hydraulic structure, including those in the cryolithozone.
Waveguide structures have gained popularity because of their extensive application in the radar systems of naval ships and aircraft. Waveguide models provide a high probability of small target detection and reduce the rate of false target detection. There are a large number of studies on waveguides slotted in the wide wall; research concerning the narrow wall of the waveguide is much less common. An edge slotted waveguide antenna array based on a radiating waveguide with inclined slots having semicircular ends is proposed. The length of the inclined slot is extended into the adjacent broad wall with a semicircular cut. This extended length increases the resonant length, and hence a higher gain is obtained. The semicircular cut at the end of the slot reduces the cross-polarization component, hence the side lobe levels obtained are low. The narrow-wall inclined slotted waveguide is analyzed and designed to operate in the X-band. The radiating slots are etched and rotated alternately on the broadened top plate with semicircular cuts into the adjacent walls. This technique removes the radial component of the propagating wave and adds the axial component. The designed waveguide structure provides high gain with a minimized cross-polarization component. A gain of 26 dB is obtained from simulation results in HFSS (High Frequency Structure Simulator) with a side lobe level of around 20 dB, while the hardware design provides a gain of 24.5 dB measured on a VNA (Vector Network Analyzer) while keeping the side lobe level minimal.
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.