I. A. Zikratov, T. V. Zikratova, I. S. Lebedev, A. V. Gurtov

Article in Russian


This paper considers the design of mechanisms for protecting multi-agent robotic systems against attacks by saboteur robots. The operation of such systems under decentralized control is analyzed. The focus is on so-called soft attacks, in which saboteur robots intercept messages, fabricate and transmit misinformation to the group, and carry out other actions that leave no identifiable signs of intrusion. Existing information-security models based on trust levels computed during agent interaction are reviewed. An information-security model is proposed in which robot agents derive trust levels for one another by analyzing, with their onboard sensor devices, the situation arising at each step of an iterative algorithm. On the basis of the computed trust levels, "saboteur" objects are recognized within the group of legitimate robot agents. To increase the measure of similarity (adjacency) between objects of the same category ("saboteur" or "legitimate agent"), an algorithm is proposed for computing agent reputation as a measure of collective opinion about the qualities of a given agent. Implementation options for saboteur-detection algorithms are examined using the basic target-distribution algorithm for a group of robots as an example.
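The trust-and-reputation scheme described above can be illustrated with a minimal sketch. This is not the article's actual model, only an assumed simplification: each agent keeps a pairwise trust score that it nudges up or down after checking another agent's reported action against its own sensor view, the group averages those scores into a reputation, and agents whose reputation falls below a threshold are flagged as suspected saboteurs. All class, method, and parameter names here are illustrative.

```python
class TrustModel:
    """Toy pairwise-trust and group-reputation model (illustrative only)."""

    def __init__(self, n_agents, threshold=0.5):
        self.n = n_agents
        self.threshold = threshold
        # trust[i][j]: agent i's current trust in agent j, starting neutral
        self.trust = [[0.5] * n_agents for _ in range(n_agents)]

    def observe(self, i, j, action_consistent, rate=0.2):
        """Agent i updates trust in j after comparing j's reported action
        with i's own onboard-sensor view of the situation."""
        target = 1.0 if action_consistent else 0.0
        self.trust[i][j] += rate * (target - self.trust[i][j])

    def reputation(self, j):
        """Group-level reputation of agent j: mean trust of all other agents."""
        others = [self.trust[i][j] for i in range(self.n) if i != j]
        return sum(others) / len(others)

    def suspected_saboteurs(self):
        """Agents whose reputation has fallen below the threshold."""
        return [j for j in range(self.n)
                if self.reputation(j) < self.threshold]
```

For example, if over several iterations every observer finds agent 3's reports inconsistent with its own sensor data while the other agents behave consistently, agent 3's reputation decays below the threshold and it is the only agent flagged by `suspected_saboteurs()`.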

Keywords: information security, group of robots, multi-agent robotics systems, attack, vulnerability, information security model (IT security model)



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright 2001-2024 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.