TRUST MODEL FOR INFORMATION SECURITY OF MULTI-AGENT ROBOTIC SYSTEMS WITH DECENTRALIZED CONTROL
The paper addresses the protection of multi-agent robotic systems against attacks by saboteur robots. The operation of such systems under decentralized control is analyzed, and the concept of a harmful information impact (attack) by a saboteur robot on a multi-agent robotic system is defined. The class of attacks considered includes the interception of messages, the formation and transmission of misinformation to a group of robots, and other exploitation of vulnerabilities in multi-agent algorithms that leaves no clearly identifiable signs of intrusion. An information security model is developed in which agent robots compute trust levels for one another by analyzing the events occurring in the system. The idea of the trust model is that each robot analyzes the information transmitted by other members of the group and the actions they perform, comparing the decision chosen at iteration step k with the objective function of the group. A distinctive feature of the trust model, in comparison with its closest analogue, the Buddy Security Model, in which agents exchange security tokens, is the introduction of a time factor: agents must "prove" through their actions, over time, their usefulness to the group in achieving the common goal. Variants of realizing this model and ways of assessing agents' trust levels with respect to the security policy adopted in the group are proposed.
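The abstract does not give the model's formulas, but the described mechanism (trust levels updated per iteration by comparing a peer's chosen decision against the group objective, plus a time factor that forces trust to be earned) can be sketched as follows. All names, parameter values, and the toy objective function here are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of a per-agent trust update. Assumptions (not from
# the paper): trust lies in [0.0, 1.0] and starts at a neutral 0.5; a
# peer's action at step k is scored with a toy group objective; a small
# constant decay models the "time factor" -- trust erodes unless the peer
# keeps contributing useful actions toward the common goal.

def objective_gain(action: float, target: float) -> float:
    """Toy group objective: higher is better (closer to the target)."""
    return -abs(target - action)

class TrustTracker:
    def __init__(self, neutral: float = 0.5, reward: float = 0.05,
                 penalty: float = 0.15, decay: float = 0.01):
        self.neutral = neutral
        self.reward = reward    # credit for a decision consistent with the goal
        self.penalty = penalty  # larger than reward: sabotage costs trust fast
        self.decay = decay      # time factor: trust must be re-earned
        self.trust: dict[str, float] = {}

    def update(self, peer: str, action: float, best_action: float,
               target: float) -> float:
        """Revise trust in `peer` after observing its action at step k."""
        t = self.trust.get(peer, self.neutral)
        # Compare the peer's decision with the best known decision
        # under the group objective function.
        useful = (objective_gain(action, target)
                  >= objective_gain(best_action, target) - 1e-9)
        t += self.reward if useful else -self.penalty
        t -= self.decay  # even a useful peer must keep proving itself
        self.trust[peer] = min(1.0, max(0.0, t))
        return self.trust[peer]
```

With these parameters a single inconsistent decision costs three times what a useful one earns, so a saboteur feeding misinformation loses trust quickly, while an honest agent's trust grows only through sustained useful behavior, which is the hedged intent of the time factor described above.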
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License