Vulnerability detection and attack prevention on a social robot control system that uses deep reinforcement learning

Daniel Giełdowski

supervisor: Wojciech Szynkiewicz



Robots already operate outside the closed IT networks of factories and laboratories. For this reason, they require appropriate security measures against attacks and their consequences. Deep reinforcement learning makes it possible to teach a robot complex tasks, but even small disturbances applied to its input signals can lead to unpredictable behaviour of the robot and, in consequence, to physical damage to the system or its environment. Social robots are capable of performing multiple tasks, for example helping the elderly or caring for the disabled. A crucial element of such work is providing people with a sense of security and privacy. Vulnerabilities of the robotic system may lead to situations potentially dangerous to the people who interact with the machine.
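To make the "small disturbances" claim concrete, the following minimal sketch shows an FGSM-style (fast gradient sign method) attack on a toy two-action linear policy. All weights, observations, and the epsilon value are hypothetical illustrative choices, not taken from any trained agent; for a linear policy the required gradient is available in closed form, whereas a deep network would need backpropagation.

```python
import numpy as np

# Hypothetical linear policy: action = argmax(W @ obs).
# Weight values are illustrative, not from a trained agent.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def policy(obs):
    """Greedy action of the linear policy."""
    return int(np.argmax(W @ obs))

def fgsm_perturb(obs, eps):
    """FGSM-style attack: move the observation along the sign of the
    gradient of (logit of the attacker's target minus logit of the
    chosen action). For a linear policy that gradient is W[b] - W[a]."""
    logits = W @ obs
    a = int(np.argmax(logits))   # action the policy currently picks
    b = int(np.argmin(logits))   # attacker's target (works for 2 actions)
    grad = W[b] - W[a]
    return obs + eps * np.sign(grad)

obs = np.array([0.6, 0.5])
adv = fgsm_perturb(obs, eps=0.06)   # at most 0.06 change per sensor reading
print(policy(obs), policy(adv))     # the tiny perturbation flips the action
```

Here a perturbation bounded by 0.06 per observation component is enough to change the selected action, illustrating why sensor-signal integrity matters for a DRL-controlled robot.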

The first phase of the research consisted of exploring and analysing the current state of the art in deep reinforcement learning and its vulnerability to cyber-attacks. The next step was defining the most probable types of attacks on the robotic system and the magnitude of their consequences for the functioning of the social robot, along with possible security measures employable against them. Currently, a scheme of the proposed procedure for vulnerability analysis of the system is presented. The next step of the research is a practical evaluation of the procedure in the form of an implementation: first in a simulated environment and then on real hardware.