Attack and Fault Injection in Self-driving Agents on the Carla Simulator -- Experience Report

Research Area: Uncategorized
Year: 2021
Type of Publication: In Proceedings
Authors: Niccolò Piazzesi; Massimo Hong; Andrea Ceccarelli
Editors: Habli, Ibrahim; Sujan, Mark; Bitsch, Friedemann
Book title: Computer Safety, Reliability, and Security
Pages: 210-225
Address: Cham
ISBN: 978-3-030-83903-1
Machine Learning applications are acknowledged as the foundation of autonomous driving, because they are the enabling technology for most driving tasks. However, the inclusion of trained agents in automotive systems exposes the vehicle to novel attacks and faults that can result in safety threats to the driving tasks. In this paper we report our experimental campaign on the injection of adversarial attacks and software faults in a self-driving agent running in a driving simulator. We show that adversarial attacks and faults injected in the trained agent can lead to erroneous decisions and severely jeopardize safety. The paper presents a feasible and easily reproducible approach based on an open-source simulator and tools, and the results clearly motivate the need for both protective measures and extensive testing campaigns.
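The two injection mechanisms named in the abstract can be illustrated in miniature. As a hedged sketch only (not the authors' actual pipeline or CARLA agent), the snippet below shows the fast gradient sign method, a standard adversarial perturbation of a sensor input, and a single bit flip in a floating-point weight, a common software fault model; the toy linear model and all names are assumptions for illustration:

```python
import struct
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: shift each input element by eps in the
    direction of the loss gradient's sign, then clip back to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def flip_bit(value, bit):
    """Fault model: flip one bit of a float32 weight's IEEE 754 encoding."""
    (packed,) = struct.unpack("<I", struct.pack("<f", value))
    return struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0]

# Toy linear "agent": for score = w . x, the loss gradient w.r.t. x is w.
w = np.array([0.5, -0.3, 0.8])
x = np.array([0.2, 0.7, 0.4])            # clean "sensor" input in [0, 1]
x_adv = fgsm_perturb(x, grad=w, eps=0.05)  # each element moves by at most eps

w_faulty = flip_bit(1.0, 22)  # mantissa bit flip: 1.0 becomes 1.5
```

Even these toy cases show the two failure shapes reported in the paper: the adversarial input stays visually close to the original (bounded by eps per element), while a single bit flip can change a weight, and hence the decision, arbitrarily.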

