ARTISAN 2023 - ARTISAN Summer School (Role and effects of ARTificial Intelligence in Secure ApplicatioNs)
Tuesday, 09 May 2023 14:44

The second edition of the ARTISAN Summer School has been announced: see the website

The ARTISAN PhD Summer School (Role and effects of ARTificial Intelligence in Secure ApplicatioNs) tackles issues related to Artificial Intelligence (AI) and Machine Learning (ML) with regard to security and safety applications.

Together with his colleagues Oum El-Kheir and Oliver Jung, Andrea Ceccarelli, associate professor at RCL, is organizing the second edition of the ARTISAN Summer School. Prof. Andrea Bondavalli will give a talk titled "Dependability Challenges in Safety-Critical Systems: the adoption of Machine learning", and Dr. Tommaso Zoppi will give a talk titled "Meta-Learning for Intrusion Detection: Teamwork or Fight against each other?".

Keynote at SRDS
Tuesday, 20 September 2022 10:52

The chair of the RCL Group, Andrea Bondavalli, just finished his keynote speech at the 41st International Symposium on Reliable Distributed Systems (SRDS 2022).

Keynote Details

Title: Dependability Challenges in Safety-Critical Systems: the adoption of Machine learning

Abstract: Machine Learning components in safety-critical applications can perform complex tasks that would be unfeasible otherwise. However, they are also a weak point concerning safety assurance. We will illustrate two specific cases where ML must be incorporated into safety-critical systems (SCS) with much care. The first concerns the interactions between machine-learning components and other non-ML components, and how these interactions evolve as the former is trained. We argue that it is theoretically possible that learning by the Neural Network may reduce the effectiveness of error checkers or safety monitors, creating a major complication for safety assurance. An example on automated driving is shown. Among the results, we observed that improving the Controller could indeed make the Safety Monitor less effective, to the point where a training increment makes the Controller's own behavior safer but results in the vehicle being less safe. The second concerns ML algorithms that perform binary classification as error, intrusion or failure detectors. They can be used in SCS provided that their performance complies with SCS safety requirements. However, the performance analysis of ML relies on metrics that were not developed with safety in mind and consequently may not provide meaningful evidence to decide whether to incorporate an ML component into an SCS. We analyze the distribution of misclassifications and thus show how to better assess the adequacy of a given ML.
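The abstract's last point can be illustrated with a minimal toy sketch (our own, not the speaker's actual analysis): two binary detectors with identical accuracy can hide very different misclassification mixes, and for safety the false-negative rate (missed anomalies) is the figure that matters.

```python
# Toy illustration: same accuracy, different misclassification distribution.
# Convention (our assumption): label 1 = anomaly/attack, 0 = nominal.

def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def safety_view(y_true, y_pred):
    """Return (accuracy, false-negative rate): the latter counts missed anomalies."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    fnr = fn / (tp + fn) if (tp + fn) else 0.0
    return accuracy, fnr

# Two detectors with identical accuracy but different error mixes:
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
det_a  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]  # 1 missed anomaly, 1 false alarm
det_b  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # 2 missed anomalies, no false alarms

print(safety_view(y_true, det_a))  # (0.8, 0.25)
print(safety_view(y_true, det_b))  # (0.8, 0.5) -- same accuracy, twice the misses
```

Accuracy alone would rank the two detectors as equal; looking at where the misclassifications fall does not.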

RCL @ Grenoble - Machine Learning for CyberSecurity
Monday, 19 September 2022 00:00

This year, RCL presented a contribution to the Machine Learning for CyberSecurity (MLCS 2022) workshop of the European Conference on Machine Learning (ECML), in Grenoble, France.


The paper is entitled "Towards a General Model for Intrusion Detection: An Exploratory Study", and prompted many questions and discussions from the audience.

Paper Abstract: Exercising Machine Learning (ML) algorithms to detect intrusions is nowadays the de-facto standard for data-driven detection tasks. This activity requires the expertise of the researchers, practitioners, or employees of companies that also have to gather labeled data to learn and evaluate the model that will then be deployed into a specific system. Reducing the expertise and time required to craft intrusion detectors is a tough challenge, which in turn will have an enormous beneficial impact in the domain. This paper conducts an exploratory study that aims at understanding to which extent it is possible to build an intrusion detector that is general enough to learn the model once and then be applied to different systems with minimal to no effort. Therefore, we recap the issues that may prevent building general detectors and propose software architectures that have the potential to overcome them. Then, we perform an experimental evaluation using several binary ML classifiers and a total of 16 feature learners on 4 public attack datasets. Results show that a model learned on a dataset or a system does not generalize well as-is to other datasets or systems, showing poor detection performance. Instead, building a unique model that is then tailored to a specific dataset or system may achieve good classification performance, requiring less data and far less expertise from the final user.
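The paper's core finding can be sketched with a deliberately toy experiment (ours, not the paper's pipeline; the datasets and the nearest-centroid detector below are illustrative stand-ins): a model fit on one system transfers poorly to another, while cheap tailoring on a small slice of the target data recovers performance.

```python
# Toy nearest-centroid binary detector on synthetic 1-D features.
# Labels (our assumption): 0 = normal traffic, 1 = attack.

def fit_centroids(samples):
    """Mean feature value per class from (feature, label) pairs."""
    cents = {}
    for label in (0, 1):
        vals = [x for x, y in samples if y == label]
        cents[label] = sum(vals) / len(vals)
    return cents

def accuracy(cents, samples):
    """Fraction of samples assigned to the class with the nearest centroid."""
    correct = sum(
        1 for x, y in samples
        if min(cents, key=lambda c: abs(x - cents[c])) == y
    )
    return correct / len(samples)

# "System A": normal around 1.0, attacks around 3.0
data_a = [(1.0, 0), (1.2, 0), (0.8, 0), (3.0, 1), (3.2, 1), (2.8, 1)]
# "System B": same concept, shifted feature scale (normal ~4, attacks ~6)
data_b = [(4.0, 0), (4.2, 0), (3.8, 0), (6.0, 1), (6.2, 1), (5.8, 1)]

model_a = fit_centroids(data_a)
print("A-trained model on B:", accuracy(model_a, data_b))  # poor transfer: 0.5

# Tailoring: re-fit on a tiny labeled slice of the target system
tailored = fit_centroids(data_b[:2] + data_b[-2:])
print("tailored model on B:", accuracy(tailored, data_b))  # recovers: 1.0
```

The shift between the two synthetic systems plays the role of the distribution differences between real attack datasets; the tailoring step mirrors the paper's conclusion that a unique model adapted with little target data can work where a frozen one does not.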

Keynote Speech at CSR
Thursday, 28 July 2022 00:00

The chair of the group, Andrea Bondavalli, just finished his keynote speech at the IEEE CyberSecurity and Resilience Conference (CSR 2022).

Keynote Details

Title: Intrusion Detection Through (Unsupervised) Machine Learning: Pros, Limitations and Workarounds

Abstract: It is undeniable that new cyber-attacks are continuously crafted against essentially any kind of system and service. Systems are subject to a mix of commonly practiced attacks and new ones that were not previously known, motivating the need for building Intrusion Detectors (IDs) that can effectively deal with those zero-day attacks. Different studies have devised Unsupervised Machine Learning (ML) algorithms belonging to different families such as clustering, neural networks, density-based, neighbor-based, statistical, and classification approaches. Those algorithms have the potential to detect even unknown threats thanks to a training phase that does not rely on labels in the data. The talk shows how different algorithms are better suited to detecting specific anomalies of system indicators, which manifest when attacks are conducted against a system. Unfortunately, those algorithms show inferior detection performance on known threats with respect to supervised ML algorithms; to fill this gap, we show the improvements achieved when adopting Meta-Learning techniques. In any case, the quality of the best solution that can be devised depends strongly on the problem at hand and demands a high cost for selecting and finding the optimal setup of Unsupervised algorithms. To this end, we conclude the talk by proposing a cheap method to quantitatively understand the achievable results without exercising the full optimization activities.
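A minimal sketch of one family named in the talk, statistical anomaly detection (the detector, data, and threshold below are our illustrative choices, not the speaker's): the model is fit on unlabeled nominal readings only, which is why such detectors can flag threats never seen during training.

```python
# Unsupervised z-score detector: no attack labels needed at training time.

class ZScoreDetector:
    def fit(self, values):
        """Learn mean and standard deviation from unlabeled nominal readings."""
        n = len(values)
        self.mean = sum(values) / n
        var = sum((v - self.mean) ** 2 for v in values) / n
        self.std = var ** 0.5
        return self

    def is_anomaly(self, value, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        return abs(value - self.mean) > threshold * self.std

# Fit on system-indicator readings observed during normal operation only
normal_readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
det = ZScoreDetector().fit(normal_readings)

print(det.is_anomaly(10.0))  # nominal reading -> False
print(det.is_anomaly(25.0))  # never-seen spike -> True (zero-day-like deviation)
```

The flip side, as the abstract notes, is also visible in such simple detectors: a known attack that keeps indicators within the nominal range would slip through, which is where supervised models and Meta-Learning come in.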

New Award for Andrea!
Sunday, 24 July 2022 00:00

We just learned that, on behalf of the IEEE Systems, Man, and Cybernetics (SMC) Technical Committee on Homeland Security (TCHS) and its award committee, Andrea Bondavalli has been granted the:


*** IEEE TCHS Outstanding Leadership Award ***

This award recognizes an individual's outstanding leadership contributions to the homeland security community in general and to the field of cyber security and resilience in particular. It acknowledges leadership and visionary contributions to society at large through the promotion and application of security and resilience concepts across a variety of technology, science, and business domains.



RCL @ ARTISAN
Wednesday, 06 July 2022 15:03

Together with his colleagues Oum El-Kheir and Oliver Jung, Andrea Ceccarelli, associate professor at RCL, co-organized a summer school entitled ARTISAN: Role and effects of ARTificial Intelligence in Secure ApplicatioNs.

The school had more than 25 PhD students participating and actively contributing to discussions, and is still ongoing in windy Valence, France. Below you can find photos of the talks, some of them provided by other staff at the University of Florence.


(Photos from the talks)

RTD-A Position Announcement
Friday, 08 October 2021 15:09

We announce that the University of Florence has opened a call for a fixed-term type-A researcher position (RTD-A) in the INF/01 Computer Science sector, D.M. 1062/2021 "INNOVAZIONE", with a deadline of 13:00 on 22 October 2021.

Link to the call:

Dipartimento di Matematica e Informatica "Ulisse Dini"

Research project title: "SafeAI: Integrazione di Intelligenza Artificiale nei sistemi critici per la sicurezza" (SafeAI: Integration of Artificial Intelligence into safety-critical systems)

Research project summary:

Study of solutions for safe Artificial Intelligence (AI), by studying and applying a conceptual model, an architectural guide, and a design and assessment methodology. The project aims to: i) analyze AI-based systems by studying the new fault and attack surfaces introduced by DNNs (Deep Neural Networks) and GPGPUs (General Purpose GPUs); ii) define possible countermeasures, both internal ones, i.e. appropriate mechanisms to protect the DNN and GPGPU from faults and attacks, and system-level ones, i.e. guarantees that faults or attacks do not propagate outside the component; iii) experimentally evaluate the resulting solutions.

Scientific lead: Paolo Lollini (contact by e-mail for information on the project).

New project started!
Thursday, 03 June 2021 14:46


The POR-CREO FESR 2014-2020 SPaCe project recently started!

The Smart Passenger Center (SPaCe) project researches multimedia solutions to orchestrate surveillance and mobility services. It relies on Artificial Intelligence (AI) to equip operators and transportation authorities with instruments for managing the dynamics of passenger flows.

Within SPaCe, the Resilient Computing Lab will research solutions for the design and evaluation of dependable and secure systems that include intelligent components.
