Welcome to the Resilient Computing Lab (RCL)

RCL is a research group of the Department of Mathematics and Informatics (DiMaI, "Dipartimento di Matematica e Informatica") of the University of Florence. RCL's research activities focus mainly on the design and experimentation of dependable and secure systems, infrastructures, and systems of systems. RCL is currently involved in research spanning the following areas:

  • Design and experimentation of architectures and techniques for resilient, safety-critical and secure systems;
  • Quantitative evaluation of dependability, security and Quality of Service through analytical, simulative and experimental techniques;
  • Applications of Machine Learning Techniques (especially Anomaly Detection) in the domain of Critical Systems.

Details on where we are, who we are, and our projects are available on this website.

For further information, please contact Prof. Andrea Bondavalli.

RCL @ Grenoble - Machine Learning for CyberSecurity
Monday, 19 September 2022 00:00

This year, RCL presented a contribution to the Machine Learning for CyberSecurity (MLCS 2022) workshop of the European Conference on Machine Learning (ECML), in Grenoble, France.


The paper is entitled "Towards a General Model for Intrusion Detection: An Exploratory Study", and it prompted many questions and discussions from the audience.

Paper Abstract: Exercising Machine Learning (ML) algorithms to detect intrusions is nowadays the de facto standard for data-driven detection tasks. This activity requires the expertise of the researchers, practitioners, or employees of companies that also have to gather labeled data to learn and evaluate the model that will then be deployed into a specific system. Reducing the expertise and time required to craft intrusion detectors is a tough challenge, which in turn will have an enormous beneficial impact in the domain. This paper conducts an exploratory study that aims at understanding to which extent it is possible to build an intrusion detector that is general enough to learn the model once and then be applied to different systems with minimal to no effort. Therefore, we recap the issues that may prevent building general detectors and propose software architectures that have the potential to overcome them. Then, we perform an experimental evaluation using several binary ML classifiers and a total of 16 feature learners on 4 public attack datasets. Results show that a model learned on a dataset or a system does not generalize well as is to other datasets or systems, showing poor detection performance. Instead, building a unique model that is then tailored to a specific dataset or system may achieve good classification performance, requiring less data and far less expertise from the final user.
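The core experiment described in the abstract, training a binary detector on one system and checking how well it transfers to another, can be illustrated with a small sketch. This is not the paper's actual pipeline: it uses synthetic data in place of the four public attack datasets, and a random forest as one illustrative binary classifier.

```python
# Hedged sketch of cross-dataset generalization for a binary detector.
# Synthetic "systems" stand in for the real attack datasets; the two
# systems have deliberately different feature distributions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def make_system(seed):
    # Each "system" gets its own task geometry and a distribution
    # shift, mimicking data collected from different deployments.
    return make_classification(n_samples=600, n_features=10,
                               n_informative=6, class_sep=1.0,
                               shift=float(seed), random_state=seed)

X_a, y_a = make_system(1)   # system A: where the model is learned
X_b, y_b = make_system(7)   # system B: where it is deployed

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_a, y_a)

# Transferring the model "as is" to the other system...
acc_transfer = accuracy_score(y_b, clf.predict(X_b))

# ...versus tailoring a model with data from the target system itself.
clf_b = RandomForestClassifier(n_estimators=100, random_state=0)
clf_b.fit(X_b[:300], y_b[:300])
acc_tailored = accuracy_score(y_b[300:], clf_b.predict(X_b[300:]))

print(f"transferred: {acc_transfer:.2f}  tailored: {acc_tailored:.2f}")
```

On data like this, the transferred model typically scores near chance while the tailored one performs well, which mirrors the abstract's finding that per-system tailoring beats naive model reuse.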


Resilient Computing Lab, 2011
