Towards a General Model for Intrusion Detection: An Exploratory Study
| | |
|---|---|
| Research Area: | Uncategorized |
| Year: | 2022 |
| Type of Publication: | In Proceedings |
| Keywords: | Intrusion Detection, General Model, Transferability, Machine Learning, Feature Learning |
| Authors: | Tommaso Zoppi; Andrea Ceccarelli; Andrea Bondavalli |
| Book title: | Machine Learning for CyberSecurity (MLCS 2022) |
Abstract: Exercising Machine Learning (ML) algorithms to detect intrusions is nowadays the de-facto standard for data-driven detection tasks. This activity requires the expertise of researchers, practitioners, or company employees, who also have to gather labeled data to learn and evaluate the model that will then be deployed into a specific system. Reducing the expertise and time required to craft intrusion detectors is a tough challenge, which in turn would have an enormous beneficial impact on the domain.

This paper conducts an exploratory study that aims at understanding to what extent it is possible to build an intrusion detector that is general enough to learn a model once and then be applied to different systems with minimal to no effort. Therefore, we recap the issues that may prevent building general detectors and propose software architectures that have the potential to overcome them. Then, we perform an experimental evaluation using several binary ML classifiers and a total of 16 feature learners on 4 public attack datasets. Results show that a model learned on one dataset or system does not generalize well as-is to other datasets or systems, showing poor detection performance. Instead, building a unique model that is then tailored to a specific dataset or system may achieve good classification performance, requiring less data and far less expertise from the final user.