Proceedings of the National Conference on Artificial Intelligence (AAAI), 2008.
Abstract
Machine learning systems are deployed in many adversarial settings, such as intrusion detection, where a classifier must decide whether a sequence of actions comes from a legitimate user or not. However, the attacker, being an adversarial agent, can reverse-engineer the classifier and successfully masquerade as a legitimate user. In this paper, we propose the notion of a Proactive Intrusion Detection System (IDS) that counters such attacks by incorporating feedback into the detection process. A proactive IDS influences the user's actions and observes them in different situations to decide whether the user is an intruder. We present a formal analysis of proactive intrusion detection and extend the adversarial relationship between the IDS and the attacker to present a game-theoretic analysis. Finally, we present experimental results on real and synthetic data that confirm the predictions of the analysis.
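To make the abstract's central idea concrete, here is a minimal toy sketch (not the paper's algorithm) contrasting a passive detector, which only observes the user under nominal conditions, with a proactive loop that perturbs the situation and checks whether the user's responses track the change. The user models, the `anomaly` score, the situations, and the threshold are all invented for illustration.

```python
# Illustrative sketch only -- all behaviour models and parameters are assumptions,
# not the method from the paper.
import random

def legitimate_response(situation):
    # A legitimate user adapts naturally to the current situation.
    return situation + random.gauss(0.0, 0.5)

def mimicking_attacker_response(situation):
    # An attacker who reverse-engineered the nominal profile replays it,
    # ignoring how the situation has been perturbed.
    return 0.0 + random.gauss(0.0, 0.5)

def anomaly(response, situation):
    # Deviation from what a legitimate user would be expected to do here.
    return abs(response - situation)

def passive_ids(user, threshold=1.5, n=10):
    # Passive: observe only under the nominal situation (0.0), where a
    # well-crafted mimicry attack looks legitimate.
    return max(anomaly(user(0.0), 0.0) for _ in range(n)) > threshold

def proactive_ids(user, threshold=1.5):
    # Proactive: influence the user by changing the situation ("challenges")
    # and flag anyone whose responses fail to follow the change.
    for situation in [0.0, 2.0, -3.0, 5.0]:
        if anomaly(user(situation), situation) > threshold:
            return True  # flagged as intruder
    return False

if __name__ == "__main__":
    random.seed(0)
    print("passive flags attacker:  ", passive_ids(mimicking_attacker_response))
    print("proactive flags attacker:", proactive_ids(mimicking_attacker_response))
    print("proactive flags legit:   ", proactive_ids(legitimate_response))
```

In this toy setup the mimicking attacker is indistinguishable from a legitimate user when only the nominal situation is observed, but is exposed as soon as the detector perturbs the situation, which is the intuition behind incorporating feedback into detection.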
Links
- Link to paper
- See on Google Scholar
Bib Entry
@inproceedings{liebald2008proactive,
  author    = {Liebald, Benjamin and Roth, Dan and Shah, Neelay and Srikumar, Vivek},
  title     = {{Proactive Intrusion Detection}},
  booktitle = {Proceedings of the National Conference on Artificial Intelligence (AAAI)},
  year      = {2008}
}