
The Art and Science of Adversarial Machine Learning

Dr. Yevgeniy Vorobeychik
When: July 24, 2017 @ 11:00am - 12:00pm
Location: RTH (Tutor Hall) 217

RSVP to Hailey at hwinetro@usc.edu by July 20th.

ABSTRACT

The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators: they make a deliberate effort either to evade the classifiers deployed to detect them or to degrade the quality of the data used to train those classifiers. I will discuss our recent research on the problem of adversarial classifier evasion. In particular, I will describe theoretical foundations of black-box attacks on classifiers, as well as several of our efforts to design evasion-robust classifiers on binary feature spaces, including a principled, theoretically grounded retraining method.
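
To make the retraining idea concrete, the following is a minimal, hypothetical sketch of iterative adversarial retraining against a greedy bit-flip evasion attack on binary features. The classifier (scikit-learn logistic regression), the attack model, and all parameters are illustrative assumptions rather than the specific method presented in the talk.

# Hypothetical sketch: iterative adversarial retraining on binary features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_evade(clf, x, budget=3):
    # Flip up to `budget` binary features to push a malicious point
    # toward the benign side of a linear decision boundary.
    x = x.copy()
    w = clf.coef_.ravel()
    for _ in range(budget):
        gains = w * (1 - 2 * x)          # score change from flipping each bit
        j = int(np.argmin(gains))
        if gains[j] >= 0:                # no flip lowers the malicious score
            break
        x[j] = 1 - x[j]
    return x

def adversarial_retraining(X, y, rounds=5, budget=3):
    # Alternate between fitting the classifier and augmenting the training
    # set with evasive variants of the malicious (label 1) points.
    X_aug, y_aug = X.copy(), y.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_aug, y_aug)
        evasive = np.array([greedy_evade(clf, x, budget) for x in X[y == 1]])
        X_aug = np.vstack([X_aug, evasive])
        y_aug = np.concatenate([y_aug, np.ones(len(evasive))])
    return clf

# Toy usage on synthetic binary data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 20)).astype(float)
y = (X[:, :5].sum(axis=1) > 2).astype(int)
robust_clf = adversarial_retraining(X, y)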


In the second part of the talk I will discuss scientific foundations of classifier evasion modeling. A dominant paradigm in the machine learning community is to model evasion in “feature space,” that is, through direct manipulation of classifier features. In contrast, the cyber security community has developed several “problem space” attacks, in which actual instances (such as malware) are modified and features are then extracted from the evasive instances. Through a case study of PDF malware detection, I will show that feature-space models are in fact a very poor proxy for problem-space attacks. I will then demonstrate that there is a simple “fix”: identify a small set of features that are invariant (conserved) with respect to evasion attacks, and constrain these features to remain unchanged in feature-space models. Finally, I will show that such conserved features exist, cannot be inferred using standard regularization techniques, and can be automatically identified for a given problem-space evasion model.
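
As an illustration of that fix, here is a minimal, hypothetical sketch of a feature-space attack constrained to leave a designated set of conserved features untouched. The linear detector score and the particular conserved indices are assumptions made for the example, not the talk's PDF malware detection models.

# Hypothetical sketch: greedy feature-space evasion restricted to
# the non-conserved features of a linear detector score w.x + b.
import numpy as np

def constrained_evade(w, b, x, conserved, budget=3):
    x = x.copy()
    free = np.setdiff1d(np.arange(len(x)), conserved)   # features the attacker may flip
    for _ in range(budget):
        if w @ x + b <= 0:                    # already classified benign
            break
        gains = w[free] * (1 - 2 * x[free])   # score change from flipping each free bit
        k = int(np.argmin(gains))
        if gains[k] >= 0:                     # no remaining flip lowers the score
            break
        x[free[k]] = 1 - x[free[k]]
    return x

# Toy usage: features 0-2 are treated as conserved (needed to preserve
# malicious functionality), so the attack never modifies them.
rng = np.random.default_rng(1)
w, b = rng.normal(size=20), -0.5
x = rng.integers(0, 2, size=20).astype(float)
x_evasive = constrained_evade(w, b, x, conserved=np.array([0, 1, 2]))
assert np.all(x_evasive[:3] == x[:3])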

BIO

Dr. Yevgeniy Vorobeychik is an Assistant Professor of Computer Science, Computer Engineering, and Biomedical Informatics at Vanderbilt University. Previously, he was a Principal Member of Technical Staff at Sandia National Laboratories. Between 2008 and 2010 he was a post-doctoral research associate in the Department of Computer and Information Science at the University of Pennsylvania. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on adversarial reasoning in AI, computational game theory, security and privacy, network science, and agent-based modeling. He received an NSF CAREER award in 2017, was nominated for the 2008 ACM Doctoral Dissertation Award, received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award, and was an invited early-career spotlight speaker at IJCAI 2016.
