Understanding the Risk Factors of Learning in Adversarial Environments

By Blaine Nelson, Battista Biggio and Pavel Laskov


Learning for security applications is an emerging field where adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional approaches to learning assume benign errors in data and thus may be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
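The abstract's core distinction — benign errors versus adversarially chosen errors — can be illustrated with a small simulation. The sketch below is not the paper's method or criterion; it is a minimal, hypothetical experiment that trains a plain logistic-regression classifier on synthetic two-class data and compares random (benign) label flips against one simple adversarial flipping heuristic (flipping the highest-margin points, which exert the strongest pull on the decision boundary). All data, functions, and the flipping heuristic are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian clusters in 2-D.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

def train_logistic(X, y, lr=0.1, epochs=200):
    """Gradient-descent logistic regression with labels in {-1, +1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        margins = y * (Xb @ w)
        # Gradient of mean logistic loss: -E[y * x / (1 + exp(margin))]
        grad = -(Xb * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return float(np.mean(np.sign(Xb @ w) == y))

k = len(y) // 10  # contaminate 10% of the training labels

# Benign errors: flip a uniformly random 10% of labels.
y_benign = y.copy()
y_benign[rng.choice(len(y), size=k, replace=False)] *= -1

# Adversarial contamination (one simple heuristic, not the paper's model):
# flip the points farthest from the true boundary direction x1 + x2 = 0,
# since high-margin flipped points drag the learned boundary hardest.
y_adv = y.copy()
y_adv[np.argsort(np.abs(X.sum(axis=1)))[-k:]] *= -1

# Fresh clean test set drawn from the same distribution.
X_test = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y_test = np.hstack([-np.ones(n), np.ones(n)])

acc_benign = accuracy(train_logistic(X, y_benign), X_test, y_test)
acc_adv = accuracy(train_logistic(X, y_adv), X_test, y_test)
print(f"benign-noise test accuracy:   {acc_benign:.2f}")
print(f"adversarial-flip test accuracy: {acc_adv:.2f}")
```

The point of the comparison is qualitative: the same contamination budget (10% of labels) can have very different effects depending on whether the errors are random or chosen by an adversary, which is why robustness criteria that assume only benign noise can be misleading in security settings.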

Topics: Computer Security, Machine Learning, Statistical Learning, Robust Classification
Year: 2014
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX