Learning for security applications is an emerging field in which adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional approaches to learning assume benign errors in data and thus may be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.