9 research outputs found

    Exploiting Machine Learning to Subvert Your Spam Filter

    Using statistical machine learning for making security decisions introduces new vulnerabilities in large-scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless, even if the adversary’s access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.
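
    A minimal sketch of the kind of training-set poisoning the abstract describes, using a toy Bernoulli naive Bayes filter rather than SpamBayes itself (which combines per-token scores with a chi-squared test); the vocabulary, counts, and function names below are illustrative assumptions, not details taken from the paper.

```python
# Toy illustration of a dictionary-style poisoning attack on a naive Bayes
# spam filter. This is NOT SpamBayes; it is a minimal Bernoulli naive Bayes
# stand-in used only to show how attacker-controlled training messages shift
# the learned spam probability of benign words.
from collections import Counter
import math


def train(messages):
    """messages: iterable of (token_set, is_spam). Returns per-token counts."""
    spam, ham = Counter(), Counter()
    n_spam = n_ham = 0
    for tokens, is_spam in messages:
        if is_spam:
            n_spam += 1
            spam.update(tokens)
        else:
            n_ham += 1
            ham.update(tokens)
    return spam, ham, n_spam, n_ham


def spam_score(tokens, model):
    """Log-odds that a message is spam, with Laplace smoothing; >0 means spam."""
    spam, ham, n_spam, n_ham = model
    score = math.log((n_spam + 1) / (n_ham + 1))
    for t in tokens:
        p_spam = (spam[t] + 1) / (n_spam + 2)
        p_ham = (ham[t] + 1) / (n_ham + 2)
        score += math.log(p_spam / p_ham)
    return score


# Clean training data: ham uses everyday words, spam uses "spammy" words.
ham_words = {"meeting", "lunch", "project", "report", "schedule"}
spam_words = {"viagra", "lottery", "winner", "pills", "free"}
clean = [(ham_words, False)] * 50 + [(spam_words, True)] * 50

victim_email = {"lunch", "meeting", "schedule"}
print("before attack:", round(spam_score(victim_email, train(clean)), 2))

# Attack: a handful of attacker-sent messages that mix everyday vocabulary
# with spam content, so they end up labeled as spam in the training set.
attack = [(ham_words | spam_words, True)] * 5   # ~5% of the training messages
print("after attack: ", round(spam_score(victim_email, train(clean + attack)), 2))
```

    Because each attack message pairs ordinary "ham" vocabulary with a spam label, the learned spam probability of those everyday words rises, pushing legitimate mail toward the spam threshold; this is the mechanism that lets a small fraction of poisoned training data degrade the filter.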

    Dynamic Memory Allocation for a Guest Virtual Machine

    When a virtual machine (VM) runs in the background in an idle state, its memory remains allocated and is unavailable to other host processes or to other VMs on the host. This disclosure describes hypervisor-aware virtio ballooning for dynamic host-guest memory allocation. Per the techniques, memory can be dynamically transferred between guest virtual machines and their host. The inflation operation, which transfers guest VM memory to the host, notifies the hypervisor of the pages to be freed and also requests the host kernel to free those pages. The techniques can function even in operating systems that lack a way for a hypervisor to subscribe to notifications of page addition or removal events.
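
    A conceptual sketch of the inflate/deflate flow described above, written as a plain Python simulation; the Host and GuestVM classes and their method names are illustrative assumptions, not the real virtio_balloon driver or hypervisor API (which live in kernel C code).

```python
# Toy simulation of balloon inflation: the guest pins idle pages, hands their
# page-frame numbers (PFNs) to the hypervisor, and the host frees the backing
# memory so it becomes available to other host processes or VMs. All names
# here are illustrative, not the actual virtio_balloon device protocol.
class Host:
    def __init__(self, free_pages):
        self.free_pages = free_pages          # pages available to host processes / other VMs

    def reclaim(self, pfns):
        # Host-side handler: per the disclosure, the guest's notification also
        # asks the host kernel to free these pages, returning them to the host pool.
        self.free_pages += len(pfns)


class GuestVM:
    def __init__(self, host, assigned_pages):
        self.host = host
        self.idle_pfns = set(range(assigned_pages))   # guest pages not in active use
        self.balloon = set()                          # pages currently returned to the host

    def inflate(self, n):
        """Move n idle guest pages into the balloon and have the host free them."""
        pfns = {self.idle_pfns.pop() for _ in range(min(n, len(self.idle_pfns)))}
        self.balloon |= pfns
        self.host.reclaim(pfns)                       # notify hypervisor + request host-side free
        return pfns

    def deflate(self, n):
        """Reverse path: take pages back from the host when the guest needs memory."""
        pfns = {self.balloon.pop() for _ in range(min(n, len(self.balloon)))}
        self.host.free_pages -= len(pfns)
        self.idle_pfns |= pfns
        return pfns


host = Host(free_pages=512)                  # e.g. a 1024-page host, half of it backing the guest
vm = GuestVM(host, assigned_pages=512)
print("host free pages before inflate:", host.free_pages)   # 512
vm.inflate(128)                                              # idle guest returns 128 pages
print("host free pages after inflate: ", host.free_pages)   # 640
```

    The point mirrored in this sketch is that the guest's inflate notification and the host-side free happen as one exchange, so no separate channel for page addition or removal events is required.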

    Open problems in the security of learning

    Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions toward developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important for understanding the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.