1,724 research outputs found

    Information Fusion for Anomaly Detection with the Dendritic Cell Algorithm

    Get PDF
    Dendritic cells are antigen-presenting cells that provide a vital link between the innate and adaptive immune systems, providing the initial detection of pathogenic invaders. Research into this family of cells has revealed that they perform information fusion which directs immune responses. We have derived a Dendritic Cell Algorithm based on the functionality of these cells, by modelling the biological signals and differentiation pathways to build a control mechanism for an artificial immune system. We present algorithmic details in addition to experimental results from applying the algorithm to anomaly detection, specifically the detection of port scans. The results show the Dendritic Cell Algorithm is successful at detecting port scans. Comment: 21 pages, 17 figures, Information Fusio
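
    The abstract's core idea is signal fusion inside each artificial cell. Below is a minimal sketch of that mechanism, assuming a generic three-signal (PAMP/danger/safe) input, illustrative fusion weights, and a migration threshold; none of these values are taken from the paper itself.

```python
class DendriticCell:
    """Minimal artificial dendritic cell: fuses input signals into three outputs."""

    # Illustrative weights (an assumption, not the paper's published weight matrix).
    WEIGHTS = {
        "csm":  {"pamp": 2, "danger": 1, "safe": 2},   # costimulation (migration readiness)
        "semi": {"pamp": 0, "danger": 0, "safe": 3},   # semi-mature (normal) context
        "mat":  {"pamp": 2, "danger": 1, "safe": -3},  # mature (anomalous) context
    }

    def __init__(self, migration_threshold=10.0):
        self.threshold = migration_threshold
        self.csm = self.semi = self.mat = 0.0
        self.antigens = []

    def expose(self, antigen, signals):
        """Sample one antigen, fuse its accompanying signals; True once ready to migrate."""
        self.antigens.append(antigen)
        for out, w in self.WEIGHTS.items():
            fused = sum(w[name] * signals.get(name, 0.0) for name in ("pamp", "danger", "safe"))
            setattr(self, out, getattr(self, out) + fused)
        return self.csm >= self.threshold

    def context(self):
        """A migrated cell presents its antigens in a mature or semi-mature context."""
        return "mature" if self.mat > self.semi else "semi-mature"


def mcav(presentations):
    """Mature Context Antigen Value: fraction of mature presentations per antigen,
    used as the per-antigen anomaly score."""
    return {a: c["mature"] / (c["mature"] + c["semi-mature"])
            for a, c in presentations.items()}
```

    Antigens (e.g. process or connection identifiers) whose MCAV is close to 1 would be flagged as anomalous; this per-antigen scoring is the decision mechanism that the port-scan experiments evaluate.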

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Full text link
    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository. Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector Machine Applications
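
    As a concrete illustration of the evasion setting, the sketch below perturbs a "malicious" sample against a linear SVM by stepping against the weight vector (the gradient of the decision function). The toy data, step size, and iteration budget are assumptions for illustration; the chapter's own attacks also cover non-linear kernels, poisoning, and privacy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy stand-in for a malware/spam feature matrix (an assumption for illustration).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

def evade_linear_svm(x, clf, step=0.1, max_iter=100):
    """Push a positively classified sample across the boundary of a linear SVM
    by taking small steps against the weight vector."""
    w = clf.coef_.ravel()
    direction = w / np.linalg.norm(w)
    x_adv = x.astype(float).copy()
    for _ in range(max_iter):
        if clf.decision_function(x_adv.reshape(1, -1))[0] < 0:
            break  # now on the "benign" side of the decision boundary
        x_adv -= step * direction
    return x_adv

x0 = X[y == 1][0]                      # a sample the SVM sees as "malicious"
x_adv = evade_linear_svm(x0, clf)
print(clf.predict(x0.reshape(1, -1)), "->", clf.predict(x_adv.reshape(1, -1)))
```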

    Benign Adversarial Attack: Tricking Models for Goodness

    Full text link
    In spite of their successful application in many fields, machine learning models today suffer from notorious problems such as vulnerability to adversarial examples. Rather than falling into the cat-and-mouse game between adversarial attack and defense, this paper provides an alternative perspective on adversarial examples and explores whether we can exploit them in benign applications. We first attribute adversarial examples to the human-model disparity in employing non-semantic features. While largely ignored in classical machine learning mechanisms, non-semantic features enjoy three interesting characteristics: (1) they are exclusive to the model, (2) they critically affect inference, and (3) they are utilizable as features. Inspired by this, we present the brave new idea of benign adversarial attack, which exploits adversarial examples for goodness in three directions: (1) adversarial Turing test, (2) rejecting malicious model application, and (3) adversarial data augmentation. Each direction is positioned with motivation elaboration, justification analysis and prototype applications to showcase its potential. Comment: ACM MM2022 Brave New Ide
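
    Of the three directions, adversarial data augmentation is the most mechanical to sketch. The snippet below shows one possible realization, reusing a single-step FGSM perturbation as the augmentation transform; the model interface, epsilon, and the choice to concatenate clean and perturbed batches are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Single-step FGSM: nudge inputs (assumed in [0, 1]) in the loss-increasing direction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def augment_batch(model, x, y):
    """Adversarial data augmentation: train on clean and perturbed copies of each batch."""
    x_adv = fgsm_perturb(model, x, y)
    return torch.cat([x, x_adv], dim=0), torch.cat([y, y], dim=0)
```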

    Person Re-identification And An Adversarial Attack And Defense For Person Re-identification Networks

    Get PDF
    Person re-identification (ReID) is the task of retrieving the same person, across different camera views or on the same camera view captured at a different time, given a query person of interest. There has been great interest and significant progress in person ReID, which is important for security and wide-area surveillance applications as well as human-computer interaction systems. In order to continuously track targets across multiple cameras with disjoint views, it is essential to re-identify the same target across different cameras. This is a challenging task for several reasons, including changes in illumination and target appearance, and variations in camera viewpoint and camera intrinsic parameters. The brightness transfer function (BTF) was introduced for inter-camera color calibration and to improve the performance of person ReID approaches.

    In this dissertation, we first present a new method to better model the appearance variation across disjoint camera views. We propose building a codebook of BTFs, composed of the most representative BTFs for a camera pair. We also propose ordering and trimming criteria, based on the occurrence percentage of codeword triplets, to avoid exhaustively using all combinations of codewords for all color channels and to improve computational efficiency. Then, different from most existing work, we focus on a crowd-sourcing scenario to find and follow person(s) of interest in the collected images/videos. We propose a novel approach combining R-CNN based person detection with a GPU implementation of color histogram and SURF-based re-identification. Moreover, GeoTags are extracted from the EXIF data of videos captured by smart phones and are displayed on a map together with the time-stamps.

    With the recent advances in deep neural networks (DNNs), the state-of-the-art performance of person ReID has improved significantly. However, recent work in adversarial machine learning has shown the vulnerability of DNNs to adversarial examples, carefully crafted images that are similar to original/benign images but can deceive neural network models. Neural network-based ReID approaches inherit these vulnerabilities. We present an effective and generalizable attack model that generates adversarial images of people and results in a very significant drop in the performance of existing state-of-the-art person re-identification models. The results demonstrate the extreme vulnerability of the existing models to adversarial examples and draw attention to the potential security risks this poses in video surveillance. Our proposed attack works by decreasing the dispersion of the internal feature map of a neural network. We compare our proposed attack with other state-of-the-art attack models on different person re-identification approaches, using four commonly used benchmark datasets. The experimental results show that our proposed attack outperforms the state-of-the-art attack models on the best performing person re-identification approaches by a large margin, and causes the largest drop in mean average precision values.

    We then propose a new method to effectively detect adversarial examples presented to a person ReID network. The proposed method utilizes parts-based feature squeezing: we apply two types of squeezing to segmented body parts to better detect adversarial examples. We perform extensive experiments over three major datasets with different attacks, and compare the detection performance of the proposed body part-based approach with a ReID method that is not parts-based. Experimental results show that the proposed method can effectively detect adversarial examples, and has the potential to avoid significant decreases in person ReID performance caused by adversarial examples.
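
    The detection idea builds on feature squeezing; the sketch below shows a generic whole-image version (not the parts-based variant the dissertation proposes) for a ReID embedding network, comparing embeddings before and after two squeezers. The squeezers, threshold, and the assumption that `reid_model` maps images in [0, 1] to embedding vectors are illustrative choices, not the dissertation's exact configuration.

```python
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Squeezer 1: quantize pixel values (assumed in [0, 1]) to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, k=3):
    """Squeezer 2: k x k median filter applied per channel to a (B, C, H, W) batch."""
    pad = k // 2
    patches = F.unfold(F.pad(x, (pad,) * 4, mode="reflect"), kernel_size=k)
    med = patches.view(x.size(0), x.size(1), k * k, -1).median(dim=2).values
    return med.reshape_as(x)

def flag_adversarial(reid_model, x, threshold=1.0):
    """Flag inputs whose ReID embedding moves too much under either squeezer."""
    with torch.no_grad():
        f = reid_model(x)
        d1 = (f - reid_model(reduce_bit_depth(x))).norm(p=1, dim=1)
        d2 = (f - reid_model(median_smooth(x))).norm(p=1, dim=1)
    return torch.max(d1, d2) > threshold
```

    The parts-based method described above would instead apply such squeezers to segmented body-part regions and compare per-part features, which is what the reported experiments evaluate.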

    Threats and Defenses in SDN Control Plane

    Get PDF
    Network management is a critical process through which an enterprise configures and monitors its network devices using cost-effective methods, and it is imperative for it to be robust and free from adversarial or accidental security flaws. With the advent of cloud computing and increasing demands for centralized network control, conventional management protocols like the Simple Network Management Protocol (SNMP) appear inadequate, and newer techniques like the Network Management Datastore Architecture (NMDA) and the Network Configuration Protocol (NETCONF) have been introduced. However, unlike SNMP, which underwent improvements concentrating on security, the new data management and storage techniques have not been scrutinized for inherent security flaws. In this thesis, I identify several vulnerabilities in widely used critical infrastructures that leverage the NMDA design. Software Defined Networking (SDN), a proponent of NMDA, relies heavily on its datastores to program and manage the network. I base my research on the security challenges posed by the existing datastore design as implemented by SDN controllers. The vulnerabilities identified in this work have a direct impact on controllers like OpenDaylight, Open Network Operating System and their proprietary implementations (by Cisco, Ericsson, Red Hat, Brocade, Juniper, etc.). Using a threat detection methodology, I demonstrate how NMDA-based implementations are vulnerable to attacks that compromise the availability, integrity, and confidentiality of the network. I finally propose defense measures to address the security threats in the existing design and discuss the challenges faced while employing these countermeasures. Dissertation/Thesis: Masters Thesis, Computer Science, 201
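
    For context on how an NMDA-style datastore is programmed in practice, here is a minimal sketch using the `ncclient` library to push configuration into a NETCONF candidate datastore and commit it into the running datastore. The device address, credentials, and interface payload are placeholders, and this does not reproduce the thesis's attack scenarios; it only illustrates the datastore interface whose design the thesis scrutinizes.

```python
from ncclient import manager

# Hypothetical ietf-interfaces payload: disable one interface (placeholder values).
PAYLOAD = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <enabled>false</enabled>
    </interface>
  </interfaces>
</config>
"""

# Hypothetical device address and credentials; assumes the device supports :candidate.
with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as conn:
    conn.edit_config(target="candidate", config=PAYLOAD)  # write to <candidate>
    conn.commit()                                         # promote to <running>
```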