
    Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection

    The arms race between spambots and spambot detectors unfolds in cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of adversarial learning, a new practice has emerged: deliberately manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain - and study - their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm creates synthetic, evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots do evade current detection techniques. However, they also provide all the elements needed to improve such techniques, making a proactive approach to the design of social bot detection systems possible.
    Comment: This is the pre-final version of a paper accepted @ 11th ACM Conference on Web Science, June 30-July 3, 2019, Boston, U
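    The genetic-algorithm idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual algorithm: the four-dimensional feature vector, the linear "detector", its weights, and all numeric parameters are assumptions chosen only to show the evolve-against-a-detector loop.

    ```python
    import random

    random.seed(42)  # make the toy run reproducible

    # Toy "detector": flags an account whose weighted behavioural score exceeds
    # a threshold. Weights and features are illustrative, not the paper's.
    WEIGHTS = [0.9, 0.7, 0.5, 0.3]
    THRESHOLD = 1.0

    def detector_score(features):
        return sum(w * f for w, f in zip(WEIGHTS, features))

    def fitness(features):
        # Higher fitness = lower detector score, i.e. better evasion.
        return -detector_score(features)

    def mutate(features, rate=0.2):
        return [f + random.uniform(-rate, rate) for f in features]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(seed_bot, generations=30, pop_size=20):
        population = [mutate(seed_bot) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)   # fittest first
            parents = population[: pop_size // 2]        # selection
            children = [crossover(random.choice(parents), random.choice(parents))
                        for _ in range(pop_size - len(parents))]
            population = parents + [mutate(c) for c in children]
        return max(population, key=fitness)

    seed_bot = [0.8, 0.9, 0.7, 0.6]   # this bot is flagged: score > THRESHOLD
    evolved = evolve(seed_bot)
    print(detector_score(seed_bot), detector_score(evolved))
    ```

    The same loop then closes the "proactive" cycle: the evolved, evasive variants become new training samples for the next detector generation.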

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering early work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
    Comment: Accepted for publication in Pattern Recognition, 201
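    The "carefully crafted perturbation" attack described above is easiest to see on a linear model, where the gradient of the score with respect to the input is simply the weight vector. The sketch below is an FGSM-style evasion against a toy linear classifier; the weights, bias, input, and epsilon are all made-up values for illustration.

    ```python
    # For a linear score s(x) = w . x + b, the gradient w.r.t. x is w, so an
    # attacker bounded by epsilon in the max-norm lowers the score fastest by
    # moving each feature by -epsilon * sign(w_i). Toy numbers throughout.

    W = [1.2, -0.8, 0.5]   # hypothetical model weights
    B = -0.1

    def score(x):
        return sum(wi * xi for wi, xi in zip(W, x)) + B

    def sign(v):
        return (v > 0) - (v < 0)

    def fgsm_evasion(x, epsilon):
        # Each step shifts the score by -epsilon * |w_i| per feature,
        # i.e. -epsilon * sum(|w_i|) in total.
        return [xi - epsilon * sign(wi) for wi, xi in zip(W, x)]

    x = [1.0, 0.2, 0.6]                 # score(x) > 0: classified positive
    x_adv = fgsm_evasion(x, epsilon=0.5)
    print(score(x), score(x_adv))       # the perturbed score drops below zero
    ```

    For deep networks the same recipe applies with the weight vector replaced by the gradient of the loss with respect to the input, which is what makes these attacks cheap to compute.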

    Adversarial Detection of Flash Malware: Limitations and Open Issues

    During the past four years, Flash malware has become one of the most insidious threats to detect, with almost 600 critical vulnerabilities targeting Adobe Flash disclosed in the wild. Research has shown that machine learning can successfully detect Flash malware by leveraging static analysis to extract information from the structure of the file or from its bytecode. However, the robustness of Flash malware detectors against well-crafted evasion attempts - also known as adversarial examples - has never been investigated. In this paper, we propose a security evaluation of a novel, representative Flash detector that embeds a combination of the prominent static features employed by state-of-the-art tools. In particular, we discuss how to craft adversarial Flash malware examples, showing that it suffices to manipulate the corresponding source malware samples only slightly to evade detection. We then empirically demonstrate that popular defense techniques proposed to mitigate evasion attempts, including re-training on adversarial examples, may not always be sufficient to ensure robustness. We argue that this occurs when the feature vectors extracted from adversarial examples become indistinguishable from those of benign data, meaning that the given feature representation is intrinsically vulnerable. In this respect, we are the first to formally define and quantitatively characterize this vulnerability, highlighting when an attack can be countered solely by improving the security of the learning algorithm, and when it also requires considering additional features. We conclude the paper by suggesting alternative research directions to improve the security of learning-based Flash malware detectors.
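    The intrinsic-vulnerability argument can be made concrete with a tiny feature-space sketch. Assuming (as is common for file-format malware) that an attacker can only *add* content, i.e. flip binary features from 0 to 1, adding benign-looking features with negative weights drags the score of a malicious file below the decision threshold. The feature names, weights, and bias below are invented for illustration, not the paper's detector.

    ```python
    # Toy linear detector over binary features; the attacker may only add
    # content (flip 0 -> 1), never remove the malicious payload's features.
    FEATURES = ["obfuscated_bytecode", "loader_shellcode",
                "has_metadata", "uses_ui_classes"]
    WEIGHTS  = [1.5, 1.0, -0.9, -0.8]   # negative = benign-looking traits
    BIAS = -1.2

    def score(x):
        return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

    malware = [1, 1, 0, 0]   # payload features only: score > 0, detected
    evaded  = [1, 1, 1, 1]   # same payload plus benign-looking padding
    print(score(malware), score(evaded))
    ```

    Once the padded vector sits among benign samples in feature space, re-training cannot help: no decision function over these features separates it from benign data, which is exactly the representation-level vulnerability the abstract describes.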

    Detecting Extrasolar Planets with Integral Field Spectroscopy

    Observations of extrasolar planets using Integral Field Spectroscopy (IFS), if coupled with an extreme Adaptive Optics system and analyzed with a Simultaneous Differential Imaging (SDI) technique, are a powerful tool for directly detecting and characterizing extrasolar planets; they enhance the signal of the planet and, at the same time, reduce the impact of stellar light and consequently of important noise sources such as speckles. In order to verify the efficiency of such a technique, we developed a simulation code able to test the capabilities of this IFS-SDI technique for different kinds of planets and telescopes, modelling the atmospheric and instrumental noise sources. The first results obtained by the simulations show that many significant extrasolar planet detections are indeed possible using present 8m-class telescopes within a few hours of exposure time. The procedure adopted to simulate IFS observations is presented here in detail, explaining in particular how we obtain estimates of the speckle noise, Adaptive Optics corrections, and specific instrumental features, and how we test the efficiency of the SDI technique in increasing the signal-to-noise ratio of the planet detection. The most important results achieved by simulations of various objects, from 1 M_J planets to brown dwarfs of 30 M_J, for observations with an 8 meter telescope, are then presented and discussed.
    Comment: 60 pages, 37 figures, accepted in PASP, 4 Tables added
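    The core of the SDI trick is that the stellar speckle pattern is (to first order) identical in two simultaneously recorded narrow-band images, while a methane-rich planet is dark inside a CH4 absorption band and bright just outside it, so subtracting the two images cancels the speckles but keeps the planet. The one-dimensional "images" and flux values below are purely illustrative.

    ```python
    # Toy SDI: same speckle halo in both bands; planet visible only in the
    # continuum image. Subtraction removes the common speckles.
    SPECKLES = [0.0, 5.0, 0.0, 3.0, 0.0]   # stellar speckle halo (both bands)
    PLANET_POS, PLANET_FLUX = 2, 1.0

    img_continuum = list(SPECKLES)
    img_continuum[PLANET_POS] += PLANET_FLUX   # planet bright outside CH4 band
    img_ch4 = list(SPECKLES)                   # planet dark inside CH4 band

    diff = [a - b for a, b in zip(img_continuum, img_ch4)]
    print(diff)   # speckles cancel; only the planet signal survives
    ```

    In practice the cancellation is imperfect (speckle patterns scale with wavelength, flat-field and alignment errors remain), which is why the paper's simulations model the residual speckle noise explicitly.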

    Full color hybrid display for aircraft simulators

    A full-spectrum color monitor, connected to the camera and lens system of a television camera supported by a gantry frame over a terrain model simulating an aircraft landing zone, projects the monitor image onto a lens or screen visually accessible to a trainee in the simulator. A digital computer produces a pattern corresponding to the lights associated with the landing strip on a monochromatic display, and an optical system projects the calligraphic image onto the same lens so that it is superposed on the video representation of the landing field. The optical system includes a four-color wheel that rotates between the calligraphic display and the lens, together with an apparatus for synchronizing the generation of the calligraphic pattern with the color segments on the wheel. A servo feedback system, responsive to the servo motors on the gantry frame, produces an input to the computer so that the calligraphically generated signal corresponds in shape, size, and location to the video signal.
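    The color-wheel synchronization amounts to a timing problem: at any instant the rotating wheel exposes exactly one of its four segments, and the pattern generator must draw only the strokes assigned to that segment's color. The sketch below shows the idea; the segment order, wheel period, and stroke list are assumptions for illustration, not taken from the patent.

    ```python
    # Hypothetical four-segment wheel spinning at one revolution per video
    # field (assumed 60 Hz); strokes are drawn only while their colour's
    # segment is in front of the calligraphic display.
    SEGMENTS = ["red", "green", "blue", "white"]
    PERIOD = 1.0 / 60.0   # seconds per wheel revolution (assumption)

    def active_segment(t):
        phase = (t % PERIOD) / PERIOD        # fraction of revolution done
        return SEGMENTS[int(phase * len(SEGMENTS))]

    def strokes_to_draw(t, strokes):
        colour = active_segment(t)
        return [s for s in strokes if s["colour"] == colour]

    strokes = [{"id": "runway_edge", "colour": "white"},
               {"id": "threshold", "colour": "green"},
               {"id": "papi", "colour": "red"}]
    print(active_segment(0.0), strokes_to_draw(0.0, strokes))
    ```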