    Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

    Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasure to mitigate this threat, based on rejecting classification of anomalous inputs. We then provide a clearer understanding of the safety properties of deep networks through an intuitive empirical analysis, showing that the mapping learned by such networks essentially violates the smoothness assumption of learning algorithms. We finally discuss the main limitations of this work, including the creation of real-world adversarial examples, and sketch promising research directions.

    Comment: Accepted for publication at the ICCV 2017 Workshop on Vision in Practice on Autonomous Robots (ViPAR).
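    The two ideas in this abstract, a barely-perceivable perturbation that flips a classification and a defence that rejects low-confidence inputs, can be sketched with a toy linear classifier standing in for a network's final layer. The weights, rejection threshold, and FGSM-style attack below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier: 3 classes over 8 input features.
# W, b, and the rejection threshold are illustrative values only.
W = rng.normal(size=(3, 8))
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_reject(x, threshold=0.7):
    """Return the predicted class, or None when max confidence is too low."""
    p = softmax(W @ x + b)
    return int(np.argmax(p)) if p.max() >= threshold else None

def fgsm(x, true_class, eps=0.05):
    """Small perturbation in the direction that increases the loss
    on the true class (fast-gradient-sign style)."""
    p = softmax(W @ x + b)
    # Gradient of -log p[true_class] w.r.t. x for a linear layer:
    grad = W.T @ (p - np.eye(3)[true_class])
    return x + eps * np.sign(grad)

x = rng.normal(size=8)
x_adv = fgsm(x, true_class=0)   # differs from x by at most eps per feature
print(predict_with_reject(x), predict_with_reject(x_adv))
```

    The rejection rule is one possible instance of the "reject anomalous inputs" countermeasure the abstract mentions; a real deployment would calibrate the threshold on held-out data.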

    Deepfakes: False Pornography Is Here and the Law Cannot Protect You

    It is now possible for anyone with rudimentary computer skills to create a pornographic deepfake portraying an individual engaging in a sex act that never actually occurred. These realistic videos, called “deepfakes,” use artificial intelligence software to impose a person’s face onto another person’s body. While pornographic deepfakes were first created to produce videos of celebrities, they are now being generated to feature other nonconsenting individuals—like a friend or a classmate. This Article argues that several tort doctrines and recent non-consensual pornography laws are unable to handle published deepfakes of non-celebrities. Instead, a federal criminal statute prohibiting these publications is necessary to deter this activity.

    ClusterNet: Detecting Small Objects in Large Scenes by Exploiting Spatio-Temporal Information

    Object detection in wide area motion imagery (WAMI) has drawn the attention of the computer vision research community for a number of years. WAMI poses a number of unique challenges, including extremely small object sizes, both sparse and densely-packed objects, and extremely large search spaces (large video frames). Nearly all state-of-the-art methods in WAMI object detection report that appearance-based classifiers fail on this challenging data and instead rely almost entirely on motion information in the form of background subtraction or frame-differencing. In this work, we experimentally verify the failure of appearance-based classifiers in WAMI, such as Faster R-CNN and a heatmap-based fully convolutional neural network (CNN), and propose a novel two-stage spatio-temporal CNN which effectively and efficiently combines both appearance and motion information to significantly surpass the state-of-the-art in WAMI object detection. To reduce the large search space, the first stage (ClusterNet) takes in a set of extremely large video frames, combines the motion and appearance information within the convolutional architecture, and proposes regions of objects of interest (ROOBI). These ROOBI can contain anywhere from a single object to clusters of several hundred objects, due to the large video frame size and varying object density in WAMI. The second stage (FoveaNet) then estimates the centroid location of all objects in that given ROOBI simultaneously via heatmap estimation. The proposed method exceeds state-of-the-art results on the WPAFB 2009 dataset by 5-16% for moving objects and nearly 50% for stopped objects, as well as being the first proposed method in wide area motion imagery to detect completely stationary objects.

    Comment: Main paper is 8 pages. The supplemental section contains a walk-through of our method (using a qualitative example) and qualitative results for the WPAFB 2009 dataset.
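    The motion cue this line of work leans on can be illustrated with plain frame differencing followed by a coarse region proposal around the motion response. The synthetic frames, threshold, and bounding-box logic below are assumptions for illustration, not ClusterNet itself:

```python
import numpy as np

# Two synthetic grayscale "frames": a tiny bright object moves 2 pixels.
# Frame size, object size, and threshold are illustrative only.
frame_t0 = np.zeros((64, 64), dtype=np.float32)
frame_t1 = np.zeros((64, 64), dtype=np.float32)
frame_t0[10:13, 10:13] = 1.0
frame_t1[10:13, 12:15] = 1.0

# Frame differencing: the motion cue WAMI detectors rely on when
# appearance alone fails for extremely small objects.
motion = np.abs(frame_t1 - frame_t0)
mask = motion > 0.5

# Propose one coarse region of objects of interest (ROOBI-like):
# the bounding box around all motion responses.
ys, xs = np.nonzero(mask)
roi = (ys.min(), xs.min(), ys.max() + 1, xs.max() + 1)
print(roi)
```

    A second stage would then look only inside such regions, which is the search-space reduction the abstract describes; stationary objects leave no response here, which is exactly why purely motion-based methods miss them.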

    Adversarial Detection of Flash Malware: Limitations and Open Issues

    During the past four years, Flash malware has become one of the most insidious threats to detect, with almost 600 critical vulnerabilities targeting Adobe Flash disclosed in the wild. Research has shown that machine learning can be successfully used to detect Flash malware by leveraging static analysis to extract information from the structure of the file or its bytecode. However, the robustness of Flash malware detectors against well-crafted evasion attempts - also known as adversarial examples - has never been investigated. In this paper, we propose a security evaluation of a novel, representative Flash detector that embeds a combination of the prominent static features employed by state-of-the-art tools. In particular, we discuss how to craft adversarial Flash malware examples, showing that it suffices to manipulate the corresponding source malware samples slightly to evade detection. We then empirically demonstrate that popular defense techniques proposed to mitigate evasion attempts, including re-training on adversarial examples, may not always be sufficient to ensure robustness. We argue that this occurs when the feature vectors extracted from adversarial examples become indistinguishable from those of benign data, meaning that the given feature representation is intrinsically vulnerable. In this respect, we are the first to formally define and quantitatively characterize this vulnerability, highlighting when an attack can be countered by solely improving the security of the learning algorithm, or when it requires also considering additional features. We conclude the paper by suggesting alternative research directions to improve the security of learning-based Flash malware detectors.
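    The evasion idea (slightly manipulating a sample so its static feature vector drifts toward the benign region) can be sketched with a toy linear detector. The weights, feature names, and additive-only constraint below are illustrative assumptions, not the paper's detector:

```python
import numpy as np

# Toy linear detector over static features; positive score -> malicious.
# Weights and feature semantics are made up for illustration, e.g.
# [suspicious-API count, obfuscation level, benign-tag count, metadata size].
w = np.array([2.0, 1.5, -1.0, -0.5])
bias = -1.0

def score(x):
    return float(w @ x + bias)

x_malware = np.array([3.0, 2.0, 1.0, 1.0])

# Evasion under a realistic constraint: the attacker can only ADD
# content (grow the benign-weighted features) without breaking the
# malicious payload, mimicking the structure of benign files.
x_evaded = x_malware.copy()
for i in np.where(w < 0)[0]:
    x_evaded[i] += 8.0   # pad benign-looking structure

print(score(x_malware), score(x_evaded))
```

    Once the evaded vector sits among benign points, retraining the classifier cannot separate them, which is the "intrinsically vulnerable feature representation" the abstract formalizes.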

    Regulating Child Sex Robots: Restriction or Experimentation?

    In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this paper, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Second, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin’s hypothesis has a reasonable prospect of being successfully tested. Third, I argue that Arkin’s hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.

    The PS-Battles Dataset - an Image Collection for Image Manipulation Detection

    The boost of available digital media has led to a significant increase in derivative work. With tools for manipulating objects becoming more and more mature, it can be very difficult to determine whether one piece of media was derived from another one or tampered with. As derivations can be done with malicious intent, there is an urgent need for reliable and easily usable tampering detection methods. However, even media considered semantically untampered by humans might have already undergone compression steps or light post-processing, making automated detection of tampering susceptible to false positives. In this paper, we present the PS-Battles dataset, which is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102'028 images grouped into 11'142 subsets, each containing the original image as well as a varying number of manipulated derivatives.

    Comment: The dataset introduced in this paper can be found on https://github.com/dbisUnibas/PS-Battle
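    Consuming such a dataset means pairing each original image with its manipulated derivatives. The sketch below shows one way to build that grouping from file paths; the directory layout and file names are hypothetical, chosen only to illustrate the original-plus-derivatives structure the abstract describes:

```python
from pathlib import Path

# Hypothetical file listing: one "originals" tree and one tree of
# manipulated versions grouped by the original's identifier.
files = [
    "originals/abc123.jpg",
    "photoshops/abc123/abc123_0.jpg",
    "photoshops/abc123/abc123_1.png",
    "originals/def456.png",
    "photoshops/def456/def456_0.jpg",
]

groups = {}
for f in files:
    p = Path(f)
    if p.parts[0] == "originals":
        key = p.stem                      # identifier from the file name
        groups.setdefault(key, {"original": None, "derivatives": []})["original"] = f
    else:
        key = p.parts[1]                  # identifier from the subdirectory
        groups.setdefault(key, {"original": None, "derivatives": []})["derivatives"].append(f)

print({k: len(v["derivatives"]) for k, v in groups.items()})
```

    Each resulting group corresponds to one of the dataset's subsets: an original plus a varying number of derivatives, which is the unit a manipulation detector would train and evaluate on.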

    Interdiscursive Readings in Cultural Consumer Research

    The cultural consumption research landscape of the 21st century is marked by an increasing cross-disciplinary fermentation. At the same time, cultural theory and analysis have been marked by successive ‘inter-’ turns, most notably with regard to the Big Four: multimodality (or intermodality), interdiscursivity, transmediality (or intermediality), and intertextuality. This book offers an outline of interdiscursivity as an integrative platform for accommodating these notions. To this end, a call for a return to Foucault is issued via a critical engagement with the so-called practice-turn. This re-turn does not seek to reconstitute a venerable Foucauldianism, but to theorize ‘inters-’ as vanishing points that challenge the integrity of discrete cultural orders in non-convergent manners. The propounded interdiscursivity approach is offered as a reading strategy that permeates the contemporary cultural consumption phenomena scrutinized in this book, against a pan-consumptivist framework. By drawing on qualitative and mixed methods research designs, facilitated by CAQDAS software, the empirical studies hosted here span a vivid array of topics that are directly relevant to both traditional and new media researchers, such as the consumption of ideologies in Web 2.0 social movements, the ability of micro-celebrities to act as cultural game-changers, and the post-loyalty abjective consumption ethos. The theoretically novel approaches on offer are coupled with methodological innovations in areas such as user-generated content, artists’ branding, and experiential consumption.