
    SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations

    Research into adversarial examples (AE) has developed rapidly, yet static adversarial patches are still the main technique for conducting attacks in the real world, despite being obvious, semi-permanent and unmodifiable once deployed. In this paper, we propose Short-Lived Adversarial Perturbations (SLAP), a novel technique that allows adversaries to realize physically robust real-world AE by using a light projector. Attackers can project a specifically crafted adversarial perturbation onto a real-world object, transforming it into an AE. This gives the adversary greater control over the attack compared to adversarial patches: (i) projections can be dynamically turned on and off or modified at will, and (ii) projections do not suffer from the locality constraint imposed by patches, making them harder to detect. We study the feasibility of SLAP in the self-driving scenario, targeting both object detection and traffic sign recognition tasks, focusing on the detection of stop signs. We conduct experiments in a variety of ambient light conditions, including outdoors, showing how in non-bright settings the proposed method generates AE that are extremely robust, causing misclassifications on state-of-the-art networks with up to 99% success rate across a variety of angles and distances. We also demonstrate that SLAP-generated AE do not exhibit the detectable behaviours seen in adversarial patches and therefore bypass SentiNet, a physical AE detection method. We evaluate other defences, including an adaptive defender using adversarial learning, which is able to reduce the attack's effectiveness by up to 80% even in conditions favourable to the attacker.
    Comment: 13 pages, to be published in USENIX Security 2021, project page: https://github.com/ssloxford/short-lived-adversarial-perturbation
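
    The abstract gives no implementation details; the following is a minimal PyTorch sketch of the attack's core optimization loop under our own simplifying assumptions: the projector is modelled as bounded additive light over a fixed mask region, and an off-the-shelf ImageNet classifier with the "street sign" class stands in for the traffic-sign models the paper targets. A real deployment would also have to model the projector-to-surface colour transfer and the camera response.

        # Minimal sketch (not the authors' code): optimize a projected-light
        # perturbation so a classifier stops recognizing a stop sign.
        import torch
        import torch.nn.functional as F
        from torchvision import models

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
        sign = torch.rand(1, 3, 224, 224)                    # stand-in for a stop-sign photo
        mask = torch.zeros_like(sign)
        mask[..., 64:160, 64:160] = 1.0                      # assumed projectable region
        delta = torch.zeros_like(sign, requires_grad=True)   # the projector image being optimized
        opt = torch.optim.Adam([delta], lr=0.01)
        label = torch.tensor([919])                          # ImageNet "street sign", a stand-in class

        for _ in range(100):
            # Additive light model: the projection can only brighten, capped at 50% intensity.
            lit = torch.clamp(sign + mask * torch.sigmoid(delta) * 0.5, 0.0, 1.0)
            loss = -F.cross_entropy(model(lit), label)       # maximize loss on the true class
            opt.zero_grad()
            loss.backward()
            opt.step()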

    Augmented reality X-ray vision on optical see-through head mounted displays

    In this thesis, we present the development and evaluation of an augmented reality X-ray system on optical see-through head-mounted displays. Augmented reality X-ray vision allows users to see through solid surfaces such as walls and facades by augmenting the real view with virtual images representing the hidden objects. Our system is built on the optical see-through mixed reality headset Microsoft HoloLens. We have developed an X-ray cutout algorithm that uses the geometric data of the environment to enable seeing through surfaces, and four different visualizations based on it. The first visualization simply renders the X-ray cutout without displaying any information about the occluding surface. The other three visualizations display features extracted from the occluder surface to help the user better judge the depth of the virtual objects; we use Sobel edge detection to extract these features, and the three visualizations differ in how they render them. A subjective experiment is conducted to test and evaluate the visualizations and to compare them with each other. The experiment consists of two parts: a depth estimation task and a questionnaire. Both the experiment and its results are presented in the thesis.
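
    As a small illustration of the feature-extraction step described above (not the thesis code; the file name and parameters are placeholders), the occluder's Sobel edges can be computed with OpenCV and then overlaid on the cutout:

        # Minimal sketch: extract occluder edges for the feature-based visualizations.
        import cv2
        import numpy as np

        occluder = cv2.imread("wall.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical occluder photo
        gx = cv2.Sobel(occluder, cv2.CV_32F, 1, 0, ksize=3)       # horizontal gradient
        gy = cv2.Sobel(occluder, cv2.CV_32F, 0, 1, ksize=3)       # vertical gradient
        edges = cv2.magnitude(gx, gy)                             # gradient magnitude
        edges = np.uint8(255 * edges / (edges.max() + 1e-6))      # normalize for overlay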

    Image Data Augmentation Approaches: A Comprehensive Survey and Future directions

    Deep learning (DL) algorithms have shown significant performance in various computer vision tasks. However, limited labelled data leads to overfitting, where a network performs poorly on unseen data compared to the training data, which in turn limits performance improvement. To cope with this problem, various techniques have been proposed, such as dropout, normalization and advanced data augmentation. Among these, data augmentation, which aims to enlarge the dataset size by increasing sample diversity, has been a hot topic in recent times. In this article, we focus on advanced data augmentation techniques. We provide a background on data augmentation, a novel and comprehensive taxonomy of the reviewed techniques, and the strengths and weaknesses (wherever possible) of each technique. We also provide comprehensive results of the effect of data augmentation on three popular computer vision tasks: image classification, object detection and semantic segmentation. For reproducibility, we compiled the available code for all the data augmentation techniques. Finally, we discuss the challenges and difficulties, and possible future directions for the research community. We believe this survey provides several benefits: i) readers will understand how data augmentation works to fix overfitting problems; ii) the results will save researchers time when comparing methods; iii) code for the discussed data augmentation techniques is available at https://github.com/kmr2017/Advanced-Data-augmentation-codes; and iv) the future work section will spark interest in the research community.
    Comment: We need to make a lot of changes to make its quality better
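
    For concreteness, a baseline pipeline of the kind such surveys catalogue might look as follows in torchvision (the operations and parameters are illustrative, not taken from the article):

        # Minimal sketch: a common image-augmentation pipeline mixing geometric,
        # photometric, and occlusion-style transforms.
        from torchvision import transforms as T

        augment = T.Compose([
            T.RandomResizedCrop(224),             # geometric: random crop + resize
            T.RandomHorizontalFlip(p=0.5),        # geometric: mirror
            T.ColorJitter(0.4, 0.4, 0.4, 0.1),    # photometric: brightness/contrast/saturation/hue
            T.ToTensor(),
            T.RandomErasing(p=0.25),              # occlusion-style: Cutout-like erasing (tensor-only)
        ])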

    Holistic Multi-View Building Analysis in the Wild with Projection Pooling

    We address six different classification tasks related to fine-grained building attributes: construction type, number of floors, pitch and geometry of the roof, facade material, and occupancy class. Tackling such a remote building analysis problem became possible only recently, owing to growing large-scale datasets of urban scenes. To this end, we introduce a new benchmarking dataset consisting of 49,426 images (top-view and street-view) of 9,674 buildings, assembled together with geometric metadata. The dataset showcases various real-world challenges, such as occlusions, blur, partially visible objects, and a broad spectrum of buildings. We propose a new projection pooling layer, creating a unified top-view representation of the top view and the side views in a high-dimensional space. It allows us to utilize the building and imagery metadata seamlessly. Introducing this layer improves classification accuracy compared to highly tuned baseline models, indicating its suitability for building analysis.
    Comment: Accepted for publication at the 35th AAAI Conference on Artificial Intelligence (AAAI 2021)
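
    The abstract does not define the layer precisely; the sketch below shows one plausible reading of "projection pooling", in which street-view feature columns are max-pooled into the top-view cells they project onto, assuming a column-to-cell mapping precomputed from the imagery metadata.

        # Speculative sketch, not the paper's layer: fuse side-view features
        # into a shared top-view grid via a known ground-plane projection.
        import torch

        def projection_pool(top_feats, side_feats, col_to_cell):
            """top_feats: (C, H, W) top-view features.
            side_feats: (C, Hs, Ws) street-view features.
            col_to_cell: (Ws, 2) top-view (row, col) hit by each image column."""
            pooled = top_feats.clone()
            col_feats = side_feats.max(dim=1).values          # collapse the vertical axis: (C, Ws)
            for j, (r, c) in enumerate(col_to_cell.tolist()):
                pooled[:, r, c] = torch.maximum(pooled[:, r, c], col_feats[:, j])
            return pooled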

    CellMix: A General Instance Relationship based Method for Data Augmentation Towards Pathology Image Classification

    In pathology image analysis, obtaining and maintaining high-quality annotated samples is an extremely labor-intensive task. To overcome this challenge, mixing-based methods have emerged as effective alternatives to traditional preprocessing data augmentation techniques. Nonetheless, these methods fail to fully consider the unique features of pathology images, such as local specificity, global distribution, and inner/outer-sample instance relationships. To better capture these characteristics and create valuable pseudo samples, we propose the CellMix framework, which employs a novel distribution-oriented in-place shuffle approach. The framework divides images into patches at the granularity of pathology instances and shuffles them within the same batch, so the absolute relationships between instances are effectively preserved when generating new samples. Moreover, we develop a curriculum learning-inspired, loss-driven strategy to handle perturbations and distribution-related noise during training, enabling the model to adaptively fit the augmented data. Our experiments on pathology image classification tasks demonstrate state-of-the-art (SOTA) performance on 7 distinct datasets. This innovative instance relationship-centered method has the potential to inform general data augmentation approaches for pathology image classification. The associated code is available at https://github.com/sagizty/CellMix.
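
    As the abstract describes it, the core operation is an in-place shuffle of patch positions within a batch; a minimal sketch (the patch size is our choice, and the loss-driven curriculum is omitted) could be:

        # Minimal sketch: shuffle each patch position independently across the
        # batch, preserving each patch's absolute position within the image.
        import torch

        def cellmix_shuffle(images, patch=32):
            """images: (B, C, H, W) with H and W divisible by `patch`."""
            B, _, H, W = images.shape
            out = images.clone()
            for i in range(0, H, patch):
                for j in range(0, W, patch):
                    perm = torch.randperm(B)      # a new permutation per patch position
                    out[:, :, i:i + patch, j:j + patch] = images[perm, :, i:i + patch, j:j + patch]
            return out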

    What Goes beyond Multi-modal Fusion in One-stage Referring Expression Comprehension: An Empirical Study

    Most existing work on one-stage referring expression comprehension (REC) focuses on multi-modal fusion and reasoning, while the influence of other factors in this task lacks in-depth exploration. To fill this gap, we conduct an empirical study in this paper. Concretely, we first build a very simple REC network called SimREC and ablate 42 candidate designs/settings, covering the entire process of one-stage REC from network design to model training. Afterwards, we conduct over 100 experimental trials on three benchmark REC datasets. The extensive experimental results not only show the key factors that affect REC performance in addition to multi-modal fusion, e.g., multi-scale features and data augmentation, but also yield some findings that run counter to conventional understanding. For example, as a vision-and-language (V&L) task, REC is less impacted by language priors than commonly assumed. In addition, with a proper combination of these findings, we can improve the performance of SimREC by a large margin, e.g., +27.12% on RefCOCO+, outperforming all existing REC methods. But the most encouraging finding is that, with much less training overhead and far fewer parameters, SimREC can still outperform a set of large-scale pre-trained models, e.g., UNITER and VILLA, highlighting the special role of REC in existing V&L research.
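
    The contribution here is methodological rather than a single algorithm; as a sketch of the kind of ablation grid such a study runs (the factor names and options below are illustrative, not the paper's actual 42 settings):

        # Illustrative sketch of an ablation harness: one trial per combination
        # of candidate design choices.
        import itertools

        grid = {
            "multi_scale_features": [False, True],
            "data_augmentation": ["none", "flip", "flip+color"],
            "fusion": ["concat", "attention"],
        }

        for values in itertools.product(*grid.values()):
            config = dict(zip(grid.keys(), values))
            # accuracy = train_and_eval(config)   # stand-in for training the model with this config
            print(config)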