    SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images

    The popularity of various social platforms has prompted more people to share their routine photos online. However, such online photo sharing leads to undesirable privacy leakage: advanced deep neural network (DNN) based object detectors can easily extract users' personal information exposed in shared photos. In this paper, we propose a novel adversarial-example-based privacy-preserving technique that protects social images against privacy stealing by object detectors. Specifically, we develop an Object Disappearance Algorithm to craft two kinds of adversarial social images: one hides all objects in a social image from an object detector, and the other causes customized sensitive objects to be misclassified by the detector. The Object Disappearance Algorithm constructs a perturbation on a clean social image; after the perturbation is injected, the social image easily fools the object detector while its visual quality is not degraded. We use two metrics, privacy-preserving success rate and privacy leakage rate, to evaluate the effectiveness of the proposed method. Experimental results show that the proposed method effectively protects the privacy of social images: its privacy-preserving success rates on the MS-COCO and PASCAL VOC 2007 datasets reach 96.1% and 99.3%, respectively, while the privacy leakage rates on these two datasets are as low as 0.57% and 0.07%. In addition, compared with existing image processing methods (low brightness, noise, blur, mosaic, and JPEG compression), the proposed method achieves much better performance in both privacy protection and preservation of image visual quality.
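
    Below is a minimal, hypothetical sketch of the general idea behind such detector-evasion perturbations: a PGD-style loop that pushes every detection confidence toward zero under a small L-infinity budget. The detector interface and all hyperparameters are assumptions for illustration, not the paper's Object Disappearance Algorithm.

    import torch

    def hide_objects(image, detector, eps=8 / 255, alpha=1 / 255, steps=40):
        """Suppress all detections on `image` with an imperceptible perturbation.

        image: float tensor in [0, 1], shape (3, H, W); eps is the L-infinity budget.
        detector: hypothetical differentiable model returning per-candidate confidences.
        """
        adv = image.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            scores = detector(adv.unsqueeze(0))       # per-candidate confidence scores
            loss = scores.sum()                       # total detection confidence
            loss.backward()
            with torch.no_grad():
                adv = adv - alpha * adv.grad.sign()   # descend: push confidences to zero
                adv = image + (adv - image).clamp(-eps, eps)  # stay within the budget
                adv = adv.clamp(0.0, 1.0)             # keep a valid image
        return adv.detach()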

    GAN-Based Differential Private Image Privacy Protection Framework for the Internet of Multimedia Things.

    With the development of the Internet of Multimedia Things (IoMT), an increasing amount of image data is collected by multimedia devices such as smartphones, cameras, and drones. These images are used widely across IoMT applications, which presents substantial challenges for privacy preservation. In this paper, we propose a new image privacy protection framework to protect the sensitive personal information contained in images collected by IoMT devices. We use deep neural network techniques to identify privacy-sensitive content in images, and then protect it with synthetic content generated by generative adversarial networks (GANs) under differential privacy (DP). Our experimental results show that the proposed framework can effectively protect users' privacy while maintaining image utility.
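
    The following is a high-level, hypothetical sketch of the kind of pipeline the abstract describes: locate privacy-sensitive regions, then overwrite them with output from a generator. In such frameworks the DP guarantee typically comes from training the generator with DP-SGD, so the released pixels are synthetic. find_sensitive_regions, generator, and latent_dim are illustrative stand-ins, not the authors' code.

    import torch
    import torch.nn.functional as F

    def protect_image(image, find_sensitive_regions, generator):
        """Replace privacy-sensitive regions with synthetic, GAN-generated content."""
        protected = image.clone()                     # image: float tensor, (3, H, W)
        for (x0, y0, x1, y1) in find_sensitive_regions(image):  # e.g., a face detector
            z = torch.randn(1, generator.latent_dim)  # random latent code
            patch = generator(z)                      # synthetic patch, (1, 3, h, w)
            patch = F.interpolate(patch, size=(y1 - y0, x1 - x0),
                                  mode="bilinear", align_corners=False)
            protected[:, y0:y1, x0:x1] = patch.squeeze(0)  # overwrite the region
        return protected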

    DP-Image: Differential Privacy for Image Data in Feature Space

    The extensive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public. Even though differential privacy (DP) is a widely accepted criterion that can provide a provable privacy guarantee, applying DP to unstructured data such as images is not trivial due to the lack of a clear quantification of the meaningful difference between any two images. In this paper, for the first time, we introduce a novel notion of image-aware differential privacy, referred to as DP-Image, that can protect users' personal information in images from both human and AI adversaries. The DP-Image definition is formulated as an extended version of traditional differential privacy that considers the distance between images measured in feature space. We then propose a mechanism to achieve DP-Image by adding noise to an image's feature vector. Finally, we conduct experiments with a case study on face image privacy. Our results show that the proposed DP-Image method provides excellent DP protection for images, with a controllable distortion to faces.
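
    A minimal sketch of the feature-space noising the abstract describes, assuming a standard (epsilon, delta)-DP Gaussian mechanism with a known L2 sensitivity for the feature extractor; the paper's exact mechanism and distance definition may differ, and encoder / decoder are hypothetical components.

    import math
    import torch

    def dp_image(image, encoder, decoder, epsilon=1.0, delta=1e-5, sensitivity=1.0):
        """Perturb an image in feature space with calibrated Gaussian noise."""
        feat = encoder(image)                          # image -> feature vector
        # Classic Gaussian-mechanism calibration for (epsilon, delta)-DP, assuming
        # `sensitivity` bounds the L2 distance between the feature vectors of any
        # two neighboring images.
        sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
        noisy = feat + torch.randn_like(feat) * sigma  # add calibrated noise
        return decoder(noisy)                          # reconstruct a protected image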

    Visual Content Privacy Protection: A Survey

    Vision is the most important human sense and one of the main channels of cognition. As a result, people tend to use visual content to capture and share their life experiences, which greatly facilitates the transfer of information. At the same time, it increases the risk of privacy violations; for example, an image or video can reveal many kinds of privacy-sensitive information. Researchers have worked continuously to develop targeted privacy protection solutions, and several surveys summarize them from particular perspectives. However, these surveys are either problem-driven, scenario-specific, or technology-specific, making it difficult for them to summarize existing solutions in a macroscopic way. In this survey, a framework encompassing the various concerns and solutions for visual privacy is proposed, allowing privacy concerns to be understood at a comprehensive, macro level. It builds on the fact that privacy concerns have corresponding adversaries, and divides privacy protection into three categories: protection against a computer vision (CV) adversary, against a human vision (HV) adversary, and against a combined CV & HV adversary. For each category, we analyze the characteristics of the main approaches to privacy protection and then systematically review representative solutions. Open challenges and future directions for visual privacy protection are also discussed. (Comment: 24 pages, 13 figures.)

    Unmanned Vehicle Systems & Operations on Air, Sea, Land

    Unmanned Vehicle Systems & Operations on Air, Sea, Land is our fourth textbook in a series covering the world of Unmanned Aircraft Systems (UAS) and Counter Unmanned Aircraft Systems (CUAS) (Nichols R. K., 2018; Nichols R. K., et al., 2019; Nichols R., et al., 2020). The authors have expanded their purview beyond UAS / CUAS systems. Our title shows our concern for growth and unique cyber security unmanned vehicle technology and operations for unmanned vehicles in all theaters: Air, Sea, and Land, especially maritime cybersecurity and China proliferation issues. Topics include: Information Advances, Remote ID, and Extreme Persistence ISR; Unmanned Aerial Vehicles & How They Can Augment Mesonet Weather Tower Data Collection; Tour de Drones for the Discerning Palate; Underwater Autonomous Navigation & Other UUV Advances; Autonomous Maritime Asymmetric Systems; UUV Integrated Autonomous Missions & Drone Management; Principles of Naval Architecture Applied to UUVs; Unmanned Logistics Operating Safely and Efficiently Across Multiple Domains; Chinese Advances in Stealth UAV Penetration Path Planning in Combat Environment; UAS, the Fourth Amendment and Privacy; UV & Disinformation / Misinformation Channels; Chinese UAS Proliferation along New Silk Road Sea / Land Routes; and Automaton, AI, Law, Ethics, Crossing the Machine-Human Barrier and Maritime Cybersecurity. Unmanned Vehicle Systems are an integral part of the US national critical infrastructure. The authors have endeavored to bring a breadth and quality of information to the reader that is unparalleled in the unclassified sphere. Unmanned Vehicle (UV) Systems & Operations on Air, Sea, Land discusses the state-of-the-art technology and issues facing U.S. UV system researchers, designers, manufacturers, and testers. We trust our newest look at Unmanned Vehicles in Air, Sea, and Land will enrich our students' and readers' understanding of the purview of this wonderful technology we call UV.

    When Machine Learning Meets Privacy

    Newly emerged machine learning (e.g., deep learning) methods have become a strong driving force revolutionizing a wide range of industries, such as smart healthcare, financial technology, and surveillance systems. Meanwhile, privacy has emerged as a big concern in this machine learning-based artificial intelligence era. It is important to note that the problem of privacy preservation in the context of machine learning is quite different from that of traditional data privacy protection, as machine learning can act as both friend and foe. Currently, work on privacy preservation and machine learning is still at an early stage, as most existing solutions focus only on privacy problems during the machine learning process. A comprehensive study of the privacy preservation problems in machine learning is therefore required. This article surveys the state of the art in privacy issues and solutions for machine learning. The survey covers three categories of interactions between privacy and machine learning: (i) private machine learning, (ii) machine learning-aided privacy protection, and (iii) machine learning-based privacy attacks and the corresponding protection schemes. The current research progress in each category is reviewed and the key challenges are identified. Finally, based on our in-depth analysis of the area of privacy and machine learning, we point out future research directions in this field.

    Towards Privacy and Security Concerns of Adversarial Examples in Deep Hashing Image Retrieval

    With the explosive growth of images on the internet, image retrieval based on deep hashing has attracted attention from both the research and industry communities. Empowered by deep neural networks (DNNs), deep hashing enables fast and accurate image retrieval on large-scale data. However, inheriting the weaknesses of deep learning, deep hashing remains vulnerable to specifically designed inputs, called adversarial examples. By adding imperceptible perturbations to inputs, adversarial examples fool DNNs into making wrong decisions. The existence of adversarial examples not only raises security concerns for real-world deep learning applications, but also provides a technique for confronting malicious applications. In this dissertation, we investigate privacy and security concerns related to adversarial examples in deep hashing image retrieval systems. Starting with a privacy concern, we stand on the users' side to preserve private information in images, which adversaries can extract by retrieving similar images in image retrieval systems. Existing image processing-based privacy-preserving methods suffer from a trade-off between efficacy and usability. We propose a method that adds imperceptible adversarial perturbations to original images to prevent them from being retrieved; users upload the protected adversarial images instead of the originals, preserving privacy while maintaining usability. We then shift to security concerns. Acting as attackers, we proactively provide adversarial images to retrieval systems. These adversarial examples are embedded close to specific targets in the hash space so that user retrieval results contain our unrelated adversarial images; e.g., a user queries with a "Husky dog" image but retrieves adversarial "dog food" images in the result. A transferability-based attack is proposed for black-box models; we improve black-box transferability by using random noise as a proxy during optimization, achieving a state-of-the-art success rate. Finally, we stand on the retrieval system's side to mitigate the security concerns of adversarial attacks in deep hashing image retrieval. We propose a detection method that detects adversarial examples at inference time. By studying behaviors unique to adversarial examples in deep hashing image retrieval, we construct the detection method on criteria derived from these behaviors. The proposed method detects most adversarial examples with minimal overhead.
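
    As a hypothetical illustration of the privacy-preserving half of this work, the sketch below perturbs an image so that its hash code moves away from the original code, under an imperceptibility budget, so similar-image retrieval no longer finds it. hash_model (a deep hashing network with a tanh output, the usual continuous relaxation of binary codes) and all hyperparameters are assumptions, not the dissertation's exact method.

    import torch

    def protect_from_retrieval(image, hash_model, eps=8 / 255, alpha=1 / 255, steps=40):
        """Push an image's hash code away from its original binary code."""
        with torch.no_grad():
            target = torch.sign(hash_model(image.unsqueeze(0)))  # original binary code
        adv = image.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            code = hash_model(adv.unsqueeze(0))      # continuous code in (-1, 1)
            # Minimizing this inner product drives the code toward -target,
            # i.e., maximum Hamming distance from the original code.
            loss = (code * target).sum()
            loss.backward()
            with torch.no_grad():
                adv = adv - alpha * adv.grad.sign()
                adv = image + (adv - image).clamp(-eps, eps)  # imperceptibility budget
                adv = adv.clamp(0.0, 1.0)
        return adv.detach()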