124 research outputs found
Approximate Thumbnail Preserving Encryption
Thumbnail preserving encryption (TPE) was suggested by Wright et al. as a way to balance privacy and usability for online image sharing. The idea is to encrypt a plaintext image into a ciphertext image that has roughly the same thumbnail while retaining the original image format. At the same time, TPE allows users to take advantage of much of the functionality of online photo management tools, while still providing some level of privacy against the service provider.
In this work we present three new approximate TPE schemes. In our schemes, ciphertexts and plaintexts have perceptually similar, but not identical, thumbnails. Our constructions are the first TPE schemes designed to work well with JPEG compression. In addition, we show that they have provable security guarantees that characterize precisely what information about the plaintext is leaked by the ciphertext image.
We empirically evaluate our schemes according to the similarity of plaintext and ciphertext thumbnails, the increase in file size under JPEG compression, and the preservation of perceptual image hashes, among other aspects. We also show how approximate TPE can be an effective tool to thwart inference attacks by machine-learning image classifiers, which have been shown to be effective against other image obfuscation techniques.
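The block-level intuition behind TPE — encrypt only within the cells a thumbnail averages over, so the averages survive — can be sketched as follows. This is a toy sketch (a simple pixel permutation within blocks, not the paper's JPEG-aware constructions; the function names and the use of NumPy are ours) that preserves the thumbnail exactly:

```python
import numpy as np

def thumbnail(img, b):
    """Downsample a grayscale image by averaging non-overlapping b x b blocks."""
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

def tpe_block_permute(img, b, rng):
    """Shuffle pixels independently within each b x b block.

    A permutation preserves each block's pixel multiset, hence its mean,
    so the b-thumbnail of the ciphertext equals that of the plaintext.
    """
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, b):
        for j in range(0, w, b):
            block = out[i:i + b, j:j + b].ravel()
            out[i:i + b, j:j + b] = rng.permutation(block).reshape(b, b)
    return out
```

Because only within-block positions change, the ciphertext is still a valid image of the same dimensions, and downsampling it reproduces the plaintext's thumbnail.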
Balancing Image Privacy and Usability with Thumbnail-Preserving Encryption
In this paper, we motivate the need for image encryption techniques that preserve certain visual features in images and hide all other information, to balance privacy and usability in the context of cloud-based image storage services. In particular, we introduce the concept of ideal or exact Thumbnail-Preserving Encryption (TPE), a special case of format-preserving encryption, and present a concrete construction. In TPE, a ciphertext is itself an image that has the same thumbnail as the plaintext (unencrypted) image, but that provably leaks nothing about the plaintext beyond its thumbnail. We provide a formal security analysis for the construction, and a prototype implementation to demonstrate compatibility with existing services. We also study the ability of users to distinguish between thumbnail images preserved by TPE. Our findings indicate that TPE is an efficient and promising approach to balance usability and privacy concerns for images. Our code and a demo are available at http://photoencryption.org
Visual Content Privacy Protection: A Survey
Vision is the most important sense for people, and it is also one of the main ways of cognition. As a result, people tend to use visual content to capture and share their life experiences, which greatly facilitates the transfer of information. At the same time, it also increases the risk of privacy violations; e.g., an image or video can reveal many kinds of privacy-sensitive information. Researchers have been working continuously to develop targeted privacy protection solutions, and several surveys summarize them from certain perspectives. However, these surveys are either problem-driven, scenario-specific, or technology-specific, making it difficult for them to summarize the existing solutions in a macroscopic way. In this survey, a framework that encompasses various concerns and solutions for visual privacy is proposed, allowing for a macro-level understanding of privacy concerns. It is based on the observation that each privacy concern has a corresponding adversary, and it divides privacy protection into three categories: protection against a computer vision (CV) adversary, against a human vision (HV) adversary, and against a combined CV & HV adversary. For each category, we analyze the characteristics of the main approaches to privacy protection and then systematically review representative solutions. Open challenges and future directions for visual privacy protection are also discussed. (24 pages, 13 figures)
Ideal Thumbnail-Preserving Encryption for Balancing Image Privacy and Usability
In this dissertation, we propose Ideal Thumbnail-Preserving Encryption (Ideal TPE), as a special case of format-preserving encryption, to balance image privacy and usability concerns in a cloud environment. We first introduce a concrete construction for Ideal TPE, that provably leaks nothing about the plaintext (unencrypted) image beyond its thumbnail. We then furnish a formal security analysis for the construction that yields asymptotic security. To demonstrate compatibility with existing photo storage services, we provide a prototype implementation. Furthermore, we study the usability impact of TPE encrypted images through a user study. We show that the ability of image owners to interact with TPE encrypted image thumbnails is not significantly reduced compared to the interactions with high-resolution images.
Finally, we consider the threat of low-resolution face recognition against TPE, and propose adding a reversible face-sanitization pre-processing step. We argue that this face sanitization approach can thwart low-resolution face recognition in a systematic way without compromising reversibility. We qualitatively show that sanitized TPE image thumbnails look visually similar to those of unsanitized TPE images, and hence are expected to offer similar usability. Our findings indicate that TPE and its enhanced version with face sanitization are promising approaches for balancing usability and privacy concerns for image storage in the cloud.
A review on visual privacy preservation techniques for active and assisted living
This paper reviews the state of the art in visual privacy protection techniques, with particular attention paid to techniques applicable to the field of Active and Assisted Living (AAL). A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced. Perceptual obfuscation methods, a category in this taxonomy, are highlighted. These are visual privacy preservation techniques that are particularly relevant in scenarios involving video-based AAL monitoring. Obfuscation against machine learning models is also explored. A high-level classification scheme of privacy by design, as defined by experts in privacy and data protection law, is connected to the proposed taxonomy of visual privacy preservation techniques. Finally, we note open questions that exist in the field and introduce the reader to some exciting avenues for future research in the area of visual privacy.

Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 861091. The authors would also like to acknowledge the contribution of COST Action CA19121 - GoodBrother, Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (https://goodbrother.eu/), supported by COST (European Cooperation in Science and Technology) (https://www.cost.eu/).
Specification and implementation of metadata for secure image provenance information
The boom in AI tools capable of modifying images has equipped fake-media producers with powerful tools. Complementary to the efforts to implement fake-media detectors, research organizations are designing a standardized way of describing the modification history of digital media in a cryptographically secure way, ensuring that this information cannot be tampered with. This thesis proposes a specification which focuses on JPEG images and specifies a data model based on the JPEG Universal Metadata Box Format (JUMBF) standard. Furthermore, it proposes the encryption of a subset of provenance metadata that could pose privacy-related risks to users. Along with the specification, a library has been developed to manage provenance information of JPEG images. To that end, a set of libraries that handle JUMBF information also had to be implemented. These libraries have been submitted as proposed reference software contributing to the JUMBF standard.
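JUMBF inherits the box layout used by the ISO Base Media File Format: a 4-byte big-endian length (which covers the 8-byte header itself), a 4-byte type, then the payload. A minimal sketch of serializing such a box is below; it is a simplification for illustration only (real JUMBF superboxes carry description boxes, UUIDs, and nested content boxes):

```python
import struct

def make_box(box_type: bytes, payload: bytes) -> bytes:
    """Serialize a minimal ISO BMFF-style box: size (incl. header) + type + payload."""
    if len(box_type) != 4:
        raise ValueError("box type must be exactly 4 bytes")
    # ">I" = 4-byte unsigned big-endian integer; size includes the 8 header bytes.
    return struct.pack(">I", 8 + len(payload)) + box_type + payload
```

Nesting then falls out naturally: a superbox's payload is simply the concatenation of its child boxes.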
Reducing Third Parties in the Network through Client-Side Intelligence
The end-to-end argument describes the communication between a client and server using functionality that is located at the end points of a distributed system. From a security and privacy perspective, clients only need to trust the server they are trying to reach instead of intermediate system nodes and other third-party entities. Clients accessing the Internet today, and more specifically the World Wide Web, have to interact with a plethora of network entities for name resolution, traffic routing, and content delivery. While individual communications with those entities may sometimes be end to end, from the user's perspective they are intermediaries the user has to trust in order to reach the website behind a domain name. This complex interaction lacks transparency and control and expands the attack surface beyond the server clients are trying to reach directly. In this dissertation, we develop a set of novel design principles and architectures to reduce the number of third-party services and networks a client's traffic is exposed to when browsing the web. Our proposals bring additional intelligence to the client and can be adopted without changes to the third parties.
Websites can include content, such as images and iframes, located on third-party servers. Browsers loading an HTML page will contact these additional servers to satisfy external content dependencies. Such interaction has privacy implications because it includes context related to the user's browsing history. For example, the widespread adoption of "social plugins" enables the respective social networking services to track a growing part of its members' online activity. These plugins are commonly implemented as HTML iframes originating from the domain of the respective social network. They are embedded in sites users might visit, for instance to read the news or do shopping. Facebook's Like button is an example of a social plugin. While one could prevent the browser from connecting to third-party servers, it would break existing functionality and thus be unlikely to be widely adopted. We propose a novel design for privacy-preserving social plugins that decouples the retrieval of user-specific content from the loading of third-party content. Our approach can be adopted by web browsers without the need for server-side changes. Our design has the benefit of avoiding the transmission of user-identifying information to the third-party server while preserving the original functionality of the plugins.
In addition, we propose an architecture which reduces the networks involved in routing traffic to a website, so that users have to trust fewer organizations with their traffic. Such trust is necessary today because, for example, only 30% of popular web servers offer HTTPS, while there is evidence that network adversaries carry out active and passive attacks against users. We argue that if end-to-end security with a server is not available, the next best thing is a secure link to a network that is close to the server and will act as a gateway. Our approach identifies network vantage points in the cloud, enables a client to establish secure tunnels to them, and intelligently routes traffic based on its destination. The proliferation of infrastructure-as-a-service platforms makes it practical for users to benefit from the cloud. We determine that our architecture is practical because our proposed use of the cloud aligns with the ways end-user devices already leverage it today. Users control both endpoints of the tunnel and do not depend on the cooperation of individual websites. We are thus able to eliminate third-party networks for 20% of popular web servers, reduce network paths to one hop for an additional 20%, and shorten the rest.
We hypothesize that user privacy on the web can be improved in terms of transparency and control by reducing the systems and services that are indirectly and automatically involved. We also hypothesize that such a reduction can be achieved unilaterally through client-side initiatives and without affecting the operation of individual websites.
Making Data Storage Efficient in the Era of Cloud Computing
We entered the era of cloud computing in the last decade, as many paradigm shifts happened in how people write and deploy applications. Despite the advancement of cloud computing, data storage abstractions have not evolved much, causing inefficiencies in performance, cost, and security.
This dissertation proposes a novel approach to make data storage efficient in the era of cloud computing by building new storage abstractions and systems that bridge the gap between cloud computing and data storage and simplify development. We build four systems to address four data inefficiencies in cloud computing.
The first system, Grandet, solves the data storage inefficiency caused by the paradigm shift from upfront provisioning to a variety of pay-as-you-go cloud services. Grandet is an extensible storage system that significantly reduces storage costs for web applications deployed in the cloud. Under the hood, it supports multiple heterogeneous stores and unifies them by placing each data object at the store deemed most economical. Our results show that Grandet reduces these applications' storage costs by an average of 42.4%, and that it is fast, scalable, and easy to use.
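Grandet's core placement decision — put each object in whichever backing store is cheapest for its size and access pattern — can be sketched with a toy per-object cost model. The store names and prices below are hypothetical, not Grandet's actual model:

```python
def cheapest_store(size_gb, monthly_reads, stores):
    """Pick the store with the lowest estimated monthly cost for one object.

    stores maps a store name to (dollars per GB-month, dollars per read).
    """
    def monthly_cost(name):
        per_gb, per_read = stores[name]
        return size_gb * per_gb + monthly_reads * per_read
    return min(stores, key=monthly_cost)
```

With typical object-store vs. key-value pricing shapes, a large cold object lands in the cheap-storage tier while a tiny hot object lands in the cheap-reads tier.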
The second system, Unic, solves the data inefficiency caused by the paradigm shift from single-tenancy to multi-tenancy. Unic securely deduplicates general computations. It exports a cache service that allows cloud applications running on behalf of mutually distrusting users to memoize and reuse computation results, thereby improving performance. Unic achieves both integrity and secrecy through a novel use of code attestation, and it provides a simple yet expressive API that enables applications to deduplicate their own rich computations. Our results show that Unic is easy to use and speeds up applications by an average of 7.58x with little storage overhead.
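The memoization at the heart of such a service reduces to keying a cache on a hash of both the computation's code and its input, so a result is reused only for the identical computation. The sketch below is ours and omits Unic's attestation and isolation machinery entirely:

```python
import hashlib

class ComputationCache:
    """Memoize (code, input) -> result, keyed by a collision-resistant hash."""

    def __init__(self):
        self._results = {}

    def lookup_or_run(self, code: bytes, data: bytes, run):
        # Hash code and input together; identical (code, input) pairs share a key.
        key = hashlib.sha256(code + b"\x00" + data).hexdigest()
        if key not in self._results:
            self._results[key] = run(data)  # cache miss: run the computation
        return self._results[key]
```

A second request with the same code and input is then served from the cache without re-running the computation.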
The third system, Lambdata, solves the data inefficiency caused by the paradigm shift to serverless computing, where developers only write core business logic and cloud service providers maintain all the infrastructure. Lambdata is a novel serverless computing system that enables developers to declare a cloud function's data intents, including both the data read and the data written. Once data intents are made explicit, Lambdata performs a variety of optimizations to improve speed, including caching data locally and scheduling functions based on code and data locality. Our results show that Lambdata achieves an average speedup of 1.51x on the turnaround time of practical workloads and reduces monetary cost by 16.5%.
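Once a function's read set is declared, a scheduler can prefer the worker that already caches the most of those objects. A toy sketch of that locality heuristic follows; the worker layout and names are illustrative, not Lambdata's actual scheduler:

```python
def pick_worker(declared_reads, worker_caches):
    """Choose the worker whose local cache overlaps most with the declared reads.

    worker_caches maps a worker name to the set of object keys it caches.
    """
    reads = set(declared_reads)
    # Greedy data-locality heuristic: maximize cached-input overlap.
    return max(worker_caches, key=lambda w: len(worker_caches[w] & reads))
```

Declaring writes helps symmetrically: the outputs of one function can be kept on the worker that will run its downstream consumer.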
The fourth system, CleanOS, solves the data inefficiency caused by the paradigm shift from desktop computers to smartphones always connected to the cloud. CleanOS is a new Android-based operating system that manages sensitive data rigorously and maintains a clean environment at all times. It identifies and tracks sensitive data, encrypts it with a key, and evicts that key to the cloud when the data is not in active use on the device. Our results show that CleanOS limits sensitive-data exposure drastically while incurring acceptable overheads on mobile networks.
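The eviction idea can be sketched as: keep the data encrypted on the device, and when it goes idle, hold only the key in the cloud, fetching it back on access. The model below is ours and uses an XOR one-time pad purely for illustration; CleanOS itself tracks tainted data inside a modified Android runtime:

```python
import os

class EvictableSecret:
    """Toy model of CleanOS-style key eviction for an idle sensitive object."""

    def __init__(self, cloud_store):
        self.cloud = cloud_store   # stands in for the remote key store
        self.key = None
        self.ciphertext = None

    def store(self, plaintext: bytes):
        self.key = os.urandom(len(plaintext))
        # XOR one-time pad: illustration only, not CleanOS's actual cipher.
        self.ciphertext = bytes(p ^ k for p, k in zip(plaintext, self.key))

    def evict(self):
        self.cloud["key"] = self.key   # device now holds only ciphertext
        self.key = None

    def access(self) -> bytes:
        if self.key is None:
            self.key = self.cloud["key"]  # re-fetch the key on demand
        return bytes(c ^ k for c, k in zip(self.ciphertext, self.key))
```

After eviction, a stolen or seized device exposes only ciphertext; recovering the plaintext requires contacting the cloud, which can audit or deny the request.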
Two Sides of a Coin: Adversarial-Based Image Privacy and Defending Against Adversarial Perturbations for Robust CNNs
The emergence of highly accurate Convolutional Neural Networks (CNNs) capable of processing large datasets has led to their popularity in many applications, including safety- and security-sensitive ones (e.g., disease recognition, self-driving cars). Despite their high accuracy, CNNs have been found to be susceptible both to adversarial noise added to benign examples and to out-distribution samples that are confidently classified into in-distribution classes. The applications of CNNs in surveillance services necessitate secure and robust CNNs. On the other hand, despite their benefits to surveillance applications, CNNs pose a privacy threat, as they are able to perform face recognition on a large scale. Coupled with the availability of large image datasets on online social networks and at image storage providers, this poses a serious privacy threat. The emergence of Super-Resolution Convolutional Neural Networks (SRCNNs), which improve image resolution for face recognition classifiers, further exacerbates this threat. In this dissertation, we address both problems. We first propose taking advantage of CNNs' vulnerability to adversarial perturbations, adding adversarial noise to images to fool CNNs and thereby protect the privacy of images in a cloud image storage setting. We propose and evaluate two adversarial-based protection methods: (i) a semantic perturbation-based method called k-Randomized Transparent Image Overlays (k-RTIO), and (ii) a learning-based method called Universal Ensemble Perturbation (UEP). These methods can thwart unknown (i.e., black-box) face recognition models while requiring low computational resources. We then evaluate the practicality of adversarial perturbations learned for CNNs on SRCNNs and show that adversarial perturbations are transparent to SRCNNs.
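Mechanically, an overlay-based protection amounts to alpha-blending a semi-transparent pattern onto the image so a recognizer's features shift while a human still sees the scene. The blend below is a minimal sketch; the single fixed overlay and the alpha value are illustrative, whereas k-RTIO randomizes keyed overlays per image:

```python
import numpy as np

def blend_overlay(img, overlay, alpha=0.3):
    """Alpha-blend an overlay onto an 8-bit image: out = (1-a)*img + a*overlay."""
    out = (1.0 - alpha) * img.astype(float) + alpha * overlay.astype(float)
    # Clamp back into the valid 8-bit range before casting.
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the blend is invertible when the overlay and alpha are known, an authorized viewer holding the overlay key can approximately recover the original image.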
In the last part of the dissertation, we propose mechanisms to make CNNs robust against adversarial and out-distribution examples by rejecting suspicious inputs. In particular, we propose an Augmented CNN (A-CNN) with an extra class that is trained on limited out-distribution samples, which can improve CNNs' resiliency against adversarial examples. Further, to protect pre-trained, highly accurate CNNs, post-processing methods that analyze the output of intermediate layers of CNNs to distinguish in- and out-distribution samples have attracted attention. We propose using adversarial profiles, perturbations that misclassify samples of a source class (but not of other classes) into a target class, as a post-processing step to detect out-distribution examples.
- …