4 research outputs found

    Reconstruction of Privacy-Sensitive Data from Protected Templates

    In this paper, we address the problem of data reconstruction from privacy-protected templates, based on the recent concept of sparse ternary coding with ambiguization (STCA). STCA is a generalization of randomization techniques that combines random projections, lossy quantization, and the addition of ambiguization noise to satisfy privacy-utility trade-off requirements. The theoretical privacy-preserving properties of STCA have been validated on synthetic data. However, the applicability of STCA to real data, and the potential threats posed by recent deep reconstruction algorithms, remain open problems. Our results demonstrate that STCA still achieves the claimed theoretical performance when facing deep reconstruction attacks on synthetic i.i.d. data, while for real images special measures are required to guarantee proper protection of the templates.
    Comment: accepted at ICIP 201
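    The abstract above names the three STCA building blocks: a random projection, sparse ternary quantization, and ambiguization noise. The minimal numpy sketch below shows how such an encoder could be put together; the dimensions, sparsity levels (S_x, S_n) and the stca_encode name are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stca_encode(x, W, S_x, S_n):
    """Hedged sketch of an STCA-style encoder (illustrative, not the paper's code)."""
    y = W @ x                                     # random projection
    code = np.zeros_like(y, dtype=np.int8)
    top = np.argsort(np.abs(y))[-S_x:]            # S_x strongest projected components
    code[top] = np.sign(y[top])                   # sparse ternary quantization {-1, 0, +1}
    off = np.setdiff1d(np.arange(y.size), top)    # remaining positions
    noisy = rng.choice(off, size=S_n, replace=False)
    code[noisy] = rng.choice([-1, 1], size=S_n)   # ambiguization noise on S_n decoy positions
    return code

d, m = 256, 512                                   # feature / projected dimensions (assumed)
W = rng.standard_normal((m, d)) / np.sqrt(d)      # random projection matrix
x = rng.standard_normal(d)                        # synthetic i.i.d. feature vector
protected = stca_encode(x, W, S_x=16, S_n=48)
print(np.count_nonzero(protected))                # 64 nonzeros: real components plus decoys
```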

    Single-Component Privacy Guarantees in Helper Data Systems and Sparse Coding with Ambiguation

    We investigate the privacy of two approaches to (biometric) template protection: Helper Data Systems and Sparse Ternary Coding with Ambiguization. In particular, we focus on a privacy property that is often overlooked, namely how much leakage exists about one specific binary property of one component of the feature vector. This property is, for example, the sign of the component or an indicator that a threshold is exceeded. We provide evidence that both approaches are able to protect such sensitive binary variables, and discuss how the system parameters need to be set.
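    To make the single-component question above concrete, the sketch below runs a small Monte Carlo probe: it protects synthetic vectors with an STCA-style code and checks how often a naive linear attacker, back-projecting the ternary code, guesses the sign of one chosen component. The attacker model, all parameters and the empirical_sign_leakage helper are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_sign_leakage(n_trials=5000, d=64, m=128, S_x=8, S_n=24, i=0):
    """Monte Carlo probe: how often a naive attacker recovers sign(x[i]) from the code."""
    W = rng.standard_normal((m, d)) / np.sqrt(d)
    correct = 0
    for _ in range(n_trials):
        x = rng.standard_normal(d)
        y = W @ x
        code = np.zeros(m)
        top = np.argsort(np.abs(y))[-S_x:]
        code[top] = np.sign(y[top])                      # sparse ternary quantization
        off = np.setdiff1d(np.arange(m), top)
        noisy = rng.choice(off, size=S_n, replace=False)
        code[noisy] = rng.choice([-1.0, 1.0], size=S_n)  # ambiguization noise
        guess = np.sign(W.T @ code)[i]                   # naive linear back-projection attack
        correct += guess == np.sign(x[i])
    return correct / n_trials

# Accuracy close to 0.5 would indicate little leakage about this single binary property.
print(empirical_sign_leakage())
```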

    Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional Groups

    We propose a practical framework to address the problem of privacy-aware image sharing in large-scale setups. We argue that, while compactness is always desired at scale, this need is even more pressing when the privacy-sensitive content must also be protected. We therefore encode images such that, on the one hand, representations can be stored in the public domain without paying the huge cost of privacy protection, because they are ambiguated and hence leak no discernible content from the images unless the attacker has access to a combinatorially expensive guessing mechanism. On the other hand, authorized users are provided with very compact keys that can easily be kept secure and that can be used to disambiguate and faithfully reconstruct the corresponding access-granted images. We achieve this with a convolutional autoencoder of our design, in which feature maps are passed independently through sparsifying transformations, providing multiple compact codes, each responsible for reconstructing different attributes of the image. The framework is tested on a large-scale database of images, with a public implementation available.
    Comment: Accepted as an oral presentation for ICASSP 202
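    The sparsification-plus-key idea described above can be sketched in a few lines for a single feature-map group: only the strongest activations are kept, their positions form the compact secret key, and decoy values ambiguate the code that is stored publicly. The share_group/reconstruct_group helpers, shapes and parameters below are hypothetical placeholders, not the paper's autoencoder architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def share_group(fmap, k, n_noise):
    """Hedged sketch: sparsify one feature-map group and ambiguate the public code."""
    flat = fmap.ravel()
    key = np.argsort(np.abs(flat))[-k:]                  # secret compact key: true positions
    public = np.zeros_like(flat)
    public[key] = flat[key]                              # genuine activations
    off = np.setdiff1d(np.arange(flat.size), key)
    fake = rng.choice(off, size=n_noise, replace=False)
    public[fake] = rng.standard_normal(n_noise) * np.abs(flat[key]).mean()  # decoy values
    return public.reshape(fmap.shape), key

def reconstruct_group(public, key):
    """Authorized users keep only the positions listed in the secret key."""
    flat = public.ravel()
    clean = np.zeros_like(flat)
    clean[key] = flat[key]
    return clean.reshape(public.shape)

fmap = rng.standard_normal((16, 16))                     # one pretend encoder feature map
public, key = share_group(fmap, k=12, n_noise=36)
recovered = reconstruct_group(public, key)
print(key.size, np.count_nonzero(public))                # 12-entry key vs. 48 public nonzeros
```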

    “It’s Shocking!”: Analysing the Impact and Reactions to the A3: Android Apps Behaviour Analyser

    The lack of privacy awareness in smartphone ecosystems prevents users from comparing apps in terms of privacy and from making informed privacy decisions. In this paper we analysed smartphone users' privacy perceptions and concerns based on a novel privacy-enhancing tool called Android Apps Behaviour Analyser (A3). The A3 tool enables users to behaviourally analyse the privacy aspects of their installed apps and notifies them about potentially privacy-invasive activities. To examine the capabilities of A3 we designed a user study. We captured and contrasted the privacy concerns and perceptions of 52 participants before and after using our tool. The results showed that A3 enables users to easily detect their smartphone apps' privacy-violating activities. Further, we found a significant difference between users' privacy concerns and expectations before and after using A3, and the majority of participants were surprised to learn how often their installed apps access personal resources. Overall, we observed that the A3 tool was capable of influencing the participants' attitudes towards protecting their privacy.