Privacy-Preserving Identification via Layered Sparse Code Design: Distributed Servers and Multiple Access Authorization
We propose a new computationally efficient privacy-preserving identification
framework based on layered sparse coding. The key idea of the proposed
framework is sparsifying transform learning with ambiguization, which consists
of a trained linear map, a component-wise nonlinearity, and a
privacy-amplification step. We introduce a practical identification framework
with two phases: public and private identification. The public, untrusted
server provides a fast search service based on the sparse privacy-protected
codebook stored at its side. The private, trusted server or the local client
application performs a refined, accurate similarity search using the results
of the public search and the layered sparse codebooks stored at its side. The
private search is performed in the decoded domain, and its accuracy is chosen
according to the authorization level of the client. The efficiency of the
proposed method lies in the low computational complexity of encoding,
decoding, "encryption" (ambiguization) and "decryption" (purification), as
well as in the low storage complexity of the codebooks.
Comment: EUSIPCO 201
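The core pipeline of the abstract (trained linear map, component-wise nonlinearity, ambiguization) can be illustrated with a minimal numpy sketch. All names, dimensions, and the particular top-k nonlinearity and random-noise ambiguization below are illustrative assumptions, not the authors' trained components:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(x, W, k):
    """Linear map followed by a component-wise top-k nonlinearity."""
    z = W @ x
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]      # keep the k largest magnitudes
    out[idx] = np.sign(z[idx])            # ternary sparse code
    return out

def ambiguize(code, n_noise):
    """Privacy amplification: add random +/-1 entries on the zero support,
    so the public code hides which components are informative."""
    out = code.copy()
    zeros = np.flatnonzero(out == 0)
    flip = rng.choice(zeros, size=n_noise, replace=False)
    out[flip] = rng.choice([-1.0, 1.0], size=n_noise)
    return out

# toy example: 128-dim feature, 256-dim code, sparsity 8, 24 noise entries
W = rng.standard_normal((256, 128)) / np.sqrt(128)
x = rng.standard_normal(128)
public_code = ambiguize(sparsify(x, W, k=8), n_noise=24)
```

The public server would index only `public_code`; an authorized party holding the layered codebooks can purify it back to the informative support.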
Aggregation and embedding for group membership verification
Accepted at ICASSP 2019. This paper proposes a group membership verification protocol that prevents the curious-but-honest server from reconstructing the enrolled signatures and from inferring the identity of querying clients. The protocol quantizes the signatures into discrete embeddings, making reconstruction difficult. It also aggregates multiple embeddings into representative values, impeding identification. Theoretical and experimental results show the trade-off between security and error rates.
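The quantize-then-aggregate idea can be sketched as follows. This is a simplified stand-in (sign quantization of a random projection, aggregation by majority sign, verification by correlation); the projection `P`, the threshold `tau`, and the group size are assumptions for illustration, not the paper's actual embedding or aggregation scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 64, 256                          # signature dim, embedding dim
P = rng.standard_normal((m, d))         # shared projection matrix

def embed(signature):
    """Quantize a real-valued signature into a discrete binary embedding."""
    return np.sign(P @ signature)

def aggregate(signatures):
    """One representative value for the group: majority sign of embeddings."""
    return np.sign(sum(embed(s) for s in signatures))

def verify(query, rep, tau=0.15):
    """Accept iff the query embedding correlates with the representative."""
    e = embed(query)
    return float(e @ rep) / m > tau

# enroll a group of 4 members and test a noisy member query
group = [rng.standard_normal(d) for _ in range(4)]
rep = aggregate(group)
member_ok = verify(group[0] + 0.1 * rng.standard_normal(d), rep)
```

The server stores only `rep`, from which individual signatures cannot be read off directly, and a successful check reveals group membership but not which member queried.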
Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional Groups
We propose a practical framework to address the problem of privacy-aware image
sharing in large-scale setups. We argue that, while compactness is always
desired at scale, this need is even more severe when the privacy-sensitive
content must additionally be protected. We therefore encode images such that,
on the one hand, representations can be stored in the public domain without
paying the huge cost of privacy protection: they are ambiguated and hence leak
no discernible content from the images unless the attacker has access to a
combinatorially expensive guessing mechanism. On the other hand, authorized
users are provided with very compact keys that can easily be kept secure and
that can be used to disambiguate and faithfully reconstruct the corresponding
access-granted images. We achieve this with a convolutional autoencoder of our
design, in which feature maps are passed independently through sparsifying
transformations, providing multiple compact codes, each responsible for
reconstructing different attributes of the image. The framework is tested on a
large-scale database of images, with a public implementation available.
Comment: Accepted as an oral presentation for ICASSP 202
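The per-feature-map sparsification step can be illustrated with a minimal numpy sketch. A top-k selection stands in for the paper's learned sparsifying transformations, and the shapes and sparsity level are assumptions; the real framework applies this inside a trained convolutional autoencoder:

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsify_maps(feature_maps, k):
    """Pass each feature map independently through a top-k sparsifying
    transformation, yielding one compact code (positions + values) per map."""
    codes = []
    for fmap in feature_maps:                  # one (H, W) map at a time
        flat = fmap.ravel()
        idx = np.argsort(np.abs(flat))[-k:]    # k strongest activations
        codes.append((idx, flat[idx]))
    return codes

def reconstruct(codes, shape):
    """Authorized decoding: rebuild the sparse feature maps from the codes."""
    maps = []
    for idx, vals in codes:
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = vals
        maps.append(flat.reshape(shape))
    return np.stack(maps)

fmaps = rng.standard_normal((16, 8, 8))        # 16 feature maps from an encoder
codes = sparsify_maps(fmaps, k=4)
approx = reconstruct(codes, (8, 8))
```

Each code is tiny relative to the map it summarizes, which is what makes the keys handed to authorized users compact and easy to keep secure.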