Preserving Differential Privacy in Convolutional Deep Belief Networks
The remarkable development of deep learning in the medicine and healthcare
domains raises obvious privacy issues when deep neural networks are built on
users' personal and highly sensitive data, e.g., clinical records, user
profiles, biomedical images, etc. However, only a few scientific studies on
preserving privacy in deep learning have been conducted. In this paper, we focus on
developing a private convolutional deep belief network (pCDBN), which
essentially is a convolutional deep belief network (CDBN) under differential
privacy. Our main idea of enforcing epsilon-differential privacy is to leverage
the functional mechanism to perturb the energy-based objective functions of
traditional CDBNs, rather than their results. One key contribution of this work
is that we propose the use of Chebyshev expansion to derive the approximate
polynomial representation of objective functions. Our theoretical analysis
shows that we can further derive the sensitivity and error bounds of the
approximate polynomial representation. As a result, preserving differential
privacy in CDBNs is feasible. We applied our model in a health social network,
i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for
human behavior prediction, human behavior classification, and handwriting digit
recognition tasks. Theoretical analysis and rigorous experimental evaluations
show that the pCDBN is highly effective. It significantly outperforms existing
solutions.
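The Chebyshev-expansion step can be illustrated outside the paper's setting. The sketch below is a generic NumPy approximation, not the authors' code: it fits a low-degree Chebyshev polynomial to the sigmoid, the kind of nonlinearity that appears in energy-based objective functions, so that the objective becomes polynomial and amenable to the functional mechanism.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebinterpolate, chebval

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Degree-3 Chebyshev interpolant of sigmoid on [-1, 1].
coef = chebinterpolate(sigmoid, 3)

# Measure the worst-case approximation error on a dense grid.
xs = np.linspace(-1.0, 1.0, 201)
max_err = np.max(np.abs(chebval(xs, coef) - sigmoid(xs)))
print(f"max approximation error on [-1, 1]: {max_err:.4f}")
```

The error bound of such an expansion is exactly the quantity the paper's sensitivity and error analysis reasons about: once the objective is a polynomial with bounded coefficients, Laplace noise calibrated to its sensitivity can be injected into the coefficients.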
Privacy-Preserving Public Information for Sequential Games
In settings with incomplete information, players can find it difficult to
coordinate to find states with good social welfare. For example, in financial
settings, if a collection of financial firms have limited information about
each other's strategies, some large number of them may choose the same
high-risk investment in hopes of high returns. While this might be acceptable
in some cases, the economy can be hurt badly if many firms make investments in
the same risky market segment and it fails. One reason why many firms might end
up choosing the same segment is that they do not have information about other
firms' investments (imperfect information may lead to `bad' game states).
Directly reporting all players' investments, however, raises confidentiality
concerns for both individuals and institutions.
In this paper, we explore whether information about the game-state can be
publicly announced in a manner that maintains the privacy of the actions of the
players, and still suffices to deter players from reaching bad game-states. We
show that in many games of interest, it is possible for players to avoid these
bad states with the help of privacy-preserving, publicly-announced information.
We model the behavior of players in this imperfect-information setting in two
ways -- as greedy and as undominated strategic behavior -- and we prove guarantees on
social welfare that certain kinds of privacy-preserving information can help
attain. Furthermore, we design a counter with improved privacy guarantees under
continual observation
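The counter under continual observation builds on a standard baseline: the binary-tree (p-sum) mechanism, which releases a running count of a bit stream with noise that grows only polylogarithmically in the stream length. The sketch below implements that baseline mechanism, not the paper's improved counter:

```python
import math
import random

def laplace(scale):
    # Sample from Laplace(0, scale) via inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_counter(stream, epsilon):
    """Noisy running count of a 0/1 stream under continual observation,
    using the binary-tree (p-sum) mechanism (a sketch of the baseline)."""
    T = len(stream)
    L = max(1, math.ceil(math.log2(T + 1)))  # tree depth
    scale = L / epsilon                      # Laplace scale per p-sum
    alpha = [0.0] * L        # exact partial sums, one per level
    alpha_noisy = [0.0] * L  # their noisy releases
    estimates = []
    for t, x in enumerate(stream, start=1):
        # Level of the lowest set bit of t: the p-sum completed at time t.
        i = (t & -t).bit_length() - 1
        alpha[i] = sum(alpha[:i]) + x
        for j in range(i):               # lower levels restart
            alpha[j] = alpha_noisy[j] = 0.0
        alpha_noisy[i] = alpha[i] + laplace(scale)
        # The count at t is the sum of noisy p-sums on t's binary expansion.
        estimates.append(sum(alpha_noisy[j] for j in range(L) if t >> j & 1))
    return estimates

random.seed(0)
estimates = private_counter([1] * 64, epsilon=1000.0)
print(f"noisy count after 64 ones: {estimates[-1]:.2f}")
```

Each stream element touches at most L p-sums, and each released count aggregates at most L noisy values, which is what yields the polylogarithmic error; the paper's contribution is a counter improving on this baseline's privacy guarantees.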
EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set Similarity
Electronic information is increasingly often shared among entities without
complete mutual trust. To address related security and privacy issues, a few
cryptographic techniques have emerged that support privacy-preserving
information sharing and retrieval. One interesting open problem in this context
involves two parties that need to assess the similarity of their datasets, but
are reluctant to disclose their actual content. This paper presents an
efficient and provably-secure construction supporting the privacy-preserving
evaluation of sample set similarity, where similarity is measured as the
Jaccard index. We present two protocols: the first securely computes the
(Jaccard) similarity of two sets, and the second approximates it, using MinHash
techniques, with lower complexities. We show that our novel protocols are
attractive in many compelling applications, including document/multimedia
similarity, biometric authentication, and genetic tests. In the process, we
demonstrate that our constructions are appreciably more efficient than prior
work.

Comment: A preliminary version of this paper was published in the Proceedings
of the 7th ESORICS International Workshop on Digital Privacy Management (DPM
2012). This is the full version, appearing in the Journal of Computer
Security.
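The MinHash estimator behind the second protocol can be sketched in plaintext, setting aside the secure two-party computation layer that is the paper's actual contribution. The hash family below (random affine maps modulo a Mersenne prime) is an illustrative assumption:

```python
import random

def minhash_signature(items, num_hashes=256, seed=42):
    """One value per hash function: the minimum of h(x) over the set."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # Mersenne prime modulus (illustrative choice)
    params = [(rng.randrange(1, p), rng.randrange(p))
              for _ in range(num_hashes)]
    return [min((a * hash(x) + b) % p for x in items) for a, b in params]

def jaccard_estimate(sig_a, sig_b):
    """The fraction of matching MinHash values estimates the Jaccard index."""
    matches = sum(1 for u, v in zip(sig_a, sig_b) if u == v)
    return matches / len(sig_a)

A = set(range(0, 80))
B = set(range(40, 120))
true_j = len(A & B) / len(A | B)  # 40 / 120
est = jaccard_estimate(minhash_signature(A), minhash_signature(B))
print(f"true Jaccard: {true_j:.3f}, MinHash estimate: {est:.3f}")
```

Because two parties holding A and B only need to compare short signatures rather than the sets themselves, the secure protocol can run on these fixed-length sketches, which is what gives the MinHash variant its lower complexity.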