A Flexible Privacy-preserving Framework for Singular Value Decomposition under Internet of Things Environment
The singular value decomposition (SVD) is a widely used matrix factorization
tool that underlies many applications, e.g., recommender systems, anomaly
detection, and data compression. In the emerging Internet of Things (IoT),
demand for data analysis is growing, both to improve people's lives and to
create new economic opportunities. Moreover, because of the large scale of
the IoT, most data analysis should be performed at the network edge, i.e.,
handled by fog computing. However, the devices that provide fog computing
may not be trustworthy, while data privacy is often a significant concern
for IoT users. Thus, when performing SVD for data analysis, the privacy of
user data should be preserved. Motivated by these observations, we propose
in this paper a privacy-preserving fog computing framework for SVD
computation. Security and performance analyses show the practicality of the
proposed framework. Furthermore, since different applications may use the
result of the SVD in different ways, three applications with different
objectives are introduced to show how the framework can flexibly serve
their purposes, which demonstrates the flexibility of the design.
Comment: 24 pages, 4 figures
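As a concrete plaintext illustration of the operation being protected, a truncated SVD in NumPy; the matrix and target rank are made up:

```python
import numpy as np

# Toy "user x item" matrix standing in for aggregated IoT data.
A = np.array([[5.0, 4.0, 0.0],
              [4.0, 5.0, 1.0],
              [0.0, 1.0, 5.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-2 reconstruction: the kind of low-rank summary a recommender
# or compression application would consume.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

A privacy-preserving framework must produce these factors without the fog nodes seeing the raw entries of `A`.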
Secure biometric authentication with improved accuracy
We propose a new hybrid protocol for cryptographically secure biometric authentication. The main advantages of the proposed protocol over previous solutions can be summarised as follows: (1) potential for much better accuracy using different types of biometric signals, including behavioural ones; and (2) improved user privacy, since user identities are not transmitted at any point in the protocol execution. The new protocol takes advantage of state-of-the-art identification classifiers, which provide not only better accuracy but also the possibility of performing authentication without knowing who the user claims to be. Cryptographic security is based on the Paillier public-key encryption scheme.
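The additively homomorphic property such protocols rely on can be sketched with textbook Paillier; the toy primes below are purely illustrative and far too small to be secure:

```python
import math, random

# Textbook Paillier with toy primes -- illustration only, not secure.
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                       # standard simple choice of generator
mu = pow(lam, -1, n)            # valid because L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = encrypt(42), encrypt(58)
# Additive homomorphism: multiplying ciphertexts adds plaintexts.
assert decrypt((a * b) % n2) == 100
```

This additivity is what lets a server combine encrypted classifier scores without decrypting individual biometric features.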
Flexible and Robust Privacy-Preserving Implicit Authentication
Implicit authentication consists of a server authenticating a user based on
the user's usage profile, instead of/in addition to relying on something the
user explicitly knows (passwords, private keys, etc.). While implicit
authentication makes identity theft by third parties more difficult, it
requires the server to learn and store the user's usage profile. Recently, the
first privacy-preserving implicit authentication system was presented, in which
the server does not learn the user's profile. It uses an ad hoc two-party
computation protocol to compare the user's fresh sampled features against an
encrypted stored user's profile. The protocol requires storing the usage
profile and comparing against it using two different cryptosystems, one of them
order-preserving; furthermore, features must be numerical. We present here a
simpler protocol based on set intersection that has the advantages of: i)
requiring only one cryptosystem; ii) not leaking the relative order of fresh
feature samples; iii) being able to deal with any type of features (numerical
or non-numerical).
Keywords: Privacy-preserving implicit authentication, privacy-preserving set
intersection, implicit authentication, active authentication, transparent
authentication, risk mitigation, data brokers.
Comment: IFIP SEC 2015 - Intl. Information Security and Privacy Conference,
May 26-28, 2015, IFIP AICT, Springer, to appear.
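A minimal plaintext sketch of the set-intersection functionality (not the cryptographic protocol itself); the feature tokens, key, and threshold are invented for illustration:

```python
import hmac, hashlib

# Features (numerical or not) become keyed hashes; authentication succeeds
# when enough fresh tokens intersect the stored profile. The paper's
# protocol evaluates this intersection under secure two-party computation.

def tokenize(features, key):
    return {hmac.new(key, str(f).encode(), hashlib.sha256).hexdigest()
            for f in features}

key = b"device-secret"           # hypothetical per-device key
profile = tokenize({"wifi:home", "app:mail", "hour:9", "cell:1042"}, key)
fresh   = tokenize({"wifi:home", "app:mail", "hour:22", "cell:1042"}, key)

score = len(profile & fresh) / len(fresh)
authenticated = score >= 0.5     # illustrative threshold
```

Note that non-numerical features ("wifi:home") pose no problem here, which is advantage iii) of the protocol.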
Privacy-Preserving Trust Management Mechanisms from Private Matching Schemes
Cryptographic primitives are essential for constructing privacy-preserving
communication mechanisms. There are situations in which two parties that do not
know each other need to exchange sensitive information on the Internet. Trust
management mechanisms make use of digital credentials and certificates in order
to establish trust among these strangers. We address the problem of choosing
which credentials are exchanged. During this process, each party should learn
no information about the other party's preferences beyond what is strictly
required for trust establishment. We present a method to reach an agreement on
the credentials to be exchanged that preserves the privacy of the parties. Our
method is based on secure two-party computation protocols for set intersection.
Namely, it is constructed from private matching schemes.
Comment: The material in this paper will be presented in part at the 8th
International Workshop on Data Privacy Management (DPM 2013).
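One classical way to realize private matching is Diffie-Hellman-style double blinding, sketched here under toy parameters; the prime, hash-to-group mapping, and credential names are illustrative, and a real deployment would use a proper elliptic-curve group:

```python
import hashlib, math, random

# Each party blinds hashed items with a secret exponent; double-blinded
# values coincide iff the underlying items coincide, revealing only the
# intersection.

P = 2305843009213693951          # Mersenne prime 2^61 - 1 (toy group)

def H(item):
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def keygen():
    while True:                  # exponent coprime to P - 1, so blinding
        k = random.randrange(3, P - 1)   # is a bijection on the group
        if math.gcd(k, P - 1) == 1:
            return k

def blind(values, k):
    return {pow(v, k, P) for v in values}

a, b = keygen(), keygen()
alice = {"cert:ISO27001", "cred:bank-license"}
bob   = {"cert:ISO27001", "cred:notary"}

# Each side blinds its own hashed set, then the other side blinds it again;
# exponentiation commutes, so common items collide.
alice_double = blind(blind({H(x) for x in alice}, a), b)
bob_double   = blind(blind({H(x) for x in bob}, b), a)

matches = alice_double & bob_double   # one element per common credential
```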
Confidential Boosting with Random Linear Classifiers for Outsourced User-generated Data
User-generated data is crucial to predictive modeling in many applications.
With a web/mobile/wearable interface, a data owner can continuously record data
generated by distributed users and build various predictive models from the
data to improve their operations, services, and revenue. Due to the large size
and evolving nature of user data, data owners may rely on public cloud service
providers (Cloud) for storage and computation scalability. Exposing sensitive
user-generated data and advanced analytic models to Cloud raises privacy
concerns. We present a confidential learning framework, SecureBoost, for data
owners that want to learn predictive models from aggregated user-generated data
but offload the storage and computational burden to Cloud without having to
worry about protecting the sensitive data. SecureBoost allows users to submit
encrypted or randomly masked data to designated Cloud directly. Our framework
utilizes random linear classifiers (RLCs) as the base classifiers in the
boosting framework to dramatically simplify the design of the proposed
confidential boosting protocols, yet still preserve the model quality. A
Cryptographic Service Provider (CSP) is used to assist the Cloud's processing,
reducing the complexity of the protocol constructions. We present two
constructions of SecureBoost: HE+GC and SecSh+GC, using combinations of
homomorphic encryption, garbled circuits, and random masking to achieve both
security and efficiency. For a boosted model, Cloud learns only the RLCs and
the CSP learns only the weights of the RLCs. Finally, the data owner collects
the two parts to get the complete model. We conduct extensive experiments to
understand the quality of the RLC-based boosting and the cost distribution of
the constructions. Our results show that SecureBoost can efficiently learn
high-quality boosting models from protected user-generated data
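The model-quality intuition behind boosting over untrained random linear classifiers can be sketched in plaintext; the dataset, round count, and candidate count below are invented, and this shows only why RLCs can serve as weak learners, not the confidential protocol:

```python
import numpy as np

# AdaBoost where each weak learner is chosen among *random* hyperplanes,
# so no base-learner training on plaintext data is needed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 2 * X[:, 1] > 0, 1, -1)   # toy labels

w = np.full(len(y), 1 / len(y))                  # sample weights
ensemble = []
for _ in range(20):
    best = None
    for _ in range(30):          # draw candidates, keep lowest weighted error
        v = rng.normal(size=3)
        pred = np.sign(X @ v[:2] + v[2])
        pred[pred == 0] = 1
        err = w @ (pred != y)
        if err > 0.5:            # flipping a bad classifier makes it good
            v, err = -v, 1 - err
        if best is None or err < best[1]:
            best = (v, err)
    v, err = best
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
    pred = np.sign(X @ v[:2] + v[2])
    pred[pred == 0] = 1
    w = w * np.exp(-alpha * y * pred)            # reweight samples
    w /= w.sum()
    ensemble.append((alpha, v))

score = sum(a * np.sign(X @ v[:2] + v[2]) for a, v in ensemble)
accuracy = np.mean(np.sign(score) == y)
```

Because candidates are random, the only data-dependent step is weighted-error evaluation, which is what the HE+GC and SecSh+GC constructions compute confidentially.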
Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries
We propose an efficient framework for enabling secure multi-party numerical
computations in a Peer-to-Peer network. This problem arises in a range of
applications such as collaborative filtering, distributed computation of trust
and reputation, monitoring, and other tasks, where the computing nodes are
expected to preserve the privacy of their inputs while performing a joint
computation of a certain function. Although there is a rich literature in the
field of distributed systems security concerning secure multi-party
computation, in practice it is hard to deploy those methods in very large scale
Peer-to-Peer networks. In this work, we try to bridge the gap between
theoretical algorithms in the security domain, and a practical Peer-to-Peer
deployment.
We consider two security models. The first is the semi-honest model where
peers correctly follow the protocol, but try to reveal private information. We
provide three possible schemes for secure multi-party numerical computation for
this model and identify a single light-weight scheme which outperforms the
others. Using extensive simulation results over real Internet topologies, we
demonstrate that our scheme is scalable to very large networks, with up to
millions of nodes. The second model we consider is the malicious peers model,
where peers can behave arbitrarily, deliberately trying to affect the results
of the computation as well as compromising the privacy of other peers. For this
model we provide a fourth scheme to defend the execution of the computation
against the malicious peers. The proposed scheme has a higher complexity
relative to the semi-honest model. Overall, we provide the Peer-to-Peer network
designer a set of tools to choose from, based on the desired level of security.
Comment: Submitted to Peer-to-Peer Networking and Applications Journal (PPNA).
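The kind of light-weight semi-honest building block evaluated here can be illustrated by secure summation over additive shares; the modulus and private inputs are made up:

```python
import random

# Each peer splits its private value into random shares, one per peer;
# any single share is uniformly random, yet the shares sum to the total.
M = 2**61 - 1                      # public modulus

def share(value, n_peers):
    shares = [random.randrange(M) for _ in range(n_peers - 1)]
    shares.append((value - sum(shares)) % M)
    return shares

private = [13, 7, 22, 5]           # one secret per peer
n = len(private)

# Peer i sends its j-th share to peer j; each peer sums what it receives.
all_shares = [share(v, n) for v in private]
partial = [sum(all_shares[i][j] for i in range(n)) % M for j in range(n)]
total = sum(partial) % M           # only the aggregate is revealed
```

Against malicious peers this is insufficient on its own, which is why the paper's fourth scheme adds verification at extra cost.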
Optimal non-perfect uniform secret sharing schemes
A secret sharing scheme is non-perfect if some subsets of participants that cannot recover the secret value have partial information about it. The information ratio of a secret sharing scheme is the ratio between the maximum length of the shares and the length of the secret. This work is dedicated to the search of bounds on the information ratio of non-perfect secret sharing schemes. To this end, we extend the known connections between polymatroids and perfect secret sharing schemes to the non-perfect case. In order to study non-perfect secret sharing schemes in all generality, we describe their structure through their access function, a real function that measures the amount of information that every subset of participants obtains about the secret value. We prove that there exists a secret sharing scheme for every access function. Uniform access functions, that is, the ones whose values depend only on the number of participants, generalize the threshold access structures. Our main result is to determine the optimal information ratio of the uniform access functions. Moreover, we present a construction of linear secret sharing schemes with optimal information ratio for the rational uniform access functions.
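The threshold access structures generalized here are realized by Shamir's scheme, a perfect scheme with information ratio 1 (shares exactly as long as the secret); a sketch over a toy prime field:

```python
import random

# A (t, n) Shamir threshold scheme: the secret is the constant term of a
# random degree-(t-1) polynomial; any t shares interpolate it, fewer
# reveal nothing.
P = 2**61 - 1                                   # toy prime field

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789     # any 3 shares suffice
```

Non-perfect uniform schemes relax this all-or-nothing behaviour, which is what allows information ratios below 1.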
Security and Efficiency Analysis of the Hamming Distance Computation Protocol Based on Oblivious Transfer
Bringer et al. proposed two cryptographic protocols for the computation of the Hamming distance. Their first scheme uses Oblivious Transfer and provides security in the semi-honest model. The other scheme uses Committed Oblivious Transfer and is claimed to provide full security in the malicious case. The proposed protocols have direct implications for biometric authentication schemes between a prover and a verifier where the verifier holds the users' biometric data in plain form.
In this paper, we show that their protocol is not in fact fully secure against malicious adversaries. More precisely, our attack breaks the soundness property of their protocol: a malicious user can compute a Hamming distance that differs from the actual value. For biometric authentication systems, this attack allows a malicious adversary to pass the authentication without knowledge of the honest user's input with at most complexity instead of , where is the input length. We propose an enhanced version of their protocol in which this attack is eliminated. The security of our modified protocol is proven using the simulation-based paradigm. Furthermore, regarding efficiency, the modified protocol utilizes Verifiable Oblivious Transfer, which does not require commitments to the outputs and thus improves efficiency significantly.
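A plaintext sketch of the functionality under discussion; the bit strings are invented:

```python
import secrets

# Hamming distance between a stored biometric template and a fresh
# reading. The protocols above compute this under oblivious transfer;
# here we show only the functionality plus the one-time-pad observation
# that XOR-masking either input preserves the pairwise XOR.

def hamming(a, b):
    return bin(a ^ b).count("1")

template = 0b1011_0110_0100_1101
reading  = 0b1011_0010_0101_1101

d = hamming(template, reading)

mask = secrets.randbits(16)
assert (template ^ mask) ^ (reading ^ mask) == template ^ reading
```

Soundness, the property the attack breaks, means the verifier must be convinced of the true `d`, not a value of the prover's choosing.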
Secure multiparty PageRank algorithm for collaborative fraud detection
Collaboration between financial institutions helps to improve the detection of fraud. However, exchange of relevant data between these institutions is often not possible due to privacy constraints and data confidentiality. An important example of relevant data for fraud detection is given by a transaction graph, where the nodes represent bank accounts and the links consist of the transactions between these accounts. Previous work shows that features derived from such graphs, like PageRank, can be used to improve fraud detection. However, each institution can only see the part of the whole transaction graph corresponding to the accounts of its own customers. In this research, a new method is described that uses secure multiparty computation (MPC) techniques to allow multiple parties to jointly compute the PageRank values of their combined transaction graphs securely, while guaranteeing that each party learns only the PageRank values of its own accounts and nothing about the other transaction graphs. In our experiments, this method is applied to graphs containing up to tens of thousands of nodes. The execution time scales linearly with the number of nodes, and the method is highly parallelizable. Extrapolating from these experiments, secure multiparty PageRank is feasible in a realistic setting with millions of nodes per party.
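A key enabler is that the PageRank power-iteration update is linear, so it commutes with additive secret sharing; a sketch in which the link matrix is left public for simplicity (in the paper the graph itself is distributed and protected):

```python
import numpy as np

# Two parties hold additive shares of the rank vector. Because the update
# x' = (1-d)/n + d*M@x is affine, each party can update its share locally
# and the shares keep summing to the true ranks.
rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0],        # A[i][j] = 1 iff account j pays account i
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
M = A / np.maximum(A.sum(axis=0), 1)   # column-stochastic transition matrix
d, n = 0.85, 4

x = np.full(n, 1 / n)
s1 = rng.normal(size=n)            # party 1's additive share
s2 = x - s1                        # party 2's share

for _ in range(50):
    x = (1 - d) / n + d * (M @ x)              # plain iteration (reference)
    s1 = d * (M @ s1)                          # share-local updates
    s2 = (1 - d) / n + d * (M @ s2)            # one party adds the constant
```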
Towards Modeling Privacy in WiFi Fingerprinting Indoor Localization and its Application
In this paper, we study privacy models for privacy-preserving WiFi-fingerprint-based indoor localization (PPIL) schemes. We show that many existing models are insufficient and make unrealistic assumptions about adversaries' power. To cover state-of-the-art practical attacks, we propose the first formal security model that formulates the security goals of both client-side and server-side privacy beyond the honest-but-curious setting. In particular, our model considers various malicious behaviors such as exposing secrets of principals, choosing malicious WiFi fingerprints in location queries, and specifying the location area of a target client. Furthermore, we formulate client-side privacy in an indistinguishability manner, where an adversary is required to distinguish a client's real location from a random one. Server-side privacy requires that adversaries cannot generate a fabricated database that provides similar functionality to the real database of the server. In particular, we formally define the similarity between databases with a ball approach that has not been formalized before. We show the validity and applicability of our model by applying it to analyze the security of an existing PPIL protocol. We also design experiments to test server-side privacy in the presence of database leakage, based on a candidate server-privacy attack.
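A plaintext sketch of the fingerprint matching such schemes protect; the access points, RSSI values, and room names are invented:

```python
import numpy as np

# Nearest-neighbour WiFi fingerprint localization. The client's RSSI
# vector and the server's database are exactly the two assets the
# client-side and server-side privacy goals protect.
database = {                      # location -> mean RSSI per access point
    "room_a": np.array([-40.0, -70.0, -80.0]),
    "room_b": np.array([-75.0, -45.0, -85.0]),
    "room_c": np.array([-80.0, -80.0, -50.0]),
}

def localize(fingerprint):
    return min(database, key=lambda loc:
               float(np.linalg.norm(database[loc] - fingerprint)))
```

Server-side privacy, in the model above, means that queries like these must not let a client assemble a "similar" database of its own.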