Privacy-Aware Guessing Efficiency
We investigate the problem of guessing a discrete random variable $Y$ under a
privacy constraint dictated by another correlated discrete random variable $X$,
where both guessing efficiency and privacy are assessed in terms of the
probability of correct guessing. We define $h(P_{XY}, \epsilon)$ as the maximum
probability of correctly guessing $Y$ given an auxiliary random variable $Z$,
where the maximization is taken over all channels $P_{Z|Y}$ ensuring that the
probability of correctly guessing $X$ given $Z$ does not exceed $\epsilon$. We
show that the map $\epsilon \mapsto h(P_{XY}, \epsilon)$ is strictly increasing,
concave, and piecewise linear, which allows us to derive a closed-form
expression for $h(P_{XY}, \epsilon)$ when $X$ and $Y$ are connected via a
binary-input binary-output channel. For $(X^n, Y^n)$ being pairs of independent
and identically distributed binary random vectors, we similarly define
$h_n(P_{X^nY^n}, \epsilon)$ under the assumption that $Z^n$ is also
a binary vector. Then we obtain a closed-form expression for
$h_n(P_{X^nY^n}, \epsilon)$ for sufficiently large, but nontrivial
values of $\epsilon$.
Comment: ISIT 201
Context-Aware Generative Adversarial Privacy
Preserving the utility of published datasets while simultaneously providing
provable privacy guarantees is a well-known challenge. On the one hand,
context-free privacy solutions, such as differential privacy, provide strong
privacy guarantees, but often lead to a significant reduction in utility. On
the other hand, context-aware privacy solutions, such as information theoretic
privacy, achieve an improved privacy-utility tradeoff, but assume that the data
holder has access to dataset statistics. We circumvent these limitations by
introducing a novel context-aware privacy framework called generative
adversarial privacy (GAP). GAP leverages recent advancements in generative
adversarial networks (GANs) to allow the data holder to learn privatization
schemes from the dataset itself. Under GAP, learning the privacy mechanism is
formulated as a constrained minimax game between two players: a privatizer that
sanitizes the dataset in a way that limits the risk of inference attacks on the
individuals' private variables, and an adversary that tries to infer the
private variables from the sanitized dataset. To evaluate GAP's performance, we
investigate two simple (yet canonical) statistical dataset models: (a) the
binary data model, and (b) the binary Gaussian mixture model. For both models,
we derive game-theoretically optimal minimax privacy mechanisms, and show that
the privacy mechanisms learned from data (in a generative adversarial fashion)
match the theoretically optimal ones. This demonstrates that our framework can
be easily applied in practice, even in the absence of dataset statistics.
Comment: Improved version of a paper accepted by Entropy Journal, Special
Issue on Information Theory in Machine Learning and Data Science
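The binary data model can be illustrated with a toy randomized-response privatizer. All numbers and the flip-based mechanism below are our own illustrative assumptions, not the paper's minimax-optimal or GAN-learned scheme; the point is only the shape of the privacy-utility tradeoff.

```python
# Toy binary privatization game (illustrative assumptions throughout):
# private bit S ~ Bernoulli(0.5), observed bit X equals S with probability p,
# and the privatizer releases X_hat by flipping X with probability delta.
p = 0.9        # P(X = S): how strongly the data reveals the private bit
delta = 0.3    # flip probability of the (assumed) randomized-response privatizer

# Probability that the sanitized bit X_hat still equals S
# (equal iff zero or two "disagreements" occur along S -> X -> X_hat).
p_match = p * (1 - delta) + (1 - p) * delta

# A MAP adversary guesses S = X_hat if p_match > 0.5, else 1 - X_hat.
adversary_accuracy = max(p_match, 1 - p_match)   # 0.66 here
distortion = delta                               # P(X_hat != X), the utility cost

# Flipping more (delta -> 0.5) drives the adversary to chance (0.5) but
# destroys utility: the tradeoff the GAP minimax game navigates.
```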
Evaluating Security and Usability of Profile Based Challenge Questions Authentication in Online Examinations
© 2014 Ullah et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Student authentication in online learning environments is an increasingly challenging issue due to the inherent absence of physical interaction with online users and potential security threats to online examinations. This study is part of ongoing research on student authentication in online examinations evaluating the potential benefits of using challenge questions. The authors developed a Profile Based Authentication Framework (PBAF), which utilises challenge questions for students' authentication in online examinations. This paper examines the findings of an empirical study in which 23 participants used the PBAF, including an abuse-case security analysis of the PBAF approach. The overall usability analysis suggests that the PBAF is efficient, effective and usable. However, specific questions need replacement with suitable alternatives due to usability challenges. The results of the current research study suggest that memorability, clarity of questions, syntactic variation and question relevance can cause usability issues leading to authentication failure. A configurable traffic light system was designed and implemented to improve the usability of challenge questions. The security analysis indicates that the PBAF is resistant to informed guessing in general; however, specific questions were identified with security issues. The security analysis identifies challenge questions with potential risks of informed guessing by friends and colleagues. The study was performed with a small number of participants in a simulated online course, and the results need to be verified in a real educational context on a larger sample size.

Peer reviewed. Final Published version
Privacy Preserving Multi-Server k-means Computation over Horizontally Partitioned Data
The k-means clustering is one of the most popular clustering algorithms in
data mining. Recently, much research has focused on settings where the dataset
is split among multiple parties or is too large to be handled by the data
owner. In the latter case, usually some servers
are hired to perform the task of clustering. The dataset is divided by the data
owner among the servers who together perform the k-means and return the cluster
labels to the owner. The major challenge in this method is to prevent the
servers from gaining substantial information about the actual data of the
owner. Several algorithms have been designed in the past that provide
cryptographic solutions to perform privacy preserving k-means. We provide a new
method to perform k-means over a large set using multiple servers. Our
technique avoids heavy cryptographic computations and instead we use a simple
randomization technique to preserve the privacy of the data. The k-means
computed has exactly the same efficiency and accuracy as the k-means computed
over the original dataset without any randomization. We argue that our
algorithm is secure against an honest-but-curious, passive adversary.
Comment: 19 pages, 4 tables. International Conference on Information Systems
Security. Springer, Cham, 201
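The claim that the servers' k-means matches the k-means over the original data exactly holds for any distance-preserving randomization. The abstract does not spell out the paper's transform, so the sketch below uses an assumed stand-in, a random orthogonal rotation: pairwise distances (hence Lloyd's assignments and the clustering objective) are unchanged, yet the servers never see the raw coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # the owner's raw data

# Random orthogonal matrix via QR decomposition (an assumed stand-in for
# the paper's randomization; any isometry has the stated exactness property).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
X_priv = X @ Q                  # what the hired servers would receive

def pairwise_sq_dists(A):
    diff = A[:, None, :] - A[None, :, :]
    return (diff ** 2).sum(-1)

# Distances are preserved, so every k-means iteration makes identical
# cluster assignments on X_priv as it would on X.
assert np.allclose(pairwise_sq_dists(X), pairwise_sq_dists(X_priv))
```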
A Privacy Preserving Framework for RFID Based Healthcare Systems
RFID (Radio Frequency IDentification) is anticipated to be a core technology that will be used in many practical applications of our life in the near future. It has received considerable attention within healthcare for almost a decade now. The technology's promise to efficiently track hospital supplies, medical equipment, medications and patients is an attractive proposition to the healthcare industry. However, the prospect of widespread use of RFID tags in the healthcare area has also triggered discussions regarding privacy, particularly because RFID data in transit may easily be intercepted and can be used to track its user (owner). In a nutshell, this technology has not really seen its true potential in the healthcare industry, since privacy concerns raised by the tag bearers are not properly addressed by existing identification techniques. There are two major types of privacy preservation techniques that are required in an RFID based healthcare system: (1) a privacy preserving authentication protocol is required while sensing RFID tags for different identification and monitoring purposes, and (2) a privacy preserving access control mechanism is required to restrict unauthorized access of private information while providing healthcare services using the tag ID. In this paper, we propose a framework (PriSens-HSAC) that makes an effort to address the above mentioned two privacy issues. To the best of our knowledge, it is the first framework to provide increased privacy in RFID based healthcare systems, using RFID authentication along with access control techniques.
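To make requirement (1) concrete, here is a minimal hash-based challenge-response round, a classic building block for privacy-preserving RFID authentication. This is an illustrative sketch only, not the PriSens-HSAC protocol (whose details are not given above); the tag names and key sizes are invented for the example.

```python
import hashlib
import secrets

def tag_respond(tag_key, nonce):
    # The tag never emits its ID in the clear: it proves knowledge of a
    # shared key, so an eavesdropper cannot link sightings to track the owner.
    return hashlib.sha256(tag_key + nonce).digest()

def reader_verify(known_keys, nonce, response):
    # The back-end searches its key table for the tag that produced the
    # response; only the trusted back-end learns which tag was read.
    for tag_id, key in known_keys.items():
        if hashlib.sha256(key + nonce).digest() == response:
            return tag_id
    return None

# Hypothetical hospital assets with per-tag secret keys.
keys = {"wheelchair-17": secrets.token_bytes(16),
        "infusion-pump-3": secrets.token_bytes(16)}
nonce = secrets.token_bytes(16)          # fresh reader challenge
resp = tag_respond(keys["wheelchair-17"], nonce)
assert reader_verify(keys, nonce, resp) == "wheelchair-17"
```

A fresh nonce per query keeps responses unlinkable across readings; access control (requirement (2)) would then gate which services may query the back-end for the resolved tag ID.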
Privacy-Aware MMSE Estimation
We investigate the problem of the predictability of random variable $Y$ under
a privacy constraint dictated by random variable $X$, correlated with $Y$,
where both predictability and privacy are assessed in terms of the minimum
mean-squared error (MMSE). Given that $X$ and $Y$ are connected via a
binary-input symmetric-output (BISO) channel, we derive the \emph{optimal}
random mapping $P_{Z|Y}$ such that the MMSE of $Y$ given $Z$ is minimized while
the MMSE of $X$ given $Z$ is greater than $(1-\epsilon)\mathsf{var}(X)$ for a
given $\epsilon \geq 0$. We also consider the case where $(X, Y)$ are continuous
and $P_{Z|Y}$ is restricted to be an additive noise channel.
Comment: 9 pages, 3 figure
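The two MMSE quantities being traded off can be checked numerically for a binary example. The joint pmf and the mapping $P_{Z|Y}$ below are assumptions of ours for illustration, not the paper's optimal mapping.

```python
import numpy as np

# Assumed joint pmf P_XY (rows: x, cols: y) and an assumed mapping P_{Z|Y}
# (rows: y, cols: z); all variables take values in {0, 1}.
P_XY = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
P_Z_given_Y = np.array([[0.8, 0.2],
                        [0.2, 0.8]])
vals = np.array([0.0, 1.0])

def mmse(P_AB):
    """mmse(A|B) = E[Var(A|B)] for a joint pmf over {0,1} x {0,1}."""
    P_B = P_AB.sum(axis=0)
    cond = P_AB / P_B                                   # P(A=a | B=b)
    e_a = (vals[:, None] * cond).sum(axis=0)            # E[A | B=b]
    var_a = ((vals[:, None] - e_a) ** 2 * cond).sum(axis=0)
    return (var_a * P_B).sum()

P_YZ = P_XY.sum(axis=0)[:, None] * P_Z_given_Y   # joint of (Y, Z)
P_XZ = P_XY @ P_Z_given_Y                        # joint of (X, Z)

utility = mmse(P_YZ)    # 0.16: small means Z predicts Y well
leakage = mmse(P_XZ)    # 0.2176, against var(X) = 0.25
```

Here mmse$(X|Z)/\mathrm{var}(X) \approx 0.87$, so this mapping satisfies the privacy constraint for $\epsilon \gtrsim 0.13$ while leaving $Y$ fairly predictable from $Z$.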