A conditional role-involved purpose-based access control model
This paper presents a role-involved conditional purpose-based access control (RCPBAC) model, where a purpose is defined as the intention behind data accesses or usages. RCPBAC allows users to use certain data for a given purpose subject to conditions. The structure of the RCPBAC model is defined and investigated. An algorithm is developed to compute compliance between access purposes (related to data access) and intended purposes (related to data objects), and it is combined with role-based access control (RBAC) to support RCPBAC. Under this model, more information can be obtained from data providers while privacy is still assured, maximizing the usability of consumers' data. The model extends traditional access control to cover privacy preservation in data mining environments, since RBAC is one of the most popular approaches to access control for database security and is available in database management systems. The structure helps enterprises to publish clear privacy promises and to collect and manage user preferences and consent.
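For illustration, the following is a minimal sketch of the kind of purpose-compliance check the abstract describes: an access purpose complies if it falls under some allowed intended purpose and under no prohibited one in a purpose hierarchy. The class and function names (PurposeTree, is_compliant) and the example purposes are hypothetical, not the paper's actual algorithm or notation.

```python
# Minimal sketch of purpose-compliance checking in a purpose-based access
# control setting. Names and the purpose hierarchy are illustrative only;
# the RCPBAC algorithm in the paper is more elaborate.

class PurposeTree:
    """Purposes organized as a specialization hierarchy (child = more specific)."""
    def __init__(self, edges):
        self.parent = dict(edges)  # child purpose -> parent purpose

    def descendants_or_self(self, purpose):
        """All purposes that specialize `purpose`, including itself."""
        result = {purpose}
        for p in self.parent.keys() | set(self.parent.values()):
            q = p
            while q is not None:
                if q == purpose:
                    result.add(p)
                    break
                q = self.parent.get(q)
        return result

def is_compliant(access_purpose, allowed, prohibited, tree):
    """Access purpose complies if it is implied by an allowed intended purpose
    and not implied by any prohibited intended purpose."""
    implied_allowed = set().union(*(tree.descendants_or_self(a) for a in allowed))
    implied_prohibited = set().union(*(tree.descendants_or_self(p) for p in prohibited)) if prohibited else set()
    return access_purpose in implied_allowed and access_purpose not in implied_prohibited

# Example: "third-party-marketing" is prohibited even though "marketing" is allowed.
tree = PurposeTree({"direct-marketing": "marketing",
                    "third-party-marketing": "marketing",
                    "marketing": "general"})
print(is_compliant("direct-marketing", {"marketing"}, {"third-party-marketing"}, tree))        # True
print(is_compliant("third-party-marketing", {"marketing"}, {"third-party-marketing"}, tree))   # False
```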
On the Measurement of Privacy as an Attacker's Estimation Error
A wide variety of privacy metrics have been proposed in the literature to
evaluate the level of protection offered by privacy-enhancing technologies.
Most of these metrics are specific to concrete systems and adversarial models,
and are difficult to generalize or translate to other contexts. Furthermore, a
better understanding of the relationships between the different privacy metrics
is needed to enable a more grounded and systematic approach to measuring privacy,
as well as to assist systems designers in selecting the most appropriate metric
for a given application.
In this work we propose a theoretical framework for privacy-preserving
systems, endowed with a general definition of privacy in terms of the
estimation error incurred by an attacker who aims to disclose the private
information that the system is designed to conceal. We show that our framework
permits interpreting and comparing a number of well-known metrics under a
common perspective. The arguments behind these interpretations are based on
fundamental results from information theory, probability theory, and Bayes decision theory.
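As a concrete illustration of privacy measured as an attacker's estimation error, the sketch below computes the expected error of a Bayes-optimal (MAP) attacker observing the output of a simple randomized-response mechanism. The mechanism, the uniform prior, and the 0/1 error metric are illustrative assumptions, not the paper's examples.

```python
# Minimal sketch: privacy as the attacker's expected estimation error under a
# Bayes-optimal (MAP) attack on a randomized mechanism, modelled as a channel.

def bayes_error(prior, channel):
    """Expected 0/1 estimation error of a MAP attacker.
    prior[x]      = P(X = x)        (secret)
    channel[x][y] = P(Y = y | X = x) (observable)
    """
    ys = {y for row in channel.values() for y in row}
    error = 0.0
    for y in ys:
        # Joint probabilities P(X = x, Y = y); the attacker guesses the largest.
        joints = [prior[x] * channel[x].get(y, 0.0) for x in prior]
        error += sum(joints) - max(joints)
    return error

# Secret bit with uniform prior, flipped with probability 0.3 before release.
prior = {0: 0.5, 1: 0.5}
flip = 0.3
channel = {0: {0: 1 - flip, 1: flip}, 1: {0: flip, 1: 1 - flip}}
print(bayes_error(prior, channel))   # 0.3: the attacker errs 30% of the time
```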
ConXsense - Automated Context Classification for Context-Aware Access Control
We present ConXsense, the first framework for context-aware access control on
mobile devices based on context classification. Previous context-aware access
control systems often require users to laboriously specify detailed policies or
rely on pre-defined policies that do not adequately reflect the true
preferences of users. We present the design and implementation of a
context-aware framework that uses a probabilistic approach to overcome these
deficiencies. The framework utilizes context sensing and machine learning to
automatically classify contexts according to their security and privacy-related
properties. We apply the framework to two important smartphone-related use
cases: protection against device misuse using a dynamic device lock and
protection against sensory malware. We ground our analysis on a sociological
survey examining the perceptions and concerns of users related to contextual
smartphone security and analyze the effectiveness of our approach with
real-world context data. We also demonstrate the integration of our framework
with the FlaskDroid architecture for fine-grained access control enforcement on
the Android platform.
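A minimal sketch of the underlying idea, context features classified as safe or unsafe and mapped to an enforcement decision such as a device-lock timeout, is given below. The feature set, labels, classifier choice, and timeout values are illustrative assumptions, not ConXsense's actual design.

```python
# Minimal sketch of context classification for access control: map sensed
# context features to a safe/unsafe label and derive an enforcement decision.
# Features, labels, and the classifier are illustrative assumptions.

from sklearn.ensemble import RandomForestClassifier

# Each context: [familiar-place score, #familiar Bluetooth devices, #unknown devices]
X_train = [
    [0.9, 3, 0],   # home
    [0.8, 2, 1],   # office
    [0.1, 0, 6],   # public transport
    [0.0, 0, 9],   # crowded street
]
y_train = ["safe", "safe", "unsafe", "unsafe"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def lock_timeout(context_features):
    """Relax the device lock in contexts classified as safe, tighten otherwise."""
    label = clf.predict([context_features])[0]
    return 300 if label == "safe" else 10   # seconds before auto-lock

print(lock_timeout([0.85, 2, 0]))   # likely 300: familiar place, few strangers
print(lock_timeout([0.05, 0, 7]))   # likely 10: unfamiliar place, many strangers
```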
Privacy as a Public Good
Privacy is commonly studied as a private good: my personal data is mine to protect and control, and yours is yours. This conception of privacy misses an important component of the policy problem. An individual who is careless with data exposes not only extensive information about herself, but about others as well. The negative externalities imposed on nonconsenting outsiders by such carelessness can be productively studied in terms of welfare economics. If all relevant individuals maximize private benefit, and expect all other relevant individuals to do the same, neoclassical economic theory predicts that society will achieve a suboptimal level of privacy. This prediction holds even if all individuals cherish privacy with the same intensity. As the theoretical literature would have it, the struggle for privacy is destined to become a tragedy.
But according to the experimental public-goods literature, there is hope. As in real life, people in experiments cooperate in groups at rates well above those predicted by neoclassical theory. Groups can be aided in their struggle to produce public goods by institutions, such as communication, framing, or sanction. With these institutions, communities can manage public goods without heavy-handed government intervention. Legal scholarship has not fully engaged this problem in these terms. In this Article, we explain why privacy has aspects of a public good, and we draw lessons from both the theoretical and the empirical literature on public goods to inform the policy discourse on privacy.
Differential Privacy versus Quantitative Information Flow
Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
of two different entries originating a certain answer is bounded by e^\epsilon.
In the fields of anonymity and information flow there is a similar concern for
controlling information leakage, i.e. limiting the possibility of inferring the
secret information from the observables. In recent years, researchers have
proposed to quantify the leakage in terms of the information-theoretic notion
of mutual information. There are two main approaches that fall in this
category: one based on Shannon entropy, and one based on Rényi's min-entropy.
The latter is connected to the so-called Bayes risk, which expresses the
probability of guessing the secret. In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
the notion of differential privacy is strictly stronger, in the sense that it
implies a bound on the mutual information, but not vice versa.
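The following sketch illustrates the channel view described above: a tiny mechanism over two adjacent databases is checked against the e^\epsilon ratio bound, and the mutual information between secret input and observed output is computed for a uniform prior. The specific probabilities are an illustrative assumption, not the paper's construction.

```python
# Minimal sketch: a randomized query mechanism as a channel. Check the
# differential-privacy ratio bound, then compute the mutual information
# between the secret database and the observed answer.

from math import exp, log, log2

# Two adjacent databases (differing in one entry) and a noisy yes/no answer.
eps = log(3)                                   # epsilon = ln 3
channel = {"D0": {"yes": 0.75, "no": 0.25},
           "D1": {"yes": 0.25, "no": 0.75}}

# Differential privacy: P(y | D0) / P(y | D1) <= e^eps for every output y.
for y in ("yes", "no"):
    ratio = channel["D0"][y] / channel["D1"][y]
    assert ratio <= exp(eps) + 1e-9 and 1 / ratio <= exp(eps) + 1e-9

# Mutual information I(X; Y) in bits for a uniform prior over the databases.
prior = {"D0": 0.5, "D1": 0.5}
p_y = {y: sum(prior[x] * channel[x][y] for x in prior) for y in ("yes", "no")}
mi = sum(prior[x] * channel[x][y] * log2(channel[x][y] / p_y[y])
         for x in prior for y in ("yes", "no"))
print(f"epsilon = {eps:.3f}, I(X;Y) = {mi:.3f} bits")   # leakage is bounded, not zero
```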
An Economic Analysis of Privacy Protection and Statistical Accuracy as Social Choices
Statistical agencies face a dual mandate to publish accurate statistics while protecting respondent privacy. Increasing privacy protection requires decreased accuracy. Recognizing this as a resource allocation problem, we propose an economic solution: operate where the marginal cost of increasing privacy equals the marginal benefit. Our model of production, from computer science, assumes data are published using an efficient differentially private algorithm. Optimal choice weighs the demand for accurate statistics against the demand for privacy. Examples from U.S. statistical programs show how our framework can guide decision-making. Further progress requires a better understanding of willingness-to-pay for privacy and statistical accuracy.
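A toy version of the proposed allocation rule, choosing the privacy-loss parameter where the marginal benefit of accuracy equals the marginal cost of privacy, is sketched below. The functional forms are illustrative assumptions, not the paper's calibrated demand curves.

```python
# Minimal sketch of "set privacy loss where marginal benefit equals marginal
# cost": pick the privacy-loss parameter epsilon that maximizes net benefit.
# The benefit and cost functions are illustrative assumptions.

from math import log

def accuracy_benefit(eps):        # larger epsilon -> more accurate statistics
    return 10 * log(1 + eps)      # diminishing marginal benefit

def privacy_cost(eps):            # larger epsilon -> more privacy loss
    return 2 * eps                # constant marginal cost (illustrative)

def net_benefit(eps):
    return accuracy_benefit(eps) - privacy_cost(eps)

# Grid search for the optimum; analytically, 10/(1 + eps) = 2 gives eps* = 4.
grid = [i / 100 for i in range(1, 1001)]
eps_star = max(grid, key=net_benefit)
print(f"optimal epsilon ~ {eps_star:.2f}, net benefit ~ {net_benefit(eps_star):.2f}")
```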
Synthetic Observational Health Data with GANs: from slow adoption to a boom in medical research and ultimately digital twins?
After being collected for patient care, Observational Health Data (OHD) can
further benefit patient well-being by sustaining the development of health
informatics and medical research. Vast potential is unexploited because of the
fiercely private nature of patient-related data and regulations to protect it.
Generative Adversarial Networks (GANs) have recently emerged as a
groundbreaking way to learn generative models that produce realistic synthetic
data. They have revolutionized practices in multiple domains such as
self-driving cars, fraud detection, digital twin simulations in industrial
sectors, and medical imaging.
The digital twin concept could readily apply to modelling and quantifying
disease progression. In addition, GANs possess many capabilities relevant to
common problems in healthcare: lack of data, class imbalance, rare diseases,
and preserving privacy. Unlocking open access to privacy-preserving OHD could
be transformative for scientific research. In the midst of COVID-19, the
healthcare system is facing unprecedented challenges, many of which are
data-related for the reasons stated above.
Considering these facts, publications concerning GANs applied to OHD seemed to
be severely lacking. To uncover the reasons for this slow adoption, we broadly
reviewed the published literature on the subject. Our findings show that the
properties of OHD were initially challenging for the existing GAN algorithms
(unlike medical imaging, for which state-of-the-art models were directly
transferable) and the evaluation of synthetic data lacked clear metrics.
We find more publications on the subject than expected, starting slowly in
2017, and since then at an increasing rate. The difficulties of OHD remain, and
we discuss issues relating to evaluation, consistency, benchmarking, data
modelling, and reproducibility.
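To make the setting concrete, the sketch below trains a very small GAN on toy tabular records and compares simple statistics of real and synthetic data. The two-feature dataset and network sizes are illustrative assumptions; real OHD involves mixed data types, missingness, and privacy evaluation that this sketch ignores.

```python
# Minimal sketch of a GAN for tabular (OHD-like) data: a generator learns to
# produce synthetic records resembling real ones, a discriminator learns to
# tell them apart. Dataset and architectures are toy-sized for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" records: [age (standardized), systolic BP (standardized)], correlated.
n, latent_dim = 512, 8
age = torch.randn(n, 1)
bp = 0.6 * age + 0.4 * torch.randn(n, 1)
real = torch.cat([age, bp], dim=1)

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator: real records labelled 1, generated records labelled 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into labelling fakes as real.
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Compare simple statistics of real vs. synthetic records.
synthetic = G(torch.randn(n, latent_dim)).detach()
print("real mean/std:", real.mean(0), real.std(0))
print("synthetic mean/std:", synthetic.mean(0), synthetic.std(0))
```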