Is There an App for That? Electronic Health Records (EHRs) and a New Environment of Conflict Prevention and Resolution
Katsh discusses the new problems that are a consequence of a new technological environment in healthcare, one with an array of elements that make the emergence of disputes likely. Novel uses of technology have already addressed both the problem and its source in other contexts, such as e-commerce, where large numbers of transactions have generated large numbers of disputes. If technology-supported healthcare is to improve the field of medicine, a similar effort at dispute prevention and resolution will be necessary.
Tight Lower Bounds for Differentially Private Selection
A pervasive task in the differential privacy literature is to select the $k$ items of "highest quality" out of a set of $d$ items, where the quality of each item depends on a sensitive dataset that must be protected. Variants of this task arise naturally in fundamental problems like feature selection and hypothesis testing, and also as subroutines for many sophisticated differentially private algorithms.
The standard approaches to these tasks---repeated use of the exponential mechanism or the sparse vector technique---approximately solve this problem given a dataset of $n \gtrsim \sqrt{k}\log d$ samples. We provide a tight lower bound for some very simple variants of the private selection problem. Our lower bound shows that a sample of size $n = \Omega(\sqrt{k}\log d)$ is required even to achieve a very minimal accuracy guarantee.
Our results are based on an extension of the fingerprinting method to sparse selection problems. Previously, the fingerprinting method has been used to provide tight lower bounds for answering an entire set of $d$ queries, but often only some much smaller set of $k \ll d$ queries are relevant. Our extension allows us to prove lower bounds that depend on both the number of relevant queries and the total number of queries.
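To make the selection primitive concrete, below is a minimal sketch of the exponential mechanism for privately choosing one of $d$ items. The function name and the NumPy-based sampling are our own illustration, assuming quality scores with a known sensitivity; this is not code from the paper.

import numpy as np

def exponential_mechanism_select(qualities, epsilon, sensitivity=1.0, rng=None):
    """Privately select one item: P[item j] is proportional to
    exp(epsilon * q_j / (2 * sensitivity)).

    qualities   -- array of d quality scores computed on the sensitive dataset
    sensitivity -- max change in any single score when one record changes
    """
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(qualities, dtype=float)
    # Subtract the max before exponentiating for numerical stability;
    # the shift cancels in the normalization, so the distribution is unchanged.
    scores = epsilon * (q - q.max()) / (2.0 * sensitivity)
    probs = np.exp(scores)
    probs /= probs.sum()
    return rng.choice(len(q), p=probs)

Scaling the scores by epsilon / (2 * sensitivity) before exponentiating is what yields epsilon-differential privacy; the lower bound above says that no mechanism, however clever, can achieve even minimal accuracy with substantially fewer samples.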
Stakeholder involvement, motivation, responsibility, communication: How to design usable security in e-Science
e-Science projects face a difficult challenge in providing access to valuable computational resources, data and software to large communities of distributed users. On the one hand, the raison d'etre of the projects is to encourage members of their research communities to use the resources provided. On the other hand, the threats to these resources from online attacks require robust and effective security to mitigate the risks faced. This raises two issues: ensuring that (1) the security mechanisms put in place are usable by the different users of the system, and (2) the security of the overall system satisfies the security needs of all its different stakeholders. A failure to address either of these issues can seriously jeopardise the success of e-Science projects.

The aim of this paper is firstly to provide a detailed understanding of how these challenges can present themselves in practice in the development of e-Science applications. Secondly, this paper examines the steps that projects can undertake to ensure that security requirements are correctly identified, and security measures are usable by the intended research community. The research presented in this paper is based on four case studies of e-Science projects. Security design traditionally uses expert analysis of risks to the technology and deploys appropriate countermeasures to deal with them. However, these case studies highlight the importance of involving all stakeholders in the process of identifying security needs and designing secure and usable systems.

For each case study, transcripts of the security analysis and design sessions were analysed to gain insight into the issues and factors that surround the design of usable security. The analysis concludes with a model explaining the relationships between the most important factors identified. This includes a detailed examination of the roles of responsibility, motivation and communication of stakeholders in the ongoing process of designing usable secure socio-technical systems such as e-Science. (C) 2007 Elsevier Ltd. All rights reserved.
Differentially Private Empirical Risk Minimization
Privacy-preserving machine learning algorithms are crucial for the
increasingly common setting in which personal data, such as medical or
financial records, are analyzed. We provide general techniques to produce
privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the
$\epsilon$-differential privacy definition due to Dwork et al. (2006). First we
apply the output perturbation ideas of Dwork et al. (2006) to ERM
classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails
perturbing the objective function before optimizing over classifiers. If the
loss and regularizer satisfy certain convexity and differentiability criteria,
we prove theoretical results showing that our algorithms preserve privacy, and
provide generalization bounds for linear and nonlinear kernels. We further
present a privacy-preserving technique for tuning the parameters in general
machine learning algorithms, thereby providing end-to-end privacy guarantees
for the training process. We apply these results to produce privacy-preserving
analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real
demographic and benchmark data sets. Our results show that both theoretically
and empirically, objective perturbation is superior to the previous
state-of-the-art, output perturbation, in managing the inherent tradeoff
between privacy and learning performance.

Comment: 40 pages, 7 figures, accepted to the Journal of Machine Learning Research.
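As an illustration of the simpler of the two approaches the abstract mentions, here is a minimal sketch of output perturbation for regularized logistic regression: train non-privately, then add noise calibrated to the 2/(n*lam) L2 sensitivity of the regularized ERM minimizer. The plain gradient-descent trainer and function names are our own, and the sketch assumes labels in {-1, +1} and unit-norm features (so the logistic loss is 1-Lipschitz); it is not the authors' reference implementation.

import numpy as np

def train_logreg(X, y, lam, steps=2000, lr=0.1):
    """Minimize (1/n) * sum_i log(1 + exp(-y_i * w.x_i)) + (lam/2) * ||w||^2
    by plain gradient descent. Assumes y in {-1, +1} and ||x_i|| <= 1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + lam * w
        w -= lr * grad
    return w

def output_perturbation(X, y, lam, epsilon, rng=None):
    """Output perturbation: release w* + b, where b has density proportional
    to exp(-(n * lam * epsilon / 2) * ||b||), matching the 2/(n*lam)
    sensitivity of the regularized ERM minimizer."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = train_logreg(X, y, lam)
    # Sample b: direction uniform on the sphere, norm ~ Gamma(d, 2/(n*lam*eps)).
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * epsilon))
    return w + norm * direction

Objective perturbation, the paper's main contribution, instead adds a random linear term to the objective before optimizing; the abstract reports that this manages the privacy-accuracy tradeoff better than noising the output.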