Transfer of payment credentials between devices
Payment credentials for use in mobile wallets are stored on a user’s mobile device through a provisioning process. When a user switches devices, the new device must be provisioned with the user’s payment credentials to enable mobile wallet functionality. Such provisioning can be cumbersome. This disclosure describes secure transfer of payment credentials between mobile devices by physically tapping the devices together. The new mobile device is configured as a payment terminal, and the user is instructed to tap the old mobile device against the new one. Upon tapping, the old mobile device transfers payment credentials to the new mobile device. The credentials are verified by the card issuer and, upon verification, are provisioned on the new mobile device. The techniques thus enable seamless, rapid, and secure transfer of payment credentials to the new mobile device.
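The tap-to-transfer flow above can be sketched in code. This is a toy model, not the disclosed implementation: the message format, the shared-key MAC, and all function names are illustrative assumptions (a real deployment would use issuer-side tokenization rather than a key shared with the device).

```python
import hashlib
import hmac
import secrets

# Toy issuer key; in practice the issuer alone holds verification material.
ISSUER_KEY = secrets.token_bytes(32)

def old_device_payload(credential_id):
    # Old device emits a credential reference plus a MAC the issuer can check.
    nonce = secrets.token_hex(8)
    tag = hmac.new(ISSUER_KEY, f"{credential_id}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return {"credential_id": credential_id, "nonce": nonce, "tag": tag}

def issuer_verify(payload):
    # Issuer recomputes the MAC over the tapped payload.
    expected = hmac.new(ISSUER_KEY,
                        f"{payload['credential_id']}:{payload['nonce']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, payload["tag"])

def provision_new_device(payload):
    # New device, acting as a payment terminal, forwards the tapped payload
    # to the issuer and provisions only on successful verification.
    if issuer_verify(payload):
        return {"provisioned": payload["credential_id"]}
    return None
```

A tampered payload fails issuer verification, so the new device never provisions credentials that were not released by the old device.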
The Metric Nearness Problem
Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that happen to be nonmetric due to measurement errors or otherwise. Metric data can be important in various settings, for example, in clustering, classification, metric-based indexing, query processing, and graph theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: Given a set of pairwise dissimilarities, find a “nearest” set of distances that satisfy the properties of a metric—principally the triangle inequality. For solving this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations to metric nearness.
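A minimal version of triangle fixing can be sketched as cyclic projection onto the triangle-inequality half-spaces. This is a simplified illustration, not the paper's algorithm: the paper's method adds Dykstra-style correction terms so the iterates converge to the *nearest* metric, whereas this sketch only restores feasibility.

```python
import itertools
import numpy as np

def triangle_fix(d, max_iter=100, tol=1e-8):
    """Cyclically project a symmetric dissimilarity matrix onto the
    triangle-inequality constraints (simplified l2 sketch)."""
    d = d.astype(float).copy()
    n = d.shape[0]
    for _ in range(max_iter):
        max_violation = 0.0
        for i, j, k in itertools.combinations(range(n), 3):
            # Enforce each of the three inequalities of triangle (i, j, k).
            for a, b, c in ((i, j, k), (i, k, j), (j, k, i)):
                v = d[a, b] - d[a, c] - d[c, b]
                if v > 0:  # d[a,b] exceeds the path through c: project
                    d[a, b] -= v / 3; d[b, a] = d[a, b]
                    d[a, c] += v / 3; d[c, a] = d[a, c]
                    d[c, b] += v / 3; d[b, c] = d[c, b]
                    max_violation = max(max_violation, v)
        if max_violation < tol:
            break
    return d
```

Each projection moves the violated edge down and the other two edges up by equal amounts, the orthogonal projection onto the half-space `d[a,b] - d[a,c] - d[c,b] <= 0`.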
Talek: Private Group Messaging with Hidden Access Patterns
Talek is a private group messaging system that sends messages through potentially untrustworthy servers, while hiding both data content and the communication patterns among its users. Talek explores a new point in the design space of private messaging; it guarantees access sequence indistinguishability, which is among the strongest guarantees in the space, while assuming an anytrust threat model, which is only slightly weaker than the strongest threat model currently found in related work. Our results suggest that this is a pragmatic point in the design space, since it supports strong privacy and good performance: we demonstrate a 3-server Talek cluster that achieves throughput of 9,433 messages/second for 32,000 active users with 1.7-second end-to-end latency. To achieve its security goals without coordination between clients, Talek relies on information-theoretic private information retrieval. To achieve good performance and minimize server-side storage, Talek introduces new techniques and optimizations that may be of independent interest, e.g., a novel use of blocked cuckoo hashing and support for private notifications. The latter provide a private, efficient mechanism for users to learn, without polling, which logs have new messages.
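Information-theoretic PIR of the kind Talek builds on can be illustrated with the textbook two-server XOR scheme (this is a classroom sketch under the assumption of two non-colluding servers, not Talek's actual protocol): each server sees a uniformly random index set, so neither learns which record the client wants.

```python
import secrets
from functools import reduce

def pir_query(n, i):
    """Build the two index sets for retrieving record i from n records."""
    s1 = {j for j in range(n) if secrets.randbits(1)}  # uniformly random subset
    s2 = s1 ^ {i}  # symmetric difference: differs from s1 only at index i
    return s1, s2

def pir_answer(db, subset):
    # Each server XORs together the records selected by its subset.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  (db[j] for j in subset), bytes(len(db[0])))

def pir_reconstruct(a1, a2):
    # All records in both subsets cancel; only record i survives.
    return bytes(x ^ y for x, y in zip(a1, a2))
```

Because `s1` and `s2` are each marginally uniform, a single server learns nothing about `i`; privacy holds unconditionally as long as the two servers do not collude.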
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
Adaptive Website Design using Caching Algorithms
Visitors enter a website through a variety of means, including web searches, links from other sites, and personal bookmarks. In some cases the first page loaded satisfies the visitor’s needs and no additional navigation is necessary. In other cases, however, the visitor is better served by content located elsewhere on the site found by navigating links. If the path between a user’s current location and his eventual goal is circuitous, then the user may never reach that goal or will have to exert considerable effort to reach it. By mining site access logs, we can draw conclusions of the form “users who load page p are likely to later load page q.” If there is no direct link from p to q, then it would be advantageous to provide one. The process of providing links to users’ eventual goals while skipping over the in-between pages is called shortcutting. Existing algorithms for shortcutting require substantial offline training, which makes them unable to adapt when access patterns change between training sessions. We present improved online algorithms for shortcut link selection that are based on a novel analogy drawn between shortcutting and caching. In the same way that cache algorithms predict which memory pages will be accessed in the future, our algorithms predict which web pages will be accessed in the future. Our algorithms are very efficient and are able to consider accesses over a long period of time, but give extra weight to recent accesses. Our experiments show significant improvement in the utility of shortcut links selected by our algorithm as compared to those selected by existing algorithms.
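One way to realize the caching analogy is a decayed-frequency predictor: for each page p, keep exponentially decayed counts of pages loaded later in the same session, and recommend the highest-scoring pages that are not already linked. This is a simplified sketch, not the paper's algorithm; the decay factor and scoring rule are illustrative.

```python
from collections import defaultdict

class ShortcutSelector:
    """Online shortcut selection: like a cache predicting future memory
    accesses, score which pages tend to follow page p, with old traffic
    fading via exponential decay."""

    def __init__(self, decay=0.99, k=3):
        self.decay = decay
        self.k = k
        self.scores = defaultdict(lambda: defaultdict(float))

    def observe_session(self, pages):
        # Decay first so accesses from older sessions carry less weight.
        for p in self.scores:
            for q in self.scores[p]:
                self.scores[p][q] *= self.decay
        # Credit every page q that follows p in this session.
        for idx, p in enumerate(pages):
            for q in pages[idx + 1:]:
                if q != p:
                    self.scores[p][q] += 1.0

    def shortcuts(self, p, existing_links=()):
        # Top-k candidate targets for shortcut links out of page p.
        cands = [(s, q) for q, s in self.scores[p].items()
                 if q not in existing_links]
        return [q for s, q in sorted(cands, reverse=True)[:self.k]]
```

Updates are constant work per observed page pair, so the selector can run online over a live access log and adapt as patterns shift, with no offline training phase.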
Privacy-Preserving Computation for Data Mining
As data mining matures as a field and develops more powerful algorithms for discovering and exploiting patterns in data, the amount of data about individuals that is collected and stored continues to increase rapidly. This increase in data heightens concerns that data mining violates individual privacy. The goal of data mining is to derive aggregate conclusions, which should not reveal sensitive information. However, data-mining algorithms run on databases containing information about individuals that may be sensitive. The goal of privacy-preserving data mining is to provide high-quality aggregate conclusions while protecting the privacy of the constituent individuals.

The field of "privacy-preserving data mining" encompasses a wide variety of techniques and approaches, and considers many different threat and trust models. Some techniques use perturbation, where noise is added (either directly to the database that is the input to the algorithm or to the output of queries) to obscure the values of sensitive attributes; some use generalization, where identifying attributes are given less specific values; and some use cryptography, where joint computations between multiple parties are performed on encrypted data to hide inputs. Because these approaches are applied to different scenarios with different threat models, their overall effectiveness and privacy properties are incomparable.
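The perturbation approach can be made concrete with the standard Laplace mechanism for count queries. This is one illustrative instance of output perturbation, not a technique attributed to this thesis; `epsilon`, an assumed privacy budget, trades accuracy for privacy.

```python
import math
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Answer a count query with Laplace noise of scale 1/epsilon.
    A count changes by at most 1 when one individual's record is added
    or removed, so this calibration is the standard differentially
    private mechanism for counts."""
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, 1/epsilon).
    noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise
```

Smaller `epsilon` means wider noise and stronger privacy; the analyst sees only the perturbed count, never the underlying records.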
In this thesis I take a pragmatic approach to privacy-preserving data mining and attempt to determine which techniques are suitable to real-world problems that a data miner might wish to solve, such as evaluating and learning decision-tree classifiers. I show that popular techniques for sanitizing databases prior to publication either fail to provide any meaningful privacy guarantees or else degrade the data to the point of having only negligible data-mining utility.

Cryptographic techniques for secure multi-party computation are a natural alternative to sanitized data publication, and guarantee the privacy of inputs by performing computations on encrypted data. Because of its heavy reliance on public-key cryptography, secure computation is conventionally thought to be too slow to apply to real-world problems. I show that tailor-made protocols for specific data-mining problems can be made fast enough to run on real-world problems, and I strengthen this claim with empirical runtime analysis using prototype implementations. I also expand the use of secure computation beyond its traditional scope of applying a known algorithm to private inputs by showing how it can be used to efficiently apply a private algorithm, chosen from a specific class of algorithms, to a private input.
Efficient Anonymity-Preserving Data Collection
The output of a data mining algorithm is only as good as its inputs, and individuals are often unwilling to provide accurate data about sensitive topics such as medical history and personal finance. Individuals may be willing to share their data, but only if they are assured that it will be used in an aggregate study and that it cannot be linked back to them. Protocols for anonymity-preserving data collection provide this assurance, in the absence of trusted parties, by allowing a set of mutually distrustful respondents to anonymously contribute data to an untrusted data miner. To effectively provide anonymity, a data collection protocol must be collusion resistant, which means that even if all dishonest respondents collude with a dishonest data miner in an attempt to learn the associations between honest respondents and their responses, they will be unable to do so. To achieve collusion resistance, previously proposed protocols for anonymity-preserving data collection have quadratically many communication rounds in the number of respondents, and employ (sometimes incorrectly) complicated cryptographic techniques such as zero-knowledge proofs. We describe a new protocol for anonymity-preserving, collusion-resistant data collection. Our protocol has linearly many communication rounds, and achieves collusion resistance without relying on zero-knowledge proofs. This makes it especially suitable for data mining scenarios with a large number of respondents.
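The round structure can be illustrated with a toy shuffle chain: each respondent appends a response and reshuffles the batch in a single communication round, so the number of rounds is linear in the number of respondents. This sketch contains no cryptography and is not the paper's protocol; a real protocol seals each response cryptographically so that even colluding respondents and the data miner cannot recover the association.

```python
import random

def collect_anonymously(responses, rng=random):
    """Toy illustration of linear-round anonymous collection: one round
    per respondent, each appending a response and destroying the arrival
    order by reshuffling the accumulated batch."""
    batch = []
    for response in responses:  # one communication round per respondent
        batch.append(response)
        rng.shuffle(batch)
    return batch
```

The data miner receives the full multiset of responses for aggregate analysis, but the ordering carries no information about which respondent contributed which value.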