From Principle to Practice: Vertical Data Minimization for Machine Learning
Aiming to train and deploy predictive models, organizations collect large
amounts of detailed client data, risking the exposure of private information in
the event of a breach. To mitigate this, policymakers increasingly demand
compliance with the data minimization (DM) principle, restricting data
collection to only that data which is relevant and necessary for the task.
Despite regulatory pressure, the problem of deploying machine learning models
that obey DM has so far received little attention. In this work, we address
this challenge in a comprehensive manner. We propose a novel vertical DM (vDM)
workflow based on data generalization, which by design ensures that no
full-resolution client data is collected during training and deployment of
models, benefiting client privacy by reducing the attack surface in case of a
breach. We formalize and study the corresponding problem of finding
generalizations that both maximize data utility and minimize empirical privacy
risk, which we quantify by introducing a diverse set of policy-aligned
adversarial scenarios. Finally, we propose a range of baseline vDM algorithms,
as well as Privacy-aware Tree (PAT), an especially effective vDM algorithm that
outperforms all baselines across several settings. We plan to release our code
as a publicly available library, helping advance the standardization of DM for
machine learning. Overall, we believe our work can help lay the foundation for
further exploration and adoption of DM principles in real-world applications.
Comment: Accepted at IEEE S&P 202
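The generalization idea at the core of this workflow can be illustrated with a minimal sketch (illustrative only; the function names and bucketing schemes are assumptions, not the paper's PAT algorithm): each attribute is coarsened before it ever leaves the client, so no full-resolution data is collected.

```python
# Toy sketch of data minimization via generalization: sensitive attributes
# are mapped to coarse buckets client-side, shrinking the attack surface
# in case the collected dataset is ever breached.

def generalize_age(age, bucket=10):
    """Map an exact age to a coarse bucket, e.g. 37 -> '30-39'."""
    lo = (age // bucket) * bucket
    return f"{lo}-{lo + bucket - 1}"

def generalize_zip(zip_code, keep_digits=2):
    """Keep only a ZIP prefix, e.g. '94110' -> '94***'."""
    return zip_code[:keep_digits] + "*" * (len(zip_code) - keep_digits)

record = {"age": 37, "zip": "94110"}
minimized = {"age": generalize_age(record["age"]),
             "zip": generalize_zip(record["zip"])}
print(minimized)  # {'age': '30-39', 'zip': '94***'}
```

The interesting problem, which the paper formalizes, is choosing the bucket sizes: coarser buckets lower empirical privacy risk but also lower the utility of the data for model training.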
Privacy Preserving Data Publishing
Recent years have witnessed increasing interest among researchers in protecting individual privacy in the big data era, spanning social media, genomics, and the Internet of Things. Recent studies have revealed numerous privacy threats and privacy protection methodologies that vary across a broad range of applications. To date, however, no powerful methodologies exist for addressing the challenges posed by high-dimensional data, highly correlated data, and powerful attackers.
In this dissertation, two critical problems are investigated: the attack capabilities of adversaries in mining individuals' private information, and methodologies that can protect against such inference attacks while guaranteeing significant data utility.
First, this dissertation proposes a series of works on inference attacks, with emphasis on protecting against powerful adversaries holding auxiliary information. In the context of genomic data, high dimensionality makes data analysis computationally challenging. This dissertation proves that the proposed attack can effectively infer the values of unknown SNPs and traits in linear time, dramatically improving on the exponential computation cost of traditional methods.
Second, providing a differential privacy guarantee for high-dimensional, highly correlated data remains a challenging problem due to high sensitivity, output scalability, and the signal-to-noise ratio. Given that a human genome contains tens of millions of variant positions, it is infeasible for traditional methods to sanitize genomic data by direct noise injection. This dissertation proposes a series of works demonstrating that the proposed method satisfies differential privacy; moreover, data utility is improved over the state of the art by substantially lowering data sensitivity.
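The sensitivity-noise relationship underlying this claim can be seen in a standard Laplace-mechanism sketch (a textbook baseline, not the dissertation's specific method): for a fixed ε, lowering a query's sensitivity proportionally lowers the noise scale, which is why reducing data sensitivity improves utility.

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) sampled as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count, sensitivity=1.0, epsilon=1.0):
    """epsilon-differentially private count: noise scale = sensitivity / epsilon.
    Halving the sensitivity halves the noise for the same privacy budget."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)
print(dp_count(1000, sensitivity=1.0, epsilon=0.5))
```

For genome-scale data, the difficulty the dissertation points at is that naive per-coordinate noise of this kind must be added to tens of millions of outputs, destroying the signal-to-noise ratio unless sensitivity is first reduced.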
Third, providing a privacy guarantee in social data publishing remains a challenging problem due to the required tradeoff between data privacy and utility. This dissertation proposes a series of works demonstrating that the proposed methods can effectively realize the privacy-utility tradeoff in data publishing.
Finally, two future research topics are proposed. The first is Privacy Preserving Data Collection and Processing for the Internet of Things; the second is Privacy Preserving Big Data Aggregation. Both are motivated by newly proposed data mining, artificial intelligence, and cybersecurity methods.
Reviewer Integration and Performance Measurement for Malware Detection
We present and evaluate a large-scale malware detection system integrating
machine learning with expert reviewers, treating reviewers as a limited
labeling resource. We demonstrate that even in small numbers, reviewers can
vastly improve the system's ability to keep pace with evolving threats. We
conduct our evaluation on a sample of VirusTotal submissions spanning 2.5 years
and containing 1.1 million binaries with 778GB of raw feature data. Without
reviewer assistance, we achieve 72% detection at a 0.5% false positive rate,
performing comparably to the best vendors on VirusTotal. Given a budget of 80
accurate reviews daily, we improve detection to 89% and are able to detect 42%
of malicious binaries undetected upon initial submission to VirusTotal.
Additionally, we identify a previously unnoticed temporal inconsistency in the
labeling of training datasets. We compare the impact of training labels
obtained at the same time training data is first seen with training labels
obtained months later. We find that using training labels obtained well after
samples appear, and thus unavailable in practice for current training data,
inflates measured detection by almost 20 percentage points. We release our
cluster-based implementation, as well as a list of all hashes in our evaluation
and 3% of our entire dataset.
Comment: 20 pages, 11 figures, accepted at the 13th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA 2016)
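The headline metric above, detection rate at a fixed false positive rate, can be computed from raw classifier scores roughly as follows (an illustrative sketch; the function and variable names are assumptions, not the paper's released implementation):

```python
def detection_at_fpr(scores_malicious, scores_benign, target_fpr=0.005):
    """Pick the score threshold that keeps the false positive rate on
    benign samples within budget, then report the detection (true
    positive) rate on malicious samples at that threshold."""
    benign_sorted = sorted(scores_benign, reverse=True)
    # Allow at most k benign samples to score above the threshold.
    k = int(target_fpr * len(benign_sorted))
    threshold = benign_sorted[min(k, len(benign_sorted) - 1)]
    detected = sum(s > threshold for s in scores_malicious)
    return detected / len(scores_malicious)

# Toy scores: higher means "more likely malicious".
benign = [i / 1000 for i in range(1000)]
malicious = [0.999, 0.5]
print(detection_at_fpr(malicious, benign, target_fpr=0.005))  # 0.5
```

The paper's temporal-labeling finding fits naturally into this framing: evaluating with labels obtained months after the samples appeared changes which items count as detected, inflating this metric by almost 20 percentage points.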
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
The advent of the Internet of Things (IoT) has brought forth an era of
unprecedented connectivity, with an estimated 80 billion smart devices expected
to be in operation by the end of 2025. These devices facilitate a multitude of
smart applications, enhancing the quality of life and efficiency across various
domains. Machine Learning (ML) serves as a crucial technology, not only for
analyzing IoT-generated data but also for diverse applications within the IoT
ecosystem. For instance, ML finds utility in IoT device recognition, anomaly
detection, and even in uncovering malicious activities. This paper embarks on a
comprehensive exploration of the security threats arising from ML's integration
into various facets of IoT, spanning various attack types including membership
inference, adversarial evasion, reconstruction, property inference, model
extraction, and poisoning attacks. Unlike previous studies, our work offers a
holistic perspective, categorizing threats based on criteria such as adversary
models, attack targets, and key security attributes (confidentiality,
availability, and integrity). We delve into the underlying techniques of ML
attacks in IoT environments, providing a critical evaluation of their mechanisms
and impacts. Furthermore, our research thoroughly assesses 65 libraries, both
author-contributed and third-party, evaluating their role in safeguarding model
and data privacy. We emphasize the availability and usability of these
libraries, aiming to arm the community with the necessary tools to bolster
their defenses against the evolving threat landscape. Through our comprehensive
review and analysis, this paper seeks to contribute to the ongoing discourse on
ML-based IoT security, offering valuable insights and practical solutions to
secure ML models and data in the rapidly expanding field of artificial
intelligence in IoT.
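Of the attack types surveyed above, membership inference has perhaps the simplest baseline: threshold the model's confidence on a sample, since overfit models tend to be more confident on their own training data. A minimal sketch (the names and toy scores are assumptions, not drawn from any surveyed library):

```python
def membership_inference(conf_members, conf_nonmembers, tau=0.9):
    """Baseline attack: predict 'member' when the target model's confidence
    on a sample exceeds tau. Returns attack accuracy; 0.5 means the model
    leaks nothing about membership."""
    tp = sum(c > tau for c in conf_members)       # members correctly flagged
    tn = sum(c <= tau for c in conf_nonmembers)   # non-members correctly passed
    total = len(conf_members) + len(conf_nonmembers)
    return (tp + tn) / total

# Toy confidences: an overfit model scores training samples higher.
members = [0.99, 0.97, 0.95, 0.80]
nonmembers = [0.92, 0.70, 0.60, 0.55]
print(membership_inference(members, nonmembers))  # 0.75
```

Stronger attacks in the literature replace the fixed threshold with shadow models or per-sample calibration, but this thresholding baseline is the usual starting point for measuring leakage.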