From Adaptive Query Release to Machine Unlearning
We formalize the problem of machine unlearning as the design of efficient
unlearning algorithms corresponding to learning algorithms which perform a
selection of adaptive queries from structured query classes. We give efficient
unlearning algorithms for linear and prefix-sum query classes. As applications,
we show that unlearning in many problems, in particular, stochastic convex
optimization (SCO), can be reduced to the above, yielding improved guarantees
for the problem. In particular, for smooth Lipschitz losses and any $\rho > 0$, our results yield an unlearning algorithm with excess population risk of $\tilde O\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d}}{n\rho}\big)$ with unlearning query (gradient) complexity $\tilde O(\rho \cdot \text{Retraining Complexity})$, where $d$ is the model dimensionality and $n$ is the initial number of samples. For non-smooth Lipschitz losses, we give an unlearning algorithm with excess population risk $\tilde O\big(\tfrac{1}{\sqrt{n}} + \big(\tfrac{\sqrt{d}}{n\rho}\big)^{1/2}\big)$ with the same unlearning query (gradient) complexity. Furthermore, in the special case of Generalized Linear Models (GLMs), such as those in linear and logistic regression, we get dimension-independent rates of $\tilde O\big(\tfrac{1}{\sqrt{n}} + \tfrac{1}{(n\rho)^{2/3}}\big)$ and $\tilde O\big(\tfrac{1}{\sqrt{n}} + \tfrac{1}{(n\rho)^{1/3}}\big)$ for smooth Lipschitz and non-smooth Lipschitz losses, respectively. Finally, we give generalizations
of the above from one unlearning request to dynamic streams consisting of insertions and deletions.
Comment: Accepted to ICML 2023
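The role of the prefix-sum query class can be pictured with a standard segment-tree maintenance trick: if the learner only ever reads running sums of per-sample statistics, those sums can be stored in a binary tree, and unlearning one sample repairs the O(log n) nodes on its root-to-leaf path instead of recomputing over all n samples. A minimal Python sketch of that data-structure idea; it illustrates why prefix-sum structure admits cheap updates, it is not the paper's algorithm, and all identifiers are hypothetical:

```python
import numpy as np

class PrefixSumTree:
    """Binary tree of partial sums over per-sample statistics.

    Deleting sample i touches only the O(log n) nodes on its
    root-to-leaf path, instead of re-aggregating all n samples.
    """

    def __init__(self, stats):                 # stats: list of 1-D arrays
        self.n = len(stats)
        self.size = 1
        while self.size < self.n:
            self.size *= 2
        d = stats[0].shape[0]
        self.tree = [np.zeros(d) for _ in range(2 * self.size)]
        for i, s in enumerate(stats):          # leaves hold raw statistics
            self.tree[self.size + i] = s.copy()
        for node in range(self.size - 1, 0, -1):
            self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]

    def delete(self, i):
        """Unlearn sample i: zero its leaf and repair the ancestors."""
        node = self.size + i
        self.tree[node][:] = 0.0
        node //= 2
        while node >= 1:                       # O(log n) repairs
            self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]
            node //= 2

    def prefix_sum(self, r):
        """Answer a prefix-sum query: sum of leaves [0, r)."""
        res = np.zeros_like(self.tree[1])
        lo, hi = self.size, self.size + r
        while lo < hi:
            if lo & 1:
                res += self.tree[lo]; lo += 1
            if hi & 1:
                hi -= 1; res += self.tree[hi]
            lo //= 2; hi //= 2
        return res
```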
Improved Algorithms for Time Decay Streams
In the time-decay model for data streams, elements of an underlying data set arrive sequentially, with the most recently arrived elements being more important. A common approach for handling large data sets is to maintain a coreset, a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline coreset construction and gives a time-decay coreset for polynomial time-decay functions.
We also consider the exponential time-decay model for k-median clustering, where we provide a constant-factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores O(k log(hΔ) + h) points, where h is the half-life of the decay function and Δ is the aspect ratio of the dataset. Our techniques extend to k-means clustering and M-estimators as well.
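To make the two decay models concrete, here is a hedged sketch of the block-based reduction suggested by the framework: split the stream into blocks by arrival time, compress each block once with a pluggable offline coreset, and attach time-decay weights at query time. The helper names (offline_coreset, poly_decay, exp_decay) and the block-level age approximation are our assumptions, not the paper's construction:

```python
def poly_decay(age, alpha=2.0):
    """Polynomial time-decay weight of an element that arrived `age` steps ago."""
    return (age + 1.0) ** (-alpha)

def exp_decay(age, h=100.0):
    """Exponential time-decay with half-life h: the weight halves every h steps."""
    return 2.0 ** (-age / h)

def decayed_coreset(stream, offline_coreset, decay, block=64):
    """Summarize `stream` (oldest element first) under the weight function `decay`.

    Each block is compressed once by the offline coreset; age-dependent
    reweighting happens at query time, so blocks need not be rebuilt each step.
    """
    now = len(stream)
    summary = []                               # list of (point, weight) pairs
    for start in range(0, now, block):
        chunk = stream[start:start + block]
        # Block-level approximation: use the age of the block's newest item.
        age = now - (start + len(chunk))
        for p, w in offline_coreset(chunk):    # yields (point, multiplicity)
            summary.append((p, w * decay(age)))
    return summary
```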
Statistical Learning via Stochastic Optimization under Data Privacy Considerations
In this dissertation, we study statistical learning, formulated as stochastic optimization problems, under modern constraints motivated by data privacy considerations. The goal is to understand the statistical and computational complexity in algorithm design for fundamental classes of problems.
The first part concerns differential privacy, which, in recent years, has emerged as the de facto standard for privacy-preserving data analysis. We study the design of differentially private algorithms for (a) supervised learning of linear predictors with convex losses, also known as convex generalized linear models, and (b) non-convex optimization, where the goal is to approximate stationary points of the risk function. The derived guarantees for the proposed algorithms are the best known to date and, in most cases, are shown to be nearly optimal in the worst case.
The second part concerns the problem of machine unlearning. The goal here is to efficiently update a trained model upon requests to unlearn a data point in the training dataset. We delve into the problem for widely studied classes of convex losses: smooth and non-smooth settings, as well as generalized linear models. We propose learning and corresponding unlearning algorithms which are (non-trivially) accurate and efficient. Further, we extend our techniques to unlearn general structured iterative procedures, and to a streaming setting where the unlearning requests arrive sequentially.
Private Federated Learning with Autotuned Compression
We propose new techniques for reducing communication in private federated
learning without the need for setting or tuning compression rates. Our
on-the-fly methods automatically adjust the compression rate based on the error
induced during training, while maintaining provable privacy guarantees through
the use of secure aggregation and differential privacy. Our techniques are
provably instance-optimal for mean estimation, meaning that they can adapt to
the "hardness of the problem" with minimal interactivity. We demonstrate the effectiveness of our approach on real-world datasets by achieving favorable compression rates without the need for tuning.
Comment: Accepted to ICML 2023
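As an illustration of the autotuning loop (not the paper's mechanism; the thresholds, bit ranges, and names are invented for this sketch), one can quantize each round's update and raise or lower the bit-width according to the error the quantizer just induced:

```python
import numpy as np

def stochastic_quantize(v, bits):
    """Unbiased stochastic quantization of v onto a 2**bits-level grid."""
    lo, hi = v.min(), v.max()
    scale = (hi - lo) / max(2 ** bits - 1, 1)
    if scale == 0:
        return v.copy()
    q = np.floor((v - lo) / scale)
    q += (np.random.rand(*v.shape) < (v - lo) / scale - q)  # randomized rounding
    return lo + q * scale

def autotune_round(update, bits, target_rel_err=0.05):
    """Compress one round's update; adjust the bit-width for the next round."""
    compressed = stochastic_quantize(update, bits)
    rel_err = np.linalg.norm(update - compressed) / (np.linalg.norm(update) + 1e-12)
    if rel_err > target_rel_err:
        bits = min(bits + 1, 16)      # too lossy: spend more bits next round
    elif rel_err < target_rel_err / 2:
        bits = max(bits - 1, 1)       # comfortably accurate: save bandwidth
    return compressed, bits
```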
Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates
We study the private empirical risk minimization (ERM) problem for losses satisfying the $(\gamma,\kappa)$-Kurdyka-Łojasiewicz (KL) condition. The Polyak-Łojasiewicz (PL) condition is a special case of this condition when $\kappa = 2$. Specifically, we study this problem under the constraint of $\rho$-zero-concentrated differential privacy (zCDP). When $\kappa \in [1,2]$ and the loss function is Lipschitz and smooth over a sufficiently large region, we provide a new algorithm based on variance-reduced gradient descent that achieves the rate $\tilde O\big(\big(\tfrac{\sqrt{d}}{n\sqrt{\rho}}\big)^{\kappa}\big)$ on the excess empirical risk, where $n$ is the dataset size and $d$ is the dimension. We further show that this rate is nearly optimal. When $\kappa \geq 2$ and the loss is instead Lipschitz and weakly convex, we show it is possible to achieve the rate $\tilde O\big(\big(\tfrac{\sqrt{d}}{n\sqrt{\rho}}\big)^{\kappa}\big)$ with a private implementation of the proximal point method. When the KL parameters are unknown, we provide a novel modification and analysis of the noisy gradient descent algorithm and show that it achieves a rate of $\tilde O\big(\big(\tfrac{\sqrt{d}}{n\sqrt{\rho}}\big)^{\frac{2\kappa}{4-\kappa}}\big)$ adaptively, which is nearly optimal when $\kappa = 2$. We further show that, without assuming the KL condition, the same gradient descent algorithm can achieve fast convergence to a stationary point when the gradient stays sufficiently large during the run of the algorithm. Specifically, we show that this algorithm can approximate stationary points of Lipschitz, smooth (and possibly nonconvex) objectives with rate as fast as $\tilde O\big(\tfrac{\sqrt{d}}{n\sqrt{\rho}}\big)$ and never worse than $\tilde O\big(\big(\tfrac{\sqrt{d}}{n\sqrt{\rho}}\big)^{1/2}\big)$. The latter rate matches the best known rate for methods that do not rely on variance reduction.
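For reference, the noisy gradient descent baseline that the adaptive algorithm modifies can be sketched as follows under $\rho$-zCDP, using per-sample clipping and an even split of the privacy budget across iterations; the clipping threshold, step size, and loop structure here are illustrative choices rather than the paper's exact parameters:

```python
import numpy as np

def noisy_gd(grad_fn, X, w0, T, rho, L=1.0, eta=0.1, rng=None):
    """grad_fn(w, x) -> per-sample gradient; X: dataset of n samples."""
    rng = rng or np.random.default_rng()
    n, w = len(X), w0.copy()
    # Replace-one sensitivity of the clipped mean gradient is 2L/n. Giving
    # each of the T Gaussian mechanisms budget rho/T means
    # (2L/n)^2 / (2 sigma^2) = rho/T per step, and zCDP composes additively,
    # so the whole run satisfies rho-zCDP.
    sigma = (2 * L / n) * np.sqrt(T / (2 * rho))
    for _ in range(T):
        g = np.zeros_like(w)
        for x in X:
            gi = grad_fn(w, x)
            gi *= min(1.0, L / (np.linalg.norm(gi) + 1e-12))  # clip to norm L
            g += gi
        g /= n
        w -= eta * (g + rng.normal(0.0, sigma, size=w.shape))
    return w
```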
A COMPARISON OF ART PROGRAM EVALUATION RATINGS BY SELF AND VISITING TEAMS IN SELECTED NORTH CENTRAL ASSOCIATION HIGH SCHOOLS (ARIZONA).
The investigation was concerned with 31 Arizona high schools which were members of the North Central Association of Colleges and Schools, which had employed the "Evaluative Criteria," 5th Edition, to evaluate themselves, and which also had a visiting North Central Team evaluation. The study focused on the art section of each school's evaluation. It sought to ascertain differences in ratings between the schools' self-evaluation and the ratings of the visiting North Central Teams on the 31 evaluation items of the Art Section. A theoretical framework of six categories was constructed: Fundamentals, Self-Actualization, Evaluation, Progress, Support, and General Evaluation. Related literature was reviewed for each category to provide a backdrop of concepts with which to consider the collected data. Each of the 31 evaluation items was subsumed under one or another of these categories for systematic study and reporting. The data were analyzed by use of the product-moment correlation coefficient, and the obtained indices of relationship were examined for their significance at the .05 alpha level. A second analysis involving the statistics was applied to the two sets of obtained ratings across the six categories. The factors of school size, i.e., small and large schools, and geographic location, i.e., rural and urban, were also considered statistically. It was found that there was a significant positive set of relationships across the six categories of the theoretical framework between how the schools rated themselves in the art area and how the visiting North Central teams rated the schools in the art area. School size appeared to be a factor in only two of the six categories, i.e., Self-Evaluation and General Evaluation, while geographic location appeared to be a factor in the three categories of Self-Actualization, Evaluation, and Support.