Minimization search method for data inversion
A technique has been developed for determining the values of selected subsets of independent variables in mathematical formulations. The required computation time increases with the first power of the number of variables, in contrast with classical minimization methods, for which computation time increases with the third power of the number of variables.
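The brief does not give the algorithm itself, but the scaling contrast it draws is the familiar one between per-coordinate search (work per sweep grows with the first power of n) and Newton-type methods (each step solves an n-by-n Hessian system at roughly n-cubed cost). A minimal sketch of a coordinate-wise minimization search in that spirit; the function name, step-size schedule, and stopping rule are illustrative assumptions, not taken from the brief:

```python
import numpy as np

def coordinate_search_minimize(f, x0, step=0.5, n_sweeps=200, shrink=0.5):
    """Illustrative coordinate-wise minimization search: each sweep probes
    +/- step along every coordinate, so work per sweep is linear in the
    number of variables (a Newton step, by contrast, solves an n x n
    Hessian system at ~O(n^3) cost)."""
    x = np.asarray(x0, dtype=float).copy()
    fx = f(x)
    for _ in range(n_sweeps):
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink  # no coordinate helped: refine the step size
    return x, fx

# Example: minimize a simple quadratic in 3 variables.
x_min, f_min = coordinate_search_minimize(lambda x: np.sum((x - 1.0) ** 2), np.zeros(3))
```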
Private Multiplicative Weights Beyond Linear Queries
A wide variety of fundamental data analyses in machine learning, such as
linear and logistic regression, require minimizing a convex function defined by
the data. Since the data may contain sensitive information about individuals,
and these analyses can leak that sensitive information, it is important to be
able to solve convex minimization in a privacy-preserving way.
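For context, one standard route to privately solving a single convex minimization (the setting the next paragraph surveys) is gradient descent with per-record clipping and added noise. A hedged sketch under that assumption; the noise scale below is a placeholder and would have to be calibrated to a concrete (epsilon, delta) budget for any real privacy guarantee:

```python
import numpy as np

def dp_gradient_descent(per_example_grad, data, theta0, n_steps=100,
                        lr=0.1, clip=1.0, noise_scale=1.0, seed=0):
    """Illustrative gradient-perturbation sketch for private convex
    minimization. per_example_grad(theta, record) returns one record's
    gradient; clipping bounds each record's influence so the added
    Gaussian noise can mask any individual's contribution."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    n = len(data)
    for _ in range(n_steps):
        total = np.zeros_like(theta)
        for rec in data:
            g = per_example_grad(theta, rec)
            norm = np.linalg.norm(g)
            if norm > clip:
                g = g * (clip / norm)   # bound per-record sensitivity
            total += g
        noise = rng.normal(scale=noise_scale * clip, size=theta.shape)
        theta -= lr * (total + noise) / n   # noisy average-gradient step
    return theta
```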
A series of recent results show how to accurately solve a single convex
minimization problem in a differentially private manner. However, the same data
is often analyzed repeatedly, and little is known about solving multiple convex
minimization problems with differential privacy. For simpler data analyses,
such as linear queries, there are remarkable differentially private algorithms
such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS
2010) that accurately answer exponentially many distinct queries. In this work,
we extend these results to the case of convex minimization and show how to give
accurate and differentially private solutions to *exponentially many* convex
minimization problems on a sensitive dataset.
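The linear-query baseline the abstract builds on, the private multiplicative weights mechanism, is simple enough to sketch. A simplified version, assuming the dataset is a histogram over a finite universe and each query is a vector q in [0,1]^|universe|; the threshold, learning rate, and per-round privacy parameter below are illustrative placeholders, and the paper's actual extension to convex minimization is not shown:

```python
import numpy as np

def pmw(data_hist, queries, eps_round=0.1, threshold=0.05, eta=0.1, seed=0):
    """Simplified sketch of the private multiplicative weights mechanism
    (Hardt and Rothblum, FOCS 2010) for linear queries.
    data_hist: counts over a finite data universe.
    queries:   vectors q in [0,1]^|universe|; the true answer to q is
               the inner product of q with the normalized histogram."""
    rng = np.random.default_rng(seed)
    n = data_hist.sum()
    x = data_hist / n                       # true (sensitive) distribution
    synth = np.full_like(x, 1.0 / x.size)   # public synthetic distribution
    answers = []
    for q in queries:
        noisy = q @ x + rng.laplace(scale=1.0 / (eps_round * n))
        est = q @ synth
        if abs(noisy - est) <= threshold:
            answers.append(est)             # "lazy" round: synthetic data suffices
        else:
            answers.append(noisy)           # "update" round: correct the synthetic data
            sign = 1.0 if noisy > est else -1.0
            synth = synth * np.exp(eta * sign * q)  # multiplicative weights step
            synth /= synth.sum()            # renormalize to a distribution
    return answers
```

The key point is that "lazy" rounds are answered from the public synthetic distribution and cost essentially no extra privacy, which is what lets the mechanism survive exponentially many distinct queries.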
Low-rank Matrix Completion using Alternating Minimization
Alternating minimization represents a widely applicable and empirically
successful approach for finding low-rank matrices that best fit the given data.
For example, for the problem of low-rank matrix completion, this method is
believed to be one of the most accurate and efficient, and formed a major
component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the low-rank target matrix is
written in a bi-linear form, i.e. $X = UV^\top$; the algorithm then alternates
between finding the best $U$ and the best $V$. Typically, each alternating step
in isolation is convex and tractable. However, the overall problem becomes
non-convex and there has been almost no theoretical understanding of when this
approach yields a good result.
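A minimal alternating least squares sketch of the scheme just described, applied to matrix completion; the rank, regularization, and iteration count are illustrative choices. Each inner solve is exactly the convex, tractable step the abstract mentions:

```python
import numpy as np

def als_matrix_completion(M, mask, k, n_iters=50, reg=1e-3, seed=0):
    """Alternating least squares for low-rank matrix completion.
    M:    m x n matrix with observed values where mask is True.
    mask: boolean m x n array of observed entries.
    Returns U (m x k), V (n x k) with M approximated by U @ V.T."""
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    for _ in range(n_iters):
        # Fix V: each row of U solves a small ridge regression.
        for i in range(m):
            obs = mask[i]                  # columns observed in row i
            A = V[obs]                     # (n_obs, k)
            U[i] = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ M[i, obs])
        # Fix U: solve for each row of V symmetrically.
        for j in range(n):
            obs = mask[:, j]               # rows observed in column j
            A = U[obs]
            V[j] = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ M[obs, j])
    return U, V
```

Holding $V$ fixed makes each row of $U$ the solution of an ordinary ridge regression, which is why the individual steps are easy even though the joint problem is non-convex.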
In this paper we present the first theoretical analysis of the performance of
alternating minimization for matrix completion, and the related problem of
matrix sensing. For both these problems, celebrated recent results have shown
that they become well-posed and tractable once certain (now standard)
conditions are imposed on the problem. We show that alternating minimization
also succeeds under similar conditions. Moreover, compared to existing results,
our paper shows that alternating minimization guarantees faster (in particular,
geometric) convergence to the true matrix, while allowing a simpler analysis
…
