Scaling Up Differentially Private LASSO Regularized Logistic Regression via Faster Frank-Wolfe Iterations
To the best of our knowledge, there are no methods today for training
differentially private regression models on sparse input data. To remedy this,
we adapt the Frank-Wolfe algorithm for $L_1$-penalized linear regression to be
aware of sparse inputs and to use them effectively. In doing so, we reduce the
training time of the algorithm from $\mathcal{O}(TDS + TNS)$ to
$\mathcal{O}(NS + T\sqrt{D}\log{D} + TS^2)$, where $T$ is the number of
iterations and $S$ is the sparsity rate of a dataset with $N$ rows and $D$ features.
Our results demonstrate that this procedure can reduce runtime by a factor of
up to $2{,}200\times$, depending on the value of the privacy parameter $\epsilon$
and the sparsity of the dataset.
Comment: To appear in the 37th Conference on Neural Information Processing
Systems (NeurIPS 2023).
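As a rough illustration of why Frank-Wolfe pairs naturally with sparse LASSO-style problems, the sketch below runs a plain, non-private Frank-Wolfe loop for logistic loss over an $L_1$ ball with a SciPy sparse design matrix: the linear subproblem over the $L_1$ ball is solved by a single coordinate, so each update touches only one feature column. This is an assumed, simplified rendering (the function name `frank_wolfe_logreg_l1` and all details are mine); it omits the paper's differential-privacy mechanism and its sparsity-aware bookkeeping, and is not the authors' implementation.

```python
import numpy as np
from scipy import sparse

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def frank_wolfe_logreg_l1(X, y, lam=1.0, T=100):
    """Non-private Frank-Wolfe for logistic loss over the L1 ball of radius lam.

    X: scipy.sparse matrix of shape (n, d), ideally CSC; y: 0/1 labels.
    The linear subproblem over the L1 ball picks the single coordinate with
    the largest gradient magnitude, so w gains at most one nonzero per step.
    """
    n, d = X.shape
    w = np.zeros(d)
    Xw = np.zeros(n)                              # maintained incrementally
    for t in range(T):
        grad = X.T @ (sigmoid(Xw) - y) / n        # sparse mat-vec
        j = int(np.argmax(np.abs(grad)))          # LMO: best L1-ball vertex
        s_j = -lam * np.sign(grad[j])             # vertex is +/- lam * e_j
        gamma = 2.0 / (t + 2.0)                   # classic step-size rule
        col_j = X[:, [j]].toarray().ravel()       # only column j is needed
        Xw = (1.0 - gamma) * Xw + gamma * s_j * col_j
        w *= (1.0 - gamma)
        w[j] += gamma * s_j
    return w

# Toy usage on a random sparse dataset.
rng = np.random.default_rng(0)
X = sparse.random(1000, 200, density=0.01, format="csc", random_state=0)
y = (rng.random(1000) < 0.5).astype(float)
w = frank_wolfe_logreg_l1(X, y, lam=5.0, T=50)
```

Because every vertex of the $L_1$ ball is a signed scaled basis vector, the iterate stays sparse and each step only reads one column of the sparse matrix, which is the structural property the paper exploits.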
Conditional Gradient Methods
The purpose of this survey is to serve both as a gentle introduction and a
coherent overview of state-of-the-art Frank--Wolfe algorithms, also called
conditional gradient algorithms, for function minimization. These algorithms
are especially useful in convex optimization when linear optimization is
cheaper than projections.
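To make the "linear optimization cheaper than projections" point concrete, here is a minimal, assumed Python sketch (not code from the survey or from the FrankWolfe.jl package; the names `frank_wolfe` and `lmo` are mine) of the generic conditional gradient loop: the only access to the feasible set is a linear minimization oracle, and no projection step ever appears.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, T=200):
    """Generic conditional gradient loop.

    grad(x): gradient of the smooth convex objective at x.
    lmo(g):  argmin over the feasible set C of <g, s> -- a linear
             optimization oracle; no projection onto C is computed.
    """
    x = x0.copy()
    for t in range(T):
        s = lmo(grad(x))                  # linear minimization, often cheap
        gamma = 2.0 / (t + 2.0)           # classic O(1/t) step-size rule
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Example: minimize ||Ax - b||^2 over the probability simplex, whose LMO is
# simply "pick the vertex with the most negative gradient coordinate".
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
grad = lambda x: 2 * A.T @ (A @ x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = frank_wolfe(grad, lmo, x0=np.full(10, 0.1), T=500)
```

The loop stays inside the feasible set by construction, which is why projection-free methods of this kind are attractive whenever the linear oracle (here a single argmin) is much cheaper than projecting onto the set.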
The selection of the material has been guided by the principle of
highlighting crucial ideas as well as presenting new approaches that we believe
might become important in the future, with ample citations even of older works
that were imperative in the development of newer methods. Yet, our selection is
sometimes biased and need not reflect the consensus of the research community,
and we have certainly missed recent important contributions. After all, the
research area of
Frank--Wolfe is very active, making it a moving target. We apologize sincerely
in advance for any such distortions and we fully acknowledge: We stand on the
shoulders of giants.
Comment: 238 pages with many figures. The FrankWolfe.jl Julia package
(https://github.com/ZIB-IOL/FrankWolfe.jl) provides state-of-the-art
implementations of many Frank--Wolfe methods.