Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
This paper studies the long-existing idea of adding a nice smooth function to
"smooth" a non-differentiable objective function in the context of sparse
optimization, in particular, the minimization of
||x||_1 + 1/(2*alpha)*||x||_2^2, where x is a vector, as well as the
minimization of ||X||_* + 1/(2*alpha)*||X||_F^2, where X is a matrix and
||X||_* and ||X||_F are the nuclear and Frobenius norms of X,
respectively. We show that they can efficiently recover sparse vectors and
low-rank matrices. In particular, they enjoy exact and stable recovery
guarantees similar to those known for minimizing ||x||_1 and ||X||_* under
conditions on the sensing operator such as its null-space property,
restricted isometry property, spherical section property, or RIPless property.
To recover a (nearly) sparse vector x^0, minimizing
||x||_1 + 1/(2*alpha)*||x||_2^2 returns (nearly) the same solution as
minimizing ||x||_1 almost whenever alpha >= 10*||x^0||_inf. The same relation
also holds between minimizing ||X||_* + 1/(2*alpha)*||X||_F^2 and minimizing
||X||_* for recovering a (nearly) low-rank matrix X^0, if
alpha >= 10*||X^0||_2. Furthermore, we show that the linearized Bregman
algorithm for minimizing ||x||_1 + 1/(2*alpha)*||x||_2^2 subject to Ax = b
enjoys global linear convergence as long as a nonzero solution exists, and we
give an explicit rate of convergence. The convergence property does not
require a sparse solution or any properties of A. To our knowledge, this is
the best known global convergence result for first-order sparse optimization
algorithms.
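The linearized Bregman iteration referenced above is simple enough to state in a
few lines. Below is a minimal NumPy sketch under stated assumptions: the
stepsize tau = 1/(alpha*||A||_2^2) is one conservative choice inside the usual
convergence range, not the paper's tuned value, and the toy problem sizes are
arbitrary.

```python
import numpy as np

def shrink(v, t=1.0):
    """Soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, n_iter=5000):
    """Linearized Bregman for min ||x||_1 + 1/(2*alpha)*||x||_2^2 s.t. Ax = b."""
    n = A.shape[1]
    tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)  # conservative stepsize (assumption)
    v, x = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        v += tau * (A.T @ (b - A @ x))  # dual gradient step on the residual
        x = alpha * shrink(v, 1.0)      # soft-threshold, then scale by alpha
    return x

# Toy recovery of a 10-sparse vector in R^400 from 100 Gaussian measurements,
# with alpha = 10*||x0||_inf matching the paper's exact-recovery threshold.
rng = np.random.default_rng(0)
x0 = np.zeros(400)
x0[rng.choice(400, 10, replace=False)] = rng.standard_normal(10)
A = rng.standard_normal((100, 400)) / np.sqrt(100)
x_rec = linearized_bregman(A, A @ x0, alpha=10 * np.abs(x0).max())
print(np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))  # small when recovery succeeds
```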
Almost-Euclidean subspaces of l_1^N via tensor products: a simple approach to randomness reduction
It has been known since the 1970s that the N-dimensional l_1-space contains
nearly Euclidean subspaces whose dimension is Omega(N). However, proofs of
existence of such subspaces were probabilistic, hence non-constructive, which
made the results not quite suitable for subsequently discovered applications to
high-dimensional nearest neighbor search, error-correcting codes over the
reals, compressive sensing, and other computational problems. In this paper we
present a "low-tech" scheme which, for any eps > 0, allows one to exhibit
nearly Euclidean Omega(N)-dimensional subspaces of l_1^N while using only
N^eps random bits. Our results extend and complement (particularly) recent work
by Guruswami-Lee-Wigderson. Characteristic features of our approach include (1)
simplicity (we use only tensor products) and (2) yielding "almost Euclidean"
subspaces with arbitrarily small distortions.
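The tensor-product construction lends itself to a quick numerical illustration.
In the sketch below (sizes, seed, and the empirical distortion proxy are all
illustrative assumptions), random bits are spent only on a small factor G,
while the Kronecker product of G with itself spans a k^2-dimensional subspace
of l_1^{n^2}; sampling random points in each subspace gives a crude estimate of
how the l_1-vs-l_2 distortion degrades under tensoring.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_distortion(B, trials=2000):
    """Crude proxy for distortion: spread of ||x||_1 / (sqrt(N)*||x||_2)
    over random points x in the row span of B (a true bound needs a sup/inf)."""
    N = B.shape[1]
    ratios = []
    for _ in range(trials):
        x = rng.standard_normal(B.shape[0]) @ B  # random vector in the subspace
        ratios.append(np.linalg.norm(x, 1) / (np.sqrt(N) * np.linalg.norm(x, 2)))
    return max(ratios) / min(ratios)

n, k = 32, 8
G = rng.standard_normal((k, n))  # the only place random bits are used
T = np.kron(G, G)                # rows g_i (x) g_j span a k^2-dim subspace of l_1^{n^2}
print(empirical_distortion(G))   # distortion of the small seed subspace
print(empirical_distortion(T))   # roughly squares, rather than blowing up
```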
On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation
We study classic streaming and sparse recovery problems using deterministic
linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the
latter also being known as l1-heavy hitters), norm estimation, and approximate
inner product. We focus on devising a fixed matrix A in R^{m x n} and a
deterministic recovery/estimation procedure which work for all possible input
vectors simultaneously. Our results improve upon existing work; our main
contributions are the following:
* A proof that linf/l1 sparse recovery and inner product estimation are
equivalent, and that incoherent matrices can be used to solve both problems.
Our upper bound for the number of measurements is m=O(eps^{-2}*min{log n, (log
n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms
by making use of the Fast Johnson-Lindenstrauss transform. Both our running
times and number of measurements improve upon previous work. We can also obtain
better error guarantees than previous work in terms of a smaller tail of the
input vector.
* A new lower bound for the number of linear measurements required to solve
l1/l1 sparse recovery. We show Omega(k/eps^2 + k*log(n/k)/eps) measurements are
required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where
x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.
* A tight bound of m = Theta(eps^{-2}log(eps^2 n)) on the number of
measurements required to solve deterministic norm estimation, i.e., to recover
|x|_2 +/- eps|x|_1.
For all the problems we study, tight bounds are already known for the
randomized complexity from previous work, except in the case of l1/l1 sparse
recovery, where a nearly tight bound is known. Our work thus aims to study the
deterministic complexities of these problems.
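As a concrete companion to the first contribution, the sketch below illustrates
the incoherent-matrix point-query mechanism in Python. A caveat: the paper
concerns deterministic matrices (e.g., constructions from codes); here a
random-sign matrix, which is incoherent with high probability at the same
m = O(eps^{-2} log n) scale, stands in for illustration, and the constant
factor 4 is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

n, eps = 2000, 0.2
m = int(np.ceil(4 * np.log(n) / eps**2))  # m = O(eps^{-2} log n) rows; constant assumed
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # unit columns, incoherent w.h.p.

x = np.zeros(n)  # signal with 20 heavy coordinates
x[rng.choice(n, 20, replace=False)] = 10 * rng.standard_normal(20)

y = A @ x        # the (linear, non-adaptive) sketch
x_hat = A.T @ y  # point queries: x_hat[i] = <a_i, y> estimates x[i]

print(np.max(np.abs(x_hat - x)))   # linf error of the estimates ...
print(eps * np.linalg.norm(x, 1))  # ... should sit below eps*||x||_1
```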
Precision Tests of the Standard Model
30 pages, 11 figures, 11 tables. Contribution presented at the 25th Winter
Meeting on Fundamental Physics, held March 3-8, 1997, in Formigal (Spain).
Precision measurements of electroweak observables provide stringent tests of
the Standard Model structure and an accurate determination of its parameters.
An overview of the present experimental status is presented. This work has
been supported in part by CICYT (Spain) under grant No. AEN-96-1718. Peer
reviewed.
Strategic Learning for Active, Adaptive, and Autonomous Cyber Defense
The increasing instances of advanced attacks call for a new defense paradigm
that is active, autonomous, and adaptive, termed the '3A' defense paradigm.
This chapter introduces three defense schemes that actively interact with
attackers to increase the attack cost and gather threat information, i.e.,
defensive deception for detection and counter-deception, feedback-driven Moving
Target Defense (MTD), and adaptive honeypot engagement. Due to cyber deception,
external noise, and the absence of knowledge about the other players' behaviors
and goals, these schemes face three progressively stronger levels of
information restriction, i.e., from parameter uncertainty and payoff
uncertainty to environmental uncertainty. To estimate the unknowns and reduce
uncertainty, we adopt three different strategic learning schemes that fit the
associated information restrictions. All three learning schemes share the same
feedback structure of sensation, estimation, and action, so that the most
rewarding policies are reinforced and converge to the optimal ones in an
autonomous and adaptive fashion. This work aims to shed light on proactive
defense strategies, lay a solid foundation for strategic learning under
incomplete information, and quantify the trade-off between security and cost.
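The sensation-estimation-action feedback loop described above can be made
concrete with a tabular Q-learning sketch. Everything in the toy model below
(states as attacker stages, actions as engagement levels, the reward and
transition rules) is invented purely for illustration and is not the chapter's
actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical honeypot-engagement model: states = attacker stages, actions =
# engagement levels; reward trades threat intelligence against engagement cost.
n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
lr, gamma, explore = 0.1, 0.9, 0.1

def step(s, a):
    """Invented dynamics: deeper engagement gathers more intel but costs more."""
    reward = 2.0 * a * (s + 1) - 1.5 * a**2  # intel gain minus engagement cost
    s_next = min(n_states - 1, s + int(rng.random() < 0.5 + 0.1 * a))
    return s_next, reward

s = 0
for _ in range(20_000):
    a = int(rng.integers(n_actions)) if rng.random() < explore else int(np.argmax(Q[s]))
    s_next, r = step(s, a)                                    # sensation: observe feedback
    Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])   # estimation: update value
    s = 0 if s_next == n_states - 1 else s_next               # act again; restart at final stage

print(np.argmax(Q, axis=1))  # learned engagement level per attacker stage
```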