Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations
We study algorithms for online linear optimization in Hilbert spaces,
focusing on the case where the player is unconstrained. We develop a novel
characterization of a large class of minimax algorithms, recovering, and even
improving, several previous results as immediate corollaries. Moreover, using
our tools, we develop an algorithm that provides a regret bound of
$O\big(U \sqrt{T \log(U \sqrt{T} \log^2 T + 1)}\big)$, where $U$ is
the norm of an arbitrary comparator and both $U$ and $T$ are unknown to
the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is
known, we derive an algorithm with an optimal regret bound (up to constant
factors). For both the known and unknown $T$ case, a Normal approximation to
the conditional value of the game proves to be the key analysis tool.
Comment: Proceedings of the 27th Annual Conference on Learning Theory (COLT 2014)
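To make the unconstrained setting concrete, the following is a minimal sketch (assuming NumPy) of the online linear game and its regret against a fixed comparator. It uses plain online gradient descent as the player, not the minimax algorithm of the abstract; the function names are illustrative only.

```python
import numpy as np

# Hedged sketch: a plain unconstrained online gradient step on linear losses
# <g_t, w_t>, NOT the minimax algorithm from the abstract.  It illustrates the
# setting: the player picks w_t, then sees g_t, and regret is measured against
# an arbitrary comparator u whose norm is unknown to the player in advance.

def online_linear_game(grads, lr=0.1):
    """Run unconstrained online gradient descent; return the iterates."""
    d = grads.shape[1]
    w = np.zeros(d)
    iterates = []
    for g in grads:
        iterates.append(w.copy())
        w = w - lr * g            # unconstrained: no projection step
    return np.array(iterates)

def regret(grads, iterates, u):
    """Regret of the iterates against a fixed comparator u."""
    player_loss = np.sum(iterates * grads)    # sum_t <g_t, w_t>
    comparator_loss = grads.sum(axis=0) @ u   # sum_t <g_t, u>
    return player_loss - comparator_loss

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 3))
W = online_linear_game(G)
u = -0.1 * G.sum(axis=0) / np.linalg.norm(G.sum(axis=0))  # some comparator
print(regret(G, W, u))
```

The point of the paper's algorithm is to achieve regret close to the minimax optimum simultaneously for every comparator norm $U$, which a fixed learning rate cannot do.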
An Algorithm for Global Maximization of Secrecy Rates in Gaussian MIMO Wiretap Channels
Optimal signaling for secrecy rate maximization in Gaussian MIMO wiretap
channels is considered. While this channel has attracted significant
attention recently and a number of results have been obtained, including the
proof of the optimality of Gaussian signaling, an optimal transmit covariance
matrix is known for some special cases only and the general case remains an
open problem. An iterative custom-made algorithm to find a globally-optimal
transmit covariance matrix in the general case is developed in this paper, with
guaranteed convergence to a \textit{global} optimum. While the original
optimization problem is not convex and hence difficult to solve, its minimax
reformulation can be solved via convex optimization tools, which is
exploited here. The proposed algorithm is based on the barrier method extended
to deal with the minimax problem at hand. Its convergence to a global optimum is
proved for the general case (degraded or not) and a bound for the optimality
gap is given for each step of the barrier method. The performance of the
algorithm is demonstrated via numerical examples. In particular, 20 to 40
Newton steps are already sufficient to solve the sufficient optimality
conditions with very high precision (up to the machine precision level), even
for large systems. Even fewer steps are required if the secrecy capacity is the
only quantity of interest. The algorithm can be significantly simplified for
the degraded channel case and can also be adapted to include per-antenna
power constraints (instead of or in addition to the total power constraint). It
also solves the dual problem of minimizing the total power subject to the
secrecy rate constraint.
Comment: accepted by IEEE Transactions on Communications
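For intuition about the objective being maximized, here is a sketch of the scalar (SISO) degraded special case, where the secrecy capacity has a known closed form; the general MIMO covariance optimization treated by the paper's barrier algorithm has no such formula. The function name is an assumption for illustration.

```python
import math

# Hedged sketch: the scalar degraded Gaussian wiretap channel, where the
# secrecy capacity under Gaussian signaling is known in closed form,
#     Cs = max(0, log2(1 + P*h) - log2(1 + P*g)),
# for transmit power P, legitimate-channel gain h, and eavesdropper gain g.
# This is background intuition, not the paper's MIMO algorithm.

def secrecy_rate(P, h, g):
    """Secrecy rate in bits/channel use for power P and channel gains h, g."""
    return max(0.0, math.log2(1.0 + P * h) - math.log2(1.0 + P * g))

# In the degraded case (h > g) the rate increases with P, so full power is
# optimal; when h <= g the secrecy capacity is zero.
print(secrecy_rate(10.0, 1.0, 0.25))  # positive: legitimate channel stronger
print(secrecy_rate(10.0, 0.25, 1.0))  # 0.0: eavesdropper channel stronger
```

In the MIMO case the scalar power $P$ becomes a transmit covariance matrix, the objective is a difference of log-determinants, and the problem is non-convex in general, which is why the paper works with the convex minimax reformulation instead.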
Relax and Localize: From Value to Algorithms
We show a principled way of deriving online learning algorithms from a
minimax analysis. Various upper bounds on the minimax value, previously thought
to be non-constructive, are shown to yield algorithms. This allows us to
seamlessly recover known methods and to derive new ones. Our framework also
captures such "unorthodox" methods as Follow the Perturbed Leader and the R^2
forecaster. We emphasize that understanding the inherent complexity of the
learning problem leads to the development of algorithms.
We define local sequential Rademacher complexities and associated algorithms
that allow us to obtain faster rates in online learning, similarly to
statistical learning theory. Based on these localized complexities we build a
general adaptive method that can take advantage of the suboptimality of the
observed sequence.
We present a number of new algorithms, including a family of randomized
methods that use the idea of a "random playout". Several new versions of the
Follow-the-Perturbed-Leader algorithms are presented, as well as methods based
on Littlestone's dimension, efficient methods for matrix completion with the
trace norm, and algorithms for the problems of transductive learning and
prediction with static experts.
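As a concrete instance of the "random playout" flavor of method mentioned above, here is a textbook Follow-the-Perturbed-Leader forecaster for the experts setting (assuming NumPy). It is the standard exponential-perturbation variant, not a specific algorithm derived in the paper.

```python
import numpy as np

# Hedged sketch: basic Follow-the-Perturbed-Leader (FPL) for K experts over
# T rounds, the textbook variant with i.i.d. exponential perturbations.
# Each round the forecaster follows the expert whose perturbed cumulative
# loss is smallest, then observes the full loss vector.

def fpl(losses, eta=1.0, seed=0):
    """Play FPL on a (T, K) loss matrix; return total loss and picks."""
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    cum = np.zeros(K)                      # cumulative loss of each expert
    total, picks = 0.0, []
    for t in range(T):
        perturb = rng.exponential(scale=1.0 / eta, size=K)
        i = int(np.argmin(cum - perturb))  # follow the perturbed leader
        picks.append(i)
        total += losses[t, i]
        cum += losses[t]
    return total, picks

rng = np.random.default_rng(1)
L = rng.uniform(size=(500, 4))
L[:, 2] *= 0.5                             # expert 2 is best on average
total, picks = fpl(L, eta=2.0)
best = L.sum(axis=0).min()
print(total - best)                        # regret against the best expert
```

The randomization plays the same role as the random playout in the relaxation framework: the perturbation is a tractable stand-in for the future of the sequence.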
Complete solution of a constrained tropical optimization problem with application to location analysis
We present a multidimensional optimization problem that is formulated and
solved in the tropical mathematics setting. The problem consists of minimizing
a nonlinear objective function defined on vectors over an idempotent semifield
by means of a conjugate transposition operator, subject to constraints in the
form of linear vector inequalities. A complete direct solution to the problem
under fairly general assumptions is given in a compact vector form suitable for
both further analysis and practical implementation. We apply the result to
solve a multidimensional minimax single facility location problem with
Chebyshev distance and with inequality constraints imposed on the feasible
location area.
Comment: 20 pages, 3 figures
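For reference, the unconstrained version of this location problem has a simple closed form: under the Chebyshev ($L_\infty$) distance the objective decouples over coordinates, so the per-coordinate midpoint is optimal. The sketch below (assuming NumPy; function name illustrative) shows only this classical special case; the paper's contribution is the constrained problem solved in the tropical algebra setting.

```python
import numpy as np

# Hedged sketch: the UNCONSTRAINED minimax single-facility location problem
# with Chebyshev distance.  The objective
#     f(x) = max_i ||x - p_i||_inf = max_k max_i |x_k - p_ik|
# decouples over coordinates k, so the midpoint of each coordinate range is
# optimal and the optimal value is half the widest coordinate range.

def chebyshev_minimax_location(points):
    """Closed-form unconstrained Chebyshev 1-center of a point set."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    x = (lo + hi) / 2.0                  # per-coordinate midpoint
    value = ((hi - lo) / 2.0).max()      # half the widest coordinate range
    return x, value

pts = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 3.0]])
x, v = chebyshev_minimax_location(pts)
print(x, v)  # minimizer and optimal value
```

Adding linear vector inequality constraints on the feasible location area breaks this coordinate-wise argument, which is where the tropical (idempotent semifield) machinery of the paper comes in.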