An Algorithmic Theory of Dependent Regularizers, Part 1: Submodular Structure
We present an exploration of the rich theoretical connections between several
classes of regularized models, network flows, and recent results in submodular
function theory. This work unifies key aspects of these problems under a common
theory, leading to novel methods for working with several important models of
interest in statistics, machine learning and computer vision.
In Part 1, we review the concepts of network flows and submodular function
optimization theory foundational to our results. We then examine the
connections between network flows and the minimum-norm algorithm from
submodular optimization, extending and improving several current results. This
leads to a concise representation of the structure of a large class of pairwise
regularized models important in machine learning, statistics and computer
vision.
In Part 2, we describe the full regularization path of a class of penalized
regression problems with dependent variables that includes the graph-guided
LASSO and total variation constrained models. This description also motivates a
practical algorithm that efficiently finds the regularization path of the
discretized version of TV-penalized models. Ultimately, our new algorithms
scale to high-dimensional problems with millions of variables.
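The connection reviewed in Part 1 can be made concrete on a toy instance: the cut function of a weighted undirected graph is a canonical submodular function, and minimizing it recovers a minimum cut. The sketch below (graph and weights invented for illustration) brute-forces the minimizer, which is exactly the enumeration that flow-based and minimum-norm algorithms are designed to avoid.

```python
from itertools import combinations

# Toy undirected graph (invented for illustration): edge -> weight.
edges = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 4.0, (0, 2): 2.0}
nodes = {0, 1, 2, 3}

def cut(S):
    """Cut function: total weight of edges with exactly one endpoint in S.
    This set function is submodular."""
    S = set(S)
    return sum(w for (u, v), w in edges.items() if (u in S) != (v in S))

# Brute-force minimization over nonempty proper subsets -- feasible only
# for tiny graphs; the algorithms discussed above scale far beyond this.
best = min(
    (frozenset(S) for r in range(1, len(nodes))
     for S in combinations(sorted(nodes), r)),
    key=cut,
)
print(sorted(best), cut(best))
```

On this graph the minimum cut separates {0, 1} from {2, 3} with weight 3, matching the min-cut/submodular-minimization correspondence.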
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed today do not leverage the
true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
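The discrete group sparse model mentioned above admits an exact projection when the groups do not overlap: keep the groups with the largest Euclidean energy and zero out the rest. A minimal sketch (signal and groups invented for illustration):

```python
import math

# Hypothetical signal and non-overlapping groups (invented for illustration).
x = [0.1, 0.2, 3.0, 2.5, 0.05, 4.0]
groups = [[0, 1], [2, 3], [4, 5]]

def project_group_sparse(x, groups, k):
    """Discrete group-sparse projection: keep the k groups with the largest
    Euclidean energy and zero out the rest. Exact for non-overlapping groups."""
    energy = [(math.sqrt(sum(x[i] ** 2 for i in g)), g) for g in groups]
    keep = sorted(energy, key=lambda t: -t[0])[:k]
    kept = {i for _, g in keep for i in g}
    return [xi if i in kept else 0.0 for i, xi in enumerate(x)]

print(project_group_sparse(x, groups, k=1))
```

With k=1 the last group survives, since its energy narrowly exceeds that of the middle group; convex relaxations (group norms) replace this hard selection with a soft, tractable penalty.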
Precoder Design for Physical Layer Multicasting
This paper studies the instantaneous rate maximization and the weighted sum
delay minimization problems over a K-user multicast channel, where multiple
antennas are available at the transmitter as well as at all the receivers.
Motivated by the degree of freedom optimality and the simplicity offered by
linear precoding schemes, we consider the design of linear precoders using the
aforementioned two criteria. We first consider the scenario wherein the linear
precoder can be any complex-valued matrix subject to rank and power
constraints. We propose cyclic alternating ascent based precoder design
algorithms and establish their convergence to respective stationary points.
Simulation results reveal that our proposed algorithms considerably outperform
known competing solutions. We then consider a scenario in which the linear
precoder can be formed by selecting and concatenating precoders from a given
finite codebook of precoding matrices, subject to rank and power constraints.
We show that under this scenario, the instantaneous rate maximization problem
is equivalent to a robust submodular maximization problem that is strongly
NP-hard. We propose a deterministic approximation algorithm and show that it
yields a bicriteria approximation. For the weighted sum delay minimization
problem we propose a simple deterministic greedy algorithm, which at each step
entails approximately maximizing a submodular set function subject to multiple
knapsack constraints, and establish its performance guarantee.
Comment: 37 pages, 8 figures, submitted to IEEE Trans. Signal Processing
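The greedy step described above — approximately maximizing a monotone submodular function under a knapsack (budget) constraint — can be sketched with a toy coverage objective. The sets, costs, and budget below are invented for illustration, and the simple cost-benefit rule shown is a standard building block; variants of it (e.g., combined with partial enumeration) carry constant-factor guarantees.

```python
# Toy coverage instance (invented for illustration): pick items to maximize
# the number of covered elements subject to a total-cost budget.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 4, 5, 6}}
cost = {"a": 2.0, "b": 1.0, "c": 1.0, "d": 3.0}

def greedy_knapsack(budget):
    """Cost-benefit greedy: repeatedly add the affordable item with the best
    marginal-coverage-per-cost ratio. Coverage is monotone submodular."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for item, elems in sets.items():
            if item in chosen or spent + cost[item] > budget:
                continue
            gain = len(elems - covered)
            ratio = gain / cost[item]
            if ratio > best_ratio:
                best, best_ratio = item, ratio
        if best is None:
            return chosen, covered
        chosen.append(best)
        covered |= sets[best]
        spent += cost[best]

print(greedy_knapsack(budget=4.0))
```

Each iteration costs one pass over the ground set, so the whole loop runs in time quadratic in the number of items — cheap compared with the exponential search the NP-hardness result rules out.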