Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
Regularization of ill-posed linear inverse problems via ℓ1 penalization
has been proposed for cases where the solution is known to be (almost) sparse.
One way to obtain the minimizer of such an ℓ1-penalized functional is via
an iterative soft-thresholding algorithm. We propose an alternative
implementation with ℓ1-constraints, using a gradient method with
projection onto ℓ1-balls. The corresponding algorithm again uses iterative
soft-thresholding, now with a variable thresholding parameter. We also propose
accelerated versions of this iterative method, using ingredients of the
(linear) steepest descent method. We prove convergence in norm for one of these
projected gradient methods, both without and with acceleration.
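The iteration described above, gradient steps combined with projection onto an ℓ1-ball, where the projection itself is a soft-thresholding with a data-dependent threshold, can be illustrated as follows. This is a minimal sketch under generic assumptions (an arbitrary matrix A, a fixed step size, a sorting-based projection), not the authors' accelerated implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise shrinkage: sign(x) * max(|x| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def project_l1_ball(x, radius=1.0):
    # Projection onto the l1-ball of given radius is soft-thresholding
    # with a data-dependent threshold, found here by sorting |x|.
    if np.sum(np.abs(x)) <= radius:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]          # |x| sorted in decreasing order
    css = np.cumsum(u)
    # Largest index k where the running threshold is still below u[k].
    k = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    tau = (css[k] - radius) / (k + 1)
    return soft_threshold(x, tau)

def projected_gradient(A, y, radius, step, n_iter=500):
    # x_{n+1} = P_radius( x_n + step * A^T (y - A x_n) ),
    # i.e. a Landweber-type gradient step followed by l1-ball projection.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x + step * A.T @ (y - A @ x), radius)
    return x
```

With `A` the identity, the iteration reduces to the projection itself, which gives an easy sanity check on the threshold computation.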
A Survey of Symbolic Execution Techniques
Many security and software testing applications require checking whether
certain properties of a program hold for any possible usage scenario. For
instance, a tool for identifying software vulnerabilities may need to rule out
the existence of any backdoor to bypass a program's authentication. One
approach would be to test the program using different, possibly random inputs.
As the backdoor may only be hit for very specific program workloads, automated
exploration of the space of possible inputs is of the essence. Symbolic
execution provides an elegant solution to the problem, by systematically
exploring many possible execution paths at the same time without necessarily
requiring concrete inputs. Rather than taking on fully specified input values,
the technique abstractly represents them as symbols, resorting to constraint
solvers to construct actual instances that would cause property violations.
Symbolic execution has been incubated in dozens of tools developed over the
last four decades, leading to major practical breakthroughs in a number of
prominent software reliability applications. The goal of this survey is to
provide an overview of the main ideas, challenges, and solutions developed in
the area, distilling them for a broad audience.
The present survey has been accepted for publication at ACM Computing
Surveys. If you are considering citing this survey, we would appreciate if you
could use the following BibTeX entry: http://goo.gl/Hf5Fvc
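The core idea can be caricatured in a few lines of Python: inputs stay symbolic, each branch forks the execution and records a path constraint, and a solver turns a path constraint into a concrete witness input. The toy program, the lambda-encoded constraints, and the brute-force "solver" below are all illustrative stand-ins; a real engine would build symbolic formulas automatically and discharge them to an SMT solver such as Z3:

```python
def check_login(x):
    # Program under test: an intended check plus a hidden backdoor.
    if x == 42:          # backdoor
        return "admin"
    if x > 1000:
        return "user"
    return "denied"

def symbolic_paths():
    # Enumerate the program's paths as (path-constraint, outcome) pairs.
    # In a real engine these constraints are symbolic formulas collected
    # while forking at each branch; lambdas stand in for them here.
    return [
        (lambda x: x == 42, "admin"),
        (lambda x: x != 42 and x > 1000, "user"),
        (lambda x: x != 42 and x <= 1000, "denied"),
    ]

def solve(constraint, domain=range(-2000, 2001)):
    # Stand-in constraint solver: find any concrete witness in a
    # finite domain (an SMT solver would do this symbolically).
    for x in domain:
        if constraint(x):
            return x
    return None

# One concrete input per path, proving each path (backdoor included) reachable.
witnesses = {outcome: solve(c) for c, outcome in symbolic_paths()}
```

Running the program on each witness confirms that the path enumeration, not random testing, is what exposes the narrow `x == 42` backdoor.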
Source Coding Optimization for Distributed Average Consensus
Consensus is a common method for computing a function of the data distributed
among the nodes of a network. Of particular interest is distributed average
consensus, whereby the nodes iteratively compute the sample average of the data
stored at all the nodes of the network using only near-neighbor communications.
In real-world scenarios, these communications must undergo quantization, which
introduces distortion to the internode messages. In this thesis, a model for
the evolution of the network state statistics at each iteration is developed
under the assumptions of Gaussian data and additive quantization error. It is
shown that minimization of the communication load in terms of aggregate source
coding rate can be posed as a generalized geometric program, for which an
equivalent convex optimization can efficiently solve for the global minimum.
Optimization procedures are developed for rate-distortion-optimal vector
quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform
quantization. Numerical results demonstrate the performance of these
approaches. For small numbers of iterations, the fixed-rate optimizations are
verified using exhaustive search. Comparison to the prior art suggests
competitive performance under certain circumstances but strongly motivates the
incorporation of more sophisticated coding strategies, such as differential,
predictive, or Wyner-Ziv coding. (Master's Thesis, Electrical Engineering,
North Carolina State University)
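A minimal simulation of the setting can be sketched as follows, assuming a ring network, a fixed-rate uniform quantizer, and equal mixing weights; these are all illustrative choices, not the rate-optimized designs developed in the thesis:

```python
import numpy as np

def ring_weights(n):
    # Doubly stochastic averaging matrix for a ring: each node mixes
    # equally with itself and its two neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def quantize(x, step):
    # Uniform quantizer: round to the nearest multiple of `step`.
    return step * np.round(x / step)

def quantized_consensus(x0, step, n_iter=300):
    # Each node transmits a quantized copy of its state to its neighbors,
    # so additive quantization error perturbs every averaging iteration.
    W = ring_weights(len(x0))
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = W @ quantize(x, step)
    return x
```

Coarsening `step` trades communication rate for distortion in the final agreement value, which is exactly the trade-off the thesis optimizes.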
Inference of Probabilistic Programs with Moment-Matching Gaussian Mixtures
Computing the posterior distribution of a probabilistic program is a hard
task for which no one-size-fits-all solution exists. We propose Gaussian
Semantics, which approximates the exact probabilistic semantics of a bounded
program by means of Gaussian mixtures. It is parametrized by a map that
associates each program location with the moment order to be matched in the
approximation. We provide two main contributions. The first is a universal
approximation theorem stating that, under mild conditions, Gaussian Semantics
can approximate the exact semantics arbitrarily closely. The second is an
approximation that analytically matches moments up to second order, in the
face of the generally difficult problem of matching moments of Gaussian
mixtures of arbitrary order. We test our second-order Gaussian approximation (SOGA)
on a number of case studies from the literature. We show that it can provide
accurate estimates in models not supported by other approximation methods or
when exact symbolic techniques fail because of complex expressions or
non-simplified integrals. On two notable classes of problems, namely
collaborative filtering and programs involving mixtures of continuous and
discrete distributions, we show that SOGA significantly outperforms alternative
techniques in terms of accuracy and computational time.
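The moment-matching idea can be illustrated on the simplest possible case: collapsing a one-dimensional Gaussian mixture to a single Gaussian with the same first two moments, via the laws of total expectation and total variance. This is a hedged sketch of the general principle, not the SOGA implementation:

```python
def collapse_mixture(weights, means, variances):
    # Match the first two moments of a 1-D Gaussian mixture
    # sum_i w_i N(mu_i, var_i) with a single Gaussian:
    #   mu  = sum_i w_i * mu_i                      (total expectation)
    #   var = sum_i w_i * (var_i + mu_i^2) - mu^2   (total variance)
    mu = sum(w * m for w, m in zip(weights, means))
    second = sum(w * (v + m * m) for w, m, v in zip(weights, means, variances))
    return mu, second - mu * mu
```

For the symmetric mixture 0.5 N(-1, 1) + 0.5 N(1, 1) this returns mean 0 and variance 2: the spread of the component means is absorbed into the variance of the single matching Gaussian.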
Convergence of Entropic Schemes for Optimal Transport and Gradient Flows
Replacing positivity constraints by an entropy barrier is popular to
approximate solutions of linear programs. In the special case of the optimal
transport problem, this technique dates back to the early work of
Schr\"odinger. This approach has recently been used successfully to solve
optimal transport related problems in several applied fields such as imaging
sciences, machine learning and social sciences. The main reason for this
success is that, in contrast to linear programming solvers, the resulting
algorithms are highly parallelizable and take advantage of the geometry of the
computational grid (e.g. an image or a triangulated mesh). The first
contribution of this article is the proof of the $\Gamma$-convergence of the
entropic regularized optimal transport problem towards the Monge-Kantorovich
problem for the squared Euclidean norm cost function. This implies in
particular the convergence of the optimal entropic regularized transport plan
towards an optimal transport plan as the entropy vanishes. Optimal transport
distances are also useful to define gradient flows as a limit of implicit Euler
steps according to the transportation distance. Our second contribution is a
proof that implicit steps according to the entropic regularized distance
converge towards the original gradient flow when both the step size and the
entropic penalty vanish (in some controlled way).
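The entropic approximation the abstract refers to is commonly computed with Sinkhorn's matrix-scaling iterations, which is one source of the parallelizability mentioned above. A minimal sketch (dense cost matrix, fixed iteration count, no log-domain stabilization for very small regularization) might look like:

```python
import numpy as np

def sinkhorn(C, mu, nu, eps, n_iter=500):
    # Entropic regularization of the transport linear program
    #   min <P, C>  s.t.  P 1 = mu,  P^T 1 = nu,  P >= 0
    # yields a solution of the form P = diag(u) K diag(v) with
    # K = exp(-C / eps); u and v are found by alternately rescaling
    # the rows and columns to match the prescribed marginals.
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)   # match column marginals
        u = mu / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]
```

Both scaling steps are elementwise divisions and matrix-vector products, which is why the scheme parallelizes well on grids and meshes; as eps shrinks, the plan concentrates toward an unregularized optimal transport plan, consistent with the convergence result above.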