Jump-sparse and sparse recovery using Potts functionals
We recover jump-sparse and sparse signals from blurred incomplete data
corrupted by (possibly non-Gaussian) noise using inverse Potts energy
functionals. We obtain analytical results (existence of minimizers, complexity)
on inverse Potts functionals and provide relations to sparsity problems. We
then propose a new optimization method for these functionals which is based on
dynamic programming and the alternating direction method of multipliers (ADMM).
A series of experiments shows that the proposed method yields very satisfactory
jump-sparse and sparse reconstructions, respectively. We highlight the
capability of the method by comparing it with classical and recent approaches
such as TV minimization (jump-sparse signals), orthogonal matching pursuit,
iterative hard thresholding, and iteratively reweighted $\ell_1$ minimization
(sparse signals)
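For intuition, the jump-penalty core of these functionals (the plain 1D Potts problem, without blur, missing data, or non-Gaussian noise) admits an exact O(n^2) dynamic program. A minimal sketch, with gamma denoting the jump penalty:

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact minimizer of sum((x - y)**2) + gamma * (number of jumps in x)
    over piecewise-constant signals x, via the classic O(n^2) dynamic program."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])      # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(y * y)])  # prefix sums of squares

    def dev(l, r):
        # squared deviation of y[l-1:r] from its mean (1-indexed, inclusive)
        m = r - l + 1
        s = s1[r] - s1[l - 1]
        return (s2[r] - s2[l - 1]) - s * s / m

    B = np.full(n + 1, np.inf)   # B[r]: optimal Potts energy of the first r samples
    B[0] = -gamma                # so the first segment carries no jump penalty
    prev = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):   # try every possible start of the last segment
            cand = B[l - 1] + gamma + dev(l, r)
            if cand < B[r]:
                B[r], prev[r] = cand, l
    # backtrack: fill each recovered segment with its mean
    x = np.empty(n)
    r = n
    while r > 0:
        l = prev[r]
        x[l - 1:r] = (s1[r] - s1[l - 1]) / (r - l + 1)
        r = l - 1
    return x

# a noiseless two-level signal is recovered exactly for moderate gamma
y = np.array([0., 0., 0., 5., 5., 5.])
x = potts_1d(y, gamma=1.0)
```

For small gamma the two plateaus are kept; for very large gamma the minimizer degenerates to the global mean, reflecting the trade-off the jump penalty controls.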
Computational guidance using sparse Gauss-Hermite quadrature differential dynamic programming
This paper proposes a new computational guidance algorithm based on differential dynamic programming and the sparse Gauss-Hermite quadrature rule. By applying the sparse Gauss-Hermite quadrature rule, the numerical differentiation otherwise needed to compute the Hessian matrices and gradients in differential dynamic programming is avoided. Based on the new differential dynamic programming approach, a three-dimensional computational guidance algorithm is proposed to control the impact angle and impact time of an air-to-surface interceptor. Extensive numerical simulations demonstrate the effectiveness of the proposed approach
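As background, Gauss-Hermite quadrature evaluates Gaussian expectations from function values alone, with no numerical differentiation. A minimal sketch using the standard (dense) rule, not the sparse-grid variant the paper employs:

```python
import numpy as np

def gh_expectation(f, mu=0.0, sigma=1.0, order=10):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) with Gauss-Hermite quadrature:
    E[f(X)] ≈ (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i),
    where x_i, w_i are the nodes and weights of the physicists' Hermite rule."""
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    return weights @ f(mu + np.sqrt(2.0) * sigma * nodes) / np.sqrt(np.pi)

# sanity check: E[X^2] = sigma^2 for a zero-mean Gaussian
val = gh_expectation(lambda x: x**2, mu=0.0, sigma=3.0, order=10)
```

An order-n rule is exact for polynomials of degree up to 2n-1, which is why derivative information can be replaced by a handful of weighted function evaluations.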
Generalized Dynamic Programming Principle and Sparse Mean-Field Control Problems
In this paper we study optimal control problems in Wasserstein spaces, which
are suitable to describe macroscopic dynamics of multi-particle systems. The
dynamics is described by a parametrized continuity equation, in which the
Eulerian velocity field is affine w.r.t. some variables. Our aim is to minimize
a cost functional which includes a control norm, thus enforcing a control
sparsity constraint. More precisely, we consider a nonlocal restriction on the
total amount of control that can be used depending on the overall state of the
evolving mass. We treat in detail two main cases: an instantaneous constraint
on the control applied to the evolving mass and a cumulative constraint, which
depends also on the amount of control used in previous times. For both
constraints, we prove the existence of optimal trajectories for general cost
functions and that the value function is a viscosity solution of a suitable
Hamilton-Jacobi-Bellman equation. Finally, we discuss an abstract Dynamic
Programming Principle, providing further applications in the Appendix.
Exact algorithms for the order picking problem
Order picking is the problem of collecting a set of products in a warehouse
in a minimum amount of time. It is currently a major bottleneck in supply
chains because of its cost in time and labor. This article presents two exact
and effective algorithms for this problem. Firstly, a sparse formulation in
mixed-integer programming is strengthened by preprocessing and valid
inequalities. Secondly, a dynamic programming approach generalizing known
algorithms for two or three cross-aisles is proposed and evaluated
experimentally. The performance of these algorithms is reported and compared
with that of the Traveling Salesman Problem (TSP) solver Concorde
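For context, once picker routing is modeled as a TSP, small instances can be solved exactly by the textbook Held-Karp dynamic program. A generic sketch (not the cross-aisle-specific dynamic program of the article):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour length via Held-Karp dynamic programming, O(n^2 * 2^n).
    dist[i][j] is the cost from i to j; the tour starts and ends at node 0."""
    n = len(dist)
    # C[(S, j)]: cheapest path from 0 that visits exactly the node set S
    # (a bitmask over nodes 1..n-1) and ends at node j
    C = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                C[(S, j)] = min(C[(S ^ (1 << j), k)] + dist[k][j]
                                for k in subset if k != j)
    full = (1 << n) - 2  # all nodes except the depot 0
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# four pick locations on a unit square: the optimal tour has length 4
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
```

The exponential state space limits this to a few dozen locations, which is exactly why warehouse-specific formulations and dynamic programs like those above are needed in practice.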
Sparse Dynamic Programming I: Linear Cost Functions
We consider dynamic programming solutions to a number of different recurrences for sequence comparison and for RNA secondary structure prediction. These recurrences are defined over a number of points that is quadratic in the input size; however, only a sparse set of them matters for the result. We give efficient algorithms for these problems when the weight functions used in the recurrences are taken to be linear. Our algorithms reduce the best known bounds by a factor almost linear in the density of the problems: when the problems are sparse, this results in a substantial speed-up
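The "only a sparse set matters" idea can be illustrated on sequence comparison: the longest common subsequence recurrence fills a quadratic table, but only the positions where characters match contribute, and those matches reduce to a longest-increasing-subsequence computation. A Hunt-Szymanski-style sketch (an illustration of the principle, not the paper's algorithms):

```python
from bisect import bisect_left
from collections import defaultdict

def sparse_lcs_length(a, b):
    """Longest common subsequence length computed only over the sparse set of
    matching positions: list the matches, then solve a longest increasing
    subsequence problem over their b-indices with binary search."""
    positions = defaultdict(list)
    for j, ch in enumerate(b):
        positions[ch].append(j)
    tails = []  # tails[k]: smallest b-index ending a common subsequence of length k+1
    for ch in a:
        # visit matches for this a-position in descending b-order, so two
        # matches at the same a-position can never chain with each other
        for j in reversed(positions.get(ch, ())):
            k = bisect_left(tails, j)
            if k == len(tails):
                tails.append(j)
            else:
                tails[k] = j
    return len(tails)
```

The running time is governed by the number of matches rather than the full quadratic table, which is the density-dependent speed-up the abstract describes.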
Infinite horizon sparse optimal control
A class of infinite horizon optimal control problems involving $L^p$-type
cost functionals with $0 < p \le 1$ is discussed. The existence of optimal
controls is studied for both the convex case with $p = 1$ and the nonconvex case
with $0 < p < 1$, and the sparsity structure of the optimal controls promoted by
the $L^p$-type penalties is analyzed. A dynamic programming approach is
proposed to numerically approximate the corresponding sparse optimal
controllers
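A toy illustration of dynamic programming for sparse control in the convex case p = 1: value iteration for a scalar system with an l1 control penalty, which makes zero control exactly optimal in a band around the origin. Grids and constants below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Value iteration for x_{k+1} = x_k + u_k with stage cost x^2 + beta*|u|
# and discount gamma; the l1 penalty promotes sparse (exactly zero) controls.
xs = np.linspace(-2.0, 2.0, 81)   # state grid (toy discretization)
us = np.linspace(-1.0, 1.0, 41)   # control grid (includes u = 0)
beta, gamma = 0.5, 0.9

V = np.zeros_like(xs)
for _ in range(300):              # iterate the Bellman operator to convergence
    # Q[i, j]: cost of applying control us[j] in state xs[i]
    nxt = np.clip(xs[:, None] + us[None, :], xs[0], xs[-1])
    Q = xs[:, None] ** 2 + beta * np.abs(us)[None, :] + gamma * np.interp(nxt, xs, V)
    V = Q.min(axis=1)

policy = us[Q.argmin(axis=1)]     # greedy controls from the converged values
mid = len(xs) // 2                # index of the state x = 0
```

Near the origin the l1 term outweighs any benefit of moving, so the computed policy is exactly zero there, while far states receive strong corrective control; this is the sparsity structure the abstract refers to.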
Efficient Elastic Net Regularization for Sparse Linear Models
This paper presents an algorithm for efficient training of sparse linear
models with elastic net regularization. Extending previous work on delayed
updates, the new algorithm applies stochastic gradient updates to non-zero
features only, bringing weights current as needed with closed-form updates.
Closed-form delayed updates for the $\ell_1$, $\ell_\infty$, and rarely used
$\ell_2$ regularizers have been described previously. This paper provides
closed-form updates for the popular squared norm $\ell_2^2$ and elastic net
regularizers.
We provide dynamic programming algorithms that perform each delayed update in
constant time. The new $\ell_2^2$ and elastic net methods handle both fixed and
varying learning rates, and both standard stochastic gradient descent (SGD)
and forward backward splitting (FoBoS). Experimental results show that, on a
high-dimensional bag-of-words dataset in which only a small fraction of the
features are nonzero in any given training example, the dynamic programming
method trains a logistic regression classifier with elastic net regularization
substantially faster than otherwise
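The delayed-update idea for the squared-norm case can be sketched directly: a weight whose feature has been absent for k steps just shrinks geometrically, so k skipped regularization steps collapse to one multiplication applied lazily when the feature next appears. A minimal illustration assuming plain SGD with a fixed learning rate (the class and its names are hypothetical, and it covers only the squared-norm term, not the full elastic net):

```python
class LazyL2SGD:
    """Sparse SGD with an l2^2 penalty (lam/2 * ||w||^2) and fixed learning
    rate eta. A weight untouched for k steps decays by (1 - eta*lam)**k,
    applied lazily in O(1) when its feature next appears."""

    def __init__(self, dim, eta=0.1, lam=0.01):
        self.w = [0.0] * dim
        self.last = [0] * dim          # step at which each weight was last current
        self.eta, self.lam, self.t = eta, lam, 0

    def _bring_current(self, i):
        k = self.t - self.last[i]
        self.w[i] *= (1.0 - self.eta * self.lam) ** k  # closed-form delayed decay
        self.last[i] = self.t

    def step(self, sparse_grad):
        """sparse_grad maps feature index -> gradient of the data loss;
        only the touched features do any work."""
        self.t += 1
        for i, g in sparse_grad.items():
            self._bring_current(i)
            self.w[i] -= self.eta * g

    def weight(self, i):
        self._bring_current(i)
        return self.w[i]
```

Each `step` costs time proportional to the number of nonzero features in the example, not the dimension, which is the source of the speed-up the abstract reports.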