
    Jump-sparse and sparse recovery using Potts functionals

    We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals which is based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions, respectively. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted $\ell^1$ minimization (sparse signals).
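
    The dynamic-programming core of such methods is easiest to see on the unblurred 1D Potts problem. Below is a minimal sketch of the classical O(n^2) DP for minimizing gamma * (#jumps) + ||u - f||_2^2 over piecewise-constant signals u (function name and interface are ours; the paper combines this kind of DP with ADMM to handle blur and incomplete data):

```python
import numpy as np

def potts_denoise_1d(f, gamma):
    """Minimize gamma * (#jumps) + ||u - f||_2^2 over piecewise-constant u
    by dynamic programming over optimal partitions, in O(n^2) time."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # Prefix sums give the squared error of fitting f[l..r] by its mean
    # in O(1): err = sum(f^2) - (sum f)^2 / count.
    s1 = np.concatenate(([0.0], np.cumsum(f)))
    s2 = np.concatenate(([0.0], np.cumsum(f ** 2)))

    def seg_err(l, r):  # inclusive 0-based indices
        cnt = r - l + 1
        m1 = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - m1 * m1 / cnt

    B = np.empty(n + 1)        # B[r] = optimal energy for the prefix f[0..r-1]
    B[0] = -gamma              # so the first segment pays no jump penalty
    left = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        best, arg = np.inf, 0
        for l in range(r):     # last segment is f[l..r-1]
            val = B[l] + gamma + seg_err(l, r - 1)
            if val < best:
                best, arg = val, l
        B[r], left[r] = best, arg
    # Backtrack the segment boundaries and fill each segment with its mean.
    u = np.empty(n)
    r = n
    while r > 0:
        l = left[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u
```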

    Computational guidance using sparse Gauss-Hermite quadrature differential dynamic programming

    This paper proposes a new computational guidance algorithm using differential dynamic programming and a sparse Gauss-Hermite quadrature rule. By applying the sparse Gauss-Hermite quadrature rule, numerical differentiation in the calculation of the Hessian matrices and gradients of differential dynamic programming is avoided. Based on the new differential dynamic programming approach, a three-dimensional computational guidance algorithm is proposed to control the impact angle and impact time of an air-to-surface interceptor. Extensive numerical simulations are performed to show the effectiveness of the proposed approach.
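
    As a minimal one-dimensional illustration of the quadrature idea (function names are ours; the paper uses a multidimensional sparse rule inside DDP), Gauss-Hermite nodes turn Gaussian expectations into weighted sums, and Stein's identity then extracts derivative information from the same function evaluations, with no finite differencing:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gh_points(mu, sigma, order=5):
    """Gauss-Hermite nodes/weights transformed for X ~ N(mu, sigma^2).
    The change of variables x = mu + sqrt(2)*sigma*t maps the weight
    exp(-t^2) onto the Gaussian density."""
    t, w = hermgauss(order)                # rule for integrating f(t) e^{-t^2}
    return mu + np.sqrt(2.0) * sigma * t, w / np.sqrt(np.pi)

def gh_expectation(f, mu, sigma, order=5):
    """E[f(X)] for X ~ N(mu, sigma^2); f must accept a vector of nodes."""
    x, w = gh_points(mu, sigma, order)
    return w @ f(x)

def gh_derivative(f, mu, sigma, order=5):
    """Derivative-free estimate of E[f'(X)] via Stein's identity
    E[f'(X)] = E[f(X) (X - mu)] / sigma^2."""
    x, w = gh_points(mu, sigma, order)
    return w @ (f(x) * (x - mu)) / sigma ** 2
```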

    Generalized Dynamic Programming Principle and Sparse Mean-Field Control Problems

    In this paper we study optimal control problems in Wasserstein spaces, which are suitable to describe macroscopic dynamics of multi-particle systems. The dynamics is described by a parametrized continuity equation in which the Eulerian velocity field is affine w.r.t. some variables. Our aim is to minimize a cost functional which includes a control norm, thus enforcing a \emph{control sparsity} constraint. More precisely, we consider a nonlocal restriction on the total amount of control that can be used, depending on the overall state of the evolving mass. We treat in detail two main cases: an instantaneous constraint on the control applied to the evolving mass, and a cumulative constraint which depends also on the amount of control used at previous times. For both constraints, we prove the existence of optimal trajectories for general cost functions and show that the value function is a viscosity solution of a suitable Hamilton-Jacobi-Bellman equation. Finally, we discuss an abstract Dynamic Programming Principle, providing further applications in the Appendix.
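
    Schematically, a dynamic programming principle of the kind discussed here can be written as follows (our simplified rendering; the paper's admissible control sets and sparsity constraints are more specific):

```latex
% V(t, mu): value function on [0,T] x Wasserstein space; mu_s solves the
% controlled continuity equation started from mu at time t.
V(t,\mu) \;=\; \inf_{u}\Big\{ \int_t^{t+h} L(\mu_s, u_s)\,\mathrm{d}s
    \;+\; V\big(t+h,\,\mu_{t+h}\big) \Big\},
\qquad \partial_s \mu_s + \nabla\cdot\big(v_{u_s}\,\mu_s\big) = 0,
\quad \mu_t = \mu .
```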

    Exact algorithms for the order picking problem

    Order picking is the problem of collecting a set of products in a warehouse in a minimum amount of time. It is currently a major bottleneck in supply chains because of its cost in time and labor. This article presents two exact and effective algorithms for this problem. First, a sparse mixed-integer programming formulation is strengthened by preprocessing and valid inequalities. Second, a dynamic programming approach generalizing known algorithms for two or three cross-aisles is proposed and evaluated experimentally. The performance of these algorithms is reported and compared with that of the Traveling Salesman Problem (TSP) solver Concorde.
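
    Since the single-picker problem reduces to a TSP on the pick locations (hence the Concorde benchmark), a generic exact dynamic program of the Held-Karp type gives the flavor of such approaches; note this is not the paper's cross-aisle DP, which exploits warehouse structure to avoid the exponential state space sketched below:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour length by Held-Karp dynamic programming, O(2^n * n^2).
    dist[i][j] = travel cost between pick locations i and j; the tour starts
    and ends at location 0 (e.g., the warehouse depot). Small n only."""
    n = len(dist)
    # C[(S, j)] = cheapest path from 0 visiting exactly the set S (a bitmask),
    # ending at j, for j in S.
    C = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            mask = sum(1 << j for j in S)
            for j in S:
                prev = mask ^ (1 << j)       # drop j, look up best predecessor
                C[(mask, j)] = min(C[(prev, k)] + dist[k][j]
                                   for k in S if k != j)
    full = (1 << n) - 2                      # all locations except the depot
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```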

    Infinite horizon sparse optimal control

    A class of infinite horizon optimal control problems involving $L^p$-type cost functionals with $0<p\leq 1$ is discussed. The existence of optimal controls is studied for both the convex case with $p=1$ and the nonconvex case with $0<p<1$, and the sparsity structure of the optimal controls promoted by the $L^p$-type penalties is analyzed. A dynamic programming approach is proposed to numerically approximate the corresponding sparse optimal controllers.
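
    Schematically, the cost functionals in question take the following form (our rendering under assumed notation; the paper's precise hypotheses on the running cost $\ell$ and discount $\lambda$ may differ):

```latex
J(u) \;=\; \int_0^{\infty} e^{-\lambda t}\,
    \Big( \ell\big(y(t)\big) + \beta\,\lvert u(t)\rvert^{p} \Big)\,\mathrm{d}t,
\qquad 0 < p \le 1,\ \ \beta > 0 .
```

    For $p \le 1$ the control penalty is concave near zero, so it is often cheaper to switch the control off exactly than to keep it small; this is the mechanism behind the promoted sparsity.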

    Efficient Elastic Net Regularization for Sparse Linear Models

    This paper presents an algorithm for efficient training of sparse linear models with elastic net regularization. Extending previous work on delayed updates, the new algorithm applies stochastic gradient updates to non-zero features only, bringing weights current as needed with closed-form updates. Closed-form delayed updates for the $\ell_1$, $\ell_\infty$, and rarely used $\ell_2$ regularizers have been described previously. This paper provides closed-form updates for the popular squared norm $\ell_2^2$ and elastic net regularizers. We provide dynamic programming algorithms that perform each delayed update in constant time. The new $\ell_2^2$ and elastic net methods handle both fixed and varying learning rates, and both standard stochastic gradient descent (SGD) and forward-backward splitting (FoBoS). Experimental results show that on a bag-of-words dataset with 260,941 features, but only 88 nonzero features on average per training example, the dynamic programming method trains a logistic regression classifier with elastic net regularization over 2000 times faster than otherwise.
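
    The simplest instance of a closed-form delayed update is the squared-$\ell_2$ case with a constant learning rate: k skipped regularization-only steps collapse into one multiplication by (1 - lr*l2)^k. Below is a minimal sketch under exactly those assumptions (class and method names are ours; the paper's dynamic programming extends this idea to varying rates and the full elastic net):

```python
import numpy as np

class LazyL2SGD:
    """SGD on sparse data with squared-l2 regularization applied lazily.
    Between touches of a feature, the pending regularization steps compose
    into a single closed-form decay, so each update costs O(nnz), not O(dim).
    Sketch: constant learning rate, logistic loss, labels y in {-1, +1}."""

    def __init__(self, dim, lr=0.1, l2=1e-4):
        self.w = np.zeros(dim)
        self.lr, self.l2 = lr, l2
        self.last = np.zeros(dim, dtype=int)   # step at which w[j] is current
        self.t = 0

    def _catch_up(self, idx):
        # k = t - last skipped steps each multiply w by (1 - lr*l2).
        self.w[idx] *= (1.0 - self.lr * self.l2) ** (self.t - self.last[idx])
        self.last[idx] = self.t

    def update(self, idx, vals, y):
        """One SGD step on a sparse example given as index/value arrays."""
        self._catch_up(idx)
        margin = y * (self.w[idx] @ vals)
        g = -y / (1.0 + np.exp(margin))        # d(logistic loss)/d(score)
        self.w[idx] -= self.lr * (g * vals + self.l2 * self.w[idx])
        self.last[idx] = self.t + 1            # touched weights are now current
        self.t += 1
```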