An algorithm for the solution of dynamic linear programs
The algorithm's objective is to solve Dynamic Linear Programs (DLPs) efficiently by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase structure of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active-set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion resembles the time-varying Riccati equation of multi-stage LQR theory. The resulting factorization makes all of the typical LP solution operations more efficient than in a dense-matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backward sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor-update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo-constraint relaxation scheme.
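The staircase structure the abstract refers to can be made concrete with a small sketch. Assuming multi-stage dynamics of the form x_{t+1} = A x_t + B u_t (the block sizes, horizon, and random A, B below are illustrative assumptions, not the paper's data), stacking the equality constraints over the decision vector z = (x_0, u_0, x_1, u_1, ..., x_T) yields a block matrix in which each block row only touches the variables of two adjacent stages:

```python
import numpy as np

def staircase_matrix(A, B, T):
    """Stack x_{t+1} = A x_t + B u_t, t = 0..T-1, into one block matrix.

    Each block row t holds [A  B  -I] in the columns of stages t and t+1,
    producing the 'staircase' sparsity pattern a structured DLP solver
    exploits.
    """
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((T * n, T * (n + m) + n))
    for t in range(T):
        c = t * (n + m)                              # first column of stage t
        M[t*n:(t+1)*n, c:c+n] = A                    # A x_t
        M[t*n:(t+1)*n, c+n:c+n+m] = B                # + B u_t
        M[t*n:(t+1)*n, c+n+m:c+2*n+m] = -np.eye(n)   # - x_{t+1} = 0
    return M

rng = np.random.default_rng(0)
A, B, T = rng.standard_normal((2, 2)), rng.standard_normal((2, 1)), 3
M = staircase_matrix(A, B, T)
```

A dense factorization of M costs O((Tn)^3); a staircase-aware factorization such as the backward QL recursion described above works stage by stage, which is what makes the per-iteration LP operations cheaper.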
Experiments with Active-Set LP Algorithms Allowing Basis Deficiency
An interesting question for linear programming (LP) algorithms is how to deal with solutions in which the number of nonzero variables is less than the number of rows of the matrix in standard form. One approach is that of basis-deficiency-allowing (BDA) simplex variations, which work with a subset of independent columns of the coefficient matrix in standard form, so that the basis is not necessarily represented by a square matrix. We describe one such algorithm with several variants. The research question is the computational behaviour of these methods on small, extreme instances, for which one must ask which parameter settings or variants are most appropriate. We compare the settings of two nonsimplex active-set methods with Holmström's TomLab LpSimplex v3.0 commercial sparse primal simplex implementation. All of them update a sparse QR factorization in Matlab. The first two implementations require fewer iterations and provide better solution quality and running time. This work has been funded by grant PID2021-123278OB-I00 from the Spanish Ministry of Science and Innovation. Partial funding for open access charge: Universidad de Málaga.
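The common ingredient of the compared implementations is updating a QR factorization as columns enter the working basis. A minimal dense sketch of that update (the real codes keep Q and R sparse; the Gram-Schmidt-style update below is an illustrative assumption, not the authors' implementation):

```python
import numpy as np

def qr_append_column(Q, R, a):
    """Update a thin QR factorization when column a joins the basis.

    Given A = Q R (Q m-by-k orthonormal, R k-by-k upper triangular),
    return Q2, R2 with [A a] = Q2 R2, at O(mk) cost instead of
    refactorizing from scratch.
    """
    r = Q.T @ a                      # coefficients of a in range(Q)
    q = a - Q @ r                    # component orthogonal to current basis
    rho = np.linalg.norm(q)
    Q2 = np.column_stack([Q, q / rho])
    k = R.shape[0]
    R2 = np.zeros((k + 1, k + 1))
    R2[:k, :k] = R
    R2[:k, k] = r
    R2[k, k] = rho
    return Q2, R2

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))
Q, R = np.linalg.qr(A)
a = rng.standard_normal(5)
Q2, R2 = qr_append_column(Q, R, a)
err = np.linalg.norm(Q2 @ R2 - np.column_stack([A, a]))
```

Because a BDA basis need not be square, the factorization stays a thin m-by-k QR with k at most m, and the same update applies as columns are added or (by a companion downdate) removed.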
PLSS: A Projected Linear Systems Solver
We propose iterative projection methods for solving square or rectangular consistent linear systems Ax = b. Projection methods use sketching matrices (possibly randomized) to generate a sequence of small projected subproblems, but even the smaller systems can be costly. We develop a process that appends one column to the sketching matrix each iteration and that converges in a finite number of iterations independent of whether the sketch is random or deterministic. In general, our process generates orthogonal updates to the approximate solution x_k. By choosing the sketch to be the set of all previous residuals, we obtain a simple recursive update and convergence in at most rank(A) iterations (in exact arithmetic). By choosing a sequence of identity columns for the sketch, we develop a generalization of the Kaczmarz method. In experiments on large sparse systems, our method (PLSS) with residual sketches is competitive with LSQR, and our method with residual and identity sketches compares favorably with state-of-the-art randomized methods.
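A minimal dense sketch of the residual-sketch idea (an illustrative reconstruction from the abstract, not the released PLSS code): at step k, solve min ||x - x_k|| subject to S'Ax = S'b, where S collects all residuals seen so far. The sketch gains one column per iteration, so each new residual is orthogonal to all previous ones and the method stops in at most rank(A) steps in exact arithmetic.

```python
import numpy as np

def residual_sketch_solve(A, b, tol=1e-10, max_iter=None):
    """Projection method with an accumulating residual sketch for Ax = b."""
    m, n = A.shape
    x = np.zeros(n)
    S = np.empty((m, 0))
    for _ in range(max_iter or min(m, n)):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        S = np.column_stack([S, r])            # append current residual
        W = S.T @ A @ A.T @ S                  # small projected system
        w, *_ = np.linalg.lstsq(W, S.T @ r, rcond=None)
        x = x + A.T @ (S @ w)                  # minimum-norm correction
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
b = A @ rng.standard_normal(4)                 # consistent by construction
x = residual_sketch_solve(A, b)
```

The Kaczmarz variant mentioned in the abstract replaces the residual columns of S with identity columns, so each projected subproblem touches one row of A at a time.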
symODE2: Symbolic analysis of second-order ordinary differential equations with polynomial coefficients
An open-source package for symbolic analysis of second-order ordinary
differential equations with polynomial coefficients is proposed. The approach
is mainly based on the singularity structure of the equation and the routines
are written under the open-source computer algebra system SageMath. The code is
able to obtain the singularity structure, indices and recurrence relations
associated with the regular singular points, and symbolic solutions of the
hypergeometric equation, Heun equation, and their confluent forms. Comment: 25 pages, 2 figures. Comments are welcome. The codes can be obtained from https://github.com/tbirkandan/symODE
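The singularity-structure analysis the package is built on can be sketched in a few lines. A stand-in using SymPy rather than the package's SageMath routines (the function name and the rational parameter values below are illustrative assumptions): for p(x)y'' + q(x)y' + r(x)y = 0 with polynomial coefficients, the finite singular points are the roots of p, and a root x0 is a *regular* singular point when (x - x0)q/p and (x - x0)^2 r/p both stay finite at x0.

```python
import sympy as sp

x = sp.symbols('x')

def finite_singular_points(p, q, r):
    """Classify the finite singular points of p y'' + q y' + r y = 0."""
    sings = []
    for x0 in sp.solve(p, x):
        t1 = sp.limit((x - x0) * q / p, x, x0)       # Fuchs condition, first
        t2 = sp.limit((x - x0)**2 * r / p, x, x0)    # Fuchs condition, second
        kind = 'regular' if t1.is_finite and t2.is_finite else 'irregular'
        sings.append((x0, kind))
    return sings

# Gauss hypergeometric equation: x(1-x) y'' + (c - (a+b+1)x) y' - a b y = 0
a, b, c = sp.Rational(1, 2), sp.Rational(1, 3), sp.Integer(1)
p = x * (1 - x)
q = c - (a + b + 1) * x
r = -a * b
pts = finite_singular_points(p, q, r)
```

For the hypergeometric equation both finite singular points, x = 0 and x = 1, come out regular, consistent with its standard classification (the third regular singular point sits at infinity and needs a separate change of variables).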
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-ℓ2 penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
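The proximal methods the paper surveys reduce, for the ℓ1 norm, to iterating a gradient step followed by soft-thresholding. A hedged illustration on the lasso problem min_w (1/2)||Xw - y||^2 + λ||w||_1 (the synthetic data and step-size choice 1/L with L = ||X||_2^2 are assumptions for the sketch, not the paper's experimental setup):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for (1/2)||Xw - y||^2 + lam ||w||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam / L)   # gradient step, then prox
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
y = X @ w_true                             # noiseless synthetic observations
w = ista(X, y, lam=0.1)
```

The same template covers the structured penalties in the paper: only the proximal operator changes (e.g. groupwise shrinkage for group-lasso norms), while the gradient step is untouched.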
Euclidean Distance Matrices: Essential Theory, Algorithms and Applications
Euclidean distance matrices (EDM) are matrices of squared distances between
points. The definition is deceivingly simple: thanks to their many useful
properties they have found applications in psychometrics, crystallography,
machine learning, wireless sensor networks, acoustics, and more. Despite the
usefulness of EDMs, they seem to be insufficiently known in the signal
processing community. Our goal is to rectify this mishap in a concise tutorial.
We review the fundamental properties of EDMs, such as rank or
(non)definiteness. We show how various EDM properties can be used to design
algorithms for completing and denoising distance data. Along the way, we
demonstrate applications to microphone position calibration, ultrasound
tomography, room reconstruction from echoes and phase retrieval. By spelling
out the essential algorithms, we hope to fast-track the readers in applying
EDMs to their own problems. Matlab code for all the described algorithms, and
to generate the figures in the paper, is available online. Finally, we suggest
directions for further research. Comment: 17 pages, 12 figures, to appear in IEEE Signal Processing Magazine; change of title in the last revision.
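Two of the fundamental EDM properties the tutorial reviews fit in a short sketch (the point set below is synthetic, chosen only for illustration): an EDM of n points stored as rows of P is obtained from the Gram matrix G = PP' as D = diag(G)1' + 1 diag(G)' - 2G, and such a D has rank at most d + 2 regardless of n, which is what makes low-rank completion of partial distance data possible.

```python
import numpy as np

def edm(P):
    """Squared-distance matrix of the points stored as rows of P (n x d)."""
    g = np.sum(P * P, axis=1)              # diagonal of the Gram matrix P P^T
    return g[:, None] + g[None, :] - 2.0 * (P @ P.T)

rng = np.random.default_rng(4)
P = rng.standard_normal((8, 3))            # 8 synthetic points in R^3
D = edm(P)
rank = np.linalg.matrix_rank(D)            # bounded by d + 2 = 5, not by n = 8
```

The rank bound follows directly from the construction: D is a sum of two rank-one terms and -2G, and G has rank at most d.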
- …