A Nonlinear Lagrange Algorithm for Stochastic Minimax Problems Based on Sample Average Approximation Method
An implementable nonlinear Lagrange algorithm for stochastic minimax problems, based on the sample average approximation (SAA) method, is presented in this paper. In the second step of the algorithm, a nonlinear Lagrange function built from SAA versions of the original functions is minimized, and an SAA of the Lagrange multiplier is adopted. Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers produced by the algorithm converge to a Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are reported, and the results indicate that the algorithm is promising.
An application of a linear programming technique to nonlinear minimax problems
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programming algorithm that solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems in the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to treating problems with more than one measured quantity. A sample problem is solved with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that, for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used; for the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
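The LP reduction described above can be sketched as follows (a minimal illustration, not the paper's code): to fit parameters minimizing the maximum absolute residual, introduce a bound variable t and constrain every residual to lie in [-t, t], then minimize t with a linear program.

```python
# Minimax fit of a line y ≈ a0 + a1*x via linear programming.
# Variable vector is [a0, a1, t]; minimize t subject to
# |y_i - (a0 + a1*x_i)| <= t for every data point.
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.9, 4.1, 5.9])           # roughly y = 2x

A = np.column_stack([np.ones_like(x), x])    # design matrix for a0 + a1*x
ones = np.ones((len(x), 1))

# residual_i - t <= 0  and  -residual_i - t <= 0
A_ub = np.vstack([np.hstack([A, -ones]),
                  np.hstack([-A, -ones])])
b_ub = np.concatenate([y, -y])
c = np.array([0.0, 0.0, 1.0])                # objective: minimize t only

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
a0, a1, t = res.x
print(f"fit: y = {a0:.3f} + {a1:.3f} x, max residual = {t:.3f}")
```

For these data the optimal fit equioscillates: the residuals alternate between +0.1 and -0.1, which is the hallmark of a minimax solution.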
Certification of Real Inequalities -- Templates and Sums of Squares
We consider the problem of certifying lower bounds for real-valued
multivariate transcendental functions. The functions we are dealing with are
nonlinear and involve semialgebraic operations as well as certain transcendental
functions. Our general framework is to use
different approximation methods to relax the original problem into polynomial
optimization problems, which we solve by sparse sums of squares relaxations. In
particular, we combine the ideas of the maxplus estimators (originally
introduced in optimal control) and of the linear templates (originally
introduced in static analysis by abstract interpretation). The nonlinear
templates control the complexity of the semialgebraic relaxations at the price
of coarsening the maxplus approximations. In that way, we arrive at a new
template-based certified global optimization method, which exploits both the
precision of sums of squares relaxations and the scalability of abstraction
methods. We analyze the performance of the method on problems from the global
optimization literature, as well as medium-size inequalities issued from the
Flyspeck project.
Comment: 27 pages, 3 figures, 4 tables
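The maxplus-estimator idea can be illustrated with a toy numpy sketch (our own example, not the paper's certified implementation): for a convex transcendental function such as exp, every tangent line is a global minorant, so the pointwise max of a few tangents gives a certified piecewise-linear lower bound whose gap shrinks as nodes are added.

```python
# Lower-bounding a convex transcendental function by a max of tangents.
# Convexity guarantees each tangent line minorizes exp globally, so the
# max over tangents is a certified lower bound on the whole interval.
import numpy as np

def maxplus_lower(f, df, nodes, x):
    """Max over tangent lines of f taken at `nodes`, evaluated at x."""
    tangents = [f(a) + df(a) * (x - a) for a in nodes]
    return np.max(tangents, axis=0)

xs = np.linspace(-1.0, 1.0, 1001)
for n in (2, 4, 8):
    nodes = np.linspace(-1.0, 1.0, n)
    lower = maxplus_lower(np.exp, np.exp, nodes, xs)   # exp' = exp
    gap = np.max(np.exp(xs) - lower)
    print(f"{n:2d} tangents: worst gap = {gap:.4f}")
```

The templates in the paper play a similar role: a small number of simple minorants stands in for the transcendental part, trading tightness for tractability.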
Fast Color Space Transformations Using Minimax Approximations
Color space transformations are frequently used in image processing,
graphics, and visualization applications. In many cases, these transformations
are complex nonlinear functions, which prohibits their use in time-critical
applications. In this paper, we present a new approach called Minimax
Approximations for Color-space Transformations (MACT). We demonstrate MACT on
three commonly used color space transformations. Extensive experiments on a
large and diverse image set and comparisons with well-known multidimensional
lookup table interpolation methods show that MACT achieves an excellent balance
among four criteria: ease of implementation, memory usage, accuracy, and
computational speed.
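A hedged sketch of the general approach, using our own illustrative choices rather than MACT's actual approximants: replace a nonlinear per-channel curve, here a gamma power law assumed for the demo, by a low-degree polynomial interpolated at Chebyshev nodes, which is close to the true minimax polynomial and much cheaper than the original transform.

```python
# Near-minimax polynomial replacement for a nonlinear channel curve.
# Interpolating at Chebyshev nodes yields a polynomial whose maximum
# error is close to that of the true minimax approximant.
import numpy as np
from numpy.polynomial import chebyshev as C

def gamma_curve(u):                  # hypothetical nonlinear channel curve
    return u ** (1.0 / 2.4)

a, b = 0.0625, 1.0                   # stay away from the singular slope at 0
deg = 8
k = np.arange(deg + 1)
t = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # Chebyshev nodes in [-1, 1]
u_nodes = 0.5 * (a + b) + 0.5 * (b - a) * t          # nodes mapped to [a, b]
coef = C.chebfit(t, gamma_curve(u_nodes), deg)       # interpolating fit

u = np.linspace(a, b, 10001)
approx = C.chebval((2 * u - (a + b)) / (b - a), coef)
err = np.max(np.abs(approx - gamma_curve(u)))
print("max abs error on [1/16, 1]:", err)
```

Evaluating the polynomial needs only multiplies and adds, which is the source of the speedup in time-critical pipelines.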
Minimax Iterative Dynamic Game: Application to Nonlinear Robot Control Tasks
Multistage decision policies provide useful control strategies in
high-dimensional state spaces, particularly in complex control tasks. However,
they exhibit weak performance guarantees in the presence of disturbance, model
mismatch, or model uncertainties. This brittleness limits their use in
high-risk scenarios. We present how to quantify the sensitivity of such
policies in order to inform of their robustness capacity. We also propose a
minimax iterative dynamic game framework for designing robust policies in the
presence of disturbance/uncertainties. We test the quantification hypothesis on
a carefully designed deep neural network policy; we then pose a minimax
iterative dynamic game (iDG) framework for improving policy robustness in the
presence of adversarial disturbances. We evaluate our iDG framework on a
mecanum-wheeled robot, whose goal is to find a locally robust optimal multistage
policy that achieves a given goal-reaching task. The algorithm is simple and
adaptable for designing meta-learning/deep policies that are robust against
disturbances, model mismatch, or model uncertainties, up to a disturbance
bound. Videos of the results are on the author's website,
http://ecs.utdallas.edu/~opo140030/iros18/iros2018.html, while the codes for
reproducing our experiments are on github,
https://github.com/lakehanne/youbot/tree/rilqg. A self-contained environment
for reproducing our results is on docker,
https://hub.docker.com/r/lakehanne/youbotbuntu14/
Comment: 2018 International Conference on Intelligent Robots and Systems
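The minimax objective at the heart of the iDG framework can be illustrated on a deliberately tiny, hypothetical example (the scalar dynamics, cost, and grid search below are our assumptions, not the paper's method): the controller picks u to minimize the cost after the worst-case bounded disturbance w, i.e. min over u of max over w of J(u, w).

```python
# One-step minimax control: min_u max_{|w|<=0.3} J(u, w)
# for assumed scalar dynamics x+ = 0.9 x + u + w.
import numpy as np

def step_cost(x, u, w):
    x_next = 0.9 * x + u + w           # toy dynamics, assumed for the demo
    return x_next ** 2 + 0.1 * u ** 2  # state penalty plus control effort

x0 = 1.0
us = np.linspace(-2.0, 2.0, 401)       # candidate controls
ws = np.linspace(-0.3, 0.3, 121)       # disturbance bound |w| <= 0.3

# inner max over the adversary, outer min over the controller
worst = np.array([max(step_cost(x0, u, w) for w in ws) for u in us])
u_star = us[np.argmin(worst)]
print(f"minimax control u* = {u_star:.3f}, worst-case cost = {worst.min():.4f}")
```

The robust control hedges against the adversary: it cancels the nominal drift so that neither extreme disturbance can inflate the cost, which is the disturbance-bounded guarantee the abstract refers to.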
Rational minimax approximation via adaptive barycentric representations
Computing rational minimax approximations can be very challenging when there
are singularities on or near the interval of approximation - precisely the case
where rational functions outperform polynomials by a landslide. We show that
far more robust algorithms than previously available can be developed by making
use of rational barycentric representations whose support points are chosen in
an adaptive fashion as the approximant is computed. Three variants of this
barycentric strategy are all shown to be powerful: (1) a classical Remez
algorithm, (2) a "AAA-Lawson" method of iteratively reweighted least-squares,
and (3) a differential correction algorithm. Our preferred combination,
implemented in the Chebfun MINIMAX code, is to use (2) in an initial phase and
then switch to (1) for generically quadratic convergence. By such methods we
can calculate approximations up to type (80, 80) in standard 16-digit floating
point arithmetic, a problem for which Varga, Ruttan, and Carpenter required
200-digit extended precision.
Comment: 29 pages, 11 figures
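The barycentric representation itself can be sketched as follows (the support points and weights below are illustrative, not the adaptive choices the paper describes): a rational function is written as a ratio of two partial-fraction sums over support points z_j with values f_j and weights w_j.

```python
# Barycentric evaluation: r(x) = (sum_j w_j f_j/(x - z_j)) / (sum_j w_j/(x - z_j)).
import numpy as np

def bary_eval(x, z, f, w):
    """Evaluate the barycentric form at points x (none equal to a z_j)."""
    cauchy = w / (np.asarray(x)[:, None] - z)   # Cauchy matrix, shape (n, m)
    return (cauchy @ f) / cauchy.sum(axis=1)

z = np.array([-1.0, 0.0, 1.0])       # support points (illustrative)
f = np.exp(z)                         # sampled function values
w = np.array([1.0, -2.0, 1.0])        # weights of the degree-2 polynomial
                                      # interpolant, a special case of the family
x = np.array([-0.5, 0.3, 0.9])
print(bary_eval(x, z, f, w))          # smooth approximant to exp
```

With these particular weights the formula reproduces the polynomial interpolant through the three nodes; the algorithms in the paper instead tune z_j and w_j adaptively so the ratio becomes a genuinely rational, numerically robust approximant.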
Exponentially convergent data assimilation algorithm for Navier-Stokes equations
The paper presents a new state estimation algorithm for a bilinear equation
representing the Fourier-Galerkin (FG) approximation of the Navier-Stokes (NS)
equations on a torus in R^2. This state equation is subject to uncertain but
bounded noise in the input (Kolmogorov forcing) and initial conditions, and its
output is incomplete and contains bounded noise. The algorithm designs a
time-dependent gain such that the estimation error converges to zero
exponentially. Sufficient conditions for the existence of such a gain are
formulated in the form of algebraic Riccati equations. To demonstrate the
results, we apply the proposed algorithm to the reconstruction of a chaotic
fluid flow from incomplete and noisy data.
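The gain-driven exponential error decay can be illustrated on a linear stand-in for the bilinear FG system (the dynamics, observation matrix, and gain below are assumptions for the demo): an observer of the form xhat' = A xhat + L (y - C xhat) whose gain L stabilizes A - L C makes the error e = x - xhat decay exponentially even though only one state component is observed.

```python
# Luenberger-style observer with incomplete observation (x1 only).
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.1]])          # assumed toy dynamics (damped oscillator)
C = np.array([[1.0, 0.0]])            # incomplete observation: x1 only
L = np.array([[2.0], [3.0]])          # gain chosen so A - L C is stable

dt, T = 0.01, 10.0
x = np.array([1.0, 0.0])              # true state
xhat = np.zeros(2)                    # estimate, wrong initial condition
errs = []
for _ in range(int(T / dt)):
    y = C @ x                          # noiseless output for the sketch
    x = x + dt * (A @ x)               # truth evolves
    xhat = xhat + dt * (A @ xhat + (L @ (y - C @ xhat)).ravel())
    errs.append(np.linalg.norm(x - xhat))
print("final estimation error:", errs[-1])
```

The error dynamics e' = (A - L C) e are autonomous, so stability of A - L C alone forces exponential convergence; in the paper the analogous role is played by the time-dependent gain certified by the Riccati equations.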
A detectability criterion and data assimilation for non-linear differential equations
In this paper we propose a new sequential data assimilation method for
non-linear ordinary differential equations with compact state space. The method
is designed so that the Lyapunov exponents of the corresponding estimation
error dynamics are negative, i.e. the estimation error decays exponentially
fast. The latter is shown to be the case for generic regular flow maps if and
only if the observation matrix H satisfies detectability conditions: the rank
of H must be at least as great as the number of nonnegative Lyapunov exponents
of the underlying attractor. Numerical experiments illustrate the exponential
convergence of the method and the sharpness of the theory for the Lorenz '96
and Burgers equations with incomplete and noisy observations.
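A hedged numerical illustration of the detectability criterion, on Lorenz-63 rather than the paper's Lorenz '96 example (the nudging form and gain are our assumptions): the Lorenz-63 attractor has two nonnegative Lyapunov exponents (one positive, one zero), so in the spirit of the rank condition we observe two state components (rank H = 2) and nudge the estimate toward them.

```python
# Nudging-style data assimilation for Lorenz-63 with partial observations.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])       # observe x and y only (rank 2)
dt, K = 0.002, 30.0                   # Euler step and nudging gain (hand-tuned)
s = np.array([1.0, 1.0, 1.0])         # truth
shat = np.array([8.0, -3.0, 20.0])    # estimate, far-off initial condition
errs = []
for _ in range(20000):
    obs = H @ s                        # noiseless partial observation
    s = s + dt * lorenz(s)
    shat = shat + dt * (lorenz(shat) + K * (H.T @ (obs - H @ shat)))
    errs.append(np.linalg.norm(s - shat))
print(f"error: start {errs[0]:.2f}, end {errs[-1]:.2e}")
```

The observed components are pinned directly by the gain, and the unobserved z-error then contracts on its own, so the total error decays exponentially, mirroring the negative-Lyapunov-exponent argument of the paper.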
Nearly optimal minimax estimator for high-dimensional sparse linear regression
We present estimators for a well-studied statistical estimation problem:
estimation in the linear regression model with soft sparsity constraints in the
high-dimensional setting. We first
present a family of estimators, called projected nearest neighbor estimators,
and show, using results from convex geometry, that such an estimator is within
a logarithmic factor of the optimal for any design matrix. Then, by utilizing a
semi-definite programming relaxation technique developed in [SIAM J. Comput. 36
(2007) 1764-1776], we obtain an approximation algorithm for computing the
minimax risk for any such estimation task, and also a polynomial-time nearly
optimal estimator for the important case of a sparsity constraint. Such
results were previously known only for special cases, despite decades of study
of this problem. We also extend the method to the adaptive case, where the parameter
radius is unknown.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1141 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
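As a hedged illustration of why sparsity-aware estimators can beat naive ones, the phenomenon the minimax theory above quantifies (our own toy comparison of ordinary least squares against universal-threshold hard thresholding, not the paper's projected nearest neighbor estimator):

```python
# Sparse linear regression: plain OLS vs a simple thresholded estimator.
import numpy as np

rng = np.random.default_rng(0)
n, p, s, noise = 400, 100, 5, 0.5
X = rng.standard_normal((n, p)) / np.sqrt(n)    # columns roughly unit norm
beta = np.zeros(p)
beta[:s] = 3.0                                   # s-sparse ground truth
y = X @ beta + noise * rng.standard_normal(n)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]     # dense least squares
tau = noise * np.sqrt(2 * np.log(p))             # universal-threshold heuristic
b_thr = np.where(np.abs(b_ols) > tau, b_ols, 0.0)

print("OLS error:        ", np.linalg.norm(b_ols - beta))
print("thresholded error:", np.linalg.norm(b_thr - beta))
```

Thresholding zeroes out most of the noise-only coordinates while keeping the few strong ones, so its risk scales with the sparsity s rather than the ambient dimension p.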