
    Shape Parameter Estimation

    Performance of machine learning approaches depends strongly on the choice of misfit penalty, and on the correct choice of penalty parameters, such as the threshold of the Huber function. These parameters are typically chosen using expert knowledge, cross-validation, or black-box optimization, all of which are time-consuming for large-scale applications. We present a principled, data-driven approach to simultaneously learn the model parameters and the misfit penalty parameters. We discuss theoretical properties of these joint inference problems, and develop algorithms for their solution. We show synthetic examples of automatic parameter tuning for piecewise linear-quadratic (PLQ) penalties, and use the approach to develop a self-tuning robust PCA formulation for background separation. Comment: 20 pages, 10 figures
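    The Huber penalty and its threshold parameter mentioned in the abstract can be illustrated with a minimal sketch (the threshold value and test residuals below are assumptions for illustration; the paper's joint-inference scheme is not reproduced here):

```python
import numpy as np

def huber(r, kappa):
    """Piecewise linear-quadratic (PLQ) Huber penalty with threshold kappa:
    quadratic for |r| <= kappa, linear beyond it."""
    r = np.abs(r)
    return np.where(r <= kappa, 0.5 * r**2, kappa * r - 0.5 * kappa**2)

# Quadratic near zero, linear in the tails:
print(huber(np.array([0.5, 3.0]), 1.0))  # [0.125 2.5]
```

    The threshold kappa controls where the penalty switches from quadratic (least-squares-like) to linear (outlier-robust) behaviour, which is exactly the kind of parameter the abstract proposes to learn jointly with the model.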

    Discrete Adaptive Second Order Sliding Mode Controller Design with Application to Automotive Control Systems with Model Uncertainties

    Sliding mode control (SMC) is a robust and computationally efficient solution for tracking control problems of highly nonlinear systems with a great deal of uncertainty. High-frequency oscillations due to the chattering phenomenon and sensitivity to data sampling imprecisions limit the digital implementation of conventional first order continuous-time SMC. Higher order discrete SMC is an effective solution to reduce chattering during software implementation of the controller, and also to overcome imprecisions due to data sampling. In this paper, a new adaptive second order discrete sliding mode control (DSMC) formulation is presented to mitigate data sampling imprecisions and uncertainties within the modeled plant's dynamics. The adaptation mechanism is derived from a Lyapunov stability argument, which guarantees asymptotic stability of the closed-loop system. The proposed controller is designed and tested on a highly nonlinear combustion engine tracking control problem. The simulation results show that the second order DSMC can improve the tracking performance by up to 80% compared to a first order DSMC under sampling and model uncertainties. Comment: 6 pages, 6 figures, 2017 American Control Conference
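    The discrete sliding-mode idea can be sketched on a toy scalar plant (a simplified first order DSMC with an assumed reaching-law gain and a synthetic disturbance; the paper's adaptive second order scheme and engine model are not reproduced):

```python
import numpy as np

# Toy scalar plant x[k+1] = a*x[k] + b*u[k] + d[k], tracked to x_ref.
a, b, Ts = 0.9, 0.5, 0.01   # assumed plant and sample time
eta = 2.0                   # assumed reaching-law gain
x, x_ref = 1.0, 0.0

for k in range(200):
    s = x - x_ref                                   # sliding variable
    # Equivalent control plus a discrete reaching term that drives s -> 0
    u = (x_ref - a * x - eta * Ts * np.sign(s)) / b
    d = 0.05 * np.sin(0.1 * k)                      # bounded uncertainty
    x = a * x + b * u + d

print(abs(x - x_ref) < 0.1)  # True: state held near the reference
```

    In the ideal sampled model the reaching term confines the state to a boundary layer around the sliding surface whose width scales with the disturbance bound and the step eta*Ts; higher order DSMC shrinks this layer further, which is the chattering reduction the abstract refers to.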

    Metaheuristics for black-box robust optimisation problems

    Our interest is in the development of algorithms capable of tackling robust black-box optimisation problems, where the number of model runs is limited. When a desired solution cannot be implemented exactly (implementation uncertainty), the aim is to find a robust one: here, a point in the decision variable space such that the worst solution from within an uncertainty region around that point still performs well. This thesis comprises three research papers: one published, one accepted for publication, and one submitted for publication. We initially develop a single-solution based approach, largest empty hypersphere (LEH), which identifies poor-performing points in the decision variable space and repeatedly moves to the centre of the region devoid of all such points. Building on this, we develop population-based approaches using a particle swarm optimisation (PSO) framework, combining elements of the LEH approach, a local descent directions (d.d.) approach for robust problems, and a series of novel features. Finally, we employ an automatic generation of algorithms technique, genetic programming (GP), to evolve a population of PSO-based heuristics for robust problems. We generate algorithmic sub-components, the design rules by which they are combined to form complete heuristics, and an evolutionary GP framework, and identify the best-performing heuristics. With the development of each heuristic we perform experimental testing against comparator approaches on a suite of robust test problems of dimension between 2D and 100D. Performance is shown to improve with each new heuristic. Furthermore, the generation of large numbers of heuristics in the GP process enables an assessment of the best-performing sub-components, which can indicate the desirable features of an effective heuristic for the problem under consideration.
    Good performance is observed for the following characteristics: inner maximisation by random sampling, a small number of inner points, particle-level stopping conditions, a small swarm size, a global topology, and particle movement using a baseline inertia formulation augmented by LEH and d.d. capabilities.
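    The "inner maximisation by random sampling" that the thesis finds effective can be sketched as follows (the objective, uncertainty radius, and sample count below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return float(np.sum(x**2))  # toy objective (assumed)

def worst_case(x, gamma, n_inner=50):
    """Estimate the worst-case objective over a ball of radius gamma
    around x by uniform random sampling (the inner maximisation step)."""
    best = f(x)
    for _ in range(n_inner):
        u = rng.normal(size=x.shape)
        # Rescale to a uniform draw from the gamma-ball around x
        u *= gamma * rng.random() ** (1 / x.size) / np.linalg.norm(u)
        best = max(best, f(x + u))
    return best

x = np.array([0.2, -0.1])
print(worst_case(x, gamma=0.5) >= f(x))  # True by construction
```

    A robust outer search (LEH or PSO, as in the thesis) would then minimise this sampled worst-case value rather than the nominal objective, trading inner-sample accuracy against the limited model-run budget.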

    Active Sampling for Min-Max Fairness

    We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization. The key intuition behind our approach is to use, at each timestep, a datapoint from the group that is worst off under the current model for updating the model. The ease of implementation and the generality of our robust formulation make it an attractive option for improving model performance on disadvantaged groups. For convex learning problems, such as linear or logistic regression, we provide a fine-grained analysis, proving the rate of convergence to a min-max fair solution.
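    The key intuition, updating on a datapoint from the currently worst-off group, can be sketched for logistic regression on synthetic two-group data (data, learning rate, and step count are assumptions; this is not the paper's exact algorithm or analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic groups with different feature distributions (assumed setup)
X = {g: rng.normal(g, 1.0, size=(100, 2)) for g in (0, 1)}
y = {g: (X[g][:, 0] + 0.5 * X[g][:, 1] > 0).astype(float) for g in (0, 1)}
w = np.zeros(2)

def group_loss(g, w):
    """Mean logistic loss of the current model on group g."""
    p = 1 / (1 + np.exp(-X[g] @ w))
    return -np.mean(y[g] * np.log(p + 1e-9) + (1 - y[g]) * np.log(1 - p + 1e-9))

for t in range(2000):
    g = max((0, 1), key=lambda g: group_loss(g, w))  # worst-off group
    i = rng.integers(len(X[g]))                      # sample one of its points
    p = 1 / (1 + np.exp(-X[g][i] @ w))
    w -= 0.1 * (p - y[g][i]) * X[g][i]               # SGD step on that point

print(max(group_loss(g, w) for g in (0, 1)))
```

    Always stepping on the worst-off group drives down the maximum group loss rather than the average, which is the min-max fairness objective the abstract targets.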

    Goal-oriented adaptivity for a conforming residual minimization method in a dual discontinuous Galerkin norm

    We propose a goal-oriented mesh-adaptive algorithm for a finite element method stabilized via residual minimization on dual discontinuous-Galerkin norms. By solving a saddle-point problem, this residual minimization delivers a stable continuous approximation to the solution on each mesh instance, together with a residual projection onto a broken polynomial space that serves as a robust error estimator for minimizing the discrete energy norm via automatic mesh refinement. In this work, we propose and analyze a goal-oriented adaptive algorithm for this stable residual minimization. We solve the primal and adjoint problems using the same saddle-point formulation with different right-hand sides. By solving a third stable problem, we obtain two efficient error estimates to guide goal-oriented adaptivity. We illustrate the performance of this goal-oriented adaptive strategy on advection-diffusion-reaction problems.
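    The saddle-point structure referred to in the abstract can be written, in standard residual-minimization notation (the symbols below are the usual ones and are an assumption, not taken from the paper), as the equivalence

```latex
\min_{u_h \in U_h} \tfrac{1}{2}\,\| \ell - B u_h \|_{V_h'}^2
\quad\Longleftrightarrow\quad
\begin{cases}
(r_h, v_h)_{V_h} + b(u_h, v_h) = \ell(v_h) & \forall\, v_h \in V_h,\\[2pt]
b(w_h, r_h) = 0 & \forall\, w_h \in U_h,
\end{cases}
```

    where $U_h$ is the continuous trial space, $V_h$ the broken (discontinuous-Galerkin) test space with its dual norm, and the auxiliary variable $r_h \in V_h$ is the residual representative whose norm drives the adaptive refinement.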