139 research outputs found

    Robust Orthogonal Complement Principal Component Analysis

    Full text link
    Recently, the robustification of principal component analysis has attracted considerable attention from statisticians, engineers, and computer scientists. In this work we study the type of outliers that are not necessarily apparent in the original observation space but can seriously affect the principal subspace estimation. Based on a mathematical formulation of such transformed outliers, a novel robust orthogonal complement principal component analysis (ROC-PCA) is proposed. The framework combines the popular sparsity-enforcing and low-rank regularization techniques to deal with row-wise as well as element-wise outliers. A non-asymptotic oracle inequality guarantees the accuracy and high breakdown performance of ROC-PCA in finite samples. To tackle the computational challenges, an efficient algorithm is developed on the basis of Stiefel manifold optimization and iterative thresholding. Furthermore, a batch variant is proposed to significantly reduce the cost in ultra-high dimensions. The paper also points out a pitfall of the common practice of SVD reduction in robust PCA. Experiments show the effectiveness and efficiency of ROC-PCA on both synthetic and real data
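
    The sparse-plus-low-rank decomposition behind this line of work can be sketched in a few lines. The following is a minimal alternating scheme, not the paper's ROC-PCA algorithm (which works in the orthogonal complement via Stiefel manifold optimization): it simply alternates a rank-r SVD projection with element-wise soft-thresholding, and the dimensions, rank, and threshold are illustrative assumptions.

```python
import numpy as np

def soft_threshold(M, tau):
    """Element-wise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def sparse_plus_low_rank(X, rank, tau, n_iter=100):
    """Alternate a rank-r SVD projection with soft-thresholding to split
    X into a low-rank part L and a sparse outlier part S."""
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank update: best rank-r approximation of the cleaned data.
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: threshold the residual to isolate outliers.
        S = soft_threshold(X - L, tau)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 30))     # rank-3 signal
S0 = np.zeros((100, 30))
S0[rng.integers(0, 100, 40), rng.integers(0, 30, 40)] = 10.0  # element-wise outliers
L, S = sparse_plus_low_rank(L0 + S0, rank=3, tau=1.0)
print("relative recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```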

    Computational burden reduction in Min-Max MPC

    Get PDF
    Min–max model predictive control (MMMPC) is one of the strategies used to control plants subject to bounded uncertainties. The implementation of MMMPC suffers from a large computational burden due to the complex numerical optimization problem that has to be solved at every sampling time. This paper shows how to overcome this burden by transforming the original problem into a reduced min–max problem whose solution is much simpler. In this way, the range of processes to which MMMPC can be applied is considerably broadened. Proofs based on the properties of the cost function and simulation examples are given in the paper
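
    To see the min-max structure the abstract refers to, consider a toy scalar plant with a bounded additive disturbance. The sketch below is not the paper's reduction; it brute-forces the inner maximization over the vertices of the box uncertainty set (valid here because the quadratic cost is convex in the disturbance) and minimizes the resulting worst-case cost numerically. The plant parameters, horizon, and cost weights are made up for illustration.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Toy scalar plant x+ = a*x + b*u + w with bounded disturbance |w| <= w_max.
a, b, w_max, x0, H = 0.9, 0.5, 0.2, 1.0, 5

def worst_case_cost(u):
    """Inner maximization by vertex enumeration: the cost is convex in the
    disturbance sequence, so its maximum over the box lies at a vertex."""
    worst = -np.inf
    for w in product((-w_max, w_max), repeat=H):
        x, cost = x0, 0.0
        for k in range(H):
            cost += x**2 + 0.1 * u[k]**2
            x = a * x + b * u[k] + w[k]
        worst = max(worst, cost + x**2)  # terminal penalty
    return worst

res = minimize(worst_case_cost, np.zeros(H), method="Nelder-Mead")
print("min-max input sequence:", np.round(res.x, 3))
```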

    Applications of non-convex optimization in portfolio selection

    Get PDF
    The present thesis deals with non-convex optimization in the field of portfolio selection. Thematically, the work falls into two parts. (1) Solving mean-risk problems with Value-at-Risk as the risk measure: methods are presented for finding efficient portfolios in the case of discretely distributed asset returns. Because of the non-convexity of the Value-at-Risk, the problems treated are non-convex and can be written as differences of convex functions. Both branch-and-bound and approximate solution methods are applied, and the global solutions of the branch-and-bound are compared with those of the approximate methods. (2) Robustification of portfolio selection problems: in recent years the literature has seen increasing efforts to robustify optimization problems against uncertainty in their parameters. Robustified solutions have the property that moderate parameter variations do not lead to dramatic deteriorations of the solutions. In robust portfolio optimization the main concern is to control the solutions with respect to deviations in the distributions of the returns of the financial instruments involved. In this thesis, probability metrics are used to define so-called ambiguity sets, which contain all distributions that, given the available data, are plausible candidates for the true distribution. The metric employed, the Kantorovich (Wasserstein) metric, makes it possible, via results from nonparametric statistics, to interpret the ambiguity sets as confidence sets around the empirical distribution estimators. With these tools, mean-risk problems are robustified. The resulting problems are at first infinite and are then reformulated as non-convex semi-definite problems; their solution rests on an algorithm for semi-definite problems with infinitely many constraints and on methods for approximately solving non-convex problems (the so-called Difference of Convex Algorithm).

    The thesis is concerned with the application of non-convex programming to problems of portfolio optimization in a single-stage stochastic optimization framework. In particular, two different classes of portfolio selection problems are investigated. In both problems a scenario-based approach to modeling uncertainty is pursued, i.e. the randomness in the models is always described by finitely many joint realizations of the asset returns. The thesis is structured into three chapters, briefly outlined below. (1) A D.C. formulation of Value-at-Risk constrained optimization: the aim of this chapter is to solve mean-risk models with the Value-at-Risk as the risk measure. In the case of finitely supported return distributions, it is shown that the Value-at-Risk can be written as a D.C. function, so the mean-risk problem corresponds to a D.C. problem. The non-convex problem of optimizing the Value-at-Risk is treated extensively in the literature, with various approximate solution techniques as well as some approaches for solving the problem globally. The reformulation as a D.C. problem provides insight into the structure of the problem, which can be exploited to devise a branch-and-bound algorithm for finding global solutions of small to medium sized instances. The possibility of refining epsilon-optimal solutions obtained from the branch-and-bound framework via local search heuristics is also discussed. (2) Value-at-Risk constrained optimization using the DCA: in this part of the thesis the Value-at-Risk problem is investigated again, with the aim of solving problems of realistic size in relatively short time. Since Value-at-Risk optimization is NP-hard, this can only be achieved by sacrificing guaranteed global optimality. Therefore a local solution technique for unconstrained D.C. problems, the Difference of Convex Algorithm (DCA), is employed. To solve the problem, a new variant of the DCA, the so-called 'hybrid DCA', is proposed, which preserves the favorable convergence properties of the computationally hard 'complete DCA' as well as the computational tractability of the so-called 'simple DCA'. The method is tested on small problems, where its solutions coincide with the global optima obtained with the branch-and-bound algorithm in most cases. For realistic problem sizes, the proposed method consistently outperforms known heuristic approximations implemented in commercial software. (3) A framework for optimization under ambiguity: the last part of the thesis is devoted to a topic that has received much attention in the recent stochastic programming literature: robust optimization. More specifically, the aim is to robustify single-stage stochastic optimization models with respect to uncertainty about the distributions of the random variables involved in the formulation of the stochastic program, explicitly taking ambiguity about the distributions into account while imposing only very weak restrictions on the probability models under consideration. Ambiguity is defined as possible deviation from a discrete reference measure Q (here the empirical measure). To this end, a so-called ambiguity set B is defined that contains all measures that can reasonably be assumed to be the real measure P given the available data. Since the idea is to devise a general approach that does not restrict P to any specific parametric family, the ambiguity sets are defined by means of general probability metrics. A worst-case approach is then adopted to robustify the problem with respect to B. The resulting optimization problems turn out to be infinite and are reduced to non-convex semi-definite problems. The last part shows how to solve these problems numerically for the example of a mean-risk portfolio selection problem with Expected Shortfall under a Threshold as the risk measure; the DCA, combined with an iterative algorithm that approximates the infinite set of constraints by finitely many, is used to obtain numerical solutions to the problem
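
    The Difference of Convex Algorithm mentioned above admits a very compact generic form: at each iteration, replace the concave part -h by its linearization at the current iterate and minimize the resulting convex surrogate. A minimal sketch follows; it is the plain 'simple DCA' on a toy objective, not the thesis's hybrid variant, and the choice of g, h, and the subgradient is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

def g(x):     return np.sum(x**2)        # convex part
def h(x):     return np.sum(np.abs(x))   # convex part being subtracted
def sub_h(x): return np.sign(x)          # a subgradient of h

def dca(x0, n_iter=30):
    """Plain DCA for f = g - h: linearize h at the iterate and minimize
    the convex surrogate g(x) - <s, x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        s = sub_h(x)
        x = minimize(lambda y: g(y) - s @ y, x).x
    return x

# f(x) = sum(x_i^2 - |x_i|) has local minima at x_i = +/- 1/2.
print(dca([2.0, -3.0]))   # converges to [0.5, -0.5]
```

    Because h is convex, each surrogate majorizes f up to a constant, so the objective is non-increasing along the iterates; this is the descent property that makes DCA attractive as a cheap local method.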

    Robust filtering for bilinear uncertain stochastic discrete-time systems

    Get PDF
    This paper deals with the robust filtering problem for uncertain bilinear stochastic discrete-time systems with estimation error variance constraints. The uncertainties are allowed to be norm-bounded and enter both the state and measurement matrices. We focus on the design of linear filters such that, for all admissible parameter uncertainties, the error state of the bilinear stochastic system is mean-square bounded and the steady-state variance of the estimation error of each state does not exceed its individually prespecified value. It is shown that the design of the robust filters can be carried out by solving certain algebraic quadratic matrix inequalities. In particular, we establish both the existence conditions and an explicit expression for the desired robust filters. A numerical example is included to show the applicability of the present method
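
    The variance constraint at the heart of this design can be checked numerically for a candidate gain by iterating the estimation-error covariance recursion. The sketch below does only this nominal-case check for an assumed linear system and gain L; it does not solve the paper's quadratic matrix inequalities or handle the norm-bounded uncertainties, and all matrices and bounds are illustrative.

```python
import numpy as np

A = np.array([[0.8, 0.1], [0.0, 0.9]])  # state matrix
C = np.array([[1.0, 0.0]])              # measurement matrix
Q = 0.01 * np.eye(2)                    # process-noise covariance
R = np.array([[0.04]])                  # measurement-noise covariance
L = np.array([[0.5], [0.2]])            # candidate filter gain (illustrative)

# Iterate the error-covariance recursion P+ = (A-LC) P (A-LC)^T + Q + L R L^T
# until it settles near its steady-state value.
P = np.eye(2)
for _ in range(500):
    Acl = A - L @ C
    P = Acl @ P @ Acl.T + Q + L @ R @ L.T

bounds = np.array([0.05, 0.05])  # prespecified per-state variance bounds
print("steady-state error variances:", np.diag(P))
print("constraint satisfied:", bool(np.all(np.diag(P) <= bounds)))
```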

    Robust Learning for Smoothed Online Convex Optimization with Feedback Delay

    Full text link
    We study a challenging form of Smoothed Online Convex Optimization, a.k.a. SOCO, including multi-step nonlinear switching costs and feedback delay. We propose a novel machine learning (ML) augmented online algorithm, Robustness-Constrained Learning (RCL), which combines untrusted ML predictions with a trusted expert online algorithm via constrained projection to robustify the ML prediction. Specifically, we prove that RCL is able to guarantee (1+λ)-competitiveness against any given expert for any λ>0, while also explicitly training the ML model in a robustification-aware manner to improve the average-case performance. Importantly, RCL is the first ML-augmented algorithm with a provable robustness guarantee in the case of multi-step switching cost and feedback delay. We demonstrate the improvement of RCL in both robustness and average performance using battery management for electrifying transportation as a case study. Comment: Accepted by NeurIPS 202
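
    The constrained-projection idea can be sketched in a few lines: the untrusted ML action is projected onto a neighborhood of the trusted expert's action, so it can never drift arbitrarily far from the expert. The budget rule below is a stand-in invented for illustration; the paper's actual robustness constraint tracks cumulative costs under multi-step switching costs and feedback delay.

```python
import numpy as np

def rcl_action(ml_action, expert_action, budget):
    """Project the untrusted ML action onto a ball of radius `budget`
    around the trusted expert action."""
    d = ml_action - expert_action
    dist = np.linalg.norm(d)
    if dist <= budget:
        return ml_action
    return expert_action + (budget / dist) * d

# Toy episode: the expert tracks a reference, the ML model sometimes overshoots.
lam = 0.5  # robustness budget (illustrative stand-in for the paper's lambda)
for t, (ml, expert) in enumerate([(0.2, 0.0), (3.0, 0.1), (0.4, 0.3)]):
    act = rcl_action(np.array([ml]), np.array([expert]), budget=lam)
    print(f"step {t}: ml={ml:+.1f} expert={expert:+.1f} -> action={act[0]:+.2f}")
```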

    Numerical Methods of Optimum Experimental Design Based on a Second-Order Approximation of Confidence Regions

    Get PDF
    A successful application of model-based simulation and optimization of dynamic processes requires an exact calibration of the underlying mathematical models. Here, a fundamental task is the estimation of unknown, nature-given model coefficients by means of real observations. After an appropriate numerical treatment of the differential systems, the parameters can be estimated as the solution of a finite-dimensional nonlinear constrained parameter estimation problem. Because measurements always contain errors, the resulting parameter estimate cannot be regarded as final, and a sensitivity analysis is required to quantify its statistical accuracy. The goal of the design of optimal experiments is to identify those measurement times and experimental conditions which allow a parameter estimate of maximal statistical accuracy. The design of optimal experiments problem can itself be formulated as an optimization problem, where the objective function is a suitable quality criterion based on the sensitivity analysis of the parameter estimation problem. In this thesis, we develop a quadratic sensitivity analysis to enable a better assessment of the statistical accuracy of a parameter estimate in the case of highly nonlinear model functions. The newly introduced sensitivity analysis is based on a quadratically approximated confidence region, an expansion of the commonly used linearized confidence region. The quadratically approximated confidence region is analyzed extensively and adequate bounds are established. It is shown that exact bounds on the quadratic components can be obtained by solving symmetric eigenvalue problems. One main result of this thesis is that the quadratic part is essentially bounded by two Lipschitz constants, which also characterize the Gauss-Newton convergence properties. This bound also yields an approximation error for the validity of the linearized confidence regions. Furthermore, we compute a quadratic approximation of the covariance matrix, which provides another means of statistically assessing the solution of a parameter estimation problem. The good approximation properties of the newly introduced sensitivity analysis are illustrated in several numerical examples. In order to robustify the design of optimal experiments, we develop a new objective function, the Q-criterion, based on the introduced sensitivity analysis. In addition to the trace of the linear approximation of the covariance matrix, the Q-criterion contains the above-mentioned Lipschitz constants. Here, we focus in particular on the numerical computation of an adequate approximation of these constants. The robustness of the new objective function with respect to parameter uncertainties is investigated and compared to a worst-case formulation of the design of optimal experiments problem. It is revealed that the Q-criterion covers the worst-case approach of the design of optimal experiments problem based on the A-criterion. Moreover, the properties of the new objective function are examined in several examples, where it becomes evident that the Q-criterion drastically improves the Gauss-Newton convergence rate of the subsequent parameter estimation. Furthermore, this thesis considers efficient and numerically stable methods of parameter estimation and the design of optimal experiments for the treatment of multiple-experiment parameter estimation problems. 
In terms of parameter estimation and sensitivity analysis, we propose a parallel computation of the Gauss-Newton increments and the covariance matrix based on orthogonal decompositions. Concerning the design of optimal experiments, we develop a parallel approach to compute the trace of the covariance matrix and its derivative
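
    The linearized sensitivity analysis this work builds on can be made concrete with a standard A-criterion computation: form the parameter Jacobian of the model response at candidate measurement times and pick the design that minimizes the trace of the covariance approximation sigma^2 (J^T J)^{-1}. This baseline sketch omits the thesis's quadratic correction and the Q-criterion's Lipschitz terms; the exponential-decay model and all numbers are assumptions.

```python
import numpy as np
from itertools import combinations

def jacobian(times, p0=1.0, p1=0.5):
    """Sensitivities dy/dp of the model y = p0 * exp(-p1 * t)."""
    t = np.asarray(times)
    J = np.empty((len(t), 2))
    J[:, 0] = np.exp(-p1 * t)
    J[:, 1] = -p0 * t * np.exp(-p1 * t)
    return J

def a_criterion(times, sigma=0.05):
    """Trace of the linearized covariance sigma^2 * (J^T J)^{-1}."""
    J = jacobian(times)
    return sigma**2 * np.trace(np.linalg.inv(J.T @ J))

# Exhaustively score all 4-point designs drawn from a grid of candidate times.
candidates = np.linspace(0.1, 8.0, 20)
best = min(combinations(candidates, 4), key=a_criterion)
print("A-optimal 4-point design:", np.round(best, 2))
```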