1,529 research outputs found

    A Generalized Lagrange Multiplier Method for Support Vector Regression with Imposed Symmetry

    Get PDF
    This thesis presents an approach to support vector regression that extends Vapnik's classic formulation. After recalling that the classic formulation contains a Lasso regularization structure in its dual form, we propose a generalized Lagrangian function with additional terms that introduce Ridge regularization into the dual problem for the case with imposed symmetry. By including both regularization methods, the resulting dual problem with the generalized Lagrangian comprises an elastic net regularization structure; as an immediate consequence, the classical formulation is a particular case of the current proposal. Finally, to demonstrate the capabilities of this approach, the document includes examples of predicting some benchmark problems.
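    The abstract moves between Lasso, Ridge, and elastic net structures in the dual. As a reminder of how these penalties relate (a generic sketch, not the thesis's exact dual problem), the three regularizers on a dual coefficient vector can be written as

        \Omega_{\mathrm{Lasso}}(\alpha) = \lambda_1 \lVert \alpha \rVert_1, \qquad
        \Omega_{\mathrm{Ridge}}(\alpha) = \lambda_2 \lVert \alpha \rVert_2^2, \qquad
        \Omega_{\mathrm{EN}}(\alpha) = \lambda_1 \lVert \alpha \rVert_1 + \lambda_2 \lVert \alpha \rVert_2^2,

    so setting \lambda_2 = 0 collapses the elastic net to the Lasso term, which is why the classical formulation reappears as a particular case.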

    Statistical Testing of Optimality Conditions in Multiresponse Simulation-based Optimization (Revision of 2005-81)

    Get PDF
    This paper studies simulation-based optimization with multiple outputs. It assumes that the simulation model has one random objective function and must satisfy given constraints on the other random outputs. It presents a statistical procedure for testing whether a specific input combination (proposed by some optimization heuristic) satisfies the Karush-Kuhn-Tucker (KKT) first-order optimality conditions. The paper focuses on "expensive" simulations, which have small sample sizes. The paper applies the classic t test to check whether the specific input combination is feasible, and whether any constraints are binding; it applies bootstrapping (resampling) to test the estimated gradients in the KKT conditions. The new methodology is applied to three examples, which give encouraging empirical results.
    Keywords: stopping rule; metaheuristics; response surface methodology; design of experiments
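    As a rough sketch of the bootstrapping step described above (not the paper's exact test statistic; the function names, the replicate layout, and the nonnegative-least-squares formulation are illustrative assumptions), one could resample per-replicate gradient estimates and measure how far the averaged objective gradient lies from the cone spanned by the binding-constraint gradients:

        import numpy as np
        from scipy.optimize import nnls

        def kkt_residual(grad_obj, grad_constraints):
            # Distance from -grad_obj to the cone of nonnegative combinations of the
            # binding-constraint gradients; a residual near zero is consistent with KKT.
            A = np.column_stack(grad_constraints)   # one column per binding constraint
            _, residual = nnls(A, -grad_obj)        # nonnegative least squares
            return residual

        def bootstrap_kkt_residuals(obj_grads, con_grads, n_boot=1000, seed=0):
            # obj_grads: (m, d) per-replicate objective-gradient estimates.
            # con_grads: list of (m, d) arrays, one per binding constraint.
            rng = np.random.default_rng(seed)
            m = obj_grads.shape[0]
            stats = []
            for _ in range(n_boot):
                idx = rng.integers(0, m, size=m)    # resample replicates with replacement
                g = obj_grads[idx].mean(axis=0)
                cs = [c[idx].mean(axis=0) for c in con_grads]
                stats.append(kkt_residual(g, cs))
            return np.array(stats)

    Comparing the residual at the original (non-resampled) estimates with this bootstrap distribution then gives an informal p-value for rejecting the KKT conditions at the tested input combination.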

    Linearized Alternating Direction Method with Parallel Splitting and Adaptive Penalty for Separable Convex Programs in Machine Learning

    Full text link
    Many problems in machine learning and other fields can be (re)formulated as linearly constrained separable convex programs. In most cases, there are multiple blocks of variables. However, the traditional alternating direction method (ADM) and its linearized version (LADM, obtained by linearizing the quadratic penalty term) are designed for the two-block case and cannot be naively generalized to the multi-block case. There is therefore great demand for extending ADM-based methods to the multi-block case. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, with refined parameter estimation we devise a practical version of LADMPSAP for faster convergence. Finally, we generalize LADMPSAP to handle programs with more difficult objective functions by linearizing part of the objective function as well. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed-form solutions and the sparsity and low-rankness of the iterates can be preserved during the iteration. It is also highly parallelizable and hence well suited to parallel or distributed computing. Numerical experiments testify to the advantages of LADMPSAP in speed and numerical accuracy.
    Comment: Preliminary version published at the Asian Conference on Machine Learning 201
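    To make the structure of the method concrete, the following is a minimal numerical sketch of linearized ADM with parallel (Jacobi-style) splitting for the l1-penalized multi-block problem min sum_i ||x_i||_1 subject to sum_i A_i x_i = b; the step-size constants and the penalty update are simplified placeholders, not the paper's exact LADMPSAP rules or convergence-guaranteeing choices:

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ladm_parallel_l1(A_blocks, b, n_iter=500, beta=1.0, rho=1.5, beta_max=1e6):
            # Sketch: min sum_i ||x_i||_1  s.t.  sum_i A_i x_i = b.
            # Each linearized subproblem has a closed-form soft-thresholding solution.
            xs = [np.zeros(A.shape[1]) for A in A_blocks]
            lam = np.zeros_like(b)
            # Per-block linearization constants (should dominate n_blocks * ||A_i||^2).
            etas = [len(A_blocks) * np.linalg.norm(A, 2) ** 2 + 1e-12 for A in A_blocks]
            for _ in range(n_iter):
                r = sum(A @ x for A, x in zip(A_blocks, xs)) - b       # constraint residual
                # Parallel block updates, each linearized at the current iterate.
                xs = [soft_threshold(x - A.T @ (lam + beta * r) / (beta * eta),
                                     1.0 / (beta * eta))
                      for A, x, eta in zip(A_blocks, xs, etas)]
                r = sum(A @ x for A, x in zip(A_blocks, xs)) - b
                lam = lam + beta * r                                    # dual ascent step
                beta = min(beta_max, rho * beta)                        # simplified adaptive penalty
            return xs

    Because every block is updated from the same previous iterate, the updates can be executed in parallel, which is the property the abstract emphasizes for distributed computing.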

    Support Vector Regression Based on a Generalized Lagrange Multiplier Method

    Get PDF
    This research presents an approach to support vector regression based on the epsilon L1 and L2 formulations. In contrast to standard formulations, the dual optimization problem is derived from an extended Lagrangian function whose additional terms introduce a weighted elastic net regularization structure. Additionally, the research compares this proposal with classical support vector regression and with LASSO regression, highlighting their differences and similarities against these standard models. To demonstrate the capabilities of this approach, the document includes examples of predicting some benchmark functions.
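    The proposed weighted elastic net formulation is not reproduced here, but as a rough illustration of the kind of baseline comparison the abstract refers to, one could evaluate classical epsilon-SVR and LASSO regression on a standard synthetic benchmark (scikit-learn's Friedman #1 function is used purely as a stand-in for the benchmark functions mentioned):

        from sklearn.datasets import make_friedman1
        from sklearn.linear_model import Lasso
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR

        # Friedman #1: a common synthetic regression benchmark.
        X, y = make_friedman1(n_samples=400, n_features=10, noise=0.5, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        for name, model in [("eps-SVR (RBF kernel)", SVR(kernel="rbf", C=10.0, epsilon=0.1)),
                            ("LASSO (linear)", Lasso(alpha=0.05))]:
            model.fit(X_tr, y_tr)
            mse = mean_squared_error(y_te, model.predict(X_te))
            print(f"{name}: test MSE = {mse:.3f}")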

    Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications

    Full text link
    Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is the underlying structure of clean data known to be low-rank, but the exact rank of the clean data is also known. Yet, when applying conventional rank minimization to those problems, the objective function is formulated in a way that does not fully utilize this a priori target rank information. This observation motivates us to investigate whether there is a better alternative when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values, which implicitly encourages the target rank constraint. Our experimental analyses show that, when the number of samples is deficient, our approach leads to a higher success rate than conventional rank minimization, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, motion edge detection, photometric stereo, and image alignment and recovery, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
    Comment: Accepted in Transactions on Pattern Analysis and Machine Intelligence (TPAMI). To appear.
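    The core change relative to nuclear norm minimization is the proximal step: instead of soft-thresholding all singular values, only those beyond the known target rank are shrunk. A minimal sketch of such a partial singular value thresholding operator (the surrounding RPCA/ADM loop is omitted, and the function name is illustrative):

        import numpy as np

        def partial_svt(M, target_rank, tau):
            # Keep the largest `target_rank` singular values untouched and
            # soft-threshold the remaining ones by tau.
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s_tail = np.maximum(s[target_rank:] - tau, 0.0)   # shrink only the tail
            s_new = np.concatenate([s[:target_rank], s_tail])
            return (U * s_new) @ Vt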