
    Simplex basis function based sparse least squares support vector regression

    In this paper, a novel sparse least squares support vector regression algorithm, referred to as LSSVR-SBF, is introduced. It uses a new low-rank kernel based on simplex basis functions with a set of nonlinear parameters. It is shown that the proposed model can be represented as a sparse linear regression model over the simplex basis functions. We propose a fast algorithm that computes the least squares support vector regression solution at O(N) cost by avoiding direct kernel matrix inversion. An iterative estimation algorithm is proposed to optimize the nonlinear parameters associated with the simplex basis functions by minimizing the model mean square error via gradient descent. The fast least squares solution and the gradient descent algorithm are applied alternately; a minimal sketch of this alternating scheme follows below. Finally, it is shown that the model has a dual representation as a piecewise linear model with respect to the system input. Numerical experiments are carried out to demonstrate the effectiveness of the proposed approaches.
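
    As a rough illustration of that alternating scheme, the Python sketch below alternates a regularized least squares solve for the linear weights with a gradient descent step on the basis functions' nonlinear parameters. The abstract does not specify the simplex basis functions, so a piecewise-linear hat basis phi_k(x) = max(0, 1 - ||x - c_k||_1 / w_k) stands in for them; all names, the finite-difference gradient, and the settings are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def hat_basis(X, C, w):
        # phi_k(x) = max(0, 1 - ||x - c_k||_1 / w_k); an assumed stand-in
        # for the paper's simplex basis functions.
        D = np.abs(X[:, None, :] - C[None, :, :]).sum(axis=2)  # (N, K) L1 distances
        return np.maximum(0.0, 1.0 - D / w[None, :])

    def fit_alternating(X, y, C, w, lam=1e-3, lr=1e-2, n_outer=50):
        for _ in range(n_outer):
            # Weight step: regularized least squares over K basis functions,
            # O(N K^2) -- linear in N, no N x N kernel matrix is formed.
            P = hat_basis(X, C, w)
            beta = np.linalg.solve(P.T @ P + lam * np.eye(P.shape[1]), P.T @ y)

            # Nonlinear-parameter step: one gradient descent move on the
            # widths to reduce the mean square error (finite differences keep
            # this sketch short; the paper derives gradients analytically).
            def mse(w_):
                r = hat_basis(X, C, w_) @ beta - y
                return float(r @ r) / len(y)
            eps = 1e-4
            g = np.array([(mse(w + eps * e) - mse(w - eps * e)) / (2 * eps)
                          for e in np.eye(len(w))])
            w = np.maximum(w - lr * g, 1e-3)  # keep widths positive
        return beta, w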

    Level Set Methods for Stochastic Discontinuity Detection in Nonlinear Problems

    Stochastic physical problems governed by nonlinear conservation laws are challenging due to solution discontinuities in stochastic and physical space. In this paper, we present a level set method to track discontinuities in stochastic space by solving a Hamilton-Jacobi equation. By introducing a speed function that vanishes at discontinuities, the iso-zero of the level set problem coincides with the discontinuities of the conservation law. The level set problem is solved on a sequence of successively finer grids in stochastic space. The method is adaptive in the sense that costly evaluations of the conservation law of interest are only performed in the vicinity of the discontinuities during the refinement stage. In regions of stochastic space where the solution is smooth, a surrogate method replaces expensive evaluations of the conservation law. The proposed method is tested in conjunction with different sets of localized orthogonal basis functions on simplex elements, as well as frames based on piecewise polynomials conforming to the level set function. The performance of the proposed method is compared to existing adaptive multi-element generalized polynomial chaos methods.
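
    To make the mechanism concrete, here is a minimal one-dimensional Python sketch, not the paper's method (which works adaptively in stochastic space with surrogates and refinement): a level set function is advanced under phi_t + F(x)|phi_x| = 0 with a first-order Godunov upwind scheme, where the assumed speed F = exp(-kappa |u'|) vanishes where a surrogate solution u has a large gradient, so the zero level set stalls on, and thereby locates, the discontinuity.

    import numpy as np

    def locate_discontinuity(u, phi0, dx, dt, n_steps, kappa=5.0):
        # Speed ~1 where the surrogate u is smooth, ~0 at a jump of u,
        # so the advancing front gets pinned at the discontinuity.
        F = np.exp(-kappa * np.abs(np.gradient(u, dx)))
        phi = phi0.copy()
        for _ in range(n_steps):
            # Godunov upwind approximation of |phi_x| for F >= 0
            dminus = np.diff(phi, prepend=phi[0]) / dx
            dplus = np.diff(phi, append=phi[-1]) / dx
            grad = np.sqrt(np.maximum(np.maximum(dminus, 0.0) ** 2,
                                      np.minimum(dplus, 0.0) ** 2))
            phi = phi - dt * F * grad
        return phi  # the zero crossing of phi marks the detected jump

    # Toy run: surrogate with a jump at x = 0.6, front started at x = 0.2
    x = np.linspace(0.0, 1.0, 401)
    dx = x[1] - x[0]
    u = np.where(x < 0.6, 1.0, 0.0)
    phi = locate_discontinuity(u, x - 0.2, dx, dt=0.5 * dx, n_steps=1500)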

    Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms

    This paper treats the problem of minimizing a general continuously differentiable function subject to sparsity constraints. We present and analyze several different optimality criteria based on the notions of stationarity and coordinate-wise optimality. These conditions are then used to derive three numerical algorithms aimed at finding points satisfying the resulting optimality criteria: the iterative hard thresholding method and the greedy and partial sparse-simplex methods. The first algorithm is essentially a gradient projection method, while the remaining two are of coordinate descent type. The theoretical convergence of these methods and their relations to the derived optimality conditions are studied. The algorithms and results are illustrated by several numerical examples.
    Comment: submitted to SIAM Optimization
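
    A hedged sketch of the first of these algorithms, iterative hard thresholding, on a sparse least squares toy problem; the function names, step size, iteration count, and toy data are illustrative assumptions, not the paper's code.

    import numpy as np

    def iterative_hard_thresholding(grad_f, x0, s, step, n_iter=300):
        # Gradient projection onto the sparsity constraint ||x||_0 <= s:
        # a gradient step followed by keeping only the s largest entries.
        x = x0.copy()
        for _ in range(n_iter):
            x = x - step * grad_f(x)
            x[np.argsort(np.abs(x))[:-s]] = 0.0  # hard-threshold to s nonzeros
        return x

    # Toy problem: f(x) = 0.5 ||A x - b||^2 with a 3-sparse ground truth
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
    b = A @ x_true
    grad = lambda x: A.T @ (A @ x - b)
    x_hat = iterative_hard_thresholding(grad, np.zeros(50), s=3,
                                        step=1.0 / np.linalg.norm(A, 2) ** 2)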

    Randomized Sketches of Convex Programs with Sharp Guarantees

    Random projection (RP) is a classical technique for reducing storage and computational costs. We analyze RP-based approximations of convex programs, in which the original optimization problem is approximated by the solution of a lower-dimensional problem. Such dimensionality reduction is essential in computation-limited settings, since the complexity of general convex programming can be quite high (e.g., cubic for quadratic programs, and substantially higher for semidefinite programs). In addition to computational savings, random projection also reduces memory usage and has useful properties for privacy-sensitive optimization. We prove that the approximation ratio of this procedure can be bounded in terms of the geometry of the constraint set. For a broad class of random projections, including those based on various sub-Gaussian distributions as well as randomized Hadamard and Fourier transforms, the data matrix defining the cost function can be projected down to the statistical dimension of the tangent cone of the constraints at the original solution, which is often substantially smaller than the original dimension. We illustrate consequences of our theory for various cases, including unconstrained and ℓ1-constrained least squares, support vector machines, and low-rank matrix estimation, and discuss implications for privacy-sensitive optimization as well as connections with de-noising and compressed sensing.
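
    As a hedged illustration of the simplest case, here is a Python sketch of unconstrained least squares approximated via a Gaussian (sub-Gaussian) random projection; the sketch size m is chosen comfortably above the number of columns, which bounds the statistical dimension in the unconstrained case. Names and sizes are illustrative assumptions.

    import numpy as np

    def sketched_least_squares(A, b, m, seed=1):
        # Project the data down to m rows with a Gaussian sketch S, then
        # solve the smaller problem min_x ||S A x - S b||_2.
        S = np.random.default_rng(seed).standard_normal((m, A.shape[0])) / np.sqrt(m)
        x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
        return x

    # Compare against the unsketched solution on a tall toy problem
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5000, 20))
    b = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(5000)
    x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_sk = sketched_least_squares(A, b, m=200)
    print(np.linalg.norm(x_sk - x_full))  # small when m >> statistical dimension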