11 research outputs found

    Globally optimal univariate spline approximations


    Algorithms for Nonconvex Optimization Problems in Machine Learning and Statistics

    The purpose of this thesis is the design of algorithms that can be used to determine optimal solutions to nonconvex data approximation problems. In Part I of this thesis, we consider a very general class of nonconvex and large-scale data approximation problems and devise an algorithm that efficiently computes locally optimal solutions to these problems. As a trust-region Newton-CG method, the algorithm can exploit directions of negative curvature to escape saddle points, which might otherwise slow down the optimization process on nonconvex problems. We present results of numerical experiments on convex and nonconvex problems that support our claim that the algorithm has significant advantages over methods such as stochastic gradient descent and its variance-reduced variants. In Part II, we consider the univariate least-squares spline approximation problem with free knots, which is known to possess a large number of local minima far from the globally optimal solution. Since in typical applications neither the dimension of the decision variable nor the number of data points is particularly large, it is possible to exploit the specific problem structure to devise algorithmic approaches that approximate the globally optimal solution for problem instances of relevant size. We propose to approximate the continuous original problem by a combinatorial optimization problem, and we investigate two algorithmic approaches for computing the optimal solution of the latter.
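
    A minimal sketch of the Part I idea, under stated assumptions: SciPy's 'trust-ncg' solver is a trust-region Newton-CG method of the same family (it is not the thesis's algorithm), and on the nonconvex test function below its inner CG iteration encounters negative curvature near the saddle at the origin and still reaches a minimizer.

    import numpy as np
    from scipy.optimize import minimize

    # Nonconvex test function with a saddle point at the origin and
    # global minimizers at (0, +1) and (0, -1), where f = -0.5.
    f = lambda z: z[0]**2 - z[1]**2 + 0.5 * z[1]**4
    grad = lambda z: np.array([2 * z[0], -2 * z[1] + 2 * z[1]**3])
    hess = lambda z: np.array([[2.0, 0.0], [0.0, -2.0 + 6.0 * z[1]**2]])

    # Start close to the saddle; the Hessian is indefinite there, so the
    # inner CG loop finds a direction of negative curvature to follow.
    res = minimize(f, x0=np.array([0.1, 1e-3]), jac=grad, hess=hess,
                   method='trust-ncg')
    print(res.x, res.fun)  # approximately (0, +1) or (0, -1), value -0.5

    For Part II, a brute-force baseline (not the thesis's two algorithmic approaches) illustrates the combinatorial reformulation: restrict the free interior knots to a finite candidate grid and search exhaustively over knot subsets, which is feasible only for small instances. Grid size, knot count, and test data below are illustrative assumptions.

    from itertools import combinations
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(8 * np.pi * x**2) + 0.05 * rng.standard_normal(x.size)

    candidates = np.linspace(0.05, 0.95, 19)  # candidate interior knot sites
    n_knots, degree = 4, 3

    best_err, best_knots = np.inf, None
    for knots in combinations(candidates, n_knots):
        try:
            spl = LSQUnivariateSpline(x, y, t=list(knots), k=degree)
        except ValueError:  # knot layout violates Schoenberg-Whitney conditions
            continue
        err = float(np.sum((spl(x) - y) ** 2))
        if err < best_err:
            best_err, best_knots = err, knots
    print(best_knots, best_err)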

    Demand response in a market environment


    Confidence Region and Intervals for Sparse Penalized Regression Using Variational Inequality Techniques

    With the abundance of large data, sparse penalized regression techniques are commonly used in data analysis because they perform variable selection and prediction simultaneously. By introducing bias into the estimators, sparse penalized regression methods can often select a simpler model than unpenalized regression. A number of convex as well as nonconvex penalties have been proposed in the literature to achieve sparsity. Despite intense work in this area, it remains unclear how to perform valid inference for sparse penalized regression with a general penalty. In this work, making use of state-of-the-art optimization tools from variational inequality theory, we propose a unified framework for constructing confidence intervals for sparse penalized regression with a wide range of penalties, including the well-known least absolute shrinkage and selection operator (LASSO) penalty and the minimax concave penalty (MCP). We study inference for two types of parameters: the parameters under the population version of the penalized regression and the parameters of the underlying linear model. Theoretical convergence properties of the proposed methods are obtained. Simulated and real data examples are presented to demonstrate the validity and effectiveness of the proposed inference procedure.
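
    A minimal sketch of the problem setting only, not the paper's variational-inequality procedure: fit the LASSO on synthetic sparse data and attach naive pairs-bootstrap percentile intervals, a baseline known to be unreliable for LASSO-type estimators and shown here purely to fix the objects of inference (penalized coefficient estimates and intervals around them). All data, names, and parameter values below are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 200, 10
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = [1.5, -2.0, 1.0]              # sparse ground truth
    y = X @ beta + rng.standard_normal(n)

    lasso = Lasso(alpha=0.1).fit(X, y)       # penalized point estimate

    B = 500
    boot = np.empty((B, p))
    for b in range(B):
        idx = rng.integers(0, n, size=n)     # resample (x_i, y_i) pairs
        boot[b] = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_

    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # naive 95% intervals
    print(np.column_stack([lasso.coef_, lo, hi]))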