19 research outputs found

    Linear Inverse Problems with Hessian-Schatten Total Variation

    Full text link
    In this paper, we characterize the class of extremal points of the unit ball of the Hessian-Schatten total variation (HTV) functional. The underlying motivation for our work stems from a general representer theorem that characterizes the solution set of regularized linear inverse problems in terms of the extremal points of the regularization ball. Our analysis is mainly based on studying the class of continuous and piecewise linear (CPWL) functions. In particular, we show that in dimension d=2, CPWL functions are dense in the unit ball of the HTV functional. Moreover, we prove that a CPWL function is extremal if and only if its Hessian is minimally supported. For the converse, we prove that the density result (which we have only proven for dimension d=2) implies that the closure of the CPWL extremal points contains all extremal points.
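    To make the setting concrete, here is a schematic form of the regularized linear inverse problem and of the representer-theorem structure the abstract alludes to; the notation (linear measurement operator \nu, data y, extremal points e_k) is ours, and the technical conditions of the theorem are omitted:

    \[
    f^\star \in \arg\min_{f}\ \mathrm{HTV}(f) \quad \text{subject to} \quad \boldsymbol{\nu}(f) = \mathbf{y},
    \qquad
    f^\star = \sum_{k=1}^{K} \alpha_k\, e_k, \quad \alpha_k \ge 0,
    \]

    where each e_k is an extremal point of the unit ball {f : HTV(f) <= 1} and K is bounded by the number of measurements. Characterizing these extremal points therefore describes the building blocks of the solution set, which is why the CPWL analysis matters.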

    Rank-One Matrix Completion with Automatic Rank Estimation via L1-Norm Regularization

    Get PDF
    Completing a matrix from a small subset of its entries, i.e., matrix completion, is a challenging problem arising in many real-world applications, such as machine learning and computer vision. One popular approach to solving the matrix completion problem is based on low-rank decomposition/factorization. Low-rank matrix decomposition-based methods often require a prespecified rank, which is difficult to determine in practice. In this paper, we propose a novel low-rank decomposition-based matrix completion method with automatic rank estimation. Our method is based on rank-one approximation, where a matrix is represented as a weighted summation of a set of rank-one matrices. To automatically determine the rank of an incomplete matrix, we impose L1-norm regularization on the weight vector and simultaneously minimize the reconstruction error. After obtaining the rank, we remove the L1-norm regularizer and refine the recovery results. With a correctly estimated rank, we can obtain the optimal solution under certain conditions. Experimental results on both synthetic and real-world data demonstrate that the proposed method not only performs well in rank estimation but also achieves better recovery accuracy than competing methods.
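    As a rough sketch of the general idea (not the authors' exact algorithm): write the estimate as a weighted sum of candidate rank-one atoms, put an L1 penalty on the weight vector so that the number of surviving weights serves as the rank estimate, and then refit without the penalty. The atom construction from an initial SVD, the ISTA solver, and the parameter values below are assumptions of this sketch.

    import numpy as np

    def rank_one_completion_sketch(M_obs, mask, K=10, lam=0.1, n_iter=500, lr=1e-2):
        """Toy sketch: approximate M on the observed entries (mask == 1) by a
        weighted sum of K rank-one atoms, with an L1 penalty on the weights so
        that the number of surviving atoms gives a rank estimate."""
        # Candidate rank-one atoms u_k v_k^T taken from the SVD of the
        # zero-filled matrix (an assumption of this sketch).
        U, _, Vt = np.linalg.svd(M_obs * mask, full_matrices=False)
        K = min(K, Vt.shape[0])
        atoms = [np.outer(U[:, k], Vt[k, :]) for k in range(K)]

        w = np.zeros(K)
        for _ in range(n_iter):
            # Residual on the observed entries only.
            X = sum(w_k * A_k for w_k, A_k in zip(w, atoms))
            R = (X - M_obs) * mask
            grad = np.array([np.sum(R * A_k) for A_k in atoms])
            # ISTA step: gradient descent on the squared error,
            # then soft-thresholding for the L1 term.
            w = w - lr * grad
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

        est_rank = int(np.sum(np.abs(w) > 1e-8))
        return w, est_rank

    With the rank in hand, one would restrict to the selected atoms and re-solve the least-squares fit without the L1 term, mirroring the refinement step described in the abstract.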

    Spectral Phase Transitions in Non-Linear Wigner Spiked Models

    Full text link
    We study the asymptotic behavior of the spectrum of a random matrix obtained by applying a non-linearity entry-wise to a Wigner matrix perturbed by a rank-one spike with independent and identically distributed entries. In this setting, we show that when the signal-to-noise ratio scales as N^{\frac{1}{2}(1-1/k_\star)}, where k_\star is the first non-zero generalized information coefficient of the function, the non-linear spike model effectively behaves as an equivalent spiked Wigner matrix in which the original spike (before the non-linearity) is raised to the power k_\star. This allows us to study the phase transition of the leading eigenvalues, generalizing part of the work of Baik, Ben Arous and Péché to these non-linear models.
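    Schematically (our notation; constants and normalizations are suppressed, so this is only a reading of the abstract rather than the paper's exact statement), the model and the claimed equivalence look like

    \[
    Y_{ij} = f\!\left( W_{ij} + \lambda_N\, \xi_i \xi_j \right),
    \qquad \lambda_N \asymp N^{\frac{1}{2}\left(1 - 1/k_\star\right)},
    \]

    and, at this scaling, Y behaves spectrally like a spiked Wigner matrix whose spike entries are proportional to \xi_i^{k_\star} \xi_j^{k_\star}, i.e. the original spike raised to the power k_\star.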

    Multiplicatively Perturbed Least Squares for Dimension Reduction

    Get PDF
    Dimension reduction is a crucial aspect of modern data science, offering computational efficiency, insight into the structure of problems, and increased accuracy for downstream regression problems. According to a well-known result in approximation theory, the mean squared error of a non-parametric regression problem is not guaranteed to decrease faster than N^{-2p/(2p+D)}, where N is the number of samples, p a smoothness parameter of the problem, and D the dimension of the inputs. This slow rate is due to the so-called "Curse of Dimensionality," in which samples in high-dimensional domains are exponentially likely to be well isolated from each other. These concerns motivate research into algorithms that determine the intrinsic structure of the functions being regressed, as any reduction in D yields an exponential improvement in the lower bound on sample complexity. Even in parametric settings, large D increases computational complexity and hinders the ability to find useful parameter values. In this thesis, we discuss various existing methods of dimension reduction and introduce our own: Multiplicatively Perturbed Least Squares (MPLS). We provide a theoretical analysis of MPLS that proves it achieves the optimal convergence rate of N^{-1/2}, up to logarithmic factors, for a broad class of functions. This theoretical analysis is supplemented by a series of experimental results in which MPLS performs better than, or comparably to, existing dimension reduction algorithms.
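    For a sense of scale (example values ours): with smoothness p = 2 and input dimension D = 20, the lower bound above becomes

    \[
    N^{-\frac{2p}{2p+D}} = N^{-\frac{4}{24}} = N^{-1/6},
    \]

    so halving the error requires roughly 2^6 = 64 times more samples, versus roughly 4 times more at the N^{-1/2} rate that MPLS attains (up to logarithmic factors).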

    Essays on statistical economics with applications to financial market instability, limit distribution of loss aversion, and harmonic probability weighting functions

    Get PDF
    This dissertation comprises four essays. It develops statistical models of decision making in the presence of risk, with applications to economics and finance. The methodology draws upon economics, finance, psychology, mathematics and statistics. Each essay contributes to the literature by either introducing new theories and empirical predictions or extending old ones with novel approaches. The first essay (Chapter II) includes, to the best of our knowledge, the first known limit distribution of the myopic loss aversion (MLA) index derived from micro-foundations of behavioural economics. That discovery predicts several new results. We prove that the MLA index is in the class of α-stable distributions. This striking prediction is upheld empirically with data from a published meta-study on loss aversion; published data on cross-country loss aversion indexes; and macroeconomic loss aversion index data for the US and South Africa. The latter results provide a contrast to Hofstede's cross-cultural uncertainty avoidance index for risk perception. We apply the theory to information-based asset pricing and show how the MLA index mimics information flows in credit risk models. We embed the MLA index in the pricing kernel of a behavioural consumption-based capital asset pricing model (B-CCAPM) and resolve the equity premium puzzle. Our theory predicts: (1) stochastic dominance of good states in the B-CCAPM Markov matrix induces excess volatility; and (2) a countercyclical fourfold pattern of risk attitudes. The second essay (Chapter III) introduces a probability model of "irrational exuberance" and financial market instability implied by index option prices. It is based on a behavioural empirical local Lyapunov exponent (BELLE) process that we construct from micro-foundations of behavioural finance. It characterizes the stochastic stability of financial markets, with risk attitude factors in fixed-point neighbourhoods of the probability weighting functions implied by index option prices. It provides a robust early warning system for market crashes across different credit risk sources. We show how the model would have predicted the Great Recession of 2008. The BELLE process characterizes Minsky's financial instability hypothesis that financial markets transit from financial relations that make them stable to those that make them unstable.

    Semiglobal optimal feedback stabilization of autonomous systems via deep neural network approximation

    Full text link
    A learning approach for optimal feedback gains for nonlinear continuous-time control systems is proposed and analysed. The goal is to establish a rigorous framework for computing approximations of optimal feedback gains using neural networks. The approach rests on two main ingredients. First, an optimal control formulation involving an ensemble of trajectories with 'control' variables given by the feedback gain functions. Second, an approximation to the feedback functions via realizations of neural networks. Based on universal approximation properties, we prove the existence and convergence of optimal stabilizing neural network feedback controllers.
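    A minimal numerical sketch of the flavour of this approach (the linear toy dynamics, quadratic running cost, Euler discretization, and network architecture below are our own choices, not the paper's formulation): a small network maps the state to a feedback gain, and the training objective averages the closed-loop cost over an ensemble of sampled initial states.

    import torch
    import torch.nn as nn

    # Toy linear system dx/dt = A x + B u (our example, not the paper's setting).
    A = torch.tensor([[0.0, 1.0], [0.0, 0.0]])
    B = torch.tensor([[0.0], [1.0]])

    class GainNet(nn.Module):
        """Small network realizing a state-dependent feedback gain K(x)."""
        def __init__(self, state_dim=2, control_dim=1, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, control_dim * state_dim),
            )
            self.shape = (control_dim, state_dim)

        def forward(self, x):
            return self.net(x).view(-1, *self.shape)

    def ensemble_cost(gain_net, x0, dt=0.02, steps=200):
        """Explicit-Euler rollout of the closed loop u = -K(x) x; the cost is a
        discretized quadratic running cost averaged over the ensemble x0."""
        x = x0
        cost = torch.zeros(x0.shape[0])
        for _ in range(steps):
            K = gain_net(x)                          # (batch, m, n)
            u = -(K @ x.unsqueeze(-1)).squeeze(-1)   # u = -K(x) x
            cost = cost + dt * ((x ** 2).sum(1) + (u ** 2).sum(1))
            x = x + dt * (x @ A.T + u @ B.T)
        return cost.mean()

    gain_net = GainNet()
    opt = torch.optim.Adam(gain_net.parameters(), lr=1e-2)
    for _ in range(200):
        x0 = 2.0 * torch.rand(64, 2) - 1.0           # ensemble of initial states
        loss = ensemble_cost(gain_net, x0)
        opt.zero_grad()
        loss.backward()
        opt.step()

    The average over sampled initial states plays the role of the abstract's ensemble of trajectories, and gradients reach the network parameters by differentiating through the rollout.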