
    Nonlinear Sampling Theory and Efficient Signal Recovery

    Sampling theory investigates the recovery of a signal from partial information, and one of the simplest and most well-known sampling schemes is uniform linear sampling, characterized by the celebrated classical sampling theorem. However, the requirements of uniform linear sampling may not always be satisfied, sparking the need for more general sampling theories. In this thesis, we discuss three sampling scenarios: signal quantization, compressive sensing, and deep neural networks. In signal quantization theory, the inability of digital devices to store analog samples perfectly leads to distortion when the signal is reconstructed from its samples. Different quantization schemes have been proposed to minimize this distortion. We adapt a quantization scheme used in analog-to-digital conversion, called signal decimation, to finite-dimensional signals. In doing so, we achieve the theoretically optimal decay rate for the reconstruction error. Compressive sensing investigates the possibility of recovering high-dimensional signals from incomplete samples; this has been proven feasible as long as the signal is sufficiently sparse. To this point, all of the most successful examples follow from random constructions rather than deterministic ones. Whereas the sparsity of the signal can be almost as large as the ambient dimension for random constructions, current deterministic constructions require the sparsity to be at most the square root of the ambient dimension. This apparent barrier is the well-known square-root bottleneck. In this thesis, we propose a new explicit sampling scheme as a candidate for deterministic compressive sensing. We present a partial result, while the full generality remains work in progress. For deep neural networks, one approximates signals with neural networks, and many samples must be drawn to find an optimal approximating network. A common approach is to employ stochastic gradient descent, but it is unclear whether the resulting neural network is indeed optimal due to the non-convexity of the optimization scheme. We follow an alternative approach, utilizing the derivatives of the signal for stable reconstruction. In this thesis, we focus on non-smooth signals; using weak differentiation, stable reconstruction is readily obtained for one-layer neural networks. We are currently working on the two-layer case, and our approach is outlined in this thesis.
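
    As an illustration of the compressive sensing setup described in the abstract above (this is not code from the thesis), the sketch below recovers a sparse vector from far fewer random Gaussian measurements than the ambient dimension using orthogonal matching pursuit; the dimensions, sparsity level, and the helper name omp are hypothetical choices made for the example.

        import numpy as np

        def omp(A, y, k):
            # Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x.
            m, n = A.shape
            residual = y.copy()
            support = []
            for _ in range(k):
                # Pick the column most correlated with the current residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                # Refit by least squares on the selected columns.
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x_hat = np.zeros(n)
            x_hat[support] = coeffs
            return x_hat

        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 8                             # ambient dimension, measurements, sparsity
        A = rng.standard_normal((m, n)) / np.sqrt(m)     # random (non-deterministic) sensing matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x
        print("recovery error:", np.linalg.norm(omp(A, y, k) - x))

    Random matrices of this kind succeed at sparsity levels nearly proportional to the ambient dimension, which is exactly the contrast with deterministic constructions drawn in the abstract.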

    Nonparametric Weight Initialization of Neural Networks via Integral Representation

    A new initialization method for the hidden parameters of a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution over the hidden parameters is introduced. In this proposal, the hidden parameters are initialized by samples drawn from this distribution, and the output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with the proposed initialization converges faster than with uniformly random initialization. It is also shown that the proposed method achieves sufficient accuracy by itself, without backpropagation, in some cases.
    Comment: For ICLR2014; revised into 9 pages; revised into 12 pages (with supplements)
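
    A rough sketch of the recipe described above: sample the hidden parameters from a distribution, then fit the output layer by ordinary linear regression. The paper derives the sampling distribution nonparametrically from the network's integral representation, which is not reproduced here, so the plain Gaussian below is only a stand-in; the data, network size, and variable names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy 1-D regression target.
        X = np.linspace(-3, 3, 200).reshape(-1, 1)
        y = np.sin(2 * X).ravel() + 0.1 * rng.standard_normal(200)

        # Initialize hidden parameters by sampling; a standard Gaussian is used here
        # as a stand-in for the distribution derived from the integral representation.
        n_hidden = 100
        W = rng.standard_normal((1, n_hidden))
        b = rng.standard_normal(n_hidden)

        # Hidden activations, then fit the output layer by ordinary linear regression.
        H = np.tanh(X @ W + b)                       # shape (200, n_hidden)
        c, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights

        print("training MSE:", np.mean((H @ c - y) ** 2))

    Backpropagation, where desired, would then start from these sampled hidden parameters rather than from a uniformly random draw.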

    Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study

    While first-order optimization methods such as stochastic gradient descent (SGD) are popular in machine learning (ML), they come with well-known deficiencies, including relatively slow convergence, sensitivity to hyper-parameter settings such as the learning rate, stagnation at high training errors, and difficulty in escaping flat regions and saddle points. These issues are particularly acute in highly non-convex settings such as those arising in neural networks. Motivated by this, there has been recent interest in second-order methods that aim to alleviate these shortcomings by capturing curvature information. In this paper, we report detailed empirical evaluations of a class of Newton-type methods, namely sub-sampled variants of the trust region (TR) and adaptive regularization with cubics (ARC) algorithms, on non-convex ML problems. In doing so, we demonstrate that these methods not only can be computationally competitive with hand-tuned SGD with momentum, obtaining comparable or better generalization performance, but are also highly robust to hyper-parameter settings. Further, in contrast to SGD with momentum, we show that the manner in which these Newton-type methods employ curvature information allows them to seamlessly escape flat regions and saddle points.
    Comment: 21 pages, 11 figures. Restructured the paper and added experiments.
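
    The sketch below illustrates the general flavor of a sub-sampled trust-region Newton iteration on a toy non-convex least-squares problem; it is not the authors' implementation, and the subproblem solver, sub-sample size, and acceptance thresholds are illustrative assumptions.

        import numpy as np

        def tr_subproblem(g, H, radius):
            # Approximately solve min_s g@s + 0.5*s@H@s subject to ||s|| <= radius,
            # via an eigendecomposition of H and a bisection on the shift parameter.
            lam, Q = np.linalg.eigh(H)
            gq = Q.T @ g

            def step_norm(shift):
                return np.linalg.norm(gq / (lam + shift))

            if lam.min() > 1e-10 and step_norm(0.0) <= radius:
                return Q @ (-gq / lam)               # unconstrained Newton step already fits
            lo = max(0.0, -lam.min()) + 1e-10        # smallest shift making H + shift*I positive definite
            hi = lo + 1.0
            while step_norm(hi) > radius:
                hi *= 2.0
            for _ in range(100):                     # bisect so the step (roughly) hits the boundary
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if step_norm(mid) > radius else (lo, mid)
            return Q @ (-gq / (lam + hi))

        rng = np.random.default_rng(0)
        n, d = 500, 5
        X = rng.standard_normal((n, d))
        y = np.tanh(X @ rng.standard_normal(d)) + 0.05 * rng.standard_normal(n)

        def grad_hess(w, idx):
            # Full gradient of the mean loss (tanh(x@w) - y)^2; Hessian estimated from the sub-sample idx.
            p = np.tanh(X @ w)
            g = 2 * ((p - y) * (1 - p**2)) @ X / n
            ps, ys, Xs = p[idx], y[idx], X[idx]
            coef = 2 * (1 - ps**2) * ((1 - ps**2) - 2 * ps * (ps - ys))
            H = (Xs * coef[:, None]).T @ Xs / len(idx)
            return g, H

        w, radius = np.zeros(d), 1.0
        for _ in range(30):
            idx = rng.choice(n, size=100, replace=False)        # sub-sampled Hessian
            g, H = grad_hess(w, idx)
            s = tr_subproblem(g, H, radius)
            loss_old = np.mean((np.tanh(X @ w) - y) ** 2)
            loss_new = np.mean((np.tanh(X @ (w + s)) - y) ** 2)
            pred = -(g @ s + 0.5 * s @ H @ s)                   # model-predicted decrease
            rho = (loss_old - loss_new) / max(pred, 1e-12)
            if rho > 0.1:                                       # accept and possibly enlarge the radius
                w, radius = w + s, min(2 * radius, 10.0)
            else:                                               # reject and shrink the radius
                radius *= 0.25
        print("final loss:", np.mean((np.tanh(X @ w) - y) ** 2))

    Estimating the Hessian from a sub-sample is what keeps such Newton-type steps affordable while still exploiting curvature information, which is the property the abstract highlights.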