
    Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods

    Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks. We provide risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimension and number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the function and the input for learnability. Our training method is based on tensor decomposition, which provably converges to the global optimum under a set of mild non-degeneracy conditions. It consists of simple, embarrassingly parallel linear and multi-linear operations, and is competitive with standard stochastic gradient descent (SGD) in terms of computational complexity. Thus, we propose a computationally efficient method with guaranteed risk bounds for training neural networks with one hidden layer.
    Comment: The tensor decomposition analysis is expanded, and the analysis of ridge regression is added for recovering the parameters of the last layer of the neural network.
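    The core computational step in such guaranteed methods is decomposing a symmetric third-order moment tensor into rank-one components. Below is a minimal sketch of tensor power iteration with deflation, assuming an (approximately) orthogonally decomposable tensor T supplied as a NumPy array; the function name and parameters are illustrative, not the paper's code.

```python
import numpy as np

def tensor_power_iteration(T, n_components, n_iters=100, seed=0):
    """Recover rank-one components of a symmetric, (approximately)
    orthogonally decomposable third-order tensor T via power iteration
    with deflation. Hypothetical helper, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    weights, vectors = [], []
    for _ in range(n_components):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            # T(I, v, v): contract the tensor along its last two modes
            v_new = np.einsum('ijk,j,k->i', T, v, v)
            v_new /= np.linalg.norm(v_new)
            v = v_new
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
        weights.append(lam)
        vectors.append(v)
        # Deflate: subtract the recovered rank-one component
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)
    return np.array(weights), np.array(vectors)
```

    On a synthetic T built as a sum of terms lambda_i * a_i (x) a_i (x) a_i with orthonormal a_i, this recovers the (lambda_i, a_i) pairs up to sign and ordering; in practice a whitening step first brings the components close to orthogonal, which is the non-degenerate regime the guarantees address.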

    Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation

    We present multiplexed gradient descent (MGD), a gradient descent framework designed to easily train analog or digital neural networks in hardware. MGD utilizes zero-order optimization techniques for online training of hardware neural networks. We demonstrate its ability to train neural networks on modern machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare its performance to backpropagation. Assuming realistic timescales and hardware parameters, our results indicate that these optimization techniques can train a network on emerging hardware platforms orders of magnitude faster in wall-clock time than training via backpropagation on a standard GPU, even in the presence of imperfect weight updates or device-to-device variations in the hardware. We additionally describe how MGD can be applied to existing hardware as part of chip-in-the-loop training, or integrated directly at the hardware level. Crucially, the MGD framework is highly flexible, and its gradient descent process can be optimized to compensate for specific hardware limitations such as slow parameter-update speeds or limited input bandwidth.
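    As a concrete instance of the zero-order techniques MGD builds on, here is a sketch of a simultaneous-perturbation (SPSA-style) update, which estimates the gradient from two loss evaluations under a shared random perturbation. This illustrates the principle only, not the MGD implementation; all names and default values are assumptions.

```python
import numpy as np

def spsa_step(loss_fn, w, lr=0.01, eps=1e-3, rng=None):
    """One simultaneous-perturbation (zeroth-order) descent step.
    Only two forward evaluations of loss_fn are needed, so no
    backpropagation path through the system is required."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=w.shape)  # Rademacher perturbation
    # Central-difference gradient estimate along the shared perturbation
    g_hat = (loss_fn(w + eps * delta) - loss_fn(w - eps * delta)) / (2 * eps) * delta
    return w - lr * g_hat
```

    Because only forward evaluations of the loss are required, the same update can drive hardware whose internals are noisy, non-differentiable, or not fully characterized.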

    Comparison of Neural Networks and Least Mean Squared Algorithms for Active Noise Canceling

    Active Noise Canceling (ANC) is the idea of using superposition to achieve cancellation of unwanted noise, and it is implemented in many applications, such as reducing noise in a commercial airplane cabin. One of the main traditional techniques for noise cancellation is the adaptive least mean squares (LMS) algorithm, which produces the anti-noise signal, i.e., the 180-degree out-of-phase signal that cancels the noise via superposition. This work compares several neural network approaches against traditional LMS algorithms. The noise signals used for training the networks are from the Signal Processing Information Base (SPIB) database. The neural network architectures utilized in this paper include the Multilayer Feedforward Neural Network, the Recurrent Neural Network, the Long Short-Term Memory (LSTM) Neural Network, and the Convolutional Neural Network. These neural networks are trained to predict the anti-noise signal based on an incoming noise signal. Simulation results demonstrate successful ANC using neural networks and show that they can yield better noise attenuation than LMS algorithms. The Convolutional Neural Network architecture outperforms the other architectures implemented and tested in this work.
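    For reference, the LMS baseline the neural networks are compared against can be sketched in a few lines: an adaptive FIR filter on the reference noise is tuned so its output matches the primary noise, and the anti-noise is that output inverted. A simplified sketch (no secondary-path/FxLMS modeling), with illustrative parameter names:

```python
import numpy as np

def lms_anc(noise_ref, primary, n_taps=32, mu=0.01):
    """Baseline adaptive LMS noise canceller. The FIR filter output y
    estimates the primary noise; emitting -y (the 180-degree
    out-of-phase signal) leaves the residual e after superposition."""
    w = np.zeros(n_taps)        # adaptive filter taps
    buf = np.zeros(n_taps)      # delay line of recent reference samples
    residual = np.zeros_like(primary)
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = noise_ref[n]
        y = w @ buf             # filter output (anti-noise is -y)
        e = primary[n] - y      # residual error after cancellation
        w += mu * e * buf       # LMS weight update
        residual[n] = e
    return residual, w
```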

    Active disturbance cancellation in nonlinear dynamical systems using neural networks

    A proposal for the use of a time-delay CMAC neural network for disturbance cancellation in nonlinear dynamical systems is presented. Appropriate modifications to the CMAC training algorithm are derived that allow convergent adaptation for a variety of secondary signal paths. Analytical bounds on the maximum learning gain are presented which guarantee convergence of the algorithm and provide insight into the necessary reduction in learning gain as a function of the system parameters. The effectiveness of the algorithm is evaluated through mathematical analysis, simulation studies, and experimental application of the technique on an acoustic duct laboratory model.
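    A minimal sketch of the CMAC mechanics referenced above, assuming a 1-D input for clarity: overlapping receptive fields activate c cells at a time, and training adjusts only those c weights, scaled by a learning gain that must stay below a stability bound of the kind the paper derives. This is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC: c overlapping binary receptive fields are
    active for any input, and training updates only those c weights
    with gain beta / c. The admissible range of beta is what the
    paper's convergence analysis bounds; here it is just a parameter."""
    def __init__(self, n_cells=64, c=8, beta=0.5, x_min=-1.0, x_max=1.0):
        self.w = np.zeros(n_cells)
        self.c, self.beta = c, beta
        self.x_min, self.x_max, self.n_cells = x_min, x_max, n_cells

    def _active(self, x):
        # Quantize x and activate c consecutive cells (overlapping fields)
        idx = int((x - self.x_min) / (self.x_max - self.x_min)
                  * (self.n_cells - self.c))
        idx = int(np.clip(idx, 0, self.n_cells - self.c))
        return slice(idx, idx + self.c)

    def predict(self, x):
        return self.w[self._active(x)].sum()

    def train(self, x, target):
        a = self._active(x)
        err = target - self.w[a].sum()
        self.w[a] += (self.beta / self.c) * err  # beta must respect the stability bound
```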

    Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance gains cannot be achieved by simply stacking more layers. In this paper, we consider the issue from an information-theoretic perspective and propose a novel method, Relay Backpropagation, that encourages the propagation of effective information through the network during training. By virtue of this method, we achieved first place in the ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large-scale datasets demonstrate that the effectiveness of our method is not restricted to a specific dataset or network architecture. Our models will be made available to the research community later.
    Comment: Technical report for our submissions to the ILSVRC 2015 Scene Classification Challenge, where we won first place.
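    The relay idea can be illustrated with a two-segment network and an auxiliary head: each loss's gradient is allowed to travel only a limited distance, so lower segments are driven by the auxiliary loss and upper segments by the final loss. A heavily simplified PyTorch sketch of this gradient routing; the actual method uses overlapping segments and different split points, and everything below is illustrative:

```python
import torch
import torch.nn as nn

# Two segments of a small convolutional network plus an auxiliary head.
seg1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
seg2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
final_head = nn.Linear(32, 10)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

h1 = seg1(x)
aux_loss = criterion(aux_head(h1), y)      # drives seg1 (and aux_head)
h2 = seg2(h1.detach())                     # relay boundary: final-loss gradient stops here
final_loss = criterion(final_head(h2), y)  # drives seg2 (and final_head)
(final_loss + 0.3 * aux_loss).backward()   # each segment receives one loss's gradient
```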

    To go deep or wide in learning?

    To achieve acceptable performance on AI tasks, one can either use sophisticated feature extraction methods as the first layer in a two-layered supervised learning model, or learn the features directly using a deep (multi-layered) model. While the first approach is very problem-specific, the second approach incurs computational overhead in learning multiple layers and fine-tuning the model. In this paper, we propose an approach called wide learning, based on arc-cosine kernels, that learns a single layer of infinite width. We propose exact and inexact learning strategies for wide learning and show that wide learning with a single layer outperforms both single-layer and deep architectures of finite width on some benchmark datasets.
    Comment: 9 pages, 1 figure. Accepted for publication in the Seventeenth International Conference on Artificial Intelligence and Statistics.
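    The degree-1 arc-cosine kernel underlying this "single layer of infinite width" has a closed form (Cho & Saul): k(x, y) = (1/pi) * ||x|| * ||y|| * (sin t + (pi - t) cos t), where t is the angle between x and y; it corresponds to an infinitely wide layer of ReLU units. A sketch of computing the Gram matrix, with an illustrative function name:

```python
import numpy as np

def arc_cosine_kernel_deg1(X, Y):
    """Degree-1 arc-cosine kernel between the rows of X and Y:
    k(x, y) = (1/pi) * |x||y| * (sin t + (pi - t) cos t),
    the kernel of an infinitely wide single ReLU layer."""
    norms_x = np.linalg.norm(X, axis=1)
    norms_y = np.linalg.norm(Y, axis=1)
    cos_t = (X @ Y.T) / np.outer(norms_x, norms_y)
    cos_t = np.clip(cos_t, -1.0, 1.0)   # guard against rounding drift
    t = np.arccos(cos_t)
    return (np.outer(norms_x, norms_y) / np.pi) * (np.sin(t) + (np.pi - t) * np.cos(t))
```

    The resulting Gram matrix can be passed to any standard kernel machine, e.g. scikit-learn's SVC(kernel='precomputed'), which is one way to realize the "exact" wide-learning strategy described above.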