    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
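
    To convey the flavor of the result (a hedged restatement in notation of my own, not a quotation from the review): consider a compositional target with a binary-tree structure,

        f(x_1,\dots,x_8) \;=\; h_3\big(h_{21}(h_{11}(x_1,x_2),\,h_{12}(x_3,x_4)),\; h_{22}(h_{13}(x_5,x_6),\,h_{14}(x_7,x_8))\big),

    where each constituent function depends on only two variables and has smoothness m. The bounds discussed in this line of work then take roughly the form

        N_{\text{shallow}} \;=\; O\!\left(\varepsilon^{-n/m}\right)
        \qquad\text{versus}\qquad
        N_{\text{deep}} \;=\; O\!\left((n-1)\,\varepsilon^{-2/m}\right),

    so for a deep network that matches the tree structure the exponent of 1/\varepsilon no longer grows with the input dimension n.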

    Supervised ANN vs. unsupervised SOM to classify EEG data for BCI: why can GMDH do better?

    Constructing a system that measures brain activity (the electroencephalogram, EEG) and recognises thinking patterns poses significant challenges, over and above the noise and distortion present in any measuring technique. One of the most important applications of measuring and understanding the EEG is brain-computer interface (BCI) technology. In this paper, ANNs (feedforward back-propagation networks and Self-Organising Maps) for EEG data classification are implemented and compared with abductive networks, namely GMDH (Group Method of Data Handling), to show how GMDH can classify a given set of BCI EEG signals optimally with respect to noise and accuracy. It is shown that GMDH provides such improvements. To this end, EEG classification based on GMDH is investigated with the aim of comprehensible classification without sacrificing accuracy. GMDH is suggested as the method for optimally classifying a given set of BCI EEG signals. Other areas related to BCI are also addressed, within the context of this purpose.
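
    For readers unfamiliar with GMDH, the sketch below shows the core of one GMDH layer (a minimal illustration with names of my own choosing, assuming quadratic Ivakhnenko polynomials on input pairs and held-out mean squared error as the external selection criterion; the configuration actually used for the EEG data in the paper may differ):

        import numpy as np
        from itertools import combinations

        def _design(xi, xj):
            # Classic Ivakhnenko polynomial in two inputs:
            # y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi**2 + a5*xj**2
            return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])

        def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
            """Fit one GMDH layer: try every pair of inputs, fit a quadratic
            partial model by least squares, score it on held-out data (the
            external criterion), and keep the best `keep` partial models as
            inputs to the next layer."""
            candidates = []
            for i, j in combinations(range(X_train.shape[1]), 2):
                coef, *_ = np.linalg.lstsq(_design(X_train[:, i], X_train[:, j]),
                                           y_train, rcond=None)
                val_pred = _design(X_val[:, i], X_val[:, j]) @ coef
                mse = np.mean((val_pred - y_val) ** 2)
                candidates.append((mse, i, j, coef))
            candidates.sort(key=lambda c: c[0])
            return candidates[:keep]

    Stacking such layers, with the selected partial models of one layer serving as inputs to the next, yields the self-organising polynomial network that the abstract contrasts with back-propagation ANNs and SOMs.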

    Bayesian interpolation

    Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. “Occam's razor” is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
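
    The "effective number of parameters" mentioned above can be illustrated for a linear-in-parameters model y ≈ Phi w (a minimal sketch of evidence-style re-estimation written by me, assuming a Gaussian prior with precision alpha and Gaussian noise with precision beta; it is not code from the paper, and the function name is illustrative):

        import numpy as np

        def evidence_hyperparams(Phi, y, n_iter=50):
            """Iteratively re-estimate the regularizing constant alpha and the
            noise precision beta for y ~ Phi @ w, following the standard
            evidence-framework updates."""
            N, M = Phi.shape
            alpha, beta = 1.0, 1.0                       # rough initial guesses
            eig = np.linalg.eigvalsh(Phi.T @ Phi)        # spectrum of Phi'Phi
            for _ in range(n_iter):
                A = alpha * np.eye(M) + beta * (Phi.T @ Phi)   # posterior precision
                w_mp = beta * np.linalg.solve(A, Phi.T @ y)    # most probable weights
                lam = beta * eig
                gamma = np.sum(lam / (lam + alpha))            # effective number of parameters
                alpha = gamma / max(w_mp @ w_mp, 1e-12)        # re-estimate regularizer
                resid = np.sum((y - Phi @ w_mp) ** 2)
                beta = (N - gamma) / max(resid, 1e-12)         # re-estimate noise level
            return w_mp, alpha, beta, gamma

    Here gamma plays the role of the effective number of well-determined parameters, and the regularizing constant and noise level settle at the values favoured by the data, as the abstract describes.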

    Learning feedforward controller for a mobile robot vehicle

    This paper describes the design and realisation of an on-line learning pose-tracking controller for a three-wheeled mobile robot vehicle. The controller consists of two components. The first is a constant-gain feedback component, designed on the basis of a second-order model. The second is a learning feedforward component, containing a single-layer neural network, that generates a control contribution on the basis of the desired trajectory of the vehicle. The neural network uses B-spline basis functions, enabling a computationally fast implementation and fast learning. The resulting control system is able to correct for errors due to parameter mismatches and certain classes of structural errors in the model used for the controller design. After sufficient learning, it outperforms an existing static-gain controller, designed on the basis of an extensive model, in terms of tracking accuracy.
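
    To illustrate the learning-feedforward idea, here is a hedged sketch of my own (class and parameter names are illustrative) using a first-degree (triangular) B-spline basis and the remaining feedback effort as the learning signal; the controller in the paper, with its second-order feedback design and B-spline network on the full desired trajectory, is more elaborate:

        import numpy as np

        class LearningFeedforward:
            """Feedforward term on a triangular B-spline basis over the
            reference signal; at most two basis functions are active at a
            time, which keeps evaluation and learning fast."""
            def __init__(self, n_basis=20, lo=-1.0, hi=1.0, eta=0.1):
                self.centers = np.linspace(lo, hi, n_basis)   # knot centres
                self.width = self.centers[1] - self.centers[0]
                self.w = np.zeros(n_basis)                    # network weights
                self.eta = eta                                # learning rate

            def _basis(self, r):
                # Triangular (first-degree B-spline) activations.
                return np.clip(1.0 - np.abs(r - self.centers) / self.width, 0.0, None)

            def control(self, r_desired, u_feedback):
                """Add the learned feedforward contribution to the feedback
                effort and use that feedback effort to update the weights."""
                phi = self._basis(r_desired)
                u_ff = self.w @ phi
                self.w += self.eta * u_feedback * phi / max(phi @ phi, 1e-9)
                return u_ff + u_feedback

    As the feedforward term learns to supply the effort the feedback controller would otherwise have to produce, the residual feedback signal, and with it the tracking error, shrinks.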

    Learning Real and Boolean Functions: When Is Deep Better Than Shallow

    We describe computational tasks - especially in vision - that correspond to compositional/hierarchical functions. While the universal approximation property holds both for hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with an exponentially lower VC-dimension and number of training parameters. This leads to the question of approximation by sparse polynomials (in the number of independent parameters) and, as a consequence, by deep networks. We also discuss connections between our results and the learnability of sparse Boolean functions, settling an old conjecture by Bengio. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. HNM was supported in part by ARO Grant W911NF-15-1-0385.
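
    As a concrete illustration of a compositional Boolean function of the kind alluded to here (my own example, not one taken from the paper): n-bit parity can be evaluated as a binary tree of two-input XOR constituents of depth log2(n), rather than by treating all n inputs at once.

        from functools import reduce
        from operator import xor

        def parity_tree(bits):
            """Pairwise XOR reduction, mirroring a binary tree of two-input
            constituents; assumes the number of bits is a power of two."""
            level = list(bits)
            while len(level) > 1:
                level = [level[i] ^ level[i + 1] for i in range(0, len(level), 2)]
            return level[0]

        bits = [1, 0, 1, 1, 0, 0, 1, 0]
        assert parity_tree(bits) == reduce(xor, bits)   # same value, tree-structured evaluation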