4,941 research outputs found

    Function approximation in high-dimensional spaces using lower-dimensional Gaussian RBF networks.

    Get PDF
    by Jones Chui. Thesis (M.Phil.), Chinese University of Hong Kong, 1992. Includes bibliographical references (leaves 62-[66]).

    Contents:
    Chapter 1: Introduction
        1.1 Fundamentals of Artificial Neural Networks
            1.1.1 Processing Unit
            1.1.2 Topology
            1.1.3 Learning Rules
        1.2 Overview of Various Neural Network Models
        1.3 Introduction to the Radial Basis Function Networks (RBFs)
            1.3.1 Historical Development
            1.3.2 Some Intrinsic Problems
        1.4 Objective of the Thesis
    Chapter 2: Low-dimensional Gaussian RBF Networks (LowD RBFs)
        2.1 Architecture of LowD RBF Networks
            2.1.1 Network Structure
            2.1.2 Learning Rules
        2.2 Construction of LowD RBF Networks
            2.2.1 Growing Heuristic
            2.2.2 Pruning Heuristic
            2.2.3 Summary
    Chapter 3: Application Examples
        3.1 Chaotic Time Series Prediction
            3.1.1 Performance Comparison
            3.1.2 Sensitivity Analysis of MSE Thresholds
            3.1.3 Effects of Increased Embedding Dimension
            3.1.4 Comparison with Tree-Structured Network
            3.1.5 Overfitting Problem
        3.2 Nonlinear Prediction of Speech Signal
            3.2.1 Comparison with Linear Predictive Coding (LPC)
            3.2.2 Performance Test in Noisy Conditions
            3.2.3 Iterated Prediction of Speech
    Chapter 4: Conclusion
        4.1 Discussions
        4.2 Limitations and Suggestions for Further Research
    Bibliography
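
    The thesis builds Gaussian RBF networks whose units act on lower-dimensional projections of a high-dimensional input. As a rough illustration of that idea only (not the thesis's growing/pruning construction from Chapter 2; the relevant coordinate subset, the centre placement, and all names below are assumptions), here is a minimal sketch in which each Gaussian unit sees a fixed subset of input coordinates and the output weights are fitted by least squares:

```python
import numpy as np

def lowd_rbf_features(X, dims, centers, width):
    """Gaussian RBF units that each look only at a low-dimensional
    projection of the input (here: the coordinate subset `dims`)
    rather than at the full input vector."""
    P = X[:, dims]
    d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy target on a 10-dimensional input that really depends on 2 coordinates.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 10))
y = np.sin(3 * X[:, 0]) * X[:, 1]

dims = [0, 1]                                   # assumed-known relevant subspace
centers = rng.uniform(-1, 1, size=(40, len(dims)))
Phi = lowd_rbf_features(X, dims, centers, width=0.4)

# Output weights by linear least squares; the Gaussian layer stays fixed.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training MSE:", np.mean((Phi @ w - y) ** 2))
```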

    Entropy of Overcomplete Kernel Dictionaries

    Full text link
    In signal analysis and synthesis, linear approximation theory considers a linear decomposition of a given signal over a set of atoms, collected into a so-called dictionary. Relevant sparse representations are obtained by relaxing the orthogonality condition on the atoms, yielding overcomplete dictionaries with an extended number of atoms. Going beyond the linear decomposition, overcomplete kernel dictionaries provide an elegant nonlinear extension by defining the atoms through a mapping kernel function (e.g., the Gaussian kernel). Models based on such kernel dictionaries are used in neural networks, Gaussian processes, and online learning with kernels. The quality of an overcomplete dictionary is evaluated with a diversity measure, such as the distance, the approximation, the coherence, and the Babel measures. In this paper, we develop a framework for examining overcomplete kernel dictionaries with the entropy from information theory. Indeed, a higher value of the entropy is associated with a more uniform spread of the atoms over the space. For each of the aforementioned diversity measures, we derive lower bounds on the entropy. Several definitions of the entropy are examined, with an extensive analysis in both the input space and the mapped feature space.
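
    As a loose illustration of the idea (not the paper's definitions: the spectral entropy below is just one plausible proxy, and all names are illustrative), the following sketch computes a Gaussian-kernel Gram matrix over a set of atoms and the entropy of its normalised spectrum; atoms spread over the space yield a flatter spectrum, and hence a higher value, than clustered atoms:

```python
import numpy as np

def gaussian_gram(D, sigma):
    """Gram matrix K[i, j] = kappa(d_i, d_j) for a Gaussian kernel."""
    d2 = ((D[:, None, :] - D[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def spectral_entropy(K):
    """Entropy of the normalised Gram spectrum: higher values indicate
    atoms spread more evenly over the feature space."""
    lam = np.clip(np.linalg.eigvalsh(K), 1e-12, None)
    p = lam / lam.sum()
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(1)
clustered = rng.normal(0.0, 0.05, size=(20, 2))   # atoms bunched together
spread = rng.uniform(-1.0, 1.0, size=(20, 2))     # atoms spread over the space
for name, D in [("clustered", clustered), ("spread", spread)]:
    print(name, spectral_entropy(gaussian_gram(D, sigma=0.5)))
```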

    Speech and neural network dynamics

    Get PDF

    Optimising Spatial and Tonal Data for PDE-based Inpainting

    Full text link
    Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that we address in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e., function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem, for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows the desired density of the inpainting mask to be specified a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods.
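
    As a concrete reference for the inpainting step, here is a minimal sketch of homogeneous diffusion inpainting on a 1D signal, assuming periodic boundaries; known samples stay fixed and the gaps are filled by iterating a discrete Laplace smoothing. The paper's FED acceleration, tonal optimisation, and 2D anisotropic operators are not reproduced, and all names are illustrative:

```python
import numpy as np

def diffuse_inpaint_1d(signal, mask, n_iter=5000):
    """Fill unknown samples (mask == False) by homogeneous diffusion:
    iterate u_i <- (u_{i-1} + u_{i+1}) / 2 on the gaps while keeping
    known samples fixed. np.roll gives periodic boundaries. At steady
    state this solves the discrete Laplace equation, which in 1D means
    linear interpolation between the kept samples."""
    u = np.where(mask, signal, 0.0).astype(float)
    for _ in range(n_iter):
        avg = 0.5 * (np.roll(u, 1) + np.roll(u, -1))
        u = np.where(mask, signal, avg)
    return u

x = np.linspace(0, 2 * np.pi, 100)
f = np.sin(x)
mask = np.zeros(100, dtype=bool)
mask[::10] = True  # keep every 10th sample (10% mask density)
rec = diffuse_inpaint_1d(f, mask)
print("reconstruction MSE:", np.mean((rec - f) ** 2))
```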

    Dynamical Functional Artificial Neural Network: Use of Efficient Piecewise Linear Functions

    Get PDF
    A nonlinear adaptive time series predictor has been developed using a new type of piecewise linear (PWL) network for its underlying model structure. The PWL network is a dynamical functional artificial neural network (D-FANN) whose activation functions are piecewise linear. The new realization is presented together with the associated training algorithm, and its properties and characteristics are discussed. This network has been successfully used to model and predict an important class of highly dynamic and nonstationary signals, namely speech signals.

    Figueroa, Jose Luis; Cousseau, Juan Edmundo. Instituto de Investigaciones en Ingeniería Eléctrica "Alfredo Desages", CONICET - Universidad Nacional del Sur, Departamento de Ingeniería Eléctrica y de Computadoras, Bahía Blanca, Argentina.
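
    As a rough sketch of the piecewise-linear ingredient only (not the D-FANN architecture or its training algorithm; the choice of basis and all names below are assumptions), one classical way to parameterise a continuous 1D PWL function is the canonical form a + b*x + sum_k c_k*|x - t_k|, whose coefficients can be fitted by least squares:

```python
import numpy as np

def pwl_basis(x, breakpoints):
    """Canonical PWL basis: columns [1, x, |x - t_1|, ..., |x - t_K|].
    Any continuous 1D piecewise-linear map with these breakpoints is a
    linear combination of these columns."""
    cols = [np.ones_like(x), x] + [np.abs(x - t) for t in breakpoints]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 300)
target = np.tanh(3 * x)                      # smooth nonlinearity to approximate
B = pwl_basis(x, breakpoints=np.linspace(-1.5, 1.5, 7))
coef, *_ = np.linalg.lstsq(B, target, rcond=None)
print("PWL fit MSE:", np.mean((B @ coef - target) ** 2))
```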

    Neural Network Configurations Analysis for Multilevel Speech Pattern Recognition System with Mixture of Experts

    Get PDF
    This chapter analyzes two neural network configurations for composing the expert set in a multilevel speech signal pattern recognition system for 30 commands in the Brazilian Portuguese language. Multilayer perceptron (MLP) and learning vector quantization (LVQ) networks have their performance verified during the training, validation, and test stages of speech signal recognition, where the patterns are two-dimensional time matrices resulting from coding mel-cepstral coefficients with the discrete cosine transform (DCT). To avoid the pattern separability problem, the patterns are modified by a nonlinear transformation into a high-dimensional space through a suitable set of Gaussian radial basis functions (GRBF). This improves the performance of the MLP and LVQ experts, and the configurations are trained with few examples of each modified pattern. Several combinations of the previously established neural network topologies and algorithms were evaluated to determine the network structures with the best recognition and generalization results.
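
    As a hedged illustration of the GRBF lifting step only (not the chapter's pipeline: no DCT features, MLP, or LVQ here, and all names are assumptions), the sketch below maps 2D patterns that are not linearly separable through a set of Gaussian radial basis functions, after which a plain linear least-squares classifier separates them:

```python
import numpy as np

def grbf_lift(X, centers, sigma):
    """Map each pattern to [exp(-||x - c_1||^2 / (2 s^2)), ...]: a
    nonlinear lift with one dimension per Gaussian centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy patterns that are not linearly separable in 2D (two rings).
rng = np.random.default_rng(3)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.r_[np.full(100, 0.5), np.full(100, 1.5)] + 0.05 * rng.standard_normal(200)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
labels = np.r_[np.zeros(100), np.ones(100)]

centers = X[rng.choice(200, size=30, replace=False)]  # centres sampled from data
Z = np.c_[grbf_lift(X, centers, sigma=0.5), np.ones(200)]  # lifted features + bias
# In the lifted space a simple linear least-squares classifier suffices.
w, *_ = np.linalg.lstsq(Z, 2 * labels - 1, rcond=None)
pred = (Z @ w) > 0
print("training accuracy:", (pred == labels.astype(bool)).mean())
```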