
    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Medical applications have usually used Radial Basis Function Networks simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, together with a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.

    Neural network methods for one-to-many multi-valued mapping problems

    An investigation of the applicability of neural network-based methods for predicting the values of multiple parameters, given the value of a single parameter within a particular problem domain, is presented. In this context, the input parameter may be an important source of variation that is related through a complex mapping function to the remaining sources of variation within a multivariate distribution. Defining the relationship between the variables of a multivariate distribution and a single source of variation allows the estimation of the values of multiple variables given the value of the single variable, thereby addressing an ill-conditioned one-to-many mapping problem. As part of our investigation, two problem domains are considered: predicting the values of individual stock shares, given the value of the general index, and predicting the grades received by high school pupils, given the grade for a single course or the average grade. In our work, the performance of standard neural network-based methods, in particular multilayer perceptrons (MLPs), radial basis functions (RBFs), mixture density networks (MDNs) and a latent variable method, the generative topographic mapping (GTM), is compared. According to the results, MLPs and RBFs outperform MDNs and the GTM for these one-to-many mapping problems.
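The one-to-many setting above can be illustrated with a small two-layer network that maps a single input to several outputs at once. The architecture, training loop and synthetic targets below are hypothetical illustrations, not the paper's own experimental setup.

```python
import numpy as np

# One scalar input drives three dependent outputs (stand-ins for, e.g.,
# individual share values driven by a general index).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, (256, 1))
Y = np.hstack([2 * x, x ** 2, np.sin(3 * x)])

H = 16                                        # hidden units (illustrative)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 3)); b2 = np.zeros(3)

def forward(x):
    h = np.tanh(x @ W1 + b1)                  # shared nonlinear representation
    return h, h @ W2 + b2                     # one linear head per output

losses, lr = [], 0.05
for _ in range(2000):                         # plain full-batch gradient descent
    h, pred = forward(x)
    err = pred - Y
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

A plain MLP like this predicts a single point for each input; the MDN and GTM variants compared in the paper instead model a full conditional distribution over the outputs.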

    Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks

    In this paper we propose and investigate a novel nonlinear unit, called the L_p unit, for deep neural networks. The proposed L_p unit receives signals from several projections of a subset of units in the layer below and computes a normalized L_p norm. We note two interesting interpretations of the L_p unit. First, the proposed unit can be understood as a generalization of a number of conventional pooling operators, such as average, root-mean-square and max pooling, widely used in, for instance, convolutional neural networks (CNN), HMAX models and neocognitrons. Furthermore, the L_p unit is, to a certain degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013), which achieved state-of-the-art object recognition results on a number of benchmark datasets. Second, we provide a geometrical interpretation of the activation function, based on which we argue that the L_p unit is more efficient at representing complex, nonlinear separating boundaries. Each L_p unit defines a superelliptic boundary, with its exact shape determined by the order p. We claim that this makes it possible to model arbitrarily shaped, curved boundaries more efficiently by combining a few L_p units of different orders. This insight justifies the need to learn a different order for each unit in the model. We empirically evaluate the proposed L_p units on a number of datasets and show that multilayer perceptrons (MLP) consisting of L_p units achieve state-of-the-art results on a number of benchmark datasets. Furthermore, we evaluate the proposed L_p unit on the recently proposed deep recurrent neural networks (RNN). Comment: ECML/PKDD 201
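The pooling interpretation can be sketched directly: a normalised L_p norm of the incoming projections recovers average pooling (of magnitudes) at p = 1, root-mean-square pooling at p = 2, and approaches max pooling as p grows. The normalisation by the number of inputs used here is one common convention, assumed for illustration.

```python
import numpy as np

def lp_unit(z, p):
    """Normalised L_p norm of the input projections z: (mean |z_i|^p)^(1/p)."""
    z = np.abs(np.asarray(z, dtype=float))
    return float(np.mean(z ** p) ** (1.0 / p))

z = [1.0, 2.0, 4.0]
avg = lp_unit(z, 1)        # mean of magnitudes
rms = lp_unit(z, 2)        # root mean square
near_max = lp_unit(z, 50)  # approaches max(|z_i|) = 4 as p grows
```

Since each choice of p bends the unit's superelliptic decision boundary differently, learning a separate order per unit (as the paper proposes) lets the network mix boundary shapes.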

    Isolating Stock Prices Variation with Neural Networks

    In this study we aim to define a mapping function that relates the general index value among a set of shares to the prices of individual shares. In more general terms, this is a problem of defining the relationship between multivariate data distributions and a specific source of variation within these distributions, where the source of variation in question represents a quantity of interest related to a particular problem domain. In this respect we aim to learn a complex mapping function that can be used for mapping different values of the quantity of interest to typical novel samples of the distribution. In our investigation we compare the performance of standard neural network-based methods such as Multilayer Perceptrons (MLPs) and Radial Basis Functions (RBFs), as well as Mixture Density Networks (MDNs) and a latent variable method, the Generative Topographic Mapping (GTM). As a reference benchmark of prediction accuracy, we consider a simple method based on the average values over certain intervals of the quantity of interest that we are trying to isolate (the so-called Sample Average (SA) method). According to the results, MLPs and RBFs outperform MDNs and the GTM for this one-to-many mapping problem.
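The Sample Average baseline described above can be sketched as follows: bin the quantity of interest (the index value), and for each bin predict the average of the multivariate training samples falling in it. The number of bins and the toy data are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

def fit_sample_average(q, X, n_bins=10):
    """Return bin edges over q and the per-bin mean of the samples X."""
    edges = np.linspace(q.min(), q.max(), n_bins + 1)
    idx = np.clip(np.digitize(q, edges) - 1, 0, n_bins - 1)
    means = np.array([X[idx == b].mean(axis=0) for b in range(n_bins)])
    return edges, means

def predict_sample_average(q_new, edges, means):
    """Predict the stored bin average for each new value of the quantity of interest."""
    idx = np.clip(np.digitize(q_new, edges) - 1, 0, len(means) - 1)
    return means[idx]

# Toy data: two "share prices" driven by one "index" value q, plus noise
rng = np.random.default_rng(2)
q = rng.uniform(0, 1, 500)
X = np.column_stack([3 * q, 1 - q]) + rng.normal(0, 0.05, (500, 2))
edges, means = fit_sample_average(q, X)
pred = predict_sample_average(np.array([0.5]), edges, means)
```

Such a piecewise-constant predictor is a natural floor for the comparison: any learned mapping (MLP, RBF, MDN or GTM) should at least beat averaging within intervals.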

    Signal processing algorithms for digital hearing aids

    Hearing loss is a problem that severely affects speech communication and prevents most hearing-impaired people from leading a normal life. Although the vast majority of hearing loss cases could be corrected by using hearing aids, only a small fraction of the hearing-impaired people who could benefit from hearing aids actually purchase one. This limited use of hearing aids arises from a problem that, to date, has not been solved effectively and comfortably: the automatic adaptation of the hearing aid to the changing acoustic environment that surrounds its user. There are two approaches that aim to address it. On the one hand, the "manual" approach, in which the user has to identify the acoustic situation and choose the adequate amplification program, has been found to be very uncomfortable. The second approach includes an automatic program selection within the hearing aid. This latter approach is deemed very useful by most hearing aid users, even if its performance is not completely perfect. Although the need for such an automatic sound classification system seems clear, its implementation is a very difficult matter. Developing an automatic sound classification system in a digital hearing aid is a challenging goal because of the inherent limitations of the Digital Signal Processor (DSP) the hearing aid is based on. The underlying reason is that most digital hearing aids have very strong constraints in terms of computational capacity, memory and battery, which seriously limit the implementation of advanced algorithms in them. With this in mind, this thesis focuses on the design and implementation of a prototype digital hearing aid able to automatically classify the acoustic environments hearing aid users face daily and to select the amplification program best adapted to each environment, aiming to enhance the speech intelligibility perceived by the user.
The most important contribution of this thesis is the implementation of such a prototype: a digital hearing aid that automatically classifies the acoustic environment surrounding its user and selects the most appropriate amplification program for that environment, aiming to enhance the sound quality perceived by the user. The battery life of this hearing aid is 140 hours, which is very similar to that of hearing aids on the market, and, of key importance, about 30% of the DSP resources remain available for implementing other algorithms.

    Dynamic non-linear system modelling using wavelet-based soft computing techniques

    The enormous number of complex systems results in the need for high-level and cost-efficient modelling structures for operators and system designers. Model-based approaches offer a very challenging way to integrate a priori knowledge into the procedure. Soft-computing-based models in particular can successfully be applied to highly nonlinear problems. A further reason for dealing with so-called soft computational model-based techniques is that in real-world cases, often only partial, uncertain and/or inaccurate data are available. Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling non-linear dynamical systems in real-world problems, in conjunction with possible twists and novelties aiming for more accurate and less complex modelling structures. Initially, an on-line structure and parameter design is considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions, and consequently fuzzy rules, is circumvented by applying an adaptive structure. The growth of a particular fungus (Monascus ruber van Tieghem) is examined against several other approaches for further justification of the proposed methodology. Extending this line of research, two Morlet Wavelet Neural Network (WNN) structures are introduced. Increasing accuracy and decreasing computational cost are the primary targets of the proposed novelties. Replacing the synaptic weights with Linear Combination Weights (LCW) and imposing a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS) are the tools utilised for these challenges. The two models differ in structure while sharing the same HLA scheme.
The second approach contains an additional multiplication layer, and its hidden layer contains several sub-WNNs for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system, the survival curves of Listeria monocytogenes in Ultra-High Temperature (UHT) whole milk, and consolidated by a comprehensive comparison with other suggested schemes. At the next stage, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from the clustering features of Gaussian Mixture Models (GMMs), updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only for extracting useful knowledge from the data by building accurate regressions, but also for the identification of complex systems. The structure of the FWNN is based on fuzzy rules that include wavelet functions in the consequent parts of the rules. To improve the function approximation accuracy and generalization capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight and membership parameters. An Extended Kalman Filter (EKF) is employed for wavelet parameter adjustment, together with Weighted Least Squares (WLS), which is dedicated to fine-tuning the Linear Combination Weights. The results of a real-world application to Short-Term Load Forecasting (STLF) further reinforce the plausibility of the above technique.
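A wavelet-network node of the kind used above can be sketched with the real-valued Morlet mother wavelet, commonly written psi(t) = cos(5t) exp(-t^2/2). The dilations, translations and linear combination weights below are illustrative values standing in for the trainable parameters, not the thesis's actual learned structure.

```python
import numpy as np

def morlet(t):
    """Real-valued Morlet mother wavelet."""
    return np.cos(5.0 * t) * np.exp(-0.5 * t ** 2)

def wnn_output(x, a, b, w):
    """Linear combination of dilated/translated wavelets: sum_j w_j * psi((x - b_j) / a_j)."""
    t = (x[:, None] - b[None, :]) / a[None, :]
    return morlet(t) @ w

x = np.linspace(-2, 2, 5)     # sample inputs
a = np.array([1.0, 0.5])      # dilations
b = np.array([0.0, 1.0])      # translations
w = np.array([1.0, -0.5])     # linear combination weights (the LCW role)
y = wnn_output(x, a, b, w)
```

In the hybrid schemes described above, the wavelet parameters (a, b) and the combination weights w are updated by different algorithms (e.g. gradient descent or EKF for the wavelet parameters, RLS or WLS for the weights), which is what the HLA split refers to.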

    Neural Networks

    We present an overview of current research on artificial neural networks, emphasizing a statistical perspective. We view neural networks as parameterized graphs that make probabilistic assumptions about data, and view learning algorithms as methods for finding parameter values that look probable in the light of the data. We discuss basic issues in representation and learning, and treat some of the practical issues that arise in fitting networks to data. We also discuss links between neural networks and the general formalism of graphical models.

    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be obtained directly, without numerical integration. We have developed an extended version of the expectation-maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is handled consistently in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
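For reference, the classical EM algorithm that the paper extends can be sketched for a one-dimensional Gaussian mixture with exact observations. The initialisation, two-component setup and iteration count below are illustrative assumptions; the paper's contribution is precisely the generalisation of these updates to uncertain (indirectly observed) training examples.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Classical EM for a two-component 1-D Gaussian mixture (exact observations)."""
    mu = np.array([x.min(), x.max()])   # crude initial means at the data extremes
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to pi_k * N(x_n | mu_k, var_k)
        d = x[:, None] - mu[None, :]
        r = pi * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Toy data: two well-separated components
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 300)])
pi, mu, var = em_gmm_1d(x)
```

In the paper's extended version, the E-step responsibilities are computed against the likelihood of an uncertain observation rather than a point value, which is what allows exact and uncertain information to be treated uniformly.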