
    Theoretical Interpretations and Applications of Radial Basis Function Networks

    In medical applications, Radial Basis Function Networks (RBFNs) have usually been treated simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, together with a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
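    A minimal, illustrative sketch of the kernel/regularization reading of an RBFN follows (it is not taken from the survey; the Gaussian width, ridge term, and toy data are assumptions): Gaussian basis functions centred on the training points, with output-layer weights fitted by regularized least squares.

```python
# Minimal RBFN sketch (illustrative only, not from the survey):
# Gaussian basis functions centred on the training points, with the
# output-layer weights fitted by ridge-regularized least squares --
# the "Regularization Network" / "Kernel Estimator" reading of an RBFN.
import numpy as np

def gaussian_design(X, centers, width):
    # Pairwise squared distances between inputs and centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfn(X, y, width=1.0, ridge=1e-3):
    # Use the training points themselves as centers (an assumption of this toy).
    Phi = gaussian_design(X, X, width)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(X)), Phi.T @ y)
    return X, w, width

def predict_rbfn(model, X_new):
    centers, w, width = model
    return gaussian_design(X_new, centers, width) @ w

# Toy usage: regress a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
model = fit_rbfn(X, y)
print(predict_rbfn(model, np.array([[0.0], [1.5]])))
```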

    A machine learning approach for efficient uncertainty quantification using multiscale methods

    Several multiscale methods account for sub-grid-scale features using coarse-scale basis functions. For example, in the Multiscale Finite Volume method the coarse-scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse-scale basis functions. Specifically, we employ a neural network predictor fitted on a set of solution samples, from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized in uncertainty quantification tasks, where a large number of realizations have to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems, yielding very promising results. (Comment: Journal of Computational Physics, 2017)
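    The sketch below illustrates the general idea only, under assumed patch and basis sizes and with synthetic stand-in data (the paper's actual network and training set are not reproduced): a small regression network learns to map a local permeability patch to the values of the corresponding coarse-scale basis function, so that at prediction time the local problem need not be solved.

```python
# Sketch of the idea only (sizes, architecture and data are assumptions,
# not the paper's): a small one-hidden-layer regression network maps a
# local permeability patch to the values of the corresponding
# coarse-scale basis function, replacing the local solve at test time.
import numpy as np

rng = np.random.default_rng(1)
PATCH, BASIS = 25, 36            # hypothetical 5x5 patch -> 6x6 basis values
H = 64                           # hidden width

# Synthetic stand-in data; in practice targets come from solved local problems.
X = rng.lognormal(size=(500, PATCH))          # permeability patches
true_map = rng.normal(size=(PATCH, BASIS))
Y = np.tanh(np.log(X) @ true_map)             # surrogate "basis function" targets

W1 = rng.normal(scale=0.1, size=(PATCH, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, BASIS)); b2 = np.zeros(BASIS)

lr = 1e-3
for step in range(2000):
    # Forward pass: log-permeability in, tanh hidden layer, linear output.
    Hid = np.tanh(np.log(X) @ W1 + b1)
    pred = Hid @ W2 + b2
    err = pred - Y
    # Backward pass for the mean squared error.
    gW2 = Hid.T @ err / len(X); gb2 = err.mean(axis=0)
    gHid = (err @ W2.T) * (1 - Hid ** 2)
    gW1 = np.log(X).T @ gHid / len(X); gb1 = gHid.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float((err ** 2).mean()))
```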

    Neural Network Modelling of Constrained Spatial Interaction Flows

    Fundamental to regional science is the subject of spatial interaction. GeoComputation - a new research paradigm that represents the convergence of the disciplines of computer science, geographic information science, mathematics and statistics - has brought many scholars back to spatial interaction modeling. Neural spatial interaction modeling represents a clear break with traditional methods used for explicating spatial interaction. Neural spatial interaction models are termed neural in the sense that they are based on neurocomputing. They are clearly related to conventional unconstrained spatial interaction models of the gravity type, and under commonly met conditions they can be understood as a special class of general feedforward neural network models with a single hidden layer and sigmoidal transfer functions (Fischer 1998). These models have been used to model journey-to-work flows and telecommunications traffic (Fischer and Gopal 1994, Openshaw 1993). They appear to provide superior levels of performance when compared with unconstrained conventional models.
    In many practical situations, however, we have - in addition to the spatial interaction data itself - some information about various accounting constraints on the predicted flows. In principle, there are two ways to incorporate accounting constraints in neural spatial interaction modeling. The required constraint properties can be built into the post-processing stage, or they can be built directly into the model structure. While the first way is relatively straightforward, it is inefficient and results in a model that does not inherently respect the constraints. Thus we follow the second way.
    In this paper we present a novel class of neural spatial interaction models that incorporate origin-specific constraints into the model structure, using product units rather than summation units at the hidden layer and softmax units at the output layer. Product unit neural networks are powerful because of their ability to handle higher-order combinations of inputs, but parameter estimation by standard techniques such as gradient descent may be difficult. The performance of this novel class of spatial interaction models is demonstrated using Austrian interregional traffic data, with the conventional singly constrained spatial interaction model of the gravity type as benchmark.
    References:
    Fischer M M (1998) Computational neural networks: A new paradigm for spatial analysis. Environment and Planning A 30(10): 1873-1891
    Fischer M M, Gopal S (1994) Artificial neural networks: A new approach to modelling interregional telecommunication flows. Journal of Regional Science 34(4): 503-527
    Openshaw S (1993) Modelling spatial interaction using a neural net. In Fischer M M, Nijkamp P (eds) Geographical information systems, spatial modelling, and policy evaluation, pp. 147-164. Springer, Berlin
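    A toy sketch of the structural idea described in this abstract, with assumed sizes and random rather than fitted weights: product units at the hidden layer, computed in log space for positive inputs, and a softmax output layer so that the predicted destination shares for each origin sum to one and can be scaled by the known origin totals.

```python
# Toy illustration of the structural idea (not the authors' exact model):
# product units at the hidden layer, i.e. h_j = prod_i x_i ** w_ij,
# computed in log space for positive inputs, and a softmax output layer
# so that predicted destination shares for each origin sum to one.
import numpy as np

def product_units(X, W):
    # X: positive inputs (n_origins, n_inputs); W: exponents (n_inputs, n_hidden).
    return np.exp(np.log(X) @ W)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict_flows(X, W_hidden, W_out, origin_totals):
    # Hidden product units, softmax shares over destinations, then
    # scaling by the known origin totals (the origin constraint).
    H = product_units(X, W_hidden)
    shares = softmax(H @ W_out)
    return origin_totals[:, None] * shares

# Hypothetical sizes: 3 origins, 4 destinations, 2 input variables per origin.
rng = np.random.default_rng(2)
X = rng.uniform(0.5, 2.0, size=(3, 2))
flows = predict_flows(X, rng.normal(size=(2, 5)),
                      rng.normal(size=(5, 4)),
                      origin_totals=np.array([100.0, 250.0, 80.0]))
print(flows.sum(axis=1))   # recovers the origin totals
```

    Because the softmax shares per origin sum to one, scaling them by the origin totals satisfies the origin-specific constraints by construction, which is the "build the constraints into the model structure" option discussed in the abstract.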

    Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing

    Efficient processing of time series data is a fundamental yet challenging problem in pattern recognition. Though recent developments in machine learning and deep learning have enabled remarkable improvements in processing large scale datasets in many application domains, most models are designed to handle inputs that are static in time. Many real-world data, such as in biomedical, surveillance and security, financial, manufacturing and engineering applications, are rarely static in time and demand models able to recognize patterns in both space and time. Current machine learning (ML) and deep learning (DL) models adapted for time series processing tend to grow in complexity and size to accommodate the additional dimensionality of time. Specifically, the biologically inspired learning-based models known as artificial neural networks, which have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome in the presence of large scale multi-dimensional time series biomedical data such as EEG. Consequently, this work aims to develop representative ML and DL models for robust and efficient large scale time series processing.
    First, we design a novel ML pipeline with efficient feature engineering to process a large scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. Using a sophisticated yet computationally efficient time-frequency analysis technique known as the harmonic wavelet packet transform and an efficient self-similarity computation based on fractal dimension, we achieve state-of-the-art performance for automated seizure detection in EEG data.
    Subsequently, we investigate the development of a novel efficient deep recurrent learning model for large scale time series processing. For this, we first study the functionality and training of a biologically inspired neural network architecture known as the cellular simultaneous recurrent neural network (CSRN). We obtain a generalization of this network for multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods.
    Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture based on the biologically inspired distributed processing used in the CSRN for processing time series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time series data. Experiments on a large scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units.
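    The sketch below illustrates only the weight-sharing idea behind the cellular recurrent design, with an assumed cell type and layer sizes (not the dissertation's architecture): a single small recurrent cell is applied to every channel of a multi-channel time series, so the number of trainable recurrent parameters does not grow with the number of channels.

```python
# Weight-sharing sketch only (cell type and sizes are assumptions, not
# the dissertation's architecture): one small vanilla RNN cell is applied
# independently to every channel of a multi-channel time series, so the
# trainable recurrent parameters are shared across all channels.
import numpy as np

rng = np.random.default_rng(3)
CHANNELS, T, HID = 16, 128, 8        # hypothetical EEG-like input: 16 channels

# One shared recurrent cell (scalar input per channel per time step).
W_in = rng.normal(scale=0.1, size=(1, HID))
W_rec = rng.normal(scale=0.1, size=(HID, HID))
b = np.zeros(HID)

def run_shared_cell(x):
    # x: (CHANNELS, T). Each channel keeps its own hidden state, but all
    # channels reuse the same weights (W_in, W_rec, b).
    h = np.zeros((x.shape[0], HID))
    for t in range(x.shape[1]):
        h = np.tanh(x[:, t:t + 1] @ W_in + h @ W_rec + b)
    return h                          # final per-channel state: (CHANNELS, HID)

signal = rng.standard_normal((CHANNELS, T))
features = run_shared_cell(signal)
print(features.shape)                 # (16, 8), pooled or concatenated downstream
```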