
    Similarity-based heterogeneous neuron models

    This paper introduces a general class of neuron models accepting heterogeneous inputs in the form of mixtures of continuous (crisp or fuzzy) numbers, linguistic information, and discrete (either ordinal or nominal) quantities, with provision also for missing information. Their internal stimulation is based on an explicit similarity relation between the input and weight tuples (which are also heterogeneous). The framework is comprehensive, and several models can be derived as instances; in particular, two commonly used models are shown to compute a specific similarity function provided all inputs are real-valued and complete. An example family of models, defined by composing a Gower-based similarity with a sigmoid function, is shown to lead to network designs (Heterogeneous Neural Networks) capable of learning from non-trivial data sets with remarkable effectiveness, comparable to that of classical models.
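    To make the construction concrete, below is a minimal sketch of a Gower-based similarity neuron composed with a sigmoid, in the spirit of the abstract. The variable-type tags, the handling of missing values by omission, and the `slope`/`shift` parameters are illustrative assumptions, not the paper's exact formulation.

```python
import math

def gower_similarity(x, w, kinds, ranges):
    """Mean of partial input-weight similarities, skipping missing pairs."""
    sims = []
    for xi, wi, kind, rng in zip(x, w, kinds, ranges):
        if xi is None or wi is None:          # missing information: omit the pair
            continue
        if kind == "num":                     # continuous: range-normalized closeness
            sims.append(1.0 - abs(xi - wi) / rng)
        else:                                 # nominal: overlap (match / no match)
            sims.append(1.0 if xi == wi else 0.0)
    return sum(sims) / len(sims) if sims else 0.0

def heterogeneous_neuron(x, w, kinds, ranges, slope=4.0, shift=0.5):
    """Gower similarity composed with a sigmoid centered on mid-similarity."""
    s = gower_similarity(x, w, kinds, ranges)
    return 1.0 / (1.0 + math.exp(-slope * (s - shift)))

# Example: a numeric, a nominal, and a missing numeric input.
kinds  = ["num", "nom", "num"]
ranges = [10.0, None, 5.0]
print(heterogeneous_neuron([3.2, "red", None], [4.0, "red", 2.0], kinds, ranges))
```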

    Using fuzzy heterogeneous neural networks to learn a model of the central nervous system control

    Fuzzy heterogeneous networks based on similarity are recently introduced feed-forward neural network models composed of neurons of a general class whose inputs are mixtures of continuous (crisp and/or fuzzy) and discrete quantities, also admitting missing data. These networks have activation functions based on similarity relations between inputs and neuron weights. They can be coupled with classical neurons in hybrid network architectures, trained with genetic algorithms. This paper compares the effectiveness of this similarity-based fuzzy heterogeneous model with the classical feed-forward one (scalar-product driven and using crisp quantities) in a time-series prediction setting. The results show a remarkable increase in performance over the classical neuron, and performance comparable to that of other powerful current techniques, such as the FIR methodology.
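    The abstract mentions training with genetic algorithms; below is a minimal sketch of such an evolutionary weight search under the assumption that the network's weights are flattened into a real-valued vector and that `fitness` returns the prediction error to minimize. The representation and operators (truncation selection, one-point crossover, Gaussian mutation) are generic choices, not the paper's specific ones.

```python
import random

def evolve(fitness, dim, pop_size=50, generations=200, sigma=0.1):
    """Simple generational genetic search over real-valued weight vectors."""
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 2]   # lower error wins
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(dim)                       # one-point crossover
            child = [g + random.gauss(0, sigma)               # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```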

    Heterogeneous neural networks: theory and applications

    This work presents a class of functions serving as generalized neuron models to be used in artificial neural networks. They are cast into the common framework of computing a similarity function, a flexible definition of a neuron as a pattern recognizer. The similarity endows the model with a clear conceptual view and serves as a unifying cover for many existing neural models, including those classically used in the MultiLayer Perceptron (MLP) and most of those used in Radial Basis Function networks (RBF). These families of models are conceptually unified and their relation is clarified. The possibilities of deriving new instances are explored, and several neuron models, representative of their families, are proposed. The similarity view naturally leads to further extensions of the models to handle heterogeneous information, that is, information coming from sources radically different in character, including continuous and discrete (ordinal) numerical quantities, nominal (categorical) quantities, and fuzzy quantities. Missing data are also explicitly considered. A neuron of this class is called a heterogeneous neuron, and any neural structure making use of them is a Heterogeneous Neural Network (HNN), regardless of the specific architecture or learning algorithm. Among these, this work concentrates on feed-forward networks as the initial focus of study. The learning procedures may include a great variety of techniques, basically divided into derivative-based methods (such as the conjugate gradient) and evolutionary ones (such as variants of genetic algorithms). This Thesis also explores a number of directions towards the construction of better neuron models, within an integrating envelope, more adapted to the problems they are meant to solve. It is described how a certain generic class of heterogeneous models leads to satisfactory performance, comparable and often superior to that of classical neural models, especially in the presence of heterogeneous, imprecise, or incomplete information, in a wide range of domains, most of them corresponding to real-world problems.
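    The unification the thesis describes can be illustrated with a small sketch: on complete real-valued inputs, both the classical scalar-product (MLP-style) unit and the Gaussian RBF unit read as similarity functions between the input and the weight vector. The exact normalizations below are assumptions made for illustration.

```python
import math

def dot_product_similarity(x, w, bias=0.0):
    """MLP-style unit: logistic of the inner product, largest when x aligns with w."""
    net = sum(xi * wi for xi, wi in zip(x, w)) + bias
    return 1.0 / (1.0 + math.exp(-net))

def rbf_similarity(x, w, width=1.0):
    """RBF-style unit: Gaussian of Euclidean distance, maximal when x == w."""
    d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-d2 / (2.0 * width ** 2))
```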

    Fuzzy heterogeneous neurons for imprecise classification problems

    In the classical neuron model, inputs are continuous real-valued quantities. However, in many important real-world domains, objects are described by a mixture of continuous and discrete variables, usually containing missing information and uncertainty. This paper presents a general class of neuron models accepting heterogeneous inputs in the form of mixtures of continuous (crisp and/or fuzzy) and discrete quantities, admitting missing data. From these, several particular models can be derived as instances, and different neural architectures can be constructed with them. Such models deal in a natural way with problems for which information is imprecise or even missing. Their possibilities in classification and diagnostic problems are illustrated here by experiments with data from a real-world domain in the field of environmental studies. These experiments show that such neurons can both learn and classify complex data very effectively in the presence of uncertain information.
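    One simple way to extend a partial similarity to the fuzzy inputs mentioned above is sketched below, assuming triangular fuzzy numbers given as (left, mode, right), with crisp values as the degenerate case left == mode == right. The specific measure (mean pointwise closeness of the three defining points) is an illustrative assumption, not the paper's measure.

```python
def fuzzy_partial_similarity(x, w, rng):
    """Range-normalized closeness averaged over the three triangle points."""
    diffs = [abs(a - b) for a, b in zip(x, w)]
    return max(0.0, 1.0 - (sum(diffs) / 3.0) / rng)

crisp = (2.0, 2.0, 2.0)     # a precise reading
fuzzy = (1.5, 2.5, 3.5)     # an imprecise reading, "around 2.5"
print(fuzzy_partial_similarity(crisp, fuzzy, rng=10.0))
```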

    Fuzzy heterogeneous neural networks for signal forecasting

    Fuzzy heterogeneous neural networks are recently introduced models based on neurons accepting heterogeneous inputs (i.e., mixtures of numerical and non-numerical information, possibly with missing data) of either crisp or imprecise character, which can be coupled with classical neurons. This paper compares the effectiveness of this kind of network with time-delay and recurrent architectures that use classical neuron models and training algorithms on a signal forecasting problem, in the context of finding models of the central nervous system controllers.

    Heterogeneous Kohonen networks

    A large number of practical problems involve elements that are described by a mixture of qualitative and quantitative information and whose description is often incomplete. The self-organizing map is an effective tool for visualization of high-dimensional continuous data. In this work, we extend the network and its training algorithm to cope with heterogeneous information as well as missing values. The classification performance on a collection of benchmark data sets is compared across different configurations. Various visualization methods are suggested to help users interpret post-training results.
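    A minimal sketch of one self-organizing-map training step extended to heterogeneous data follows. It assumes numeric components move toward the input as usual, nominal components are overwritten stochastically with probability given by the neighborhood-scaled rate, and missing input components are simply skipped; the nominal update rule is an assumption, not necessarily the paper's exact mechanism.

```python
import random

def het_distance(x, unit, kinds, ranges):
    """Heterogeneous distance; missing input components are skipped."""
    d, n = 0.0, 0
    for xi, ui, kind, rng in zip(x, unit, kinds, ranges):
        if xi is None:
            continue
        d += (abs(xi - ui) / rng) if kind == "num" else (0.0 if xi == ui else 1.0)
        n += 1
    return d / n if n else float("inf")

def train_step(x, units, kinds, ranges, lr, neigh):
    """One SOM step: find the best-matching unit, then update the lattice."""
    bmu = min(range(len(units)),
              key=lambda i: het_distance(x, units[i], kinds, ranges))
    for i, unit in enumerate(units):
        h = lr * neigh(bmu, i)                   # neighborhood-scaled rate in [0, 1]
        for k, (xi, kind) in enumerate(zip(x, kinds)):
            if xi is None:
                continue
            if kind == "num":
                unit[k] += h * (xi - unit[k])    # standard numeric SOM update
            elif random.random() < h:
                unit[k] = xi                     # stochastic nominal update
    return bmu
```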

    Similarity networks for classification: a case study in the Horse Colic problem

    This paper develops a two-layer neural network in which the neuron model computes a user-defined similarity function between inputs and weights. The neuron transfer function is formed by composing an adapted logistic function with the mean of the partial input-weight similarities. The resulting neuron model is capable of dealing directly with variables of potentially different nature (continuous, fuzzy, ordinal, categorical), with provision also for missing values. The network is trained using a two-stage procedure very similar to that used to train a radial basis function (RBF) neural network. The network is compared to two types of RBF networks on a non-trivial dataset, the Horse Colic problem, taken as a case study and analyzed in detail.
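    Below is a minimal sketch of the RBF-like two-stage procedure the abstract describes, under the assumption that the hidden prototypes have already been chosen (stage one, e.g. by clustering) and that `similarity(x, p)` is a given partial-similarity aggregate such as the Gower-based one sketched earlier; stage two then fits the linear output layer by least squares. All names are illustrative.

```python
import numpy as np

def fit_output_layer(X, y, prototypes, similarity):
    """Stage two: solve the output weights over fixed hidden activations."""
    H = np.array([[similarity(x, p) for p in prototypes] for x in X])
    H = np.hstack([H, np.ones((len(X), 1))])         # bias column
    w, *_ = np.linalg.lstsq(H, y, rcond=None)        # least-squares fit
    return w

def predict(x, prototypes, similarity, w):
    h = np.array([similarity(x, p) for p in prototypes] + [1.0])
    return h @ w
```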

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.