
    Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Biologically inspired models and algorithms are considered promising sensor-array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper investigates how the classification performance of a bionic olfactory model changes as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. Principal component analysis was applied for feature selection and dimension reduction. Two data sets were used in the experiments: three classes of wine derived from different cultivars, and five classes of green tea from five different provinces of China. In the former case, the average correct classification rate increased as more principal components were added to the feature vector. In the latter case, sufficient parallel channels had to be reserved in the model to avoid crowding of the pattern space. We conclude that 6∼8 channels, with a principal component feature vector capturing at least 90% of the cumulative variance, are adequate for a classification task of 3∼5 pattern classes, considering the trade-off between time consumption and classification rate.
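    As a rough illustration of the feature-selection step described above, the sketch below keeps the smallest number of principal components whose cumulative explained variance reaches 90% and feeds them to a stand-in classifier. It assumes a generic feature matrix X and labels y and uses scikit-learn; the paper's bionic olfactory model itself is not reproduced here.

```python
# Illustrative sketch (not the paper's code): keep the principal components
# that retain at least 90% cumulative variance, then classify.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier  # stand-in for the bionic olfactory model

def reduce_to_90_percent_variance(X):
    pca = PCA()
    scores = pca.fit_transform(X)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    n_keep = int(np.searchsorted(cumvar, 0.90) + 1)  # smallest k with cumvar >= 0.90
    return scores[:, :n_keep], n_keep

# Example usage with synthetic data standing in for sensor-array responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))    # 60 samples, 16 sensor features (hypothetical)
y = np.repeat([0, 1, 2], 20)     # 3 classes, e.g., 3 wine cultivars
X_pc, k = reduce_to_90_percent_variance(X)
acc = cross_val_score(KNeighborsClassifier(3), X_pc, y, cv=5).mean()
print(f"kept {k} PCs, CV accuracy {acc:.2f}")
```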

    THINKING AND LANGUAGE: EEG MATURATION AND MODEL OF CONTEXTUAL LANGUAGE LEARNING

    Abstract. Modeling the verbal modulation of information is an important but extremely complex task, still waiting to be fully accomplished in future hierarchical neural networks. However, it seems that the basic brain mechanisms responsible for the semantic/pragmatic/syntactic organization of natural language are known (Pribram, 1977). According to the oscillator model of Ellias and Grossberg (1975), EEG rhythmicity is realistically predicted in such a way that an increase in the input causes an increase in the frequency of oscillations and a decrease in their amplitude, offering a unified explanation of EEG waves ranging from δ to γ. This might also be the basic mechanism by which information ascends, via frontolimbic amplification, from the lower-frequency (δ, θ) unconscious form of primordial subliminal thought to the higher-frequency (α, β, γ) conscious thought (Rakovic, 1997). It should then be pointed out that the frontolimbic-amplification mechanism of pragmatic processing, combined with the increase of the dominant EEG frequency from δ to γ brainwaves during ontogenesis (Petersen et al., 1974), implies that the mother tongue is generally memorized at the low-frequency δ and θ levels (later unconscious in adults), in contrast to second and further languages in bilinguals and multilinguals, which are memorized at the high-frequency α, β, and γ levels (later conscious in adults). This in turn implies that second and further languages are hardly incorporated at unconscious (automatic) levels, except through contextual learning, which enables unconscious processing of contexts; this might explain differences in language learning in childhood and adulthood, as well as in school and in the living environment (Rakovic, 1997).

    Keywords: Neural networks, brainwaves, modeling of psychological functions (learning, memorizing, consciousness, thinking, language).

    The prevailing scientific paradigm considers information processing inside the central nervous system to occur through hierarchically organized and interconnected neural networks. For instance, visual information is first hierarchically processed at the level of the retina (from the photoreceptor rods and cones to the ganglion cells), and is then hierarchically processed within the primary, secondary, and tertiary sensory and interpretative cortical regions (each of which is additionally constituted of hierarchies of several neural networks). Along with the development of experimental techniques enabling physiological investigation of the interactions between hierarchically interconnected neighboring levels of biological neural networks, a significant contribution to establishing the neural network paradigm was made by theoretical breakthroughs in this field during the past two decades.

    A Novel Chaotic Neural Network Using Memristive Synapse with Applications in Associative Memory

    A chaotic neural network (CNN) has rich dynamical behaviors that can be harnessed in promising engineering applications. However, due to its complex synapse learning rules and network structure, it is difficult to update its synaptic weights quickly and to implement its large-scale physical circuit. This paper addresses an implementation scheme for a novel CNN with memristive neural synapses that may provide a feasible solution for the further development of CNNs. The memristor, widely known as the fourth fundamental circuit element, was theoretically predicted by Chua in 1971 and was physically demonstrated in 2008 by researchers at Hewlett-Packard Laboratories. Memristor-based hybrid nanoscale CMOS technology is expected to revolutionize digital and neuromorphic computation. The proposed memristive CNN has four significant features: (1) nanoscale memristors greatly simplify the synaptic circuit and allow the synaptic weights to be updated easily; (2) it can separate stored patterns from superimposed input; (3) it can handle one-to-many associative memory; (4) it can handle many-to-many associative memory. Simulation results are provided to illustrate the effectiveness of the proposed scheme.
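    For readers unfamiliar with the device, the following minimal sketch shows the widely used linear ion-drift memristor model (Strukov et al., 2008), in which a synaptic weight is stored as a continuously adjustable resistance driven by the current that has flowed through the device. The parameter values and the drive waveform are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the linear-drift HP memristor model; values are illustrative.
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # low/high resistance states (ohms)
D = 10e-9                   # device thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 s^-1 V^-1)

def simulate_memristor(current, dt, w0=0.5 * D):
    """Integrate the doped-region width w under a current drive i(t)."""
    w = w0
    resistance = []
    for i in current:
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)  # memristance M(w)
        resistance.append(m)
        w += MU_V * (R_ON / D) * i * dt             # dw/dt = mu_v * R_on/D * i(t)
        w = np.clip(w, 0.0, D)                      # hard boundary condition
    return np.array(resistance)

# A positive current pulse drives the device toward R_ON (potentiation),
# a negative pulse back toward R_OFF (depression): a tunable synaptic weight.
t = np.linspace(0.0, 20e-3, 2000)
drive = 2e-3 * np.sign(np.sin(2 * np.pi * 50 * t))  # +/-2 mA square wave
print(simulate_memristor(drive, dt=t[1] - t[0])[[0, 999, -1]])
```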

    Exploring the landscapes of "computing": digital, neuromorphic, unconventional -- and beyond

    The acceleration race of digital computing technologies seems to be steering toward impasses -- technological, economic, and environmental -- a condition that has spurred research efforts in alternative, "neuromorphic" (brain-like) computing technologies. Furthermore, for decades the idea of exploiting nonlinear physical phenomena "directly" for non-digital computing has been explored under names like "unconventional computing", "natural computing", "physical computing", or "in-materio computing". This has taken place in niches that are small compared to other sectors of computer science. In this paper I stake out the grounds for a general concept of "computing" that comprises digital, neuromorphic, unconventional, and possible future "computing" paradigms. The main contribution of this paper is a wide-scope survey of existing formal conceptualizations of "computing". The survey inspects approaches rooted in three different kinds of background mathematics: discrete-symbolic formalisms, probabilistic modeling, and dynamical-systems oriented views. It turns out that different choices of background mathematics lead to decisively different understandings of what "computing" is. Across all of this diversity, a unifying coordinate system for theorizing about "computing" can be distilled. Within these coordinates I locate anchor points for a foundational formal theory of a future computing-engineering discipline that includes, but will reach beyond, digital and neuromorphic computing. Comment: An extended and carefully revised version of this manuscript has now (March 2021) been published as "Toward a generalized theory comprising digital, neuromorphic, and unconventional computing" in the new open-access journal Neuromorphic Computing and Engineering.

    A Theory of Cortical Neural Processing.

    This dissertation puts forth an original theory of cortical neural processing that is unique in its view of the interplay of chaotic and stable oscillatory neurodynamics and is meant to stimulate new ideas in artificial neural network modeling. Our theory is the first to suggest two new purposes for chaotic neurodynamics: (i) as a natural means of representing the uncertainty in the outcome of performed tasks, such as memory retrieval or classification, and (ii) as an automatic way of producing an economical representation of distributed information. We developed new models to better understand how the cerebral cortex processes information, and these models led to our theory. Common to these models is a neuron interaction function that alternates between excitatory and inhibitory neighborhoods. Our theory allows characteristics of the input environment to influence the structural development of the cortex. We view low-intensity chaotic activity as the a priori uncertain base condition of the cortex, resulting from the interaction of a multitude of stronger potential responses. Data that distinguish one response from the many others drive bifurcations back toward less complex (stable) behavior. Stability appears as temporary bubble-like clusters within the boundaries of cortical columns and begins to propagate through frequency-sensitive and non-specific neurons, but this propagation is limited by destabilizing long-path connections. An original model of the post-natal development of ocular dominance columns in the striate cortex is presented and compared to autoradiographic images from the literature, with good agreement. Finally, experiments show that a computed update order outperforms traditional approaches in the pattern completion process.
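    To make the "interaction function that alternates between excitatory and inhibitory neighborhoods" concrete, here is a generic sketch using a damped cosine of distance. The dissertation's actual function is not given in the abstract, so this kernel and its parameters are assumptions chosen purely for illustration.

```python
# Generic sketch of a distance-dependent interaction kernel that alternates
# between excitatory and inhibitory neighborhoods (illustrative, not the
# dissertation's exact function).
import numpy as np

def interaction(distance, wavelength=8.0, decay=12.0):
    """Excitatory near zero, then alternating excitatory/inhibitory rings."""
    return np.cos(2 * np.pi * distance / wavelength) * np.exp(-distance / decay)

d = np.arange(0, 17)
print(np.round(interaction(d), 2))   # sign alternates with distance and decays

# Lateral weight matrix for a 1-D sheet of neurons built from this kernel;
# it could serve as the connectivity of a simple cortical-sheet simulation.
positions = np.arange(64, dtype=float)
W = interaction(np.abs(positions[:, None] - positions[None, :]))
np.fill_diagonal(W, 0.0)             # no self-connection
```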

    Sensor-based machine olfaction with neuromorphic models of the olfactory system

    Electronic noses combine an array of cross-selective gas sensors with a pattern recognition engine to identify odors. Pattern recognition of multivariate gas sensor responses is usually performed using existing statistical and chemometric techniques. An alternative solution involves developing novel algorithms inspired by information processing in the biological olfactory system. The objective of this dissertation is to develop a neuromorphic pattern recognition architecture for a chemosensor array, inspired by key signal processing mechanisms in the olfactory system. Our approach can be summarized as follows. First, a high-dimensional odor signal is generated from a chemical sensor array. Three approaches have been proposed to generate this combinatorial, high-dimensional odor signal: temperature modulation of a metal-oxide chemoresistor, a large population of optical microbead sensors, and infrared spectroscopy. The resulting high-dimensional odor signals are subject to dimensionality reduction using a self-organizing model of chemotopic convergence. This convergence transforms the initial combinatorial high-dimensional code into an organized spatial pattern (i.e., an odor image), which decouples odor identity from intensity. Two lateral inhibitory circuits subsequently process the highly overlapping odor images obtained after convergence. The first, a shunting lateral inhibition circuit, performs gain control, enabling identification of the odorant across a wide range of concentrations. This is followed by an additive lateral inhibition circuit with center-surround connections. These circuits improve contrast between odor images, leading to sparser and more orthogonal patterns than those available at the input. The sharpened odor image is stored in a neurodynamic model of the cortex. Finally, anti-Hebbian/Hebbian inhibitory feedback from the cortical circuits to the contrast enhancement circuits performs mixture segmentation and weaker odor/background suppression, respectively. We validate the models using experimental datasets and show that our results are consistent with recent neurobiological findings.
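    As an illustration of the gain-control stage only, the sketch below uses the steady state of a Grossberg-style shunting on-center/off-surround network, which normalizes the input pattern so that odor identity is preserved while intensity is factored out. The parameters A and B and the example pattern are assumptions for illustration, not values from the dissertation.

```python
# Hedged sketch of shunting lateral inhibition for concentration gain control.
import numpy as np

def shunting_steady_state(I, A=1.0, B=1.0):
    """Steady state of dx_i/dt = -A x_i + (B - x_i) I_i - x_i * sum_{j!=i} I_j,
    which reduces to x_i = B * I_i / (A + sum_j I_j)."""
    return B * I / (A + I.sum())

pattern = np.array([0.1, 0.6, 1.0, 0.3])  # relative sensor activations for one odor
for concentration in (1.0, 10.0, 100.0):
    x = shunting_steady_state(concentration * pattern)
    print(concentration, np.round(x / x.max(), 3))  # pattern shape preserved as intensity grows
```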

    Vlsi Implementation of Olfactory Cortex Model

    This thesis attempts to implement, in silicon as special-purpose hardware, the building blocks required to realize the biologically motivated olfactory neural model originally developed by R. Granger, G. Lynch, and Ambros-Ingerson. CMOS analog integrated circuits were used for this purpose. All of the building blocks were fabricated using the MOSIS service and tested at our site. The results of this study can be used to realize a system-level integration of the olfactory model. Electrical Engineering

    Locally connected recurrent neural networks.

    by Evan Fung-yu Young. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves 161-166).

    List of Figures --- p.vi
    List of Tables --- p.vii
    List of Graphs --- p.viii
    Abstract --- p.ix

    Part I --- Learning Algorithms

    Chapter 1 --- Representing Time in Connectionist Models --- p.1
        1.1 Introduction --- p.1
        1.2 Temporal Sequences --- p.2
            1.2.1 Recognition Tasks --- p.2
            1.2.2 Reproduction Tasks --- p.3
            1.2.3 Generation Tasks --- p.4
        1.3 Discrete Time vs. Continuous Time --- p.4
        1.4 Time Delay Neural Network (TDNN) --- p.4
            1.4.1 Delay Elements in the Connections --- p.5
            1.4.2 NETtalk: An Application of TDNN --- p.7
            1.4.3 Drawbacks of TDNN --- p.8
        1.5 Networks with Context Units --- p.8
            1.5.1 Jordan's Network --- p.9
            1.5.2 Elman's Network --- p.10
            1.5.3 Other Architectures --- p.14
            1.5.4 Drawbacks of Using Context Units --- p.15
        1.6 Recurrent Neural Networks --- p.16
            1.6.1 Hopfield Models --- p.17
            1.6.2 Fully Recurrent Neural Networks --- p.20
                A. Examples of Using Recurrent Networks --- p.22
        1.7 Our Objective --- p.25

    Chapter 2 --- Learning Algorithms for Recurrent Neural Networks --- p.27
        2.1 Introduction --- p.27
        2.2 Gradient Descent Methods --- p.29
            2.2.1 Backpropagation Through Time (BPTT) --- p.29
            2.2.2 Real Time Recurrent Learning Rule (RTRL) --- p.30
                A. RTRL with Teacher Forcing --- p.32
                B. Terminal Teacher Forcing --- p.33
                C. Continuous Time RTRL --- p.33
            2.2.3 Variants of RTRL --- p.34
                A. Subgrouped RTRL --- p.34
                B. A Fixed Size Storage O(n^3) Time Complexity Learning Rule --- p.35
        2.3 Non-Gradient Descent Methods --- p.37
            2.3.1 Neural Bucket Brigade (NBB) --- p.37
            2.3.2 Temporal Driven Method (TD) --- p.38
        2.4 Comparison between Different Approaches --- p.39
        2.5 Conclusion --- p.41

    Chapter 3 --- Locally Connected Recurrent Networks --- p.43
        3.1 Introduction --- p.43
        3.2 Locally Connected Recurrent Networks --- p.44
            3.2.1 Network Topology --- p.44
            3.2.2 Subgrouping --- p.46
            3.2.3 Learning Algorithm --- p.47
            3.2.4 Continuous Time Learning Algorithm --- p.50
        3.3 Analysis --- p.51
            3.3.1 Time Complexity --- p.51
            3.3.2 Space Complexity --- p.51
            3.3.3 Local Computations in Time and Space --- p.51
        3.4 Running on Parallel Architectures --- p.52
            3.4.1 Mapping the Algorithm to Parallel Architectures --- p.52
            3.4.2 Parallel Learning Algorithm --- p.53
            3.4.3 Analysis --- p.54
        3.5 Ring-Structured Recurrent Network (RRN) --- p.55
        3.6 Comparison between RRN and RTRL in Sequence Recognition --- p.55
            3.6.1 Training Sets and Testing Sequences --- p.56
            3.6.2 Comparison in Training Speed --- p.58
            3.6.3 Comparison in Recalling Power --- p.59
        3.7 Comparison between RRN and RTRL in Time Series Prediction --- p.59
            3.7.1 Comparison in Training Speed --- p.62
            3.7.2 Comparison in Predictive Power --- p.63
        3.8 Conclusion --- p.65

    Part II --- Applications

    Chapter 4 --- Sequence Recognition by Ring-Structured Recurrent Networks --- p.67
        4.1 Introduction --- p.67
        4.2 Related Works --- p.68
            4.2.1 Feedback Multilayer Perceptron (FMLP) --- p.68
            4.2.2 Back Propagation Unfolded Recurrent Rule (BURR) --- p.69
        4.3 Experimental Details --- p.71
            4.3.1 Network Architecture --- p.71
            4.3.2 Input/Output Representations --- p.72
            4.3.3 Training Phase --- p.73
            4.3.4 Recalling Phase --- p.73
        4.4 Experimental Results --- p.74
            4.4.1 Temporal Memorizing Power --- p.74
            4.4.2 Time Warping Performance --- p.80
            4.4.3 Fault Tolerance --- p.85
            4.4.4 Learning Rate --- p.87
        4.5 Time Delay --- p.88
        4.6 Conclusion --- p.91

    Chapter 5 --- Time Series Prediction --- p.92
        5.1 Introduction --- p.92
        5.2 Modelling in Feedforward Networks --- p.93
        5.3 Methodology with Recurrent Networks --- p.94
            5.3.1 Network Structure --- p.94
            5.3.2 Model Building - Training --- p.95
            5.3.3 Model Diagnosis - Testing --- p.95
        5.4 Training Paradigms --- p.96
            5.4.1 A Quasiperiodic Series with White Noise --- p.96
            5.4.2 A Chaotic Series --- p.97
            5.4.3 Sunspots Numbers --- p.98
            5.4.4 Hang Seng Index --- p.99
        5.5 Experimental Results and Discussions --- p.99
            5.5.1 A Quasiperiodic Series with White Noise --- p.101
            5.5.2 Logistic Map --- p.103
            5.5.3 Sunspots Numbers --- p.105
            5.5.4 Hang Seng Index --- p.109
        5.6 Conclusion --- p.112

    Chapter 6 --- Chaos in Recurrent Networks --- p.114
        6.1 Introduction --- p.114
        6.2 Important Features of Chaos --- p.115
            6.2.1 First Return Map --- p.115
            6.2.2 Long Term Unpredictability --- p.117
            6.2.3 Sensitivity to Initial Conditions (SIC) --- p.118
            6.2.4 Strange Attractor --- p.119
        6.3 Chaotic Behaviour in Recurrent Networks --- p.120
            6.3.1 Network Structure --- p.121
            6.3.2 Dynamics in Training --- p.121
            6.3.3 Dynamics in Testing --- p.122
        6.4 Experiments and Discussions --- p.123
            6.4.1 Henon Model --- p.123
            6.4.2 Lorenz Model --- p.127
        6.5 Conclusion --- p.134

    Chapter 7 --- Conclusion --- p.135

    Appendix A --- Series 1: Sine Function with White Noise --- p.137
    Appendix B --- Series 2: Logistic Map --- p.138
    Appendix C --- Series 3: Sunspots Numbers from 1700 to 1979 --- p.139
    Appendix D --- A Quasiperiodic Series with White Noise --- p.141
    Appendix E --- Hang Seng Daily Closing Index in 1991 --- p.142
    Appendix F --- Network Model for the Quasiperiodic Series with White Noise --- p.143
    Appendix G --- Network Model for the Logistic Map --- p.144
    Appendix H --- Network Model for the Sunspots Numbers --- p.145
    Appendix I --- Network Model for the Hang Seng Index --- p.146
    Appendix J --- Henon Model --- p.147
    Appendix K --- Network Model for the Henon Map --- p.150
    Appendix L --- Lorenz Model --- p.151
    Appendix M --- Network Model for the Lorenz Map --- p.159
    Bibliography --- p.161