403 research outputs found

    Platonic model of mind as an approximation to neurodynamics

    A hierarchy of approximations involved in the simplification of microscopic theories, from the subcellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of this model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic-model points of view.
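The flavor of the static model can be caricatured in a few lines: objects in a low-dimensional psychological feature space are represented by separable Gaussian membership functions, in the spirit of Feature Space Mapping, and a query point is categorized by maximal membership. The category names, prototype coordinates, and dispersions below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical prototypes in a 2-D psychological feature space
# (names, coordinates, and dispersions are illustrative only).
prototypes = {
    "bird":   (np.array([0.8, 0.2]), np.array([0.15, 0.15])),
    "mammal": (np.array([0.2, 0.7]), np.array([0.20, 0.20])),
}

def membership(x, center, sigma):
    """Separable Gaussian membership function, peaking at the prototype."""
    return float(np.exp(-0.5 * np.sum(((x - center) / sigma) ** 2)))

def categorize(x):
    """Assign x to the category whose membership function is highest."""
    return max(prototypes, key=lambda k: membership(x, *prototypes[k]))

query = np.array([0.75, 0.25])   # a point close to the "bird" prototype
label = categorize(query)
```

In this picture a quasi-stable state of the underlying dynamics corresponds to a region of high membership around a prototype.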

    Computational neural learning formalisms for manipulator inverse kinematics

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors, a new class of mathematical constructs which provide unique information-processing capabilities to artificial neural systems. For robotic applications, the synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, the joint-space configurations required to follow arbitrary end-effector trajectories can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
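The key property of a terminal attractor is finite-time (rather than asymptotic) convergence, obtained by making the vector field non-Lipschitz at the equilibrium. A minimal one-dimensional sketch, with Euler integration and parameters chosen purely for illustration:

```python
import numpy as np

dt, steps = 0.01, 200           # integrate to t = 2
x_term, x_exp = 1.0, 1.0        # same initial condition for both systems

for _ in range(steps):
    # terminal attractor: dx/dt = -x^(1/3); non-Lipschitz at x = 0,
    # so the origin is reached in finite time (t* = 1.5 for x0 = 1)
    x_term += -dt * np.sign(x_term) * abs(x_term) ** (1.0 / 3.0)
    # ordinary attractor: dx/dt = -x; convergence is only asymptotic
    x_exp += -dt * x_exp

# at t = 2 the terminal attractor has (numerically) reached the origin,
# while the exponential decay is still near e^{-2} ≈ 0.135
```

It is this finite-time relaxation that lets networks built from such attractors settle on learned configurations quickly.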

    State Differentiation by Transient Truncation in Coupled Threshold Dynamics

    Dynamics with a threshold input--output relation commonly exist in gene, signal-transduction, and neural networks. Coupled dynamical systems of such threshold elements are investigated, in an effort to find differentiation of elements induced by the interaction. Through global diffusive coupling, novel states are found to be generated that are not the original attractor of single-element threshold dynamics, but are sustained through the interaction with the elements located at the original attractor. This stabilization of the novel state(s) is not related to symmetry breaking, but is explained as the truncation of transient trajectories to the original attractor due to the coupling. Single-element dynamics with winding transient trajectories located on a low-dimensional manifold and having turning points are shown to be essential to the generation of such novel state(s) in a coupled system. The universality of this mechanism for novel-state generation and its relevance to biological cell differentiation are briefly discussed. Comment: 8 pages; Phys. Rev. E, in press.
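The setting can be caricatured with a generic bistable threshold element, dx/dt = -x + tanh(βx), placed under global diffusive coupling. The particular element, β, and coupling strength below are illustrative assumptions, not the paper's equations; the sketch only shows the two ingredients the abstract combines (single-element attractors and mean-field diffusive coupling).

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, dt, steps = 20, 2.0, 0.05, 2000

def step(x, D):
    """Euler step for dx_i/dt = -x_i + tanh(beta*x_i) + D*(<x> - x_i)."""
    return x + dt * (-x + np.tanh(beta * x) + D * (x.mean() - x))

x_free = rng.uniform(-1.0, 1.0, N)   # uncoupled ensemble
x_coup = x_free.copy()               # globally coupled ensemble
for _ in range(steps):
    x_free = step(x_free, D=0.0)
    x_coup = step(x_coup, D=2.0)

# uncoupled: each element falls onto one of the two single-element
# attractors x ≈ ±0.9575 (the nonzero solutions of x = tanh(2x));
# coupled: strong diffusive coupling pulls the ensemble together
```

In the paper's mechanism, it is the interplay between such coupling and the winding transients of the single element, not shown in this toy version, that sustains the novel states.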

    Corticonic models of brain mechanisms underlying cognition and intelligence

    The concern of this review is brain theory or, more specifically, in its first part, a model of the cerebral cortex and the way it: (a) interacts with subcortical regions like the thalamus and the hippocampus to provide the higher-level brain functions that underlie cognition and intelligence, (b) handles and represents dynamical sensory patterns imposed by a constantly changing environment, (c) copes with the enormous number of such patterns encountered in a lifetime by means of a dynamic memory that offers an immense number of stimulus-specific attractors for input patterns (stimuli) to select from, (d) selects an attractor through a process of “conjugation” of the input pattern with the dynamics of the thalamo–cortical loop, (e) distinguishes between redundant (structured) inputs and non-redundant (random) inputs that are void of information, (f) can perform categorical perception when there is access to the vast associative memory laid out in the association cortex with the help of the hippocampus, and (g) makes use of “computation” at the edge of chaos and information-driven annealing to achieve all this. Other features and implications of the concepts presented for the design of computational algorithms and machines with brain-like intelligence are also discussed. The material and results presented suggest that a Parametrically Coupled Logistic Map Network (PCLMN) is a minimal model of the thalamo–cortical complex, and that marrying such a network to a suitable associative memory with re-entry or feedback forms a useful, albeit abstract, model of a cortical module of the brain that could facilitate building a simple artificial brain. In the second part of the review, the results of the numerical simulations and the conclusions drawn in the first part are linked to the most directly relevant works and views of other workers.
What emerges is a picture of brain dynamics on the mesoscopic and macroscopic scales that gives a glimpse of the nature of the long-sought-after brain code underlying intelligence and other higher-level brain functions. Physics of Life Reviews 4 (2007) 223–252. © 2007 Elsevier B.V. All rights reserved.
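The idea of parametric coupling can be illustrated in a toy fragment: each unit iterates a logistic map whose parameter, rather than its state, is modulated by the activity of the other units and by a static input pattern. The coupling form and constants below are assumptions chosen to keep the dynamics bounded, not the PCLMN equations of the review.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps = 16, 500

def run(stimulus):
    """Iterate x_i <- mu_i * x_i * (1 - x_i), where the logistic
    parameter mu_i is modulated by the mean activity of the other
    units (parametric coupling) and by a static input 'stimulus'."""
    x = stimulus.copy()
    for _ in range(steps):
        others = (x.sum() - x) / (N - 1)            # mean field seen by unit i
        mu = 3.2 + 0.8 * 0.5 * (others + stimulus)  # keeps mu_i in [3.2, 4.0]
        x = mu * x * (1.0 - x)                      # states stay in [0, 1]
    return x

a = run(rng.uniform(0.1, 0.9, N))   # two different stimuli select
b = run(rng.uniform(0.1, 0.9, N))   # different network states
```

This captures, in miniature, the idea of an input pattern "conjugating" with the network dynamics so that different stimuli select different dynamic states.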

    Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Biologically inspired models and algorithms are considered promising sensor-array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were included in the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern-space crowding. We conclude that 6–8 channels of the model, with a principal-component feature vector capturing at least 90% of the cumulative variance, are adequate for a classification task of 3–5 pattern classes, considering the trade-off between time consumption and classification rate.
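The 90% cumulative-variance criterion used for feature selection can be sketched with a plain SVD-based PCA; the synthetic data below stand in for the sensor-array measurements, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic "sensor array" data: 120 samples x 10 features, built from
# two dominant latent directions plus weak noise (e-nose stand-in)
latent = rng.normal(size=(120, 2)) @ rng.normal(size=(2, 10)) * 3.0
X = latent + 0.1 * rng.normal(size=(120, 10))

Xc = X - X.mean(axis=0)                    # center each feature
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)            # variance ratio per component
cum = np.cumsum(var_ratio)
k = int(np.searchsorted(cum, 0.90)) + 1    # smallest k with >= 90% variance
```

The first `k` principal-component scores then form the reduced feature vector fed to the classifier.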

    Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics

    Neural activity patterns related to behavior occur at many scales in time and space, from the atomic and molecular to the whole brain. Here we explore the feasibility of interpreting neurophysiological data in the context of many-body physics by using tools that physicists have devised to analyze comparable hierarchies in other fields of science. We focus on a mesoscopic level that offers a multi-step pathway between the microscopic functions of neurons and the macroscopic functions of brain systems revealed by hemodynamic imaging. We use electroencephalographic (EEG) records collected from high-density electrode arrays fixed on the epidural surfaces of primary sensory and limbic areas in rabbits and cats trained to discriminate conditioned stimuli (CS) in various modalities. High-temporal-resolution analysis of the EEG signals with the Hilbert transform gives evidence for diverse intermittent spatial patterns of amplitude (AM) and phase (PM) modulation of carrier waves that repeatedly re-synchronize in the beta and gamma ranges at near-zero time lags over long distances. The dominant mechanism for neural interactions, axodendritic synaptic transmission, should impose distance-dependent delays on the EEG oscillations owing to finite propagation velocities. It does not. EEGs instead show evidence for anomalous dispersion: the existence in neural populations of a low-velocity range of information and energy transfers, and a high-velocity range of the spread of phase transitions. This distinction labels the phenomenon but does not explain it. In this report we explore the analysis of these phenomena using concepts of energy dissipation, the maintenance by cortex of multiple ground states corresponding to AM patterns, and the exclusive selection by spontaneous breakdown of symmetry (SBS) of single states in sequences. Comment: 31 pages.
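The AM/PM analysis rests on the analytic signal: the Hilbert transform turns a real trace s(t) into s(t) + iH[s](t), whose modulus is the instantaneous amplitude (AM) and whose argument the instantaneous phase (PM). A minimal FFT-based sketch on a synthetic amplitude-modulated carrier wave (a stand-in for the EEG records, which are not available here):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero out negative frequencies and
    double the positive ones (standard Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0                                        # sampling rate, Hz
t = np.arange(1000) / fs                           # 1 s of "signal"
env_true = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)   # 2 Hz AM envelope
s = env_true * np.cos(2 * np.pi * 50 * t)          # 50 Hz carrier wave

z = analytic_signal(s)
amplitude = np.abs(z)                  # instantaneous amplitude (AM pattern)
phase = np.unwrap(np.angle(z))         # instantaneous phase (PM pattern)
```

Applied channel-by-channel to an electrode array, the same construction yields the spatial AM and PM patterns the abstract describes.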

    Artificial Neurons with Arbitrarily Complex Internal Structures

    Artificial neurons with arbitrarily complex internal structure are introduced. The neurons can be described in terms of a set of internal variables, a set of activation functions which describe the time evolution of these variables, and a set of characteristic functions which control how the neurons interact with one another. The information capacity of attractor networks composed of these generalized neurons is shown to reach the maximum allowed bound. A simple example taken from the domain of pattern recognition demonstrates the increased computational power of these neurons. Furthermore, a specific class of generalized neurons gives rise to a simple transformation relating attractor networks of generalized neurons to standard three-layer feed-forward networks. Given this correspondence, we conjecture that the maximum information capacity of a three-layer feed-forward network is 2 bits per weight. Comment: 22 pages, 2 figures.
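For context on the capacity claim, a standard binary attractor (Hopfield) network with Hebbian weights stores only about 0.14N patterns, far below the conjectured bound for generalized neurons. The sketch below shows such a baseline network recalling a stored pattern from a corrupted cue; the sizes and noise level are arbitrary choices, and this is the classical model, not the generalized neurons of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
N, P = 200, 3                          # neurons, stored patterns (P << 0.14*N)

patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N        # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)               # no self-connections

cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1                        # corrupt 10% of the bits

x = cue
for _ in range(10):                    # synchronous sign updates
    x = np.where(W @ x >= 0, 1, -1)

overlap = float(patterns[0] @ x) / N   # 1.0 means perfect recall
```

Reaching anywhere near 2 bits per weight requires richer internal structure than this outer-product rule provides, which is the gap the generalized neurons are meant to close.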