
    A Neural Model for Self Organizing Feature Detectors and Classifiers in a Network Hierarchy

    Many models of early cortical processing have shown how local learning rules can produce efficient, sparse-distributed codes in which nodes have responses that are statistically independent and low probability. However, it is not known how to develop a useful hierarchical representation, containing sparse-distributed codes at each level of the hierarchy, that incorporates predictive feedback from the environment. We take a step in that direction by proposing a biologically plausible neural network model that develops receptive fields, and learns to make class predictions, with or without the help of environmental feedback. The model is a new type of predictive adaptive resonance theory network called Receptive Field ARTMAP, or RAM. RAM self-organizes internal category nodes that are tuned to activity distributions in topographic input maps. Each receptive field is composed of multiple weight fields that are adapted via local, on-line learning to form smooth receptive fields that reflect the statistics of the activity distributions in the input maps. When RAM generates incorrect predictions, its vigilance is raised, amplifying subtractive inhibition and sharpening receptive fields until the error is corrected. Evaluation on several classification benchmarks shows that RAM outperforms a related (but neurally implausible) model called Gaussian ARTMAP, as well as several standard neural network and statistical classifiers. A topographic version of RAM is proposed, which is capable of self-organizing hierarchical representations. Topographic RAM is a model for receptive field development at any level of the cortical hierarchy, and provides explanations for a variety of perceptual learning data. Supported by the Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409).
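The error-driven vigilance mechanism described in the abstract (raise vigilance after a wrong prediction until the offending category is excluded) can be sketched in a few lines. This is an illustrative, simplified ART-style match-tracking loop, not the actual RAM algorithm; the fuzzy-overlap match function and all parameter values are assumptions for the sake of the sketch.

```python
def art_match_tracking(x, categories, labels, target, vigilance=0.6, eps=1e-3):
    """Illustrative ART-style match tracking (simplified, not the RAM model).

    If the best-matching category that passes the vigilance test predicts
    the wrong class, vigilance is raised just above its match value and the
    search continues -- analogous to RAM sharpening receptive fields after
    a predictive error.
    """
    rho = vigilance
    excluded = set()
    while True:
        # Fuzzy-ART-style match: normalized overlap of input and prototype.
        matches = [sum(min(xi, wi) for xi, wi in zip(x, w)) / sum(x)
                   for w in categories]
        candidates = [j for j in range(len(categories))
                      if j not in excluded and matches[j] >= rho]
        if not candidates:
            return None, rho            # resonance fails: recruit a new category
        chosen = max(candidates, key=lambda j: matches[j])
        if labels[chosen] == target:
            return chosen, rho          # correct prediction: resonance
        rho = matches[chosen] + eps     # wrong prediction: raise vigilance
        excluded.add(chosen)
```

Returning `None` signals that no existing category survives the raised vigilance, which is where an ART network would commit a fresh category node.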

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).
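Since the survey's stated motivation is preprocessing images for a sparse distributed memory, a minimal Kanerva-style SDM may help fix ideas: binary addresses activate all hard locations within a Hamming radius, writes accumulate bipolar counters there, and reads sum and threshold them. The parameter values below (200 hard locations, 64-bit addresses, radius 30) are arbitrary illustrative choices, not taken from the survey.

```python
import random

class SparseDistributedMemory:
    """Minimal Kanerva-style SDM sketch (illustrative parameters)."""

    def __init__(self, n_locations=200, dim=64, radius=30, seed=0):
        rng = random.Random(seed)
        self.dim = dim
        self.radius = radius
        # Fixed random hard-location addresses.
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        # One bipolar counter per bit per hard location.
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _active(self, addr):
        # Hard locations within Hamming distance `radius` of the address.
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, addr)) <= self.radius]

    def write(self, addr, data):
        for i in self._active(addr):
            for k, bit in enumerate(data):
                self.counters[i][k] += 1 if bit else -1

    def read(self, addr):
        sums = [0] * self.dim
        for i in self._active(addr):
            for k in range(self.dim):
                sums[k] += self.counters[i][k]
        return [1 if s > 0 else 0 for s in sums]
```

Writing a pattern at its own address and reading it back recovers it exactly as long as at least one hard location is activated; with many stored patterns, reads degrade gracefully rather than failing outright, which is what makes the preprocessing step the survey discusses so important.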

    Cortex, countercurrent context, and dimensional integration of lifetime memory

    The correlation between relative neocortex size and longevity in mammals encourages a search for a cortical function specifically related to the life-span. A candidate in the domain of permanent and cumulative memory storage is proposed and explored in relation to basic aspects of cortical organization. The pattern of cortico-cortical connectivity between functionally specialized areas and the laminar organization of that connectivity converges on a globally coherent representational space in which contextual embedding of information emerges as an obligatory feature of cortical function. This brings a powerful mode of inductive knowledge within reach of mammalian adaptations, a mode which combines item specificity with classificatory generality. Its neural implementation is proposed to depend on an obligatory interaction between the oppositely directed feedforward and feedback currents of cortical activity, in countercurrent fashion. Direct interaction of the two streams along their cortex-wide local interface supports a scheme of "contextual capture" for information storage responsible for the lifelong cumulative growth of a uniquely cortical form of memory termed "personal history." This approach to cortical function helps elucidate key features of cortical organization as well as cognitive aspects of mammalian life history strategies.

    Visuospatial coding as ubiquitous scaffolding for human cognition

    For more than 100 years we have known that the visual field is mapped onto the surface of visual cortex, imposing an inherently spatial reference frame on visual information processing. Recent studies highlight visuospatial coding not only throughout visual cortex, but also in brain areas not typically considered visual. Such widespread access to visuospatial coding raises important questions about its role in wider cognitive functioning. Here, we synthesise these recent developments and propose that visuospatial coding scaffolds human cognition by providing a reference frame through which neural computations interface with environmental statistics and task demands via perception–action loops.

    Platonic model of mind as an approximation to neurodynamics

    A hierarchy of the approximations involved in simplifying microscopic theories, from the subcellular level to the whole brain, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of the model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic-model points of view.
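Feature Space Mapping represents objects as localized membership functions ("facts") in a psychological feature space, and categorization picks the fact with the highest membership at a query point. A toy sketch, assuming separable Gaussian membership functions (FSM admits other localized functions) and made-up facts:

```python
import math

def fsm_membership(x, center, sigma):
    """Separable Gaussian membership of point x in one feature-space fact."""
    return math.exp(-sum((xi - ci) ** 2 / (2 * s ** 2)
                         for xi, ci, s in zip(x, center, sigma)))

def fsm_classify(x, facts):
    """Label of the fact (quasi-stable state) with the highest membership.

    `facts` is a list of (label, center, sigma) tuples -- a toy reading of
    the static Platonic model's categorization, not the full FSM system.
    """
    return max(facts, key=lambda f: fsm_membership(x, f[1], f[2]))[0]
```

Each fact plays the role of a quasi-stable state of the underlying neurodynamics: points near its center are attributed to it, and the widths `sigma` control how far its basin extends in the psychological space.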

    Modeling biological face recognition with deep convolutional neural networks

    Deep convolutional neural networks (DCNNs) have become the state-of-the-art computational models of biological object recognition. Their remarkable success has helped vision science break new ground, and recent efforts have started to transfer this achievement to research on biological face recognition. In this regard, face detection can be investigated by comparing face-selective biological neurons and brain areas to artificial neurons and model layers. Similarly, face identification can be examined by comparing in vivo and in silico multidimensional "face spaces". In this review, we summarize the first studies that use DCNNs to model biological face recognition. On the basis of a broad spectrum of behavioral and computational evidence, we conclude that DCNNs are useful models that closely resemble the general hierarchical organization of face recognition in the ventral visual pathway and the core face network. In two exemplary spotlights, we emphasize the unique scientific contributions of these models. First, studies on face detection in DCNNs indicate that elementary face selectivity emerges automatically through feedforward processing even in the absence of visual experience. Second, studies on face identification in DCNNs suggest that identity-specific experience and generative mechanisms facilitate this particular challenge. Taken together, as this novel modeling approach enables close control of predisposition (i.e., architecture) and experience (i.e., training data), it may be suited to inform long-standing debates on the substrates of biological face recognition. Comment: 41 pages, 2 figures, 1 table

    Pre- and postnatal development of topographic transformations in the brain (Prä- und postnatale Entwicklung topographischer Transformationen im Gehirn)

    This dissertation connects two previously independent fields of theoretical neuroscience: on the one hand, the self-organization of topographic connectivity patterns, and on the other hand, invariant object recognition, that is, the recognition of objects independently of their various possible retinal representations (for example, due to translations or scalings). In the presented approach, the topographic representation is used as a coordinate system, which then allows for the implementation of invariance transformations. Hence this study shows that it is possible for the brain to self-organize before birth so that it is able to invariantly recognize objects immediately after birth. Besides the core hypothesis that links prenatal development with object recognition, advancements in both fields themselves are also presented. At the beginning of the thesis, a novel, analytically solvable, probabilistic generative model for topographic maps is introduced; at the end, a model is presented that integrates classical feature-based ideas with the normalization-based approach. This bilinear model makes use of sparseness as well as slowness to implement "optimal" topographic representations, and is therefore a good candidate for hierarchical processing in the brain and for future research.
In detail, Chapter 2 introduces a new, probabilistically generative, analytically solvable model of the ontogenesis of topographic transformations. The model rests on the assumption that the system's output cells are not completely uncorrelated but strive to attain an a priori given correlation. Since the input cells are neighborhood-correlated, driven by retinal waves, the further assumption of purely excitatory connections yields a unique topographic synaptic connectivity structure. This corresponds to the topographic maps found in many species, e.g. the retinotopy between the retina and the LGN, or between the LGN and the neocortex. Chapter 3 uses a more abstract formulation of the retinotopy mechanism, obtained by adiabatic elimination of the activity variables, to study the effect of retinal waves on a model of higher cortical information processing. For this purpose the cortex is treated, in simplified form, as a bilinear model, so that simple modulatory nonlinearities can be taken into account. In addition to the input and output cells, this model employs control units, which can actively steer the flow of information and which specialize, through competition and prenatal learning, on different patterns of retinal waves. The results show that the emerging connectivity structures correspond to affine topographic mappings (in particular translation, scaling, and orientation) that enable invariant recognition after eye opening, since they can transform objects in the input into a normalized representation.
The model is analyzed in detail for the one-dimensional case, and its functionality is demonstrated for the biologically more relevant two-dimensional case. Chapter 4 generalizes the bilinear model of the third chapter to a multilayer model, the "shifter circuits". These allow the number of synapses to grow logarithmically with the number of input cells, instead of a prohibitively quadratic number. The orthogonality of translations in the space of connectivity structures is exploited to organize them through hard competition at individual synapses. Neurobiologically, this mechanism could be realized by competition for a growth-regulating transmitter. Chapter 5 uses methods of probabilistic learning to optimize the bilinear model toward learning optimal representations of the input statistics. Since second-order statistical methods, such as the generative model of Chapter 2, do not yield localized receptive fields and hence no (spatial) topography, sparseness is used to learn higher-order statistical dependencies while simultaneously implementing topography. Applying the resulting model to natural images shows that localized, band-pass-filtering receptive fields emerge that strongly resemble primary cortical receptive fields. Furthermore, the enforced topography gives rise to orientation and frequency maps that likewise resemble cortical maps. A study of the model with additional slowness of the output cells, and with transformed natural input patterns presented close together in time, shows that different control units develop consistent receptive fields matching the input transformations, and thus develop invariant representations with respect to the presented inputs.
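The logarithmic scaling of the shifter circuits can be illustrated with a routing sketch: log2(N) stages, each gated by one control unit and shifting by a power of two, compose an arbitrary cyclic translation, so the network needs on the order of N·log2(N) connections instead of the N² of a full input-output weight matrix. The function below is a schematic illustration of the routing idea, not the thesis' neural implementation.

```python
def shifter_circuit(inputs, shift):
    """Route a pattern through log2(N) gated shift stages.

    Stage k shifts cyclically by 2**k whenever bit k of `shift` is set,
    so the stages compose any cyclic translation of the input. Each unit
    in each stage needs only two incoming connections (shifted or not),
    giving O(N log N) synapses overall.
    """
    n = len(inputs)
    assert n & (n - 1) == 0, "length must be a power of two"
    layer = list(inputs)
    stage = 0
    while (1 << stage) < n:
        if (shift >> stage) & 1:              # control unit gates this stage
            k = 1 << stage
            layer = layer[-k:] + layer[:-k]   # cyclic shift by 2**stage
        stage += 1
    return layer
```

The control units correspond to the binary digits of the desired translation; in the thesis' setting they are not set by hand but specialize through competition, and the same gating scheme extends to the two-dimensional case.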