263 research outputs found

    Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding

    Gait recognition and understanding systems have shown wide-ranging application prospects. However, their reliance on unstructured image and video data affects their performance; for example, they are easily influenced by viewpoint changes, occlusion, clothing, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern learning framework with a top-down attention modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body parsing model and a virtual dressing method. Second, using gait semantic folding, the estimated body parameters are encoded in a sparse 2D matrix to construct a structural gait semantic image. To achieve time-based gait recognition, an HTM network is constructed to obtain sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to deal with varying conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only helps gait recognition tasks overcome the difficulties of real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
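    The gait semantic folding step, which encodes the estimated body parameters into a sparse 2D binary matrix, can be sketched as follows. This is a hypothetical encoding: the grid size, the hashing scheme, and the `semantic_fold` helper are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def semantic_fold(params, grid=(32, 32), active_bits=2):
    """Encode each body parameter as a few active bits in a sparse 2D grid.

    Illustrative sketch: each parameter (assumed normalised to [0, 1]) is
    mapped deterministically to a small set of cells, yielding a sparse
    binary matrix loosely analogous to a 'gait semantic image'.
    """
    img = np.zeros(grid, dtype=np.uint8)
    n_cells = grid[0] * grid[1]
    for i, p in enumerate(params):
        # parameter index and value jointly pick the active cells
        base = int(p * (n_cells - active_bits))
        for b in range(active_bits):
            idx = (base + i * 7 + b) % n_cells
            img[idx // grid[1], idx % grid[1]] = 1
    return img

gait_image = semantic_fold([0.1, 0.5, 0.9])
print(gait_image.sum())  # only a handful of cells are active
```

Sparse binary codes of this kind preserve semantic similarity under bit overlap, which is what makes them suitable inputs for an HTM sequence memory.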

    Prolegomena to a neurocomputational architecture for human grammatical encoding and decoding

    The study develops a neurocomputational architecture for grammatical processing in language production and language comprehension (grammatical encoding and decoding, respectively). It seeks to answer two questions. First, how is online syntactic structure formation of the complexity required by natural-language grammars possible in a fixed, preexisting neural network, without the need for online creation of new connections or associations? Second, is it realistic to assume that the seemingly disparate instantiations of syntactic structure formation in grammatical encoding and grammatical decoding can run on the same neural infrastructure? This issue is prompted by accumulating experimental evidence for the hypothesis that the mechanisms for grammatical decoding overlap considerably with those for grammatical encoding, inviting the hypothesis of a single “grammatical coder.” The paper answers both questions by providing the blueprint for a syntactic structure formation mechanism that is entirely based on prewired circuitry (except for referential processing, which relies on the rapid learning capacity of the hippocampal complex) and can subserve decoding as well as encoding tasks. The model builds on the “Unification Space” model of syntactic parsing developed by Vosse & Kempen (2000, 2008, 2009). The design includes a neurocomputational mechanism for the treatment of an important class of grammatical movement phenomena.

    NASA JSC neural network survey results

    A survey of Artificial Neural Systems in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    How models of canonical microcircuits implement cognitive functions

    Major cognitive functions such as language, memory, and decision-making are thought to rely on distributed networks of a large number of fundamental neural elements, called canonical microcircuits. A mechanistic understanding of the interaction of these canonical microcircuits promises a better comprehension of cognitive functions as well as of their potential disorders and corresponding treatment techniques. This thesis establishes a generative modeling framework that rests on canonical microcircuits and employs it to investigate composite mechanisms of cognitive functions. A generic, biologically plausible neural mass model was derived to parsimoniously represent conceivable architectures of canonical microcircuits. Time-domain simulations and bifurcation and stability analyses were used to evaluate the model’s capability for basic information-processing operations in response to transient stimulation, namely signal-flow gating and working memory. The analysis shows that these basic operations rest upon the bistable activity of a neural population and the selectivity for the intensity, temporal consistency, and transiency of the stimulus. In the model’s state space, this selectivity is marked by the distance of the system’s working point to a saddle-node bifurcation and by the existence of a Hopf separatrix. The local network balance, with regard to synaptic gains, is shown to modify the model’s state space and thus its operational repertoire. Among the investigated architectures, only a three-population model that separates input-receiving and output-emitting excitatory populations exhibits the necessary state-space characteristics. It is thus specified as the minimal canonical microcircuit. In this three-population model, facilitative feedback information modifies the retention of sensory feedforward information.
Consequently, meta-circuits of two hierarchically interacting minimal canonical microcircuits feature a temporal processing history that enables state-dependent processing operations. The relevance of these composite operations is demonstrated for the neural operations of priming and structure-building. Structure-building, that is, the sequential and selective activation of neural circuits, is identified as an essential mechanism in a neural network for syntax parsing. This insight into cognitive processing proves the modeling framework’s potential in neurocognitive research. This thesis substantiates the connectionist notion that higher processing operations emerge from the combination of minimal processing elements and advances the understanding of how cognitive functions are implemented in the neocortical matter of the brain.
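    The bistable retention underlying the working-memory operation described above can be illustrated with a minimal self-excitatory rate unit. This is an illustrative sketch, not the thesis's three-population neural mass model; the coupling `w` and threshold `theta` are assumed parameters chosen so that the sigmoid nonlinearity yields two stable fixed points.

```python
import numpy as np

def simulate(pulse_amp, dt=0.01, t_end=20.0):
    """Euler integration of a self-excitatory rate unit:
    dx/dt = -x + sigmoid(w*x + I(t) - theta),
    with a transient input pulse I(t) applied for 2 <= t < 3."""
    w, theta = 10.0, 5.0  # assumed parameters giving bistability
    x = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        I = pulse_amp if 2.0 <= t < 3.0 else 0.0
        x += dt * (-x + 1.0 / (1.0 + np.exp(-(w * x + I - theta))))
    return x

print(round(simulate(0.0), 3))  # no input: activity stays near the low state
print(round(simulate(8.0), 3))  # strong pulse: elevated activity is retained
```

A sufficiently strong transient pulse pushes the unit across the unstable middle fixed point, after which the elevated activity persists without further input; a sub-threshold input decays back to the low state.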

    A new unifying account of the roles of neuronal entrainment

    Rhythms are a fundamental and defining feature of neuronal activity in animals, including humans. This rhythmic brain activity interacts in complex ways with rhythms in the internal and external environment through the phenomenon of ‘neuronal entrainment’, which is attracting increasing attention due to its suggested role in a multitude of sensory and cognitive processes. Some senses, such as touch and vision, sample the environment rhythmically, while others, like audition, are faced with mostly rhythmic inputs. Entrainment couples rhythmic brain activity to external and internal rhythmic events, serving fine-grained routing and modulation of external and internal signals across multiple spatial and temporal hierarchies. This interaction between a brain and its environment can be experimentally investigated, and even modified, by rhythmic sensory stimuli or by invasive and non-invasive neuromodulation techniques. We provide a comprehensive overview of the topic and propose a theoretical framework for how neuronal entrainment dynamically structures information from incoming neuronal, bodily, and environmental sources. We discuss the different types of neuronal entrainment, the conceptual advances in the field, and converging evidence for general principles.
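    The phase-locking at the heart of entrainment can be illustrated with the Adler equation, a standard phase-reduction model for an oscillator driven by a periodic stimulus. This illustration is added here and is not taken from the paper; the detuning and coupling values are arbitrary.

```python
import numpy as np

def phase_difference(delta_omega, coupling, dt=0.001, t_end=50.0):
    """Adler equation for the phase difference phi between a neural rhythm
    and a periodic stimulus: d(phi)/dt = delta_omega - coupling * sin(phi).
    Returns the phase difference at t_end (Euler integration)."""
    phi = 0.0
    for _ in range(int(t_end / dt)):
        phi += dt * (delta_omega - coupling * np.sin(phi))
    return phi

locked = phase_difference(0.3, 1.0)    # coupling > detuning: phase-locks
drifting = phase_difference(0.3, 0.1)  # coupling < detuning: phase drifts
```

When the coupling exceeds the frequency detuning, the phase difference settles at a fixed value (entrainment); otherwise it grows without bound and the rhythms merely beat against each other.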

    Natural Language Syntax Complies with the Free-Energy Principle

    Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference in accord with the free-energy principle (FEP). While conceptual advances alongside modelling and simulation work have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating syntactic objects. We argue that recently proposed principles of economy in language design, such as "minimal search" criteria from theoretical syntax, adhere to the FEP. This affords the FEP a greater degree of explanatory power with respect to higher language functions, and offers linguistics a grounding in first principles with respect to computability. We show how both tree-geometric depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression algorithm) can be used to accurately predict legal operations on syntactic workspaces, directly in line with formulations of variational free energy minimization. This is used to motivate a general principle of language design that we term Turing-Chomsky Compression (TCC). We use TCC to align the concerns of linguists with the normative account of self-organization furnished by the FEP, marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference.
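    The compression-based complexity estimate can be approximated in a few lines. The bracketed strings below are hypothetical serialisations of syntactic workspaces, and zlib's DEFLATE is used as a convenient stand-in for the paper's Lempel-Ziv scheme; both are assumptions of this sketch, not the authors' exact procedure.

```python
import zlib

def lz_complexity(s):
    """Proxy for Kolmogorov complexity: the length in bytes of the
    zlib-compressed string (DEFLATE is itself Lempel-Ziv-based)."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# Hypothetical bracketed serialisations of two candidate workspaces:
merged = "[VP [V read] [NP [D the] [N book]]]"
repeated = "[VP [V read] [NP [D the] [N book]] [NP [D the] [N book]]]"

# the duplicated subtree is captured by a back-reference, yet still adds cost
print(lz_complexity(merged), lz_complexity(repeated))
```

Under this proxy, candidate operations that yield more compressible (lower-complexity) workspaces would be preferred, in line with variational free energy minimization.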

    Toward the language oscillogenome

    Language has been argued to arise, both ontogenetically and phylogenetically, from specific patterns of brain wiring. We argue that it can further be shown that core features of language processing emerge from particular phasal and cross-frequency coupling properties of neural oscillations; what has been referred to as the language 'oscillome.' It is expected that basic aspects of the language oscillome result from genetic guidance, what we here call the language 'oscillogenome,' for which we put forward a list of candidate genes. We have considered genes implicated in altered brain rhythmicity in conditions involving language deficits: autism spectrum disorders, schizophrenia, specific language impairment, and dyslexia. These selected genes map onto aspects of brain function, particularly onto neurotransmitter function. We stress that caution should be adopted in the construction of any oscillogenome, given the range of potential roles particular localized frequency bands play in cognition. Our aim is to propose a set of genome-to-language linking hypotheses that, once tested, would grant explanatory power to brain rhythms with respect to language processing and evolution.
    Funding: Economic and Social Research Council scholarship 1474910; Ministerio de Economía y Competitividad (España) FFI2016-78034-C2-2-
