
    A dynamic field model of ordinal and timing properties of sequential events

    Get PDF
    Recent evidence suggests that the neural mechanisms underlying memory for serial order and interval timing of sequential events are closely linked. We present a dynamic neural field model which exploits the existence and stability of multi-bump solutions with a gradient of activation to store serial order. The activation gradient is achieved by applying a state-dependent threshold accommodation process to the firing rate function. A field dynamics of lateral-inhibition type is used in combination with a dynamics for the baseline activity to recall the sequence from memory. We show that, depending on the time scale of the baseline dynamics, the precise temporal structure of the original sequence may be retrieved or a proactive timing of events may be achieved. Fundação para a Ciência e a Tecnologia (FCT) - Bolsa SFRH/BD/41179/200
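The interplay between the stored activation gradient and the baseline dynamics can be pictured with a minimal algebraic sketch (not the field model itself; all symbols and constants below are illustrative assumptions). Suppose encoding leaves each event i with activation u_i = U0 - λ_enc·t_i, and recall ramps a shared baseline h(t) = h0 + λ_rec·t until each item crosses a threshold θ. A baseline ramping at the encoding rate reproduces the original intervals; a faster ramp yields proactive timing:

```python
# Toy read-out of a stored activation gradient (hypothetical constants).
def recall_times(event_times, lam_enc=1.0, lam_rec=1.0,
                 U0=5.0, h0=-5.0, theta=1.0):
    times = []
    for t_i in event_times:
        u_i = U0 - lam_enc * t_i            # stored activation gradient
        # item recalled when u_i + h(t) reaches theta, with h(t) = h0 + lam_rec * t
        times.append((theta - h0 - u_i) / lam_rec)
    return times

orig = [0.0, 1.0, 2.5]
same = recall_times(orig, lam_rec=1.0)      # baseline ramps at the encoding rate
fast = recall_times(orig, lam_rec=2.0)      # faster baseline -> proactive timing
# original intervals (1.0, 1.5) are reproduced in `same` and halved in `fast`
```

The ratio λ_enc/λ_rec acts as a global time-scaling factor, which is the algebraic core of the retrieved-versus-proactive timing distinction in the abstract.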

    Learning joint representations for order and timing of perceptual-motor sequences: a dynamic neural field approach

    Get PDF
    Many of our everyday tasks require the control of the serial order and the timing of component actions. Using the dynamic neural field (DNF) framework, we address the learning of representations that support the performance of precisely timed action sequences. In continuation of previous modeling work and robotics implementations, we specifically ask how feedback about executed actions might be used by the learning system to fine-tune a joint memory representation of the ordinal and the temporal structure that was initially acquired by observation. The perceptual memory is represented by a self-stabilized, multi-bump activity pattern of neurons encoding instances of a sensory event (e.g., color, position or pitch) which guides sequence learning. The strength of the population representation of each event is a function of elapsed time since sequence onset. We propose and test in simulations a simple learning rule that detects a mismatch between the expected and realized timing of events and adapts the activation strengths in order to compensate for the movement time needed to achieve the desired effect. The simulation results show that the effector-specific memory representation can be robustly recalled. We discuss the impact of the fast, activation-based learning that the DNF framework provides for robotics applications.
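A toy sketch of such a mismatch-driven rule (the read-out, constants, and update are illustrative assumptions, not the paper's equations): assume memory strength A_i triggers action i at t = (θ - A_i)/λ, and each execution nudges A_i in proportion to the timing error, so triggering shifts earlier until the realized effect lands on target despite the movement delay:

```python
# Hypothetical mismatch-driven adaptation of activation strengths.
def adapt(target_times, movement_delay, lam=1.0, theta=1.0,
          eta=0.5, n_trials=40):
    # initial strengths encode the observed timing (no compensation yet)
    A = [theta - lam * t for t in target_times]
    for _ in range(n_trials):
        for i, t_goal in enumerate(target_times):
            t_trigger = (theta - A[i]) / lam           # when memory triggers action i
            t_realized = t_trigger + movement_delay    # effect arrives late
            A[i] += eta * lam * (t_realized - t_goal)  # mismatch-driven update
    return [(theta - a) / lam for a in A]

triggers = adapt([1.0, 2.0, 3.0], movement_delay=0.3)
# after learning, each action is triggered ~0.3 early so its effect lands on time
```

The timing error shrinks by a factor (1 - η) per trial, so the rule converges geometrically to triggering each action one movement-delay ahead of the desired effect.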

    Learning a musical sequence by observation : a robotics implementation of a dynamic neural field model

    Get PDF
    We tested in a robotics experiment a dynamic neural field model for learning a precisely timed musical sequence. Based on neuro-plausible processing mechanisms, the model implements the idea that order and relative timing of events are stored in an integrated representation, whereas the onset of sequence production is controlled by a separate process. Dynamic neural fields provide a rigorous theoretical framework to analyze and implement the necessary neural computations that bridge gaps between sensation and action in order to mediate working memory, action planning, and decision making. The robot first memorizes a short musical sequence performed by a human teacher by watching color-coded keys on a screen, and then tries to execute the piece of music on a keyboard from memory without any external cues. The experimental results show that the robot is able to correct initial sequencing and timing errors in very few demonstration-execution cycles. The work received financial support from FCT - Fundação para a Ciência e Tecnologia within the Project Scope: PEst-OE/EEI/UI0319/2014, the Research Centers for Mathematics and Algoritmi through the FCT Pluriannual Funding Program, PhD and Post-doctoral Grants (SFRH/BD/41179/2007, SFRH/BD/48529/2008 and SFRH/BPD/71874/2010, financed by POPH-QREN-Type 4.1-Advanced Training, co-funded by the European Social Fund and national funds from MEC), and Project NETT: Neural Engineering Transformative Technologies, EU-FP7 ITN (nr. 289146).

    Moving in time: simulating how neural circuits enable rhythmic enactment of planned sequences

    Full text link
    Many complex actions are mentally pre-composed as plans that specify orderings of simpler actions. To be executed accurately, planned orderings must become active in working memory, and then enacted one-by-one until the sequence is complete. Examples include writing, typing, and speaking. In cases where the planned complex action is musical in nature (e.g. a choreographed dance or a piano melody), it appears to be possible to deploy two learned sequences at the same time, one composed from actions and a second composed from the time intervals between actions. Despite this added complexity, humans readily learn and perform rhythm-based action sequences. Notably, people can learn action sequences and rhythmic sequences separately, and then combine them with little trouble (Ullén & Bengtsson 2003). Related functional MRI data suggest that there are distinct neural regions responsible for the two different sequence types (Bengtsson et al. 2004). Although research on musical rhythm is extensive, few computational models exist to extend and inform our understanding of its neural bases. To that end, this article introduces the TAMSIN (Timing And Motor System Integration Network) model, a systems-level neural network model capable of performing arbitrary item sequences in accord with any rhythmic pattern that can be represented as a sequence of integer multiples of a base interval. In TAMSIN, two Competitive Queuing (CQ) modules operate in parallel. One represents and controls item order (the ORD module) and the second represents and controls the sequence of inter-onset-intervals (IOIs) that define a rhythmic pattern (RHY module). Further circuitry helps these modules coordinate their signal processing to enable performative output consistent with a desired beat and tempo. Accepted manuscript
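The core Competitive Queuing cycle can be illustrated in a few lines (a hypothetical sketch, not TAMSIN itself; the gradients and rhythmic pattern below are made up for illustration): each queue holds an activation gradient, the most active element wins, is emitted, and is then suppressed. Running one queue for items (ORD) and one for inter-onset intervals (RHY) in parallel yields a timed performance:

```python
def cq_recall(gradient):
    """Competitive Queuing: repeatedly select the most active plan item,
    then suppress it so the next-strongest wins on the following cycle."""
    acts = list(gradient)
    order = []
    for _ in range(len(acts)):
        i = max(range(len(acts)), key=lambda j: acts[j])
        order.append(i)
        acts[i] = float('-inf')          # suppress the winner
    return order

# parallel queues: one for item order, one for the IOI sequence
ord_grad = [0.9, 0.7, 0.5, 0.3]          # four items, planned in index order
rhy_grad = [0.8, 0.6, 0.4]               # three IOIs between four items
items = cq_recall(ord_grad)
ioi_order = cq_recall(rhy_grad)
base = 0.25                              # base interval in seconds (assumed tempo)
iois = [base * m for m in (2, 1, 4)]     # rhythm as integer multiples of the base
onsets = [0.0]
for k in ioi_order:
    onsets.append(onsets[-1] + iois[k])
# items come out in gradient order; onsets follow the rhythmic pattern
```

Because order and rhythm live in separate gradients, either can be swapped independently, matching the behavioral finding that action and rhythm sequences are learned separately and combined.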

    Rapid learning of complex sequences with time constraints: A dynamic neural field model

    Get PDF
    Many of our sequential activities require that behaviors be both precisely timed and put in the proper order. This paper presents a neuro-computational model, based on the theoretical framework of Dynamic Neural Fields, that supports the rapid learning and flexible adaptation of coupled order-timing representations of sequential events. A key assumption is that elapsed time is encoded in the monotonic buildup of self-stabilized neural population activity representing event memory. A stable activation gradient over subpopulations carries the information of an entire sequence. With robotics applications in mind, we test the model in simulations of a learning-by-observation paradigm, in which the cognitive agent first memorizes the order and relative timing of observed events and, subsequently, recalls the information from memory taking potential speed constraints into account. Model robustness is tested by systematically varying sequence complexity along the temporal and the ordinal dimension. Furthermore, an adaptation rule is proposed that allows the agent to adjust a learned timing pattern to a changing temporal context in a single trial. The simulation results are discussed with respect to our goal to endow autonomous robots with the capacity to efficiently learn complex sequences with time constraints, supporting more natural human-robot interactions. FCT (Portuguese Foundation for Science and Technology) through the PhD fellowship PD/BD/128183/2016, European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) and national funds, through the FCT projects PTDC/MAT-APL/31393/2017 (NEUROFIELD) and POCI-01-0247-FEDER-039334, and RD Units Project Scope: UIDB/00319/2020 and UIDB/00013/202
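One way to picture the single-trial adaptation, purely as an illustration (the linear buildup and the global gain change are assumptions, not the paper's actual rule): if event strengths grow monotonically with elapsed time, rescaling the whole stored pattern by the ratio between a newly observed interval and the memorized one adjusts every interval at once:

```python
def encode(event_times, beta=1.0):
    # elapsed time since sequence onset read off a monotonically
    # growing population activation: strength = beta * t (assumed linear)
    return [beta * t for t in event_times]

def recall(strengths, beta=1.0):
    return [s / beta for s in strengths]

def adapt_to_context(strengths, observed_first, expected_first):
    # single-trial gain change: rescale the whole pattern by the ratio
    # between the newly observed and the memorized first interval
    g = observed_first / expected_first
    return [g * s for s in strengths]

mem = encode([1.0, 2.0, 4.0])
# new temporal context: the first event now occurs at t=0.5 instead of t=1.0
mem2 = adapt_to_context(mem, observed_first=0.5, expected_first=1.0)
# the entire timing pattern is rescaled in one trial
```

A gain change leaves the ordinal gradient intact while compressing or stretching all intervals, which is one simple way a learned sequence could meet a speed constraint.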

    The importance of space and time in neuromorphic cognitive agents

    Full text link
    Artificial neural networks and computational neuroscience models have made tremendous progress, allowing computers to achieve impressive results in artificial intelligence (AI) applications, such as image recognition, natural language processing, or autonomous driving. Despite this remarkable progress, biological neural systems consume orders of magnitude less energy than today's artificial neural networks and are much more agile and adaptive. This efficiency and adaptivity gap is partially explained by the computing substrate of biological neural processing systems, which is fundamentally different from the way today's computers are built. Biological systems use in-memory computing elements operating in a massively parallel way rather than time-multiplexed computing units that are reused in a sequential fashion. Moreover, activity of biological neurons follows continuous-time dynamics in real, physical time, instead of operating on discrete temporal cycles abstracted away from real time. Here, we present neuromorphic processing devices that emulate the biological style of processing by using parallel instances of mixed-signal analog/digital circuits that operate in real time. We argue that this approach brings significant advantages in efficiency of computation. We show examples of embodied neuromorphic agents that use such devices to interact with the environment and exhibit autonomous learning.

    Efficient learning of sequential tasks for collaborative robots: a neurodynamic approach

    Get PDF
    Integrated master's dissertation in Electronic, Industrial and Computer Engineering. In recent years, there has been an increasing demand for collaborative robots able to interact and cooperate with ordinary people in several human environments, sharing physical space and working closely with people in joint tasks, both within industrial and domestic environments. In some scenarios, these robots will come across tasks that cannot be fully designed beforehand, resulting in a need for flexibility and adaptation to changing environments. This dissertation aims to endow robots with the ability to acquire knowledge of sequential tasks using the Programming by Demonstration (PbD) paradigm. Concretely, it extends the learning models - based on Dynamic Neural Fields (DNFs) - previously developed in the Mobile and Anthropomorphic Robotics Laboratory (MARLab), at the University of Minho, to the collaborative robot Sawyer, which is amongst the newest collaborative robots on the market. The main goal was to endow Sawyer with the ability to learn a sequential task from tutors’ demonstrations, through a natural and efficient process. The developed work can be divided into three main tasks: (1) first, a previously developed neuro-cognitive control architecture for extracting the sequential structure of a task was implemented and tested in Sawyer, combined with a Short-Term Memory (STM) mechanism to memorize a sequence in one shot, aiming to reduce the number of demonstration trials; (2) second, the previous model was extended to incorporate workspace information and action selection in a Human-Robot Collaboration (HRC) scenario where robot and human co-worker coordinate their actions to construct the structure; and (3) third, the STM mechanism was also extended to memorize ordinal and temporal aspects of the sequence, demonstrated by tutors with different behavior time scales.
The models implemented contributed to a more intuitive and practical interaction with the robot for human co-workers. The STM model made learning possible from few demonstrations, complying with the requirement of an efficient learning method. Moreover, the recall of the memorized information allowed Sawyer to evolve from a learning position to a teaching one, obtaining the capability of assisting inexperienced co-workers. This work was carried out within the scope of the project “PRODUTECH SIF - Soluções para a Indústria do Futuro”, reference POCI-01-0247-FEDER-024541, co-funded by “Fundo Europeu de Desenvolvimento Regional (FEDER)”, through “Programa Operacional Competitividade e Internacionalização (POCI)”.

    Evidence accumulation in a Laplace domain decision space

    Full text link
    Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than noisy exemplars of a single mean value, this approach makes the novel prediction that firing rates grow to the bound exponentially, with a distribution of different growth rates across neurons. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis." This finding is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models for memory. Comment: Revised for CB
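A noiseless sketch of the forward (encoding) half of this proposal, with the inverse transform omitted and all constants, the drift value, and the rate set chosen for illustration: each neuron carries exp(-s·d) for its own rate s, where d is the remaining distance to the bound, so as evidence accumulates every unit grows exponentially toward the bound, each with its own exponent:

```python
import math

B = 1.0                       # decision bound (assumed)
v = 0.2                       # net evidence per unit time; noiseless sketch
rates = [1.0, 2.0, 4.0]       # per-neuron Laplace rates s: a distribution, not one value

def firing(t):
    d = B - v * t             # distance to the bound after t units of accumulation
    # each neuron encodes the Laplace transform value exp(-s * d) at its rate s
    return [math.exp(-s * d) for s in rates]

f0, f1 = firing(0.0), firing(1.0)
growth = [b / a for a, b in zip(f0, f1)]
# each neuron's rate grows by exp(s * v) per unit time: exponential growth
# to the bound, with a distribution of exponents across neurons
```

At d = 0 every unit saturates at 1, so all neurons reach the bound together while approaching it at visibly different exponential rates, which is the population signature the abstract highlights.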