
    Higher coordination with less control - A result of information maximization in the sensorimotor loop

    This work presents a novel learning method in the context of embodied artificial intelligence and self-organization, which makes as few assumptions and restrictions as possible about the world and the underlying model. The learning rule is derived from the principle of maximizing the predictive information in the sensorimotor loop. It is evaluated on robot chains of varying length with individually controlled, non-communicating segments. The comparison of the results shows that maximizing the predictive information per wheel leads to more highly coordinated behavior of the physically connected robots than maximization per robot. Another focus of this paper is the analysis of the effect of the robot chain length on the overall behavior of the robots. It is shown that longer chains with less capable controllers outperform shorter chains with more complex controllers. The reason is identified and discussed in terms of the information-geometric interpretation of the learning process.
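
    The quantity being maximized, the predictive information of the sensor process, is the mutual information between consecutive sensor states, I(S_{t+1}; S_t). As a rough illustration of the objective only (the paper's contribution, a learning rule performing gradient ascent on this quantity inside the sensorimotor loop, is not reproduced here), a minimal histogram-based estimator for a one-dimensional sensor stream could look as follows; the function name and binning scheme are our assumptions:

        import numpy as np

        def predictive_information(s, n_bins=16):
            """Estimate I(S_{t+1}; S_t) of a 1-D sensor stream by binning."""
            edges = np.linspace(s.min(), s.max(), n_bins + 1)
            d = np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)
            # Joint histogram of consecutive states (s_t, s_{t+1}).
            joint = np.zeros((n_bins, n_bins))
            for a, b in zip(d[:-1], d[1:]):
                joint[a, b] += 1.0
            joint /= joint.sum()
            p_t = joint.sum(axis=1)   # marginal over s_t
            p_t1 = joint.sum(axis=0)  # marginal over s_{t+1}
            nz = joint > 0
            outer = p_t[:, None] * p_t1[None, :]
            return float(np.sum(joint[nz] * np.log(joint[nz] / outer[nz])))

        t = np.linspace(0.0, 20.0, 2000)
        print(predictive_information(np.sin(t) + 0.1 * np.random.randn(t.size)))
        print(predictive_information(np.random.randn(2000)))  # much smaller

    A predictable stream (a noisy sine) yields a clearly higher estimate than white noise, which is the structure the learning rule exploits.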

    Why study movement variability in autism?

    Autism has been defined as a disorder of social cognition, interaction and communication where ritualistic, repetitive behaviors are commonly observed. But how should we understand the behavioral and cognitive differences that have been the main focus of so much autism research? Can high-level cognitive processes and behaviors be identified as the core issues people with autism face, or do these characteristics perhaps often rather reflect individual attempts to cope with underlying physiological issues? Much research presented in this volume will point to the latter possibility, i.e. that people on the autism spectrum cope with issues at much lower physiological levels pertaining not only to Central Nervous System (CNS) function, but also to peripheral and autonomic systems (PNS, ANS) (Torres, Brincker, et al. 2013). The question that we pursue in this chapter is what might be fruitful ways of gaining objective measures of the large-scale, systemic, and heterogeneous effects of early atypical neurodevelopment, how to track their evolution over time, and how to identify critical changes along the continuum of human development and aging. We suggest that the study of movement variability—very broadly conceived as including all minute fluctuations in bodily rhythms and their rates of change over time (coined micro-movements (Figure 1A-B) (Torres, Brincker, et al. 2013))—offers a uniquely valuable and entirely objectively quantifiable lens to better assess, understand and track not only autism but cognitive development and degeneration in general. This chapter presents the rationale, firstly, behind this focus on micro-movements and, secondly, behind the choice of specific kinds of data collection and statistical metrics as tools of analysis (Figure 1C). In brief, the proposal is that the micro-movements (defined in Part I – Chapter 1), obtained using various time scales applied to different physiological data-types (Figure 1), contain information about layered influences and temporal adaptations, transformations and integrations across anatomically semi-independent subsystems that crosstalk and interact. Further, the notion of sensorimotor re-afference is used to highlight the fact that these layered micro-motions are sensed and that this sensory feedback plays a crucial role in the generation and control of movements in the first place. In other words, the measurements of various motoric and rhythmic variations provide an access point not only to the “motor systems” but also to much broader central and peripheral sensorimotor and regulatory systems. Lastly, we posit that this new lens can also be used to capture influences from systems of multiple entry points or collaborative control and regulation, such as those that emerge during dyadic social interactions.
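
    As a purely illustrative sketch of the kind of analysis proposed (the chapter's actual micro-movement definition and normalization are given in Part I, Chapter 1 and in Torres, Brincker, et al. 2013; the peak extraction, window size, and Gamma fit below are our assumptions), one might reduce a continuous speed signal to normalized fluctuation peaks and summarize their spread with a fitted distribution:

        import numpy as np
        from scipy.signal import find_peaks
        from scipy.stats import gamma

        def fluctuation_spikes(speed, window=25):
            """Peaks of a speed series, normalized against their local mean
            (a hypothetical micro-movement proxy, bounded in (0, 1))."""
            peaks, _ = find_peaks(speed)
            local_mean = np.array(
                [speed[max(0, p - window):p + window].mean() for p in peaks])
            return speed[peaks] / (speed[peaks] + local_mean)

        # Summarize the spikes with a Gamma fit; the (shape, scale) pair can
        # then serve as an individualized statistic to track over time.
        rng = np.random.default_rng(0)
        speed = np.abs(np.convolve(rng.standard_normal(5000),
                                   np.ones(20) / 20, mode="same"))
        shape, _, scale = gamma.fit(fluctuation_spikes(speed), floc=0)
        print(shape, scale)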

    Chance, long tails, and inference: a non-Gaussian, Bayesian theory of vocal learning in songbirds

    Traditional theories of sensorimotor learning posit that animals use sensory error signals to find the optimal motor command in the face of Gaussian sensory and motor noise. However, most such theories cannot explain common behavioral observations, for example that smaller sensory errors are more readily corrected than larger errors and that large abrupt (but not gradually introduced) errors lead to weak learning. Here we propose a new theory of sensorimotor learning that explains these observations. The theory posits that the animal learns an entire probability distribution of motor commands rather than trying to arrive at a single optimal command, and that learning arises via Bayesian inference when new sensory information becomes available. We test this theory using data from a songbird, the Bengalese finch, adapting the pitch (fundamental frequency) of its song following perturbations of auditory feedback delivered via miniature headphones. We observe the distribution of the sung pitches to have long, non-Gaussian tails, which, within our theory, explains the observed dynamics of learning. Further, the theory makes surprising predictions about the dynamics of the shape of the pitch distribution, which we confirm experimentally.
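
    The qualitative mechanism can be made concrete with a toy grid-based Bayesian update (the prior width, likelihood families, and scales below are illustrative choices of ours, not the paper's fitted model): with a heavy-tailed model, a large abrupt error is largely discounted as an outlier, so the command distribution barely moves, whereas a Gaussian model corrects in proportion to the error.

        import numpy as np
        from scipy.stats import norm, cauchy

        x = np.linspace(-4, 4, 2001)   # candidate motor commands (pitch shifts)
        prior = norm.pdf(x, 0.0, 0.5)  # current distribution of commands
        prior /= prior.sum()

        def shift_after_update(error, like):
            """Posterior mean after one Bayesian update on a sensory error."""
            post = prior * like(x - error)  # likelihood peaked at the error
            post /= post.sum()
            return float(np.sum(x * post))

        for err in (0.5, 3.0):
            g = shift_after_update(err, lambda d: norm.pdf(d, 0.0, 0.5))
            h = shift_after_update(err, lambda d: cauchy.pdf(d, 0.0, 0.5))
            print(f"error {err}: Gaussian shift {g:.2f}, heavy-tailed {h:.2f}")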

    Online Learning of a Memory for Learning Rates

    The promise of learning to learn for robotics rests on the hope that, by extracting some information about the learning process itself, we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning-rate landscape from previously observed gradient behaviors. While performing task-specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling, our meta-learner updates its internal memory based on the observed effect of its prediction. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly, and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, in either batch or online learning settings.

    Comment: accepted to ICRA 2018; code available: https://github.com/fmeier/online-meta-learning; video pitch available: https://youtu.be/9PzQ25FPPO
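
    A loose sketch of the idea (the authors' actual memory model lives in the repository linked above; the binning of gradients by log norm and the multiplicative scale update below are our simplifications) might look like this:

        import numpy as np

        class LRMemory:
            """Toy online memory of learning-rate scales, keyed by the log
            gradient norm; updated from the observed effect of each step."""
            def __init__(self, n_bins=20, base_lr=0.1):
                self.scales = np.ones(n_bins)
                self.base_lr = base_lr

            def _key(self, grad):
                return int(np.clip(
                    np.log10(np.linalg.norm(grad) + 1e-12) + 10, 0, 19))

            def step(self, params, grad, loss_fn):
                k = self._key(grad)
                before = loss_fn(params)
                params = params - self.base_lr * self.scales[k] * grad
                # Reward the memory slot if its scaling reduced the loss.
                self.scales[k] *= 1.1 if loss_fn(params) < before else 0.5
                return params

        # Usage on a toy quadratic: loss(w) = ||w||^2 / 2, so grad = w.
        loss = lambda w: 0.5 * float(w @ w)
        mem, w = LRMemory(), np.ones(5)
        for _ in range(50):
            w = mem.step(w, w, loss)
        print(loss(w))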

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
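
    The growth mechanism shared by the two networks can be sketched in a few lines (a GWR-style rule; this is a minimal illustration of ours, omitting the paper's recurrent temporal context, semantic regulation of plasticity, and replay):

        import numpy as np

        class GrowingNetwork:
            """Insert a unit when the best-matching unit fits poorly;
            otherwise adapt the best-matching unit toward the input."""
            def __init__(self, dim, activity_threshold=0.8, lr=0.1):
                self.units = np.zeros((0, dim))
                self.a_t, self.lr = activity_threshold, lr

            def update(self, x):
                if len(self.units) == 0:
                    self.units = np.atleast_2d(x).astype(float).copy()
                    return
                dists = np.linalg.norm(self.units - x, axis=1)
                best = int(np.argmin(dists))
                if np.exp(-dists[best]) < self.a_t:
                    # Novel input: grow a unit between input and best match.
                    self.units = np.vstack(
                        [self.units, (self.units[best] + x) / 2])
                else:
                    self.units[best] += self.lr * (x - self.units[best])

        rng = np.random.default_rng(1)
        data = np.vstack([rng.normal(0, 0.3, (50, 2)),
                          rng.normal(5, 0.3, (50, 2))])
        net = GrowingNetwork(dim=2)
        for x in data:
            net.update(x)
        print(len(net.units))  # the network grows only as the data demands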

    Continual Lifelong Learning with Neural Networks: A Review

    Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
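
    Of the families of methods compared, memory replay is perhaps the simplest to state concretely. A minimal sketch (raw rehearsal with a fixed buffer; the function and its parameters are our illustration, not a specific method from the review):

        import numpy as np

        def rehearsal_batches(new_data, memory, batch_size=32,
                              replay_frac=0.5, seed=0):
            """Yield batches mixing current-task samples with replayed
            samples from earlier tasks to counteract forgetting."""
            rng = np.random.default_rng(seed)
            n_replay = int(batch_size * replay_frac)
            n_new = batch_size - n_replay
            for i in range(0, len(new_data), n_new):
                replay = memory[rng.integers(0, len(memory), n_replay)]
                yield np.vstack([new_data[i:i + n_new], replay])

        # Usage: keep a small buffer from task A while training on task B.
        task_a, task_b = np.random.randn(100, 10), np.random.randn(100, 10)
        for batch in rehearsal_batches(task_b, memory=task_a[:20]):
            pass  # train on the mixed batch here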