
    Predictive Coding for Dynamic Visual Processing: Development of Functional Hierarchy in a Multiple Spatio-Temporal Scales RNN Model

    The current paper proposes a novel predictive coding type neural network model, the predictive multiple spatio-temporal scales recurrent neural network (P-MSTRNN). The P-MSTRNN learns to predict visually perceived human whole-body cyclic movement patterns by exploiting multiscale spatio-temporal constraints imposed on the network dynamics through differently sized receptive fields and different time constant values for each layer. After learning, the network becomes able to proactively imitate target movement patterns by inferring or recognizing the corresponding intentions by means of the regression of prediction error. Results show that the network develops a functional hierarchy by developing a different type of dynamic structure at each layer. The paper examines how model performance during pattern generation as well as predictive imitation varies depending on the stage of learning. The number of limit cycle attractors corresponding to target movement patterns increases as learning proceeds, and transient dynamics that develop early in the learning process already support successful pattern generation and predictive imitation. The paper concludes that exploiting transient dynamics facilitates successful task performance during early learning periods. Comment: Accepted in Neural Computation (MIT Press).
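    The layered dynamics described in this abstract rest on continuous-time (leaky-integrator) RNN units whose time constants differ per layer. Below is a minimal, hedged sketch of that update rule, not the P-MSTRNN itself: layer sizes, weights, and time constants are invented for illustration, and the model's convolutional receptive fields and prediction-error regression are omitted.

```python
import numpy as np

def ctrnn_step(u, x_in, W_rec, W_in, tau):
    """One leaky-integrator (continuous-time RNN) update of a single layer.

    A large time constant tau makes the layer slow (long-range structure);
    a small tau makes it fast (short-range detail).
    """
    y = np.tanh(u)                       # layer firing rates
    du = -u + W_rec @ y + W_in @ x_in    # decay toward the input-driven state
    return u + du / tau                  # slower layers integrate more gradually

# Illustrative two-layer hierarchy; sizes, weights, and taus are invented.
rng = np.random.default_rng(0)
n_in, n_fast, n_slow = 4, 16, 8
W_fast_rec = rng.normal(0, 0.3, (n_fast, n_fast))
W_fast_in = rng.normal(0, 0.3, (n_fast, n_in))
W_slow_rec = rng.normal(0, 0.3, (n_slow, n_slow))
W_slow_in = rng.normal(0, 0.3, (n_slow, n_fast))
u_fast, u_slow = np.zeros(n_fast), np.zeros(n_slow)

for t in range(100):
    x = np.sin(2 * np.pi * t / 25) * np.ones(n_in)   # toy cyclic input pattern
    u_fast = ctrnn_step(u_fast, x, W_fast_rec, W_fast_in, tau=2.0)
    u_slow = ctrnn_step(u_slow, np.tanh(u_fast), W_slow_rec, W_slow_in, tau=20.0)
```

    The point of the toy loop is only that the small-tau layer tracks the fast cyclic input while the large-tau layer integrates it slowly, which is the kind of mechanism the abstract describes as producing a functional hierarchy.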

    Self-Organizing Map Neural Architectures Based on Limit Cycle Attractors

    Recent efforts to develop large-scale neural architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason is that most conventional SOMs use a static encoding representation: each input is typically represented by the fixed activation of a single node in the map layer. This not only carries information in an inefficient and unreliable way that impedes building robust multi-SOM neural architectures, but it is also inconsistent with the rhythmic oscillations observed in biological neural networks. Here I develop and study an alternative encoding scheme that instead uses limit cycle attractors of multi-focal activity patterns to represent input patterns/sequences. Such a fundamental change in representation raises several questions: Can this be done effectively and reliably? If so, will map formation still occur? What properties would limit cycle SOMs exhibit? Could multiple such SOMs interact effectively? Could robust architectures based on such SOMs be built for practical applications? The principal results of examining these questions are as follows. First, conditions are established for limit cycle attractors to emerge in a SOM through self-organization when encoding both static and temporal sequence inputs. It is found that under appropriate conditions a set of learned limit cycles is stable, unique, and preserves input relationships. In spite of the continually changing activity in a limit cycle SOM, map formation continues to occur reliably. Next, associations between limit cycles in different SOMs are learned. It is shown that limit cycles in one SOM can be successfully retrieved by another SOM's limit cycle activity. Control timings can be set quite arbitrarily during both training and activation. Importantly, the learned associations generalize to new inputs that were never seen during training. Finally, a complete neural architecture based on multiple limit cycle SOMs is presented for robotic arm control. This architecture combines open-loop and closed-loop methods to achieve high accuracy and fast movements through smooth trajectories. The architecture is robust in that disrupting or damaging the system in a variety of ways does not completely destroy its function. I conclude that limit cycle SOMs have great potential for use in constructing robust neural architectures.
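    For contrast with the limit-cycle encoding studied here, the following is a minimal sketch of the conventional static-encoding SOM that the abstract argues against, in which each input is represented by the fixed activation of a single winning node; map size, learning rate, and neighbourhood width are assumptions chosen only for illustration.

```python
import numpy as np

def som_train(data, rows=10, cols=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Classic Kohonen SOM with static encoding: each input activates one winner node."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(size=(rows * cols, data.shape[1]))
    # Grid coordinates of every map node, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)               # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5   # shrinking neighborhood radius
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighborhood around the winner
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

# Toy usage: organize random 2-D points; nearby inputs end up mapped to nearby nodes.
weights = som_train(np.random.default_rng(1).uniform(size=(200, 2)))
```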

    The Self-Organization of Speech Sounds

    The speech code is a vehicle of language: it defines a set of forms used by a community to carry information. Such a code is necessary to support the linguistic interactions that allow humans to communicate. How then may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is discrete and compositional, shared by all the individuals of a community but different across communities, and phoneme inventories are characterized by statistical regularities. How can a speech code with these properties form? We approach these questions in this paper using the "methodology of the artificial". We build a society of artificial agents, and detail a mechanism that gives rise to a discrete speech code without pre-supposing the existence of linguistic capacities or of coordinated interactions. The mechanism is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non language-specific neural devices leads to the formation of a speech code that has properties similar to the human speech code. This result relies on the self-organizing properties of a generic coupling between perception and production within agents, and on the interactions between agents. The artificial system helps us to develop better intuitions on how speech might have appeared, by showing how self-organization might have helped natural selection to find speech.
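    As a deliberately crude illustration of how coupled perception and production can let a shared discrete code emerge in a population, the toy below has agents that pull their closest vocalization prototype toward the sounds they produce and hear; it is a stand-in to build intuition, not the paper's neural sensory-motor model, and all sizes and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_PROTOS, DIM, ROUNDS, LR = 10, 8, 2, 5000, 0.1

# Each agent starts with random vocalization prototypes in a continuous acoustic space.
agents = rng.uniform(size=(N_AGENTS, N_PROTOS, DIM))

for _ in range(ROUNDS):
    speaker, listener = rng.choice(N_AGENTS, size=2, replace=False)
    # The speaker produces a sound near one of its own prototypes (production).
    sound = agents[speaker, rng.integers(N_PROTOS)] + rng.normal(0, 0.02, DIM)
    # Speaker and listener each pull their closest prototype toward the produced/heard
    # sound, coupling perception back onto production.
    for agent in (speaker, listener):
        k = np.argmin(np.linalg.norm(agents[agent] - sound, axis=1))
        agents[agent, k] += LR * (sound - agents[agent, k])

# After many interactions the prototypes of different agents tend to cluster around a
# shared set of points: a discrete, community-wide "code" self-organizes.
```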

    Emergent coordination between humans and robots

    Emergent coordination or movement synchronization is an often observed phenomenon in human behavior. Humans synchronize their gait when walking next to each other, they synchronize their postural sway when standing closely, and they also synchronize their movement behavior in many other situations of daily life. Why humans do this is an important question of ongoing research in many disciplines: apparently movement synchronization plays a role in children's development and learning; it is related to our social and emotional behavior in interaction with others; it is an underlying principle in the organization of communication by means of language and gesture; and finally, models explaining movement synchronization between two individuals can also be extended to group behavior. Overall, one can say that movement synchronization is an important principle of human interaction behavior. Besides interacting with other humans, in recent years humans increasingly interact with technology. This began with the interaction with machines in industrial settings, continued with human-computer interaction, and now faces a new challenge: the interaction with active and autonomous machines, the interaction with robots. If the vision of today's robot developers comes true, in the near future robots will be fully integrated not only into our workplaces but also into our private lives. They are supposed to support humans in activities of daily living and even care for them. These circumstances, however, require the development of interaction principles which the robot can apply in direct interaction with humans. In this dissertation the problem of robots entering human society is outlined, and the need to explore human interaction principles that are transferable to human-robot interaction is emphasized. Furthermore, an overview of human movement synchronization as a very important phenomenon in human interaction is given, ranging from neural correlates to social behavior. The argument of this dissertation is that human movement synchronization is a simple but striking human interaction principle that can be applied in human-robot interaction to support human activities of daily living, demonstrated using the example of pick-and-place tasks. This argument is based on five publications. In the first publication, human movement synchronization is explored in goal-directed tasks which impose similar requirements to pick-and-place tasks in activities of daily living. In order to explore whether a merely repetitive action of the robot is sufficient to encourage human movement synchronization, the second publication reports a human-robot interaction study in which a human interacts with a non-adaptive robot. Here, however, movement synchronization between human and robot does not emerge, which underlines the need for adaptive mechanisms. Therefore, in the third publication, human adaptive behavior in goal-directed movement synchronization is explored. In order to make the findings from the previous studies applicable to human-robot interaction, the fourth publication outlines the development of an interaction model based on dynamical systems theory which is ready for implementation on a robotic platform. Following this, a brief overview of a first human-robot interaction study based on the developed interaction model is provided.
The last publication describes an extension of the previous approach which also accounts for the human tendency to align movements with events. Here, a first human-robot interaction study is also reported which confirms the applicability of the model. The dissertation concludes with a discussion of the presented findings in the light of human-robot interaction and psychological aspects of joint action research, as well as the problem of mutual adaptation.
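The dynamical-systems interaction model mentioned in the fourth publication is not specified in this abstract; as a generic, hedged illustration of how movement synchronization is commonly modelled, the sketch below couples a robot phase oscillator to a human movement rhythm in Kuramoto style, with phase coupling plus slow frequency adaptation. All parameters are assumed for illustration only.

```python
import numpy as np

DT = 0.01          # integration step (s)
K_PHASE = 2.0      # coupling strength pulling the robot phase toward the human phase
K_FREQ = 0.5       # slower adaptation of the robot's intrinsic frequency
human_omega = 2 * np.pi * 0.6   # human moves at 0.6 Hz
robot_omega = 2 * np.pi * 0.9   # robot starts at a different tempo
phi_h, phi_r = 0.0, 1.5

for step in range(int(60 / DT)):                  # simulate one minute
    phi_h += DT * human_omega                     # human rhythm, treated as given
    err = np.sin(phi_h - phi_r)                   # smooth, 2*pi-periodic phase error
    phi_r += DT * (robot_omega + K_PHASE * err)   # phase coupling (entrainment)
    robot_omega += DT * K_FREQ * err              # gradual frequency adaptation

# After a transient, phi_r locks to phi_h and robot_omega approaches human_omega,
# i.e. the robot's movement rhythm synchronizes with the human's.
```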

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control research program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.

    A cultured human neural network operates a robotic actuator

    The development of bio-electronic prostheses, hybrid human-electronics devices, and bionic robots has been the aim of many researchers. Although neurophysiological processes have been widely investigated and bio-electronics has developed rapidly, the dynamics of a biological neuronal network that receives sensory inputs and stores and controls information are not yet understood. Toward this end, we have taken an interdisciplinary approach to study the learning and response of biological neural networks to complex stimulation patterns. This paper describes the design, execution, and results of several experiments performed to investigate the behavior of the complex interconnected structures found in biological neural networks. The experimental design consisted of biological human neurons stimulated by parallel signal patterns intended to simulate complex perceptions. The response patterns were analyzed with an innovative artificial neural network (ANN), called ITSOM (Inductive Tracing Self Organizing Map). This system allowed us to decode the complex neural responses arising from a mixture of different stimulations and the learned memory patterns inherent in the cell colonies. In the experiment described in this work, neurons derived from human neural stem cells were connected to a robotic actuator through the ANN analyzer to demonstrate our ability to produce useful control from simulated perceptions stimulating the cells. Preliminary results showed that in vitro human neuron colonies can learn to reply selectively to different stimulation patterns and that response signals can effectively be decoded to operate a minirobot. Lastly, the performance of the hybrid system is evaluated quantitatively and potential future work is discussed.
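    ITSOM is the authors' own decoder and is not described here in enough detail to reproduce; purely as a hedged stand-in for the overall pipeline (stimulate, record response patterns, decode, drive the actuator), the sketch below uses a simple nearest-prototype decoder on simulated multi-channel responses. The channel count, command names, and all signals are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS, N_CLASSES, N_TRAIN = 16, 3, 60
COMMANDS = ["forward", "left", "right"]          # hypothetical minirobot commands

# Simulated training data: recorded response patterns for each known stimulation class.
centroids_true = rng.normal(0, 1, (N_CLASSES, N_CHANNELS))
X = np.vstack([centroids_true[k] + rng.normal(0, 0.3, (N_TRAIN, N_CHANNELS))
               for k in range(N_CLASSES)])
y = np.repeat(np.arange(N_CLASSES), N_TRAIN)

# "Training": store one prototype response per stimulation pattern.
prototypes = np.array([X[y == k].mean(axis=0) for k in range(N_CLASSES)])

def decode(response):
    """Map a recorded response pattern to the command of the nearest learned prototype."""
    k = np.argmin(np.linalg.norm(prototypes - response, axis=1))
    return COMMANDS[k]

# A new (noisy) response from the culture is decoded into an actuator command.
print(decode(centroids_true[1] + rng.normal(0, 0.3, N_CHANNELS)))  # typically "left"
```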

    Chaotic exploration and learning of locomotor behaviours

    Recent developments in the embodied approach to understanding the generation of adaptive behaviour suggest that the design of adaptive neural circuits for rhythmic motor patterns should not be done in isolation from an appreciation, and indeed exploitation, of neural-body-environment interactions. Utilising spontaneous mutual entrainment between neural systems and physical bodies provides a useful passage to the regions of phase space which are naturally structured by neural-body-environment interactions. A growing body of work has provided evidence that chaotic dynamics can be useful in allowing embodied systems to spontaneously explore potentially useful motor patterns. However, until now there has been no general integrated neural system that allows goal-directed, online, real-time exploration and capture of motor patterns without recourse to external monitoring, evaluation or training methods. For the first time, we introduce such a system in the form of a fully dynamic neural system, exploiting intrinsic chaotic dynamics, for the exploration and learning of the possible locomotion patterns of an articulated robot of arbitrary morphology in an unknown environment. The controller is modelled as a network of neural oscillators which are coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by a chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organised dynamics, each of which is a candidate for a locomotion behaviour. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states using its intrinsic chaotic dynamics as a driving force, and stabilises the system onto one of the states matching the given goal criteria. In order to improve the sustainability of useful transient patterns, sensory homeostasis has been introduced, which results in an increased diversity of motor outputs, thus achieving multi-scale exploration. A rhythmic pattern discovered by this process is memorised and sustained by changing the wiring between initially disconnected oscillators using an adaptive synchronisation method. The dynamical nature of the weak coupling through physical embodiment allows this adaptive weight learning to be easily integrated, thus forming a continuous exploration-learning system. Our results show that the novel neuro-robotic system is able to create and learn a number of emergent locomotion behaviours for a wide range of body configurations and physical environments, and can re-adapt after sustaining damage. The implications and analyses of these results for investigating the generality and limitations of the proposed system are discussed.
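    As a toy illustration of chaotic exploration with adaptive bifurcation (not the embodied, oscillator-network system described above), the sketch below lets a logistic map wander chaotically until its state happens to satisfy a goal criterion, and then adapts the bifurcation parameter so that a stable fixed point reproduces the discovered state; the goal band and adaptation rate are arbitrary assumptions.

```python
import numpy as np

GOAL_LOW, GOAL_HIGH = 0.60, 0.65   # hypothetical goal band for the motor variable
r, x = 3.95, 0.512                 # start deep in the chaotic regime
r_target = None

for t in range(3000):
    x = r * x * (1.0 - x)                          # logistic-map "neural" dynamics
    if r_target is None and GOAL_LOW <= x <= GOAL_HIGH:
        # Chaotic exploration has hit a state satisfying the goal criterion:
        # pick the parameter whose stable fixed point x* = 1 - 1/r reproduces it.
        r_target = 1.0 / (1.0 - x)
    if r_target is not None:
        r += 0.01 * (r_target - r)                 # adaptive bifurcation toward stability

# With r close to r_target (< 3) the orbit converges to the stable fixed point
# inside the goal band: chaos found the pattern, the bifurcation froze it.
print(round(x, 3), round(r, 3))
```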

    Opinions and Outlooks on Morphological Computation

    Morphological Computation is based on the observation that biological systems seem to carry out relevant computations with their morphology (physical body) in order to successfully interact with their environments. This can be observed in a whole range of systems and at many different scales. It has been studied in animals (e.g., while running, the functionality of coping with impact and slight unevenness in the ground is "delivered" by the shape of the legs and the damped elasticity of the muscle-tendon system) and in plants, and it has also been observed at the cellular and even at the molecular level, as seen, for example, in spontaneous self-assembly. The concept of morphological computation has served as an inspirational resource for building bio-inspired robots, designing novel approaches to support systems in health care, and implementing computation with natural systems, but also in art and architecture. As a consequence, the field is highly interdisciplinary, which is also nicely reflected in the wide range of authors featured in this e-book. We have contributions from robotics, mechanical engineering, health, architecture, biology, philosophy, and other fields.
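    The running-leg example above can be made concrete with a minimal passive-dynamics sketch: a mass on a damped spring absorbing a landing impact with no controller in the loop, so the "computation" of compliance is performed by the mechanics themselves. All physical constants below are arbitrary assumptions.

```python
# Passive spring-damper "leg": the body's mechanics absorb the impact,
# so no explicit controller has to compute a compensating force.
M, K, C, DT = 10.0, 2000.0, 120.0, 0.001   # mass (kg), stiffness, damping, step (s)
G = 9.81
x, v = 0.0, -1.0     # touchdown at the spring's rest length, 1 m/s downward velocity

trajectory = []
for step in range(int(2.0 / DT)):
    a = (-K * x - C * v) / M - G     # spring and damper forces plus gravity; no control input
    v += a * DT
    x += v * DT
    trajectory.append(x)

# The oscillation settles around the static equilibrium x = -M*G/K
# purely through the leg's passive dynamics.
print(round(trajectory[-1], 4), round(-M * G / K, 4))
```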

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple for humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain is still a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neuroscience have unveiled that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, and this process has recently accelerated with the initiation of large brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable things to inspire the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information; here we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. For each topic, we review its biological background, fundamental properties, potential applications to AI, and future development. Comment: 59 pages, 5 figures, review article.
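    Of the six first principles listed, the attractor network is the easiest to make concrete in a few lines. The sketch below is a standard textbook Hopfield-style associative memory, offered only as a generic illustration of attractor dynamics rather than anything specific to this review; network size, number of patterns, and the corruption level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                                    # 100 binary units, 5 stored memories
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: each pattern becomes a fixed-point attractor of the dynamics.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Iterate the network; the state falls into the nearest stored attractor."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                            # break ties deterministically
    return s

# Corrupt 20% of one memory and let the attractor dynamics clean it up.
cue = patterns[2].copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1
overlap = (recall(cue) @ patterns[2]) / N        # 1.0 means perfect recall
print(overlap)
```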