190 research outputs found
Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning
Cortical plasticity is one of the main features that enable us to
learn and adapt to our environment. Indeed, the cerebral cortex self-organizes
through structural and synaptic plasticity mechanisms that very likely underlie
a remarkable characteristic of human brain development: multimodal association.
Despite the diversity of sensory modalities, such as sight, sound, and touch,
the brain arrives at the same concepts (convergence). Moreover, biological
observations show that one modality can activate the internal representation of
another when the two are correlated (divergence). In this work, we propose the Reentrant
Self-Organizing Map (ReSOM), a brain-inspired neural system based on the
reentry theory using Self-Organizing Maps and Hebbian-like learning. We propose
and compare different computational methods for unsupervised learning and
inference, then quantify the gain of the ReSOM in a multimodal classification
task. The divergence mechanism is used to label one modality based on the
other, while the convergence mechanism is used to improve the overall accuracy
of the system. We perform our experiments on a constructed written/spoken
digits database and a DVS/EMG hand gestures database. The proposed model is
implemented on a cellular neuromorphic architecture that enables distributed
computing with local connectivity. We show the gain of the so-called hardware
plasticity induced by the ReSOM, where the system's topology is not fixed by
the user but learned through the system's experience via self-organization.
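The convergence/divergence mechanisms described above can be illustrated with two toy Self-Organizing Maps linked by Hebbian-like co-activation counts. This is a minimal sketch under assumed parameters (grid size, learning-rate schedule, synthetic two-concept data); it is not the authors' ReSOM implementation or datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    """Train a small Self-Organizing Map; returns unit weights of shape (grid_h*grid_w, dim)."""
    n_units = grid[0] * grid[1]
    w = rng.random((n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.1     # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))     # best-matching unit
            d = np.linalg.norm(coords - coords[bmu], axis=1)   # grid distance to BMU
            h = np.exp(-d**2 / (2 * sigma**2))                 # neighborhood function
            w += lr * h[:, None] * (x - w)
    return w

# Two toy modalities encoding the same two "concepts" in different feature spaces
concepts = rng.integers(0, 2, 200)
mod_a = concepts[:, None] + 0.1 * rng.standard_normal((200, 3))
mod_b = 2 * concepts[:, None] + 0.1 * rng.standard_normal((200, 5))

som_a, som_b = train_som(mod_a), train_som(mod_b)

# Hebbian-like association: count how often units in the two maps win together
hebb = np.zeros((16, 16))
for xa, xb in zip(mod_a, mod_b):
    ba = np.argmin(np.linalg.norm(som_a - xa, axis=1))
    bb = np.argmin(np.linalg.norm(som_b - xb, axis=1))
    hebb[ba, bb] += 1

# Divergence: a sample in modality A activates its associated unit in modality B
xa = mod_a[0]
ba = np.argmin(np.linalg.norm(som_a - xa, axis=1))
bb_pred = int(np.argmax(hebb[ba]))
```

In the same spirit as the abstract, labeling one modality from the other amounts to propagating labels through the learned `hebb` links, and convergence corresponds to combining both maps' activations at inference.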
Imitation and Mirror Systems in Robots through Deep Modality Blending Networks
Learning to interact with the environment not only empowers the agent with
manipulation capability but also generates information to facilitate building
of action understanding and imitation capabilities. This seems to be a strategy
adopted by biological systems, in particular primates, as evidenced by the
existence of mirror neurons that seem to be involved in multi-modal action
understanding. How to benefit from the interaction experience of the robots to
enable understanding actions and goals of other agents is still a challenging
question. In this study, we propose a novel method, deep modality blending
networks (DMBN), that creates a common latent space from multi-modal experience
of a robot by blending multi-modal signals with a stochastic weighting
mechanism. We show for the first time that deep learning, when combined with a
novel modality blending scheme, can facilitate action recognition and produce
structures to sustain anatomical and effect-based imitation capabilities. Our
proposed system can be conditioned on any desired sensory/motor value at any
time-step and can generate a complete multi-modal trajectory consistent with
the desired conditioning in parallel, avoiding the accumulation of prediction
errors. We further show that, given desired images from different
perspectives, i.e. images generated by the observation of other robots placed
on different sides of the table, our system could generate image and joint
angle sequences that correspond to either anatomical or effect-based imitation
behavior. Overall, the proposed DMBN architecture not only serves as a
computational model for sustaining mirror neuron-like capabilities, but also
stands as a powerful machine learning architecture for high-dimensional
multi-modal temporal data with robust retrieval capabilities operating with
partial information in one or multiple modalities.
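The core stochastic-weighting idea can be sketched in a few lines: per-modality encoders map into a common latent space, and a random convex weight per sample decides how much each modality contributes, so that fixing the weight conditions the latent on a single modality. The dimensions, linear encoders, and uniform weight distribution below are illustrative assumptions, not the DMBN architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W):
    """Toy linear encoder into the shared latent space (stand-in for a deep network)."""
    return np.tanh(x @ W)

def blend(z_img, z_jnt, alpha=None):
    """Stochastic modality blending: a per-sample convex weight mixes the two
    modality codes. Fixing alpha to 1 (or 0) conditions on a single modality."""
    if alpha is None:
        alpha = rng.uniform(0, 1, size=(z_img.shape[0], 1))
    return alpha * z_img + (1 - alpha) * z_jnt

# Hypothetical dimensions: image features (8-d), joint angles (4-d), latent (6-d)
W_img = rng.standard_normal((8, 6))
W_jnt = rng.standard_normal((4, 6))

batch_img = rng.standard_normal((32, 8))
batch_jnt = rng.standard_normal((32, 4))

z_img = encode(batch_img, W_img)
z_jnt = encode(batch_jnt, W_jnt)

z_blended = blend(z_img, z_jnt)              # training-time stochastic mixture
z_from_img = blend(z_img, z_jnt, alpha=1.0)  # latent conditioned on images only
```

Training a decoder to reconstruct all modalities from `z_blended` is what would force the shared space to support retrieval from partial information, as the abstract describes.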
An integrated theory of language production and comprehension
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
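The forward-model account above has a simple computational core: an efference copy of the command is run through an internal model to predict the outcome before (or while) the action unfolds, and the prediction error drives monitoring. A minimal numerical sketch, with hypothetical linear dynamics standing in for the production system:

```python
import numpy as np

def forward_model(state, command, A, B):
    """Predict the next state from an efference copy of the motor command."""
    return A @ state + B @ command

# Hypothetical linear dynamics (A, B are illustrative, not fitted to any data)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.5],
              [1.0]])

state = np.array([1.0, 0.0])
command = np.array([0.2])

predicted = forward_model(state, command, A, B)

# The actual outcome differs from the prediction only by small execution noise
actual = A @ state + B @ command + 0.01 * np.random.default_rng(3).standard_normal(2)

prediction_error = actual - predicted  # this signal drives monitoring/correction
```

On this account, a comprehender covertly imitating a speaker would run the same kind of model over the inferred command, yielding predictions about the upcoming utterance at each linguistic level.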
WEHST: Wearable Engine for Human-Mediated Telepresence
This dissertation reports on the industrial design of a wearable computational device created to enable better emergency medical intervention in situations where electronic remote assistance is necessary. The design created for this doctoral project, which supports the practices of paramedics with mandates for search-and-rescue (SAR) in hazardous environments, contributes to the field of human-mediated teleparamedicine (HMTPM). The ethnographic and industrial design aspects of this research considered the intricate relationships at play in search-and-rescue operations, which led to the design of the system created for this project, known as WEHST: Wearable Engine for Human-Mediated Telepresence. Three case studies of different teams were carried out, each focused on improving the practices of teams of paramedics and search-and-rescue technicians who use combinations of ambulance, airplane, and helicopter transport in specific chemical, biological, radioactive, nuclear, and explosive (CBRNE) scenarios. The three paramedicine groups included were the Canadian Air Force 442 Rescue Squadron, Nelson Search and Rescue, and the British Columbia Ambulance Service Infant Transport Team. Data were gathered over a seven-year period through a variety of methods, including observation, interviews, examination of documents, and industrial design. The data collected included physiological, social, technical, and ecological information about the rescuers. Actor-network theory guided the research design, data analysis, and design synthesis. All of this led to the creation of the WEHST system. The WEHST design created in this dissertation project addresses the difficulty that case-study participants reported in using their radios in hazardous settings.
As the research identified, a means of controlling these radios without depending on hands, voice, or speech would greatly improve communication, as would wearable sensors and other computing resources that better link operators, radios, and environments. WEHST responds to this need. WEHST is an instance of industrial design for a wearable "engine" for human-mediated telepresence that includes eight interoperable families of wearable electronic modules and accompanying textiles. These make up a platform technology for modular, scalable, and adaptable toolsets for field practice, pedagogy, or research. This document details the considerations that went into the creation of the WEHST design.
Affective Computing
This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.
Artificial cognitive systems: from concept to the development of intelligent behaviors in autonomous robotics
The work presented for this habilitation à diriger des recherches builds on the principles of developmental robotics and, more specifically, on the paradigm of enaction. The idea is therefore not to develop an intelligent robot, but rather to design a robot that is capable of becoming intelligent. The originality of the work presented in this thesis lies in the decomposition of the artificial cognitive system into two distinct parts: the first groups together "unconscious" cognitive processes, while the second concerns "conscious" cognitive processes. Unconscious cognitive processes correspond to abilities (pre-programmed or learned) that operate quasi-automatically, whereas conscious cognitive processes contribute to the development and learning of new abilities. The robot's cognition is thus the result of a developmental process through which the robot gradually becomes more skilled and acquires the knowledge that allows it to interpret the world around it. This thesis is organized into three main parts. The first is a detailed curriculum vitae covering my professional career. The second presents my research activities in more depth; these have focused on the development of artificial cognitive systems applied to robotics, with applications in bipedal locomotion, perception and autonomous knowledge acquisition, and multi-robot systems and distributed intelligence. Finally, the third part is a compilation of four journal articles representative of my research as a whole.
Visual Neuroscience of Robotic Grasping