
    Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning

    Cortical plasticity is one of the main features that enables our ability to learn and adapt to our environment. Indeed, the cerebral cortex self-organizes through structural and synaptic plasticity mechanisms that are very likely at the basis of a remarkable characteristic of human brain development: multimodal association. Despite the diversity of sensory modalities, such as sight, sound, and touch, the brain arrives at the same concepts (convergence). Moreover, biological observations show that one modality can activate the internal representation of another modality when the two are correlated (divergence). In this work, we propose the Reentrant Self-Organizing Map (ReSOM), a brain-inspired neural system based on reentry theory, using Self-Organizing Maps and Hebbian-like learning. We propose and compare different computational methods for unsupervised learning and inference, then quantify the gain of the ReSOM in a multimodal classification task. The divergence mechanism is used to label one modality based on the other, while the convergence mechanism is used to improve the overall accuracy of the system. We perform our experiments on a constructed written/spoken digits database and a DVS/EMG hand gestures database. The proposed model is implemented on a cellular neuromorphic architecture that enables distributed computing with local connectivity. We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but is learned over the course of the system's experience through self-organization.
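
    A rough picture of the divergence mechanism described above: two unimodal maps linked by a Hebbian-like association matrix, so that activity in one map recalls the associated unit in the other. The sketch below is a minimal illustration under assumed names (som_a, som_b, W), with random arrays standing in for trained maps and correlated sensory pairs; it is not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bmu(som, x):
        """Index of the best-matching unit of a map for input x."""
        return int(np.argmin(np.linalg.norm(som - x, axis=1)))

    # Two toy unimodal SOMs (assumed already trained), one per modality.
    n_units, dim_a, dim_b = 16, 8, 12
    som_a = rng.normal(size=(n_units, dim_a))   # e.g. visual map
    som_b = rng.normal(size=(n_units, dim_b))   # e.g. auditory map

    # Hebbian-like association between the two maps: co-activation of
    # best-matching units strengthens their link.
    W = np.zeros((n_units, n_units))
    eta = 0.1
    for xa, xb in zip(rng.normal(size=(100, dim_a)),
                      rng.normal(size=(100, dim_b))):
        W[bmu(som_a, xa), bmu(som_b, xb)] += eta

    # Divergence: activity in map A recalls its associated unit in map B,
    # which could then supply a label for the unlabeled modality.
    xa = rng.normal(size=dim_a)
    recalled_b = int(np.argmax(W[bmu(som_a, xa)]))
    ```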

    Active Perception by Interaction with Other Agents in a Predictive Coding Framework: Application to Internet of Things Environment

    Predicting the state of an agent's partially observable environment is a problem of interest in many domains. Typically, in the real world, the environment consists of multiple agents, not necessarily working towards a common goal. Though the goal and sensory observations of each agent are unique, one agent might have acquired knowledge that benefits the others. In essence, the knowledge base regarding the environment is distributed among the agents. An agent can sample this distributed knowledge base by communicating with other agents. Since an agent does not store the entire knowledge base, its model can be small, and its inference can be efficient and fault-tolerant. However, the agent needs to learn when, with whom, and what to communicate (in general, interact) under different situations. This dissertation presents an agent model that actively and selectively communicates with other agents to predict the state of its environment efficiently. Communication is a challenge when the internal models of other agents are unknown and unobservable. The proposed agent learns communication policies as mappings from its belief state to when, with whom, and what to communicate. The policies are learned using predictive coding in an online manner, without any reinforcement. The proposed agent model is evaluated on widely studied applications, such as human activity recognition from multimodal, multisource, and heterogeneous sensor data, and transferring knowledge across sensor networks. In the applications, either each sensor or each sensor network is assumed to be monitored by an agent. The recognition accuracy on benchmark datasets is comparable to the state of the art, even though our model has significantly fewer parameters and infers the state in a localized manner. The learned policy reduces the number of communications. The agent is tolerant to communication failures and can recognize the reliability of each agent from its communication messages. To the best of our knowledge, this is the first work on learning communication policies by an agent for predicting the state of its environment.
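
    The when/with-whom/what decomposition can be made concrete with a toy policy that queries a peer only when the agent's own predictive uncertainty is high, and that adjusts a per-peer reliability estimate from prediction errors rather than from a reinforcement signal. All names below (CommunicatingAgent, maybe_query) are hypothetical; this is only loosely inspired by the dissertation's learned policies, not a reproduction of them.

    ```python
    import numpy as np

    class CommunicatingAgent:
        """Toy communication policy: ask for help only when uncertain,
        pick the most reliable peer, and update that peer's reliability
        from the prediction error (no reinforcement signal)."""

        def __init__(self, n_peers, var_threshold=0.5, lr=0.1):
            self.reliability = np.ones(n_peers)
            self.var_threshold = var_threshold
            self.lr = lr

        def maybe_query(self, own_prediction, own_variance, peer_reports):
            # "when": communicate only under high predictive uncertainty
            if own_variance < self.var_threshold:
                return own_prediction
            # "with whom": the peer currently estimated as most reliable
            whom = int(np.argmax(self.reliability))
            report = peer_reports[whom]
            # predictive-coding-style update: the error between the agent's
            # own prediction and the peer's report reweights that peer
            self.reliability[whom] *= np.exp(-self.lr * abs(own_prediction - report))
            return report

    agent = CommunicatingAgent(n_peers=3)
    print(agent.maybe_query(1.0, own_variance=0.9, peer_reports=[1.2, 0.4, 2.0]))
    ```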

    Dynamic Bayesian Collective Awareness Models for a Network of Ego-Things

    A novel approach is proposed for multimodal collective awareness (CA) of multiple networked intelligent agents. Each agent is here considered as an Internet-of-Things (IoT) node equipped with machine learning capabilities; CA aims to provide the network with updated causal knowledge of the state of execution of actions of each node performing a joint task, with particular attention to anomalies that can arise. Data-driven dynamic Bayesian models learned from multisensory data recorded during the normal realization of a joint task (agent network experience) are used for distributed state estimation of agents and detection of abnormalities. A set of switching dynamic Bayesian network (DBN) models collectively learned in a training phase, each related to a particular sensory modality, is used to allow each agent in the network to perform synchronous estimation of possible abnormalities occurring when a new task of the same type is jointly performed. Collective DBN (CDBN) learning is performed by unsupervised clustering of generalized errors (GEs) obtained from a starting generalized model. A growing neural gas (GNG) algorithm is used as a basis to learn the discrete switching variables at the semantic level. Conditional probabilities linking nodes in the CDBN models are estimated using the obtained clusters. CDBN models are associated with a Bayesian inference method, namely the distributed Markov jump particle filter (D-MJPF), which is employed for joint state estimation and abnormality detection. The effects of networking protocols and of communications on the estimation of state and abnormalities are analyzed. Performance is evaluated using a small network of two autonomous vehicles performing joint navigation tasks in a controlled environment. In the proposed method, the sharing of observations is first considered under ideal conditions, and then the effects of a wireless communication channel are analyzed for the collective abnormality estimation of the agents. A Rician wireless channel and two protocols (IEEE 802.11p and IEEE 802.15.4), along with different channel conditions, are considered as well.
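
    One filtering step illustrates how the discrete switching variables, continuous states, and abnormality scores interact. The sketch below is a heavily simplified stand-in for the paper's D-MJPF: the learned GNG clusters are replaced by a fixed set of modes, and all names and parameters (mjpf_step, trans, drift) are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_particles, n_modes = 200, 3
    modes = rng.integers(0, n_modes, n_particles)     # discrete switching variables
    states = rng.normal(0.0, 1.0, n_particles)        # continuous states
    trans = np.full((n_modes, n_modes), 1 / n_modes)  # mode transition matrix
    drift = np.array([-0.5, 0.0, 0.5])                # per-mode dynamics

    def mjpf_step(states, modes, obs, sigma=0.3):
        # jump: sample the next discrete mode, then propagate each state
        modes = np.array([rng.choice(n_modes, p=trans[m]) for m in modes])
        states = states + drift[modes] + rng.normal(0, 0.1, len(states))
        # the generalized error is the innovation between prediction and
        # measurement; low average likelihood signals an abnormality
        w = np.exp(-0.5 * ((obs - states) / sigma) ** 2)
        abnormality = -np.log(w.mean() + 1e-12)
        idx = rng.choice(len(states), len(states), p=w / w.sum())  # resample
        return states[idx], modes[idx], abnormality

    states, modes, score = mjpf_step(states, modes, obs=0.2)
    ```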

    Sensor Fusion in the Perception of Self-Motion

    This dissertation was written at the Max Planck Institute for Biological Cybernetics (Max-Planck-Institut für Biologische Kybernetik) in Tübingen, in the department of Prof. Dr. Heinrich H. Bülthoff. The work had university support from Prof. Dr. Günther Palm (University of Ulm, Abteilung Neuroinformatik). The main evaluators were Prof. Dr. Günther Palm, Prof. Dr. Wolfgang Becker (University of Ulm, Sektion Neurophysiologie), and Prof. Dr. Heinrich Bülthoff.

    The goal of this thesis was to investigate the integration of different sensory modalities in the perception of self-motion using psychophysical methods. Experiments with healthy human participants were to be designed for and performed in the Motion Lab, which is equipped with a simulator platform and projection screen. Results from the psychophysical experiments were to be used to refine models of the multisensory integration process, with an emphasis on Bayesian (maximum likelihood) integration mechanisms.

    To put the psychophysical experiments into the larger framework of research on multisensory integration in the brain, results of neuroanatomical and neurophysiological experiments on multisensory integration are also reviewed.
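
    The maximum-likelihood integration referred to here has a standard closed form for independent Gaussian cues: each cue is weighted by its inverse variance, and the fused variance is never larger than that of the most reliable cue. A minimal sketch, with hypothetical numbers standing in for visual and vestibular heading estimates:

    ```python
    import numpy as np

    def fuse_cues(estimates, variances):
        """Maximum-likelihood fusion of independent Gaussian cues:
        weight each cue by its inverse variance (its reliability)."""
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = float(np.sum(w * np.asarray(estimates)) / np.sum(w))
        fused_var = float(1.0 / np.sum(w))   # <= the best single cue's variance
        return fused, fused_var

    # e.g. visual vs. vestibular heading (degrees); numbers are made up
    print(fuse_cues([10.0, 4.0], [4.0, 1.0]))   # -> (5.2, 0.8)
    ```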

    Sensorimotor representation learning for an "active self" in robots: A model survey

    Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people, and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes of the mechanisms underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with their human counterparts. Finally, we analyse what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of a sense of self in artificial agents by developing sensory representations through self-exploration.

    Psychophysiology-based QoE assessment: a survey

    We present a survey of psychophysiology-based assessment for quality of experience (QoE) in advanced multimedia technologies. We provide a classification of methods relevant to QoE and describe related psychological processes, experimental design considerations, and signal analysis techniques. We summarize multimodal techniques and discuss several important aspects of psychophysiology-based QoE assessment, including the synergies with psychophysical assessment and the need for standardized experimental design. This survey is not exhaustive but serves as a guideline for those interested in further exploring this emerging field of research.
