
    Neuromorphic Computing Systems for Tactile Sensing Perception

    Touch sensing plays an important role in humans' daily life: tasks like exploring, grasping and manipulating objects deeply rely on it. As such, robots and hand prostheses endowed with the sense of touch can manipulate objects better and more easily, and physically collaborate with other agents. Towards this goal, information about touched objects and surfaces has to be inferred from raw sensor data. The orientation of edges, which is employed as a pre-processing stage in both artificial vision and touch, is a key cue for object discrimination. Inspired by the encoding of edges in human first-order tactile afferents, we developed a biologically inspired spiking model architecture that mimics human tactile perception with computational primitives implementable on low-power subthreshold neuromorphic hardware. The network uses three layers of Leaky Integrate-and-Fire neurons to distinguish different edge orientations of a bar pressed on the artificial skin of the iCub robot. We demonstrated that the network can learn the appropriate connectivity through unsupervised spike-based learning, and that the number and spatial distribution of sensitive areas within receptive fields are important for edge orientation discrimination. The unconstrained and random structure of the connectivity among layers can produce unbalanced activity in the output neurons, which are driven by a variable amount of synaptic input. We explored two mechanisms of synaptic normalization (weight normalization and homeostasis), showing how each can be useful during the learning and inference phases. With homeostasis and weight normalization, the network successfully discriminates 35 of 36 orientations (0 to 180 degrees in 5 degree steps). Beyond edge orientation discrimination, we modified the network architecture to classify six different touch modalities (poke, press, grab, squeeze, push, and rolling a wheel), demonstrating that the network can learn appropriate connectivity patterns for this classification and achieving a total accuracy of 88.3%. Furthermore, an application to tactile object shape recognition was considered because of its importance in robotic manipulation: a network with two layers of spiking neurons discriminated object shapes with 100% accuracy after being integrated with an array of 160 piezoresistive tactile sensors to which the object shapes were applied.
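    To make the two synaptic normalization mechanisms concrete, the following minimal sketch simulates a single layer of Leaky Integrate-and-Fire neurons driven by binary input spikes, with per-neuron weight normalization and a simple homeostatic threshold rule. All sizes, constants, and the exact update rules are assumptions for illustration; they are not the parameters of the iCub skin network or its neuromorphic implementation.

```python
import numpy as np

# Minimal LIF layer sketch (illustrative parameters, not the paper's values)
rng = np.random.default_rng(0)

n_inputs, n_outputs = 64, 8           # e.g. taxels -> orientation-selective cells
dt, tau_m = 1e-3, 20e-3               # time step and membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0
target_rate = 5.0                     # Hz, assumed homeostatic set point

w = rng.uniform(0.0, 0.1, size=(n_outputs, n_inputs))  # random, unconstrained start
theta = np.zeros(n_outputs)           # adaptive threshold offset (homeostasis)
v = np.zeros(n_outputs)               # membrane potentials
rate_est = np.zeros(n_outputs)        # running firing-rate estimate

def step(input_spikes):
    """Advance the LIF layer by one time step and return output spikes."""
    global v, theta, rate_est, w
    # Leaky integration of weighted input spikes
    v += dt / tau_m * (-v) + w @ input_spikes
    out_spikes = (v >= v_thresh + theta).astype(float)
    v[out_spikes > 0] = v_reset

    # Homeostasis: neurons firing above the target rate raise their threshold
    rate_est = 0.99 * rate_est + 0.01 * (out_spikes / dt)
    theta += 1e-4 * (rate_est - target_rate) * dt

    # Weight normalization: keep each neuron's total input weight constant,
    # so no output neuron is dominated simply by receiving more synapses
    w /= w.sum(axis=1, keepdims=True)
    return out_spikes

# Usage: feed random "taxel" spike patterns
for _ in range(1000):
    step((rng.random(n_inputs) < 0.05).astype(float))
```

    Both mechanisms address the same issue raised in the abstract, unbalanced output activity from random connectivity, but at different points: weight normalization constrains the synapses directly, while homeostasis regulates each neuron's excitability.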

    Efficient Bayesian Exploration for Soft Morphology-Action Co-optimization

    UK Agriculture and Horticulture Development Board (Project CP 172) AHD

    A biologically inspired meta-control navigation system for the Psikharpax rat robot

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations, but the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on implementing, on the Psikharpax robot, two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposal for the role of the rat prefrontal cortex in strategy shifting. Finally, such a brain-inspired meta-controller may provide an advance for learning architectures in robotics.
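    The strategy-selection idea can be sketched as a small value-learning meta-controller that keeps one preference table per detected context, so that preferences learned earlier can be restored when a known context reappears. The tabular form, the epsilon-greedy choice, and the reward scheme below are assumptions for illustration, not the exact algorithm used on Psikharpax.

```python
import random
from collections import defaultdict

STRATEGIES = ["planning", "taxon"]   # place-based planner vs. cue-guided taxon

class MetaController:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha, self.epsilon = alpha, epsilon
        # One value table per detected context: switching back to a known
        # context immediately restores the preferences learned in it.
        self.q = defaultdict(lambda: {s: 0.0 for s in STRATEGIES})

    def select(self, context):
        """Pick a strategy for the current context (epsilon-greedy)."""
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)
        table = self.q[context]
        return max(table, key=table.get)

    def update(self, context, strategy, reward):
        """Move the value of the chosen strategy toward the observed reward."""
        table = self.q[context]
        table[strategy] += self.alpha * (reward - table[strategy])

# Usage: reward could be +1 when the chosen strategy reached the goal quickly
mc = MetaController()
ctx = "context_0"                    # label coming from the context detector
s = mc.select(ctx)
mc.update(ctx, s, reward=1.0)
```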

    DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

    This paper introduces a cognitive architecture for a humanoid robot to engage in proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot's behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used by naive users.
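    The mixed-initiative loop can be illustrated with a very small sketch: a controller that accepts goals from either a human request or the robot's own agenda, and an autobiographical memory that can later be turned into a narrative. The module names and interfaces below are invented for illustration; they are not the actual DAC-h3 APIs.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AutobiographicalMemory:
    episodes: List[str] = field(default_factory=list)
    def store(self, event: str) -> None:
        self.episodes.append(event)
    def narrate(self) -> str:
        # Verbal narrative of the robot's own experience
        return ". Then ".join(self.episodes) if self.episodes else "Nothing yet."

class Robot:
    def __init__(self):
        self.memory = AutobiographicalMemory()
        self.goals: List[str] = []          # robot's own pending goals

    def step(self, human_request: Optional[str]) -> str:
        # Initiative can come from the human (a request), from the robot's
        # own goals, or default to proactive exploration of the environment.
        if human_request is not None:
            goal = human_request
        elif self.goals:
            goal = self.goals.pop(0)
        else:
            goal = "explore an unknown object"
        self.memory.store(f"I tried to {goal}")
        return goal

robot = Robot()
robot.goals.append("name the object on the table")
print(robot.step(human_request=None))
print(robot.step(human_request="give me the toy"))
print(robot.memory.narrate())
```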

    Final report key contents: main results accomplished by the EU-Funded project IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots

    This document presents the main scientific and technological achievements of the project IM-CLeVeR. The document is organised as follows: 1. Project executive summary: a brief overview of the project vision, objectives and keywords. 2. Beneficiaries of the project and contacts: a list of the project Teams (partners), Team Leaders and contacts. 3. Project context and objectives: the vision of the project and its overall objectives. 4. Overview of work performed and main results achieved: a one-page overview of the main results of the project. 5. Overview of main results per partner: a bullet-point list of the main results per partner. 6. Main achievements in detail, per partner: a thorough explanation of the main results per partner (including collaborative work), with reference to the main publications supporting them.

    Autonomous active exploration for tactile sensing in robotics

    The sense of touch permits humans to directly touch, feel and perceive the state of their surrounding environment. For an exploration task, humans normally reduce uncertainty by actively moving their hands and fingers towards more interesting locations. This active exploration is a sophisticated procedure that involves sensing and perception processes. In robotics, the sense of touch also plays an important role in the development of intelligent systems capable of safely exploring and interacting with their environment. However, robust and accurate sensing and perception methods, crucial to exploit the benefits offered by the sense of touch, still represent a major research challenge in the field of robotics. A novel method for sensing and perception in robotics using the sense of touch is developed in this research work. This novel active Bayesian perception method, biologically inspired by humans, demonstrates its superiority over passive perception, achieving accurate tactile perception with a biomimetic fingertip sensor. The accurate results are accomplished by accumulating evidence through interaction with the environment, and by actively moving the biomimetic fingertip sensor towards better locations to improve perception, as humans do. A contour-following exploration, commonly used by humans to extract object shape, was used to validate the proposed method on simulated and real objects. The exploration procedure demonstrated the ability of the tactile sensor to interact autonomously, performing active movements to improve perception of the contour of the objects being explored, in a natural way as humans do. An investigation of how combining the experience acquired along an exploration task with the active Bayesian perception process affects perception and decisions is also presented. This investigation, based on two novel sensorimotor control strategies (SMC1 and SMC2), improved the speed and accuracy of the exploration task. To exploit the benefits of the control strategies in a realistic exploration, a forward model and a confidence factor had to be learned. For that reason, a novel method based on the combination of Predicted Information Gain (PIG) and Dynamic Bayesian Networks (DBN) permitted online and adaptive learning of the forward model and confidence factor, improving the performance of the exploration task for both sensorimotor control strategies. Overall, the novel methods presented in this thesis, validated in simulated and real environments, were shown to be robust, accurate and suitable for robots performing autonomous active perception and exploration using the sense of touch.
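    The core of active Bayesian perception, accumulating evidence over successive contacts and moving the sensor to more informative locations until a belief threshold is crossed, can be sketched as follows. The Gaussian likelihood model, the decision threshold, and the simple "re-centre on the best hypothesis" movement rule are assumptions standing in for the measurement models and the SMC1/SMC2 control strategies described above.

```python
import numpy as np

rng = np.random.default_rng(1)

classes = np.array([0.0, 0.5, 1.0])   # e.g. candidate edge positions under the fingertip
true_class = 1                        # index of the true hypothesis (simulation only)
belief = np.ones(len(classes)) / len(classes)
decision_threshold = 0.95             # assumed belief threshold for a decision
sensor_noise = 0.3
location = 0.0                        # current tap location along the contour

def likelihood(observation, location):
    """P(observation | class, location): Gaussian around each class position."""
    mu = classes - location
    return np.exp(-0.5 * ((observation - mu) / sensor_noise) ** 2) + 1e-12

for tap in range(50):
    # Simulated noisy tactile observation at the current location
    obs = (classes[true_class] - location) + sensor_noise * rng.standard_normal()

    # Bayesian update: accumulate evidence over successive taps
    belief *= likelihood(obs, location)
    belief /= belief.sum()

    if belief.max() >= decision_threshold:
        print(f"decided class {belief.argmax()} after {tap + 1} taps")
        break

    # Active step: move the fingertip toward the location the current best
    # hypothesis predicts to be most informative (here, simply re-centre on it)
    location += 0.5 * (classes[belief.argmax()] - location)
```

    Passive perception corresponds to dropping the last line, taking observations from a fixed location; the active movement is what lets the belief cross the threshold in fewer, cleaner contacts.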