Learning from sensory predictions for autonomous and adaptive exploration of object shape with a tactile robot
Humans use information from sensory predictions, together with current observations, for the optimal exploration and recognition of their surrounding environment. In this work, two novel adaptive perception strategies are proposed for accurate and fast exploration of object shape with a robotic tactile sensor. These strategies, called (1) adaptive weighted prior and (2) adaptive weighted posterior, combine tactile sensory predictions and current sensor observations to autonomously adapt the accuracy and speed of active Bayesian perception in object exploration tasks. Sensory predictions, obtained from a forward model, use a novel Predicted Information Gain method. These predictions are used by the tactile robot to analyse ‘what would have happened’ if certain decisions ‘would have been made’ at previous decision times. The accuracy of predictions is evaluated and controlled by a confidence parameter, to ensure that the adaptive perception strategies rely more on predictions when they are accurate, and more on current sensory observations otherwise. This work is systematically validated with the recognition of angle and position data extracted from the exploration of object shape, using a biomimetic tactile sensor and a robotic platform. The exploration task implements the contour following procedure used by humans to extract object shape with the sense of touch. The validation process is performed with the adaptive weighted strategies and with active perception alone. The adaptive approach achieved higher angle accuracy (2.8 deg) over active perception (5 deg). The position accuracy was similar for all perception methods (0.18 mm). The reaction time, or number of tactile contacts needed by the tactile robot to make a decision, was improved by the adaptive perception (1 tap) over active perception (5 taps).
The results show that the adaptive perception strategies can enable future robots to adapt their performance, while improving the trade-off between accuracy and reaction time, for tactile exploration, interaction and recognition tasks.
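The adaptive weighted posterior idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the hypothesis set, sensor likelihoods and confidence value below are made-up toy numbers, and the blending rule is one plausible reading of "combine predictions and observations, weighted by a confidence parameter".

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One Bayesian perception step: multiply the prior by the
    likelihood of the current tactile observation, then normalise."""
    post = prior * likelihood
    return post / post.sum()

def adaptive_weighted_posterior(posterior, predicted, confidence):
    """Blend the current posterior with a forward-model prediction.
    confidence in [0, 1]: higher values trust the prediction more."""
    blended = confidence * predicted + (1.0 - confidence) * posterior
    return blended / blended.sum()

# Toy example: three candidate angle classes (illustrative values only).
prior = np.array([1/3, 1/3, 1/3])
likelihood = np.array([0.7, 0.2, 0.1])   # current tap favours class 0
predicted = np.array([0.6, 0.3, 0.1])    # forward-model prediction

posterior = bayes_update(prior, likelihood)
blended = adaptive_weighted_posterior(posterior, predicted, confidence=0.5)
```

When the confidence parameter drops towards 0, the blend collapses to the observation-driven posterior, which matches the abstract's statement that the robot relies on current observations when predictions are inaccurate.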
Edge and plane classification with a biomimetic iCub fingertip sensor
The exploration and interaction of humanoid robots with the environment through tactile sensing is an important task for achieving truly autonomous agents. Recently, much research has focused on the development of new technologies for tactile sensors and new methods for tactile exploration. Edge detection is one of the tasks required in robots and humanoids to explore and recognise objects. In this work, we propose a method for edge and plane classification with a biomimetic iCub fingertip using a probabilistic approach. The iCub fingertip mounted on an xy-table robot is able to tap and collect the data from the surface and edge of a plastic wall. Using a maximum likelihood classifier, the xy-table robot detects when the iCub fingertip has reached the edge of the object. The study presented here is also biologically inspired by the tactile exploration performed in animals.
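A maximum likelihood classifier of the kind mentioned above can be sketched in a few lines. The feature (mean tap pressure), the Gaussian class-conditional model and all training values below are hypothetical stand-ins, not the paper's actual data or features.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_class(samples):
    """Fit a Gaussian (mean, std) to one class's training samples."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, math.sqrt(var)

def classify(x, models):
    """Maximum likelihood decision over class-conditional Gaussians."""
    return max(models, key=lambda c: gaussian_pdf(x, *models[c]))

# Hypothetical training data: mean fingertip pressure per tap.
train = {
    "plane": [0.9, 1.0, 1.1, 0.95, 1.05],   # full contact -> higher pressure
    "edge":  [0.4, 0.5, 0.45, 0.55, 0.5],   # partial contact -> lower pressure
}
models = {c: fit_class(v) for c, v in train.items()}
```

A new tap is then assigned to whichever class gives its feature the higher likelihood, e.g. `classify(0.48, models)` falls in the "edge" class under these toy models.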
Simultaneous localisation and mapping on a multi-degree of freedom biomimetic whiskered robot
A biomimetic mobile robot called “Shrewbot” has been built as part of a neuroethological study of the mammalian facial whisker sensory system. This platform has been used to further evaluate the problem space of whisker-based tactile Simultaneous Localisation And Mapping (tSLAM). Shrewbot uses a biomorphic 3-dimensional array of active whiskers and a model of action selection based on tactile sensory attention to explore a circular walled arena sparsely populated with simple geometric shapes. Datasets taken during this exploration have been used to parameterise an approach to localisation and mapping based on probabilistic occupancy grids. We present the results of this work and conclude that simultaneous localisation and mapping is possible given only noisy odometry and tactile information from a 3-dimensional array of active biomimetic whiskers and no prior information about features in the environment.
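The probabilistic occupancy grids mentioned above are commonly maintained as recursive log-odds updates. The sketch below shows that standard scheme only; the grid size, cell coordinates and sensor-model probabilities are illustrative assumptions, not values from the Shrewbot experiments.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal log-odds occupancy grid (illustrative sketch)."""
    def __init__(self, w, h):
        # 0.0 log-odds corresponds to p = 0.5, i.e. "unknown".
        self.logodds = [[0.0] * w for _ in range(h)]

    def update(self, x, y, p_hit):
        # Recursive Bayesian update: add the log-odds of the
        # inverse sensor model for this measurement.
        self.logodds[y][x] += logit(p_hit)

    def prob(self, x, y):
        l = self.logodds[y][x]
        return 1.0 / (1.0 + math.exp(-l))

grid = OccupancyGrid(10, 10)
for _ in range(3):                  # three whisker contacts at the same cell
    grid.update(4, 2, p_hit=0.7)    # noisy "occupied" measurements
grid.update(4, 2, p_hit=0.4)        # one "probably free" measurement
```

Repeated noisy contacts drive a cell's occupancy probability well away from 0.5, which is what lets a map emerge from individually unreliable whisker detections.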
Memory and mental time travel in humans and social robots.
From neuroscience, brain imaging and the psychology of memory, we are beginning to assemble an integrated theory of the brain subsystems and pathways that allow the compression, storage and reconstruction of memories for past events and their use in contextualizing the present and reasoning about the future: mental time travel (MTT). Using computational models, embedded in humanoid robots, we are seeking to test the sufficiency of this theoretical account and to evaluate the usefulness of brain-inspired memory systems for social robots. In this contribution, we describe the use of machine learning techniques (Gaussian process latent variable models) to build a multimodal memory system for the iCub humanoid robot and summarize results of the deployment of this system for human-robot interaction. We also outline the further steps required to create a more complete robotic implementation of human-like autobiographical memory and MTT. We propose that generative memory models, such as those that form the core of our robot memory system, can provide a solution to the symbol grounding problem in embodied artificial intelligence. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Funding. The preparation of this chapter was supported by funding from the EU Seventh Framework Programme as part of the projects Experimental Functional Android Assistant (EFAA, FP7-ICT-270490) and What You Say Is What You Did (WYSIWYD, FP7-ICT-612139) and by the EU H2020 Programme as part of the Human Brain Project (HBP-SGA1, 720270; HBP-SGA2, 785907).
Acknowledgements. The authors are grateful to Paul Verschure, Peter Dominey, Giorgio Metta, Yiannis Demiris and the other members of the WYSIWYD and EFAA consortia; to members of the HBP EPISENSE group; and to our colleagues at the University of Sheffield who have helped us to develop memory systems for the iCub, particularly Luke Boorman, Harry Jackson and Matthew Evans. The Sheffield iCub was purchased with the support of the UK Engineering and Physical Sciences Research Council (EPSRC).
The effect of whisker movement on radial distance estimation: A case study in comparative robotics
Whisker movement has been shown to be under active control in certain specialist animals such as rats and mice. Though this whisker movement is well characterized, the role and effect of this movement on subsequent sensing is poorly understood. One method for investigating this phenomenon is to generate artificial whisker deflections with robotic hardware under different movement conditions. A limitation of this approach is that assumptions must be made in the design of any artificial whisker actuators, which will impose certain restrictions on the whisker-object interaction. In this paper we present three robotic whisker platforms, each with different mechanical whisker properties and actuation mechanisms. A feature-based classifier is used to simultaneously discriminate radial distance to contact and contact speed for the first time. We show that whisker-object contact speed predictably affects deflection magnitudes, invariant of whisker material or whisker movement trajectory. We propose that rodent whisker control allows the animal to improve sensing accuracy by regulating contact-speed-induced touch-to-touch variability.
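A feature-based classifier over deflection signals, as described above, can be sketched as feature extraction followed by nearest-neighbour matching against labelled templates. The features (peak magnitude, time-to-peak), the templates and the signal below are all invented for illustration; the paper's actual features and classifier may differ.

```python
import math

def features(deflection):
    """Two simple features from a whisker deflection time series:
    peak magnitude and time-to-peak (sample index)."""
    peak = max(deflection, key=abs)
    return (abs(peak), deflection.index(peak))

def classify(sample, templates):
    """Nearest neighbour over labelled templates, jointly recovering
    radial distance and contact speed from one deflection trace."""
    f = features(sample)

    def dist(t):
        return math.hypot(f[0] - t[0][0], f[1] - t[0][1])

    return min(templates, key=dist)[1]

# Hypothetical templates: (features, (radial_distance_mm, speed_mm_s)).
templates = [
    ((2.0, 5), (10, 50)),
    ((1.0, 5), (20, 50)),
    ((3.0, 3), (10, 100)),
]
signal = [0.1, 0.5, 1.1, 0.9, 0.4]   # synthetic deflection trace
label = classify(signal, templates)
```

Because both distance and speed appear in the template labels, a single lookup discriminates them simultaneously, mirroring the joint discrimination reported in the abstract.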
BRAHMS: Novel middleware for integrated systems computation
Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in seconding biomimetic control strategies for use on practical robots. Together with the ubiquitous desire to make good on past software development effort, these trends are throwing up new challenges of intellectual and technological integration (for example across scales, across disciplines, and even across time) - challenges that are unmet by existing software frameworks. Here, we outline these challenges in detail, and go on to describe a newly developed software framework, BRAHMS, that meets them. BRAHMS is a tool for integrating computational process modules into a viable, computable system: its generality and flexibility facilitate integration across barriers, such as those described above, in a coherent and effective way. We go on to describe several cases where BRAHMS has been successfully deployed in practical situations. We also show excellent performance in comparison with a monolithic development approach. Additional benefits of developing in the framework include source code self-documentation, automatic coarse-grained parallelisation, cross-language integration, data logging, and performance monitoring, and will include dynamic load-balancing and 'pause and continue' execution. BRAHMS is built on the nascent, and similarly general purpose, model markup language, SystemML. This will, in future, also facilitate repeatability and accountability (same answers ten years from now), transparent automatic software distribution, and interfacing with other SystemML tools.
A robot trace maker: modeling the fossil evidence of early invertebrate behavior.
The study of trace fossils, the fossilized remains of animal behavior, reveals interesting parallels with recent research in behavior-based robotics. This article reports robot simulations of the meandering foraging trails left by early invertebrates that demonstrate that such trails can be generated by mechanisms similar to those used for robot wall-following. We conclude with the suggestion that the capacity for intelligent behavior shown by many behavior-based robots is similar to that of animals of the late Precambrian and early Cambrian periods, approximately 530 to 565 million years ago.
Active sensorimotor control for tactile exploration
In this paper, we present a novel and robust Bayesian approach for autonomous active exploration of unknown objects using tactile perception and sensorimotor control. Despite recent advances in tactile sensing, robust active exploration remains a challenging problem, which is a major hurdle to the practical deployment of tactile sensors in robots. Our proposed approach is based on a Bayesian perception method that actively controls the sensor with local small repositioning movements to reduce perception uncertainty, followed by explorative movements based on the outcome of each perceptual decision making step. Two sensorimotor control strategies are proposed for improving the accuracy and speed of the active exploration that weight the evidence from previous exploratory steps through either a weighted prior or weighted posterior. The methods are validated both off-line and in real-time on a contour following exploratory procedure. Results clearly demonstrate improvements in both accuracy and exploration time when using the proposed active methods compared to passive perception. Our work demonstrates that active perception has the potential to enable robots to perform robust autonomous tactile exploration in natural environments.
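The "perceptual decision making step" above typically means accumulating evidence over taps until one hypothesis's belief crosses a threshold. The sketch below shows that generic stopping rule; the per-tap likelihoods, the two-hypothesis setup and the 0.9 threshold are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def active_perception(likelihoods, prior, threshold=0.9):
    """Sequential Bayesian evidence accumulation: update the belief
    after each tap and stop once one hypothesis exceeds the belief
    threshold. Returns (decision_index, taps_used)."""
    belief = np.asarray(prior, dtype=float)
    for tap, lik in enumerate(likelihoods, start=1):
        belief = belief * np.asarray(lik, dtype=float)
        belief /= belief.sum()              # renormalise the posterior
        if belief.max() >= threshold:
            return int(belief.argmax()), tap
    return int(belief.argmax()), len(likelihoods)

# Toy per-tap likelihoods for two hypotheses; each row is one tap.
taps = [
    [0.6, 0.4],
    [0.7, 0.3],
    [0.8, 0.2],
]
decision, n_taps = active_perception(taps, prior=[0.5, 0.5])
```

Raising the threshold trades reaction time (more taps) for decision accuracy, which is exactly the accuracy-speed trade-off the weighted prior/posterior strategies aim to improve.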