13 research outputs found

    Simultaneous localisation and mapping on a multi-degree of freedom biomimetic whiskered robot

    A biomimetic mobile robot called “Shrewbot” has been built as part of a neuroethological study of the mammalian facial whisker sensory system. This platform has been used to further evaluate the problem space of whisker-based tactile Simultaneous Localisation And Mapping (tSLAM). Shrewbot uses a biomorphic 3-dimensional array of active whiskers and a model of action selection based on tactile sensory attention to explore a circular walled arena sparsely populated with simple geometric shapes. Datasets taken during this exploration have been used to parameterise an approach to localisation and mapping based on probabilistic occupancy grids. We present the results of this work and conclude that simultaneous localisation and mapping is possible given only noisy odometry and tactile information from a 3-dimensional array of active biomimetic whiskers, with no prior information about features in the environment.
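The mapping side of the approach described above can be illustrated with a minimal log-odds occupancy-grid update. This is a generic sketch of probabilistic occupancy-grid mapping, not the paper's implementation; the grid size, cell indices, and log-odds increments below are illustrative assumptions.

```python
import numpy as np

# Hypothetical occupancy-grid update for one whisk cycle: cells where a
# whisker reported contact gain evidence of occupancy, cells swept without
# contact gain evidence of free space. Values are illustrative only.
L_OCC = 0.85    # log-odds increment for a contacted cell
L_FREE = -0.4   # log-odds decrement for a swept, contact-free cell

def update_grid(grid, contact_cells, free_cells):
    """Update a log-odds occupancy grid from one whisk cycle."""
    for (r, c) in contact_cells:
        grid[r, c] += L_OCC
    for (r, c) in free_cells:
        grid[r, c] += L_FREE
    return grid

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-grid))

grid = np.zeros((10, 10))   # all-zero log-odds = uniform prior (p = 0.5)
grid = update_grid(grid, contact_cells=[(4, 5)], free_cells=[(4, 3), (4, 4)])
p = occupancy_prob(grid)
```

Repeating this update over many whisk cycles accumulates evidence, so isolated noisy contacts are gradually outweighed by consistent ones.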

    Advancing whisker based navigation through the implementation of Bio-Inspired whisking strategies


    The robot vibrissal system: Understanding mammalian sensorimotor co-ordination through biomimetics

    Chapter 10, “The Robot Vibrissal System: Understanding Mammalian Sensorimotor Co-ordination Through Biomimetics”, by Tony J. Prescott, Ben Mitchinson, Nathan F. Lepora, Stuart P. Wilson, Sean R. Anderson, John Porrill, Paul Dean, Charles …

    Visual-tactile sensory map calibration of a biomimetic whiskered robot

    © 2016 IEEE. We present an adaptive filter model of cerebellar function applied to the calibration of a tactile sensory map to improve the accuracy of directed movements of a robotic manipulator. This is demonstrated using a platform called Bellabot that incorporates an array of biomimetic tactile whiskers, actuated using electro-active polymer artificial muscles, a camera to provide visual error feedback, and a standard industrial robotic manipulator. The algorithm learns to accommodate imperfections in the sensory map that may result from poor manufacturing tolerances or damage to the sensory array. Such an ability is an important prerequisite for robust tactile robotic systems operating in the real world for extended periods of time. In this work the sensory maps have been purposely distorted in order to evaluate the performance of the algorithm.
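The calibration loop described above can be sketched as an LMS-style adaptive filter: a learned linear correction sits on top of a distorted sensory map and is trained from visual error feedback. This is a toy illustration in the spirit of the cerebellar adaptive-filter model, not the paper's implementation; the map matrices, distortion, and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D sensor-to-motor map and a deliberate distortion,
# standing in for manufacturing error or damage to the whisker array.
true_map = np.array([[1.0, 0.1], [-0.1, 1.0]])
distorted = true_map + np.array([[0.2, 0.0], [0.0, -0.15]])

W = np.zeros((2, 2))   # learned corrective weights
eta = 0.1              # learning rate

for _ in range(2000):
    x = rng.uniform(-1, 1, size=2)    # tactile input (contact direction)
    target = true_map @ x             # where the movement should land
    actual = (distorted + W) @ x      # movement via distorted map + correction
    err = target - actual             # visual error feedback (camera)
    W += eta * np.outer(err, x)       # LMS / decorrelation-style update

# After training, the learned correction cancels the distortion.
residual = np.abs((distorted + W) - true_map).max()
```

Because the update is driven purely by the observed movement error, the same loop would absorb a distortion introduced later by damage, which is the robustness property the abstract highlights.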

    Perception of simple stimuli using sparse data from a tactile whisker array

    We introduce a new multi-element sensory array built from tactile whiskers and modelled on the mammalian whisker sensory system. The new array adds, over previous designs, an actuated degree of freedom corresponding approximately to the mobility of the mystacial pad of the animal. We also report on its performance in a preliminary test of simultaneous identification and localisation of simple stimuli (spheres and a plane). The sensory processing system uses prior knowledge of the set of possible stimuli to generate percepts of the form and location of extensive stimuli from sparse and highly localised sensory data. Our results suggest that the additional degree of freedom has the potential to offer a benefit to perception accuracy for this type of sensor. © 2013 Springer-Verlag Berlin Heidelberg
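Using prior knowledge of a small set of possible stimuli to interpret sparse contacts, as described above, can be illustrated with a recursive Bayesian update over a hypothesis set. The hypothesis names and likelihood values below are toy assumptions for illustration, not the paper's model.

```python
import numpy as np

# Candidate stimuli: form + location hypotheses (toy set).
hypotheses = ["sphere_near", "sphere_far", "plane"]
prior = np.ones(3) / 3.0

# Toy likelihoods P(contact feature | hypothesis) for two whisker
# deflection features; values are illustrative assumptions.
likelihood = {
    "short_deflection": np.array([0.7, 0.2, 0.4]),
    "long_deflection":  np.array([0.3, 0.8, 0.6]),
}

def bayes_update(posterior, feature):
    """One Bayes-rule update from a single sparse contact observation."""
    unnorm = posterior * likelihood[feature]
    return unnorm / unnorm.sum()

posterior = prior
for obs in ["short_deflection", "short_deflection", "long_deflection"]:
    posterior = bayes_update(posterior, obs)

best = hypotheses[int(np.argmax(posterior))]  # → "sphere_near"
```

Each contact contributes only a highly localised cue, but accumulating them against the known hypothesis set yields a joint percept of both form and location.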

    Multimodal Representation Learning for Place Recognition Using Deep Hebbian Predictive Coding

    Recognising familiar places is a competence required in many engineering applications that interact with the real world, such as robot navigation. Combining information from different sensory sources promotes robustness and accuracy of place recognition. However, mismatches in data registration, dimensionality, and timing between modalities remain challenging problems in multisensory place recognition. Spurious data generated by sensor drop-out in multisensory environments is particularly problematic and often resolved through ad hoc and brittle solutions. An effective approach to these problems is demonstrated by animals as they gracefully move through the world. Therefore, we take a neuro-ethological approach by adopting self-supervised representation learning based on a neuroscientific model of visual cortex known as predictive coding. We demonstrate how this parsimonious network algorithm, trained using a local learning rule, can be extended to combine visual and tactile sensory cues from a biomimetic robot as it naturally explores a visually aliased environment. The place recognition performance obtained using joint latent representations generated by the network is significantly better than that of contemporary representation learning techniques. Further, we see evidence of improved robustness at place recognition in the face of unimodal sensor drop-out. The proposed multimodal deep predictive coding algorithm is also linearly extensible to accommodate more than two sensory modalities, thereby providing an intriguing example of the value of neuro-biologically plausible representation learning for multimodal navigation.
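The core mechanism described in the abstract can be sketched as a single-layer predictive-coding network with a local, Hebbian-style weight update: latents are inferred by minimising prediction error, and weights are adjusted in proportion to error times latent activity. This is a minimal illustration under stated assumptions (a single layer, modalities fused by simple concatenation, random toy data); the paper's architecture is deeper and its fusion scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def pc_infer(x, W, steps=50, lr=0.1):
    """Infer a latent r that minimises prediction error ||x - W r||^2."""
    r = np.zeros(W.shape[1])
    for _ in range(steps):
        err = x - W @ r          # top-down prediction error
        r += lr * (W.T @ err)    # gradient step on the latent
    return r

def pc_learn(X, n_latent=4, epochs=100, lr=0.01):
    """Learn generative weights with a local rule: dW ∝ error × latent."""
    W = rng.normal(scale=0.1, size=(X.shape[1], n_latent))
    for _ in range(epochs):
        for x in X:
            r = pc_infer(x, W)
            err = x - W @ r
            W += lr * np.outer(err, r)   # local Hebbian-style update
    return W

# Joint visual + tactile input, fused here by simple concatenation
# (an assumption for illustration).
visual = rng.normal(size=(20, 6))
tactile = rng.normal(size=(20, 3))
X = np.hstack([visual, tactile])

W = pc_learn(X)
latent = pc_infer(X[0], W)   # joint multimodal latent representation
```

Because both inference and learning depend only on locally available quantities (the error at a unit and the activity it connects to), the scheme extends naturally to additional modalities by widening the input vector, which is the linear extensibility the abstract mentions.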