
    Skinning a Robot: Design Methodologies for Large-Scale Robot Skin

    Providing a robot with large-scale tactile sensing capabilities requires design tools that bridge the gap between user requirements and technical solutions. Given a set of functional requirements (e.g., minimum spatial sensitivity or minimum detectable force), two prerequisites must be considered: (i) the capability of the chosen tactile technology to satisfy these requirements from a technical standpoint; (ii) the ability of the customisation process to find a trade-off among different design parameters, such as (in the case of robot skins based on the capacitive principle) dielectric thickness, diameter of sensing points, or weight. The contribution of this paper is two-fold: (i) a description of the possibilities offered by a design toolbox for large-scale robot skin based on Finite Element Analysis and optimisation principles, which provides a designer with insights and alternative choices to obtain a given tactile performance according to the scenario at hand; (ii) a discussion of the intrinsic limitations in simulating robot skin.
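
    To make the design trade-off concrete, the sketch below evaluates a first-order parallel-plate model of a single capacitive sensing point: dielectric thickness, taxel diameter, dielectric stiffness, and readout resolution jointly determine the minimum detectable force. All numeric values, and the linear-elastic closed-form model itself, are illustrative assumptions; the paper's toolbox relies on Finite Element Analysis rather than this simplification.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def min_detectable_force(d, diameter, eps_r, young_modulus, delta_c_res):
    """Smallest normal force that changes the taxel capacitance by the
    readout resolution, for a parallel-plate taxel with a linearly
    elastic dielectric (first-order model, not the paper's FEA)."""
    area = math.pi * (diameter / 2) ** 2
    c0 = EPS0 * eps_r * area / d  # rest capacitance
    # Compression dd needed so that C(d - dd) - C(d) = delta_c_res:
    dd = d - EPS0 * eps_r * area / (c0 + delta_c_res)
    # Hooke's law for the dielectric layer: F = E * A * (dd / d)
    return young_modulus * area * dd / d

# Sweep dielectric thickness for a 4 mm taxel (all values are assumptions)
for d in (0.5e-3, 1.0e-3, 2.0e-3):
    f = min_detectable_force(d, diameter=4e-3, eps_r=3.0,
                             young_modulus=1e6, delta_c_res=1e-15)
    print(f"thickness {d * 1e3:.1f} mm -> min force {f * 1e3:.3f} mN")
```

    Thinner dielectrics raise the rest capacitance and lower the detectable force, but at the cost of mechanical robustness and weight constraints, which is exactly the trade-off the toolbox is meant to navigate.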

    An integrated probabilistic framework for robot perception, learning and memory

    Learning and perception from multiple sensory modalities are crucial processes for the development of intelligent systems capable of interacting with humans. We present an integrated probabilistic framework for perception, learning and memory in robotics. The core component of our framework is a computational Synthetic Autobiographical Memory model which uses Gaussian Processes as a foundation and mimics the functionalities of human memory. Our memory model, which operates via a principled Bayesian probabilistic framework, is capable of receiving and integrating data flows from multiple sensory modalities, which are combined to improve perception and understanding of the surrounding environment. To validate the model, we implemented our framework in the iCub humanoid robot, which was able to learn and recognise human faces, arm movements and touch gestures through interaction with people. Results demonstrate the flexibility of our method to successfully integrate multiple sensory inputs for accurate learning and recognition. Thus, our integrated probabilistic framework offers a promising core technology for robust intelligent systems, which are able to perceive, learn and interact with people and their environments.
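
    As a rough illustration of the GP-based recall such a framework relies on, the sketch below fuses two toy sensory modalities by concatenation and fits a Gaussian Process classifier whose posterior probabilities give uncertainty-aware recognition. The data shapes, the fusion-by-concatenation choice, and the use of scikit-learn are assumptions for illustration; the actual Synthetic Autobiographical Memory model is considerably more elaborate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-ins for two sensory modalities (e.g. vision and touch);
# real inputs would be face images, arm trajectories, taxel maps.
vision = rng.normal(size=(40, 8))
touch = rng.normal(size=(40, 4))
labels = rng.integers(0, 2, size=40)  # two hypothetical identities

# Fuse modalities by simple concatenation and let a GP model the mapping
X = np.hstack([vision, touch])
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp.fit(X, labels)

# Posterior class probabilities act as a calibrated "recall" with
# uncertainty, the property a Bayesian memory model exploits
print(gp.predict_proba(X[:3]))
```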

    Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing

    Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted on a robot's end effector. A neural network model estimates the position of the closest point on a person's limb and the orientation of the limb's central axis relative to the sensor's frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants' right arms while tracking human motion. When assisting with bathing, the robot moved a soft wet washcloth to follow the contours of able-bodied participants' limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing presents a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction.
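
    A minimal sketch of the sensing-to-control pipeline described above: a small neural network maps the six electrode capacitances to a local limb pose estimate, and a proportional law converts that estimate into an end-effector command. The network architecture, the synthetic training data, and the gain are hypothetical placeholders, not the paper's PR2 implementation, where training labels would come from motion capture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical training data: 6 electrode capacitances -> local limb
# pose (lateral offset, height offset, yaw, pitch w.r.t. the sensor).
C = rng.normal(size=(500, 6))
pose = rng.normal(size=(500, 4))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                   random_state=0)
net.fit(C, pose)  # multi-output regression over the 4 pose values

def control_step(capacitances, gain=0.5):
    """One proportional feedback step: drive the estimated offsets and
    angles toward zero so the end effector stays aligned with the limb
    (a sketch, not the controller used in the paper)."""
    est = net.predict(capacitances.reshape(1, -1))[0]
    return -gain * est  # commanded end-effector velocity / twist

print(control_step(rng.normal(size=6)))
```

    Because the capacitances vary smoothly as the limb moves under fabric or wet cloth, running this estimate-and-correct loop at each control cycle lets the end effector track the limb without line-of-sight sensing.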