
    Active sensorimotor control for tactile exploration

    In this paper, we present a novel and robust Bayesian approach for autonomous active exploration of unknown objects using tactile perception and sensorimotor control. Despite recent advances in tactile sensing, robust active exploration remains a challenging problem and a major hurdle to the practical deployment of tactile sensors in robots. Our proposed approach is based on a Bayesian perception method that actively controls the sensor with small local repositioning movements to reduce perception uncertainty, followed by explorative movements based on the outcome of each perceptual decision-making step. Two sensorimotor control strategies are proposed to improve the accuracy and speed of the active exploration; they weight the evidence from previous exploratory steps through either a weighted prior or a weighted posterior. The methods are validated both off-line and in real-time on a contour-following exploratory procedure. Results clearly demonstrate improvements in both accuracy and exploration time when using the proposed active methods compared to passive perception. Our work demonstrates that active perception has the potential to enable robots to perform robust autonomous tactile exploration in natural environments.
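
    As an illustration of the underlying mechanism, the following sketch shows recursive Bayesian evidence accumulation with a weighted-posterior carry-over between exploratory steps. It is a minimal sketch, not the paper's implementation: the observe callable, the blending weight w and the belief threshold are assumed placeholders.

```python
# Minimal sketch: Bayesian evidence accumulation with a weighted posterior
# carried over between exploratory steps. Observation model, weight w and
# threshold are illustrative assumptions, not the paper's tuned values.
import numpy as np

def bayes_update(prior, likelihoods):
    """One recursive Bayes step over a discrete set of percept classes."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

def active_perception_step(prior, observe, belief_threshold=0.9, w=0.5):
    """Accumulate evidence until one class exceeds the belief threshold."""
    belief = prior.copy()
    while belief.max() < belief_threshold:
        belief = bayes_update(belief, observe())  # small repositioning + tap
    decision = int(np.argmax(belief))
    # Weighted posterior: blend this step's belief with a flat prior so
    # evidence from previous steps informs, but does not dominate, the next.
    n = belief.size
    next_prior = w * belief + (1 - w) * np.full(n, 1.0 / n)
    return decision, next_prior
```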

    Shear-invariant Sliding Contact Perception with a Soft Tactile Sensor

    Manipulation tasks often require robots to be continuously in contact with an object, so tactile perception systems need to handle continuous contact data. Shear deformation causes the tactile sensor to output path-dependent readings, in contrast to discrete contact readings; as such, in some continuous-contact tasks, sliding can be regarded as a disturbance over the sensor signal. Here we present a shear-invariant perception method based on principal component analysis (PCA) which outputs the required information about the environment despite sliding motion. A compliant tactile sensor (the TacTip) is used to investigate continuous tactile contact. First, we evaluate the method offline using test data collected whilst the sensor slides over an edge. Then, the method is used within a contour-following task applied to 6 objects with varying curvatures; all contours are successfully traced. The method demonstrates generalisation capabilities and could underlie a more sophisticated controller for challenging manipulation or exploration tasks in unstructured environments. A video showing the work described in the paper can be found at https://youtu.be/wrTM61-pieU
    Comment: Accepted in ICRA 2019
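
    To make the idea concrete, here is a minimal sketch of how a PCA basis fitted on sliding data can be used to discard shear-dominated components. The data, the number of components and the split between shear and contact subspaces are assumptions for illustration only.

```python
# Illustrative sketch of a shear-invariant feature via PCA: fit PCA on
# readings collected while sliding, then drop the leading components that
# capture shear-induced (path-dependent) variation. The random data and
# component split below are placeholders, not the paper's pipeline.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_slide = rng.normal(size=(500, 254))   # stand-in for TacTip pin displacements

pca = PCA(n_components=10).fit(X_slide)
n_shear = 2  # assume the first components are dominated by sliding/shear

def shear_invariant_features(x):
    """Project one reading onto the components not dominated by shear."""
    z = pca.transform(x.reshape(1, -1))
    return z[0, n_shear:]
```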

    An ensemble data-driven fuzzy network for laser welding quality prediction

    This paper presents an Ensemble Data-Driven Fuzzy Network (EDDFN) for laser welding quality prediction that is composed of a number of strategically selected Data-Driven Fuzzy Models (DDFMs), each trained by an Adaptive Negative Correlation Learning (ANCL) approach. A monitoring system provides quality-relevant information from the laser beam spectrum and the geometry of the melt pool. This information is used by the proposed ensemble model to assist in the prediction of the welding quality. Each DDFM is based on three conceptual components: a selection procedure for the most representative welding information, a granular comprehension process of the data, and the construction of a fuzzy reasoning mechanism as a series of Radial Basis Function Neural Networks (RBF-NNs). The proposed model aims at providing a fuzzy reasoning engine that preserves a good balance between transparency and accuracy while improving its prediction properties. We apply the EDDFN to a real case study in the manufacturing industry for the prediction of welding quality. The corresponding results confirm that the EDDFN provides better prediction properties than a single DDFM, with an overall prediction performance > 78%.
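
    A hedged sketch of the ensemble idea follows: several RBF networks (here built from k-means centres and a ridge-regression output layer) are fitted and their predictions averaged. The centre count, kernel width and plain averaging stand in for the paper's ANCL training, which is not reproduced here.

```python
# Sketch of an ensemble of RBF networks with averaged outputs. Independent
# fits replace the paper's negative-correlation training for brevity;
# hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class RBFNet:
    def __init__(self, n_centers=20, gamma=1.0, seed=0):
        self.km = KMeans(n_clusters=n_centers, n_init=10, random_state=seed)
        self.gamma, self.out = gamma, Ridge(alpha=1e-3)

    def _phi(self, X):
        # Gaussian activations around the k-means centres
        d = np.linalg.norm(X[:, None, :] - self.km.cluster_centers_, axis=2)
        return np.exp(-self.gamma * d ** 2)

    def fit(self, X, y):
        self.km.fit(X)
        self.out.fit(self._phi(X), y)
        return self

    def predict(self, X):
        return self.out.predict(self._phi(X))

def ensemble_predict(models, X):
    """Average the predictions of the fitted ensemble members."""
    return np.mean([m.predict(X) for m in models], axis=0)
```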

    Adaptive perception: learning from sensory predictions to extract object shape with a biomimetic fingertip

    In this work, we present an adaptive perception method to improve the accuracy and speed of a tactile exploration task. This work extends our previous studies on sensorimotor control strategies for active tactile perception in robotics. First, we present the active Bayesian perception method, which actively repositions a robot to accumulate evidence from better locations and reduce uncertainty. Second, we describe the adaptive perception method that, based on a forward model and a predicted information gain approach, allows the robot to analyse `what would have happened' had a different decision been made at a previous decision time. This approach permits the active Bayesian perception process to adapt, improving the accuracy and reaction time of an exploration task. Our methods are validated with a contour-following exploratory procedure with a touch sensor. The results show that the adaptive perception method allows the robot to make sensory predictions and autonomously adapt, improving the performance of the exploration task.
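
    The predicted information gain idea can be sketched as the expected reduction in belief entropy under a forward model, as below. The forward model is reduced to a hypothetical likelihood table; everything here is illustrative rather than the paper's implementation.

```python
# Sketch of predicted information gain: entropy of the current belief
# minus the expected entropy after a simulated (forward-model) observation.
# The likelihood table is a hypothetical placeholder.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def predicted_information_gain(belief, forward_likelihoods):
    """forward_likelihoods[k, c] = P(observation k | class c)."""
    h_now = entropy(belief)
    h_next = 0.0
    for lik in forward_likelihoods:           # each candidate observation
        post = belief * lik
        p_obs = post.sum()                    # probability of seeing it
        if p_obs > 0:
            h_next += p_obs * entropy(post / p_obs)
    return h_now - h_next
```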

    Prediction of gait events in walking activities with a Bayesian perception system

    In this paper, a robust probabilistic formulation for the prediction of gait events from human walking activities using wearable sensors is presented. This approach combines the output from a Bayesian perception system with observations from actions and decisions made over time. The perception system makes decisions about the current gait event, while observations from decisions and actions allow it to predict the most probable gait event during walking activities. Furthermore, our proposed method is capable of evaluating the accuracy of its predictions, which permits better performance and a better trade-off between accuracy and speed. In our work, we use data from wearable inertial measurement sensors attached to the thigh, shank and foot of human participants. The proposed perception system is validated with multiple experiments for recognition and prediction of gait events using angular velocity data from three walking activities: level ground, ramp ascent and ramp descent. The results show that our method is fast, accurate and capable of evaluating and adapting its own performance. Overall, our Bayesian perception system proves to be a suitable high-level method for the development of reliable and intelligent assistive and rehabilitation robots.
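
    A minimal sketch of the prediction step is given below: once the Bayesian recognition of the current event is confident enough, the next event is predicted from the cyclic order of the gait cycle. The event list, confidence gate and transition rule are illustrative assumptions.

```python
# Sketch: predict the next gait event from past Bayesian decisions. The
# cyclic event order and the confidence gate stand in for the paper's
# learned statistics.
GAIT_EVENTS = ["heel_strike", "flat_foot", "heel_off", "toe_off"]

def predict_next_event(decision_history, belief_max, confidence=0.8):
    """Predict the next event in the gait cycle when belief is strong."""
    if not decision_history or belief_max < confidence:
        return None  # not confident enough to commit to a prediction
    last = decision_history[-1]
    return GAIT_EVENTS[(GAIT_EVENTS.index(last) + 1) % len(GAIT_EVENTS)]
```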

    An integrated probabilistic framework for robot perception, learning and memory

    Learning and perception from multiple sensory modalities are crucial processes for the development of intelligent systems capable of interacting with humans. We present an integrated probabilistic framework for perception, learning and memory in robotics. The core component of our framework is a computational Synthetic Autobiographical Memory model which uses Gaussian Processes as a foundation and mimics the functionalities of human memory. Our memory model, which operates via a principled Bayesian probabilistic framework, is capable of receiving and integrating data flows from multiple sensory modalities, which are combined to improve perception and understanding of the surrounding environment. To validate the model, we implemented our framework in the iCub humanoid robot, which was able to learn and recognise human faces, arm movements and touch gestures through interaction with people. Results demonstrate the flexibility of our method in successfully integrating multiple sensory inputs for accurate learning and recognition. Thus, our integrated probabilistic framework offers a promising core technology for robust intelligent systems that are able to perceive, learn and interact with people and their environments.
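
    As a rough illustration of the memory model's foundation, the sketch below fits one Gaussian Process per modality and fuses their predictions by inverse-variance weighting. The kernel and fusion rule are assumptions for illustration; this is not the Synthetic Autobiographical Memory implementation itself.

```python
# Sketch: one GP per sensory modality, fused by inverse-variance weighting
# of the predictive means. Kernel choice and fusion rule are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_modality(X, y):
    """Fit a GP regressor to one modality's (input, target) pairs."""
    return GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

def fused_recall(models, queries):
    """Fuse per-modality GP predictions, trusting low-variance ones more."""
    mus, sigmas = zip(*(m.predict(q, return_std=True)
                        for m, q in zip(models, queries)))
    w = 1.0 / (np.array(sigmas) ** 2 + 1e-9)
    return (w * np.array(mus)).sum(axis=0) / w.sum(axis=0)
```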

    A combined Adaptive Neuro-Fuzzy and Bayesian strategy for recognition and prediction of gait events using wearable sensors

    A robust strategy for recognition and prediction of gait events using wearable sensors is presented in this paper. The strategy adopted here uses a combination of two computational intelligence approaches: Adaptive Neuro-Fuzzy and Bayesian methods. Recognition of gait events is performed by a Bayesian method which iteratively accumulates evidence to reduce uncertainty from sensor measurements. Prediction of gait events is based on the observation of decisions and actions made over time by our perception system. An Adaptive Neuro-Fuzzy system evaluates the reliability of predictions, learns a weighting parameter and controls the amount of predicted information to be used by our Bayesian method. Thus, this strategy achieves better recognition and prediction performance in both accuracy and speed. The methods are validated with experiments for recognition and prediction of gait events in different walking activities, using data from wearable sensors attached to the lower limbs of participants. Overall, results show the benefits of our combined Adaptive Neuro-Fuzzy and Bayesian strategy, which not only achieves fast and accurate decisions but also evaluates and adapts its own performance, making it suitable for the development of intelligent assistive and rehabilitation robots.
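
    The weighting step can be sketched as mixing the predicted event into the prior of the next recognition cycle, with the learned parameter controlling the mix. The fuzzy system that produces the weight is abstracted away; the function below is an illustrative assumption.

```python
# Sketch: a learned weight controls how strongly the predicted gait event
# biases the prior of the next Bayesian recognition cycle. The neuro-fuzzy
# system producing `weight` is abstracted away here.
import numpy as np

def weighted_prior(predicted_event, n_events, weight):
    """Mix a one-hot prediction with a uniform prior; weight in [0, 1]."""
    prior = np.full(n_events, (1.0 - weight) / n_events)
    prior[predicted_event] += weight
    return prior / prior.sum()
```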

    Hierarchical Behaviour for Object Shape Recognition Using a Swarm of Robots

    A hierarchical cognitive architecture for robot exploration and recognition of object shape is presented. This cognitive architecture combines multiple robot behaviours based on (1) Evolutionary, (2) Fuzzy Logic and (3) Bayesian approaches. First, the Evolutionary approach allows a swarm of robots to locate and reach an object for exploration. Second, Fuzzy Logic is used to control the exploration of the object shape. Third, the Bayesian approach allows the robot to detect the orientation of the walls of the object being explored. Once the exploration process finishes, the swarm of robots determines whether the object has a rectangular or circular shape. This work is validated in a simulated environment in MATLAB using a swarm of E-puck robots. Overall, the experiments demonstrate that simple robots are capable of performing complex tasks through the combination of simple collective behaviours while learning from the interaction with the environment.
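
    The three-layer switching can be sketched as a small state machine, as below. The Robot stub and the transition tests are hypothetical placeholders, not the E-puck controllers from the paper.

```python
# Sketch of hierarchical switching between the three behaviour layers.
# The Robot stub exposes only the signals each layer needs; all transition
# conditions are illustrative placeholders.
class Robot:
    def __init__(self):
        self.at_object = False        # set by the evolutionary layer
        self.lap_complete = False     # set by the fuzzy contour layer
        self.belief_confident = False # set by the Bayesian shape layer

def hierarchical_step(state, robot):
    if state == "search":      # (1) evolutionary: locate and reach object
        return "explore" if robot.at_object else "search"
    if state == "explore":     # (2) fuzzy logic: trace the object contour
        return "classify" if robot.lap_complete else "explore"
    # (3) Bayesian: accumulate wall-orientation evidence, then decide shape
    return "done" if robot.belief_confident else "classify"
```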

    TANDEM3D: Active Tactile Exploration for 3D Object Recognition

    Tactile recognition of 3D objects remains a challenging task. Compared to 2D shapes, the complex geometry of 3D surfaces requires richer tactile signals, more dexterous actions, and more advanced encoding techniques. In this work, we propose TANDEM3D, a method that applies a co-training framework for exploration and decision making to 3D object recognition with tactile signals. Starting from our previous work, which introduced a co-training paradigm for 2D recognition problems, we introduce a number of advances that enable us to scale up to 3D. TANDEM3D is based on a novel encoder that builds a 3D object representation from contact positions and normals using PointNet++. Furthermore, by enabling 6DOF movement, TANDEM3D explores and collects discriminative touch information with high efficiency. Our method is trained entirely in simulation and validated with real-world experiments. Compared to state-of-the-art baselines, TANDEM3D achieves higher accuracy with fewer actions in recognizing 3D objects, and is also shown to be more robust to different types and amounts of sensor noise. A video is available at https://jxu.ai/tandem3d
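
    For intuition, the sketch below shows how contact positions and normals can be pooled into a fixed-size code with a flat PointNet-style encoder (a shared MLP followed by order-invariant max pooling). This is a simplification for illustration; the paper's encoder uses PointNet++ with hierarchical set abstraction, which is not reproduced here.

```python
# Simplified PointNet-style encoder over contact points: a shared MLP per
# point followed by max pooling. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ContactEncoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Each contact is (x, y, z, nx, ny, nz): position plus surface normal.
        self.mlp = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, contacts):            # contacts: (batch, n_points, 6)
        point_feats = self.mlp(contacts)
        return point_feats.max(dim=1).values  # order-invariant pooling

codes = ContactEncoder()(torch.randn(2, 40, 6))  # -> shape (2, 128)
```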