
    A versatile biomimetic controller for contact tooling and haptic exploration

    This article presents a versatile controller that enables various contact tooling tasks with minimal prior knowledge of the tooled surface. The controller is derived from neuroscience studies that investigated the neural mechanisms humans use to control and learn complex interactions with the environment. We demonstrate the versatility of this controller in simulations of cutting, drilling and surface exploration tasks, which would normally require different control paradigms. We also present results on the exploration of an unknown surface with a 7-DOF manipulator, where the robot builds a 3D map of the surface profile and texture while applying constant force during motion. Our controller provides a unified control framework encompassing behaviors expected from specialized control paradigms such as position control, force control and impedance control.
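
    One way to read the "unified framework" claim is as a single adaptive impedance law: stiffness and feedforward force are adapted online from the tracking error, so the same controller stiffens when accurate positioning is needed and relaxes toward pure force regulation in contact. The sketch below is a minimal 1-DOF illustration of that idea, assuming a simple error-driven adaptation rule; the gains (alpha, beta) and the rule itself are illustrative, not the article's exact formulation.

        def adaptive_impedance_step(x, dx, x_ref, dx_ref, K, f_ff,
                                    alpha=5.0, beta=0.1, dt=0.001):
            """One step of a simplified 1-DOF adaptive impedance law (illustrative)."""
            e = x_ref - x                      # position error
            de = dx_ref - dx                   # velocity error
            eps = e + 0.5 * de                 # combined tracking error
            u = f_ff + K * e + 0.2 * K * de    # feedforward + impedance terms
            # Error-driven adaptation with forgetting: stiffness and
            # feedforward force grow with error and decay otherwise.
            K = max(0.0, K + dt * (alpha * eps * e - beta * K))
            f_ff = f_ff + dt * (alpha * eps - beta * f_ff)
            return u, K, f_ff

    When errors stay small, the forgetting term dominates and stiffness decays, leaving force regulation through f_ff; under persistent position error the stiffness grows toward position control. That is roughly how one law can cover the specialized paradigms named above.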

    Multimodal Bayesian Network for Artificial Perception

    In order to make machines perceive their external environment coherently, multiple sources of sensory information derived from several different modalities can be used (e.g. cameras, LIDAR, stereo, RGB-D, and radars). All these different sources of information can be efficiently merged to form a robust perception of the environment. Some of the mechanisms that underlie this merging of sensor information are highlighted in this chapter, showing that, depending on the type of information, different combination and integration strategies can be used and that prior knowledge is often required for interpreting the sensory signals efficiently. The notion that perception involves Bayesian inference is an increasingly popular position taken by a considerable number of researchers. Bayesian models have provided insights into many perceptual phenomena, showing that they are a valid approach for dealing with real-world uncertainties and for robust classification, including classification in time-dependent problems. This chapter addresses the use of Bayesian networks applied to sensory perception in the following areas: mobile robotics, autonomous driving systems, advanced driver assistance systems, sensor fusion for object detection, and EEG-based mental state classification.
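
    A minimal way to see how such multimodal evidence combines is naive Bayes fusion: under a conditional-independence assumption, each modality's likelihood multiplies into a common posterior. The sketch below uses two hypothetical modalities (camera and LIDAR cues) with toy numbers; the chapter's actual Bayesian networks are richer, but the underlying update is this Bayes rule.

        import numpy as np

        def fuse_modalities(prior, likelihoods):
            """Naive Bayes fusion of conditionally independent modalities.

            prior: P(class) over n classes; likelihoods: one array of
            P(observation | class) per modality. Returns P(class | all
            observations). Independence and the numbers below are
            illustrative assumptions, not taken from the chapter.
            """
            post = np.asarray(prior, dtype=float).copy()
            for lik in likelihoods:
                post *= lik               # multiply in each modality's evidence
            return post / post.sum()      # normalize

        # Toy example: classes (pedestrian, background), camera + LIDAR cues
        prior = [0.5, 0.5]
        camera = np.array([0.8, 0.3])     # P(camera cue | class)
        lidar = np.array([0.7, 0.4])      # P(LIDAR cue | class)
        print(fuse_modalities(prior, [camera, lidar]))  # ~[0.82, 0.18]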

    Incrementally Learning Objects by Touch: Online Discriminative and Generative Models for Tactile-Based Recognition

    Probabilistic representation of 3D object shape by in-hand exploration

    This work presents a representation of 3D object shape using a probabilistic volumetric map derived from in-hand exploration. The exploratory procedure is based on contour following through fingertip movements on the object surface. We first consider the simple case of single-hand exploration of a static object. The cumulative pose data provides a 3D point cloud that is quantized into the probabilistic volumetric map; for each voxel we maintain a probability distribution over the occupancy percentage. This is then extended to in-hand exploration of non-static objects. Since the object moves during in-hand exploration, and we also consider the use of the other hand for re-grasping, the object pose has to be tracked. By keeping track of object motion we can register data to the initial pose and build a consistent object representation. An object-centered representation is implemented using the computed object center of mass to define its frame of reference. Results are presented for in-hand exploration of both static and non-static objects, showing that valid models can be obtained. The 3D object probabilistic representation can be used in several applications related to grasp generation tasks.
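
    As a rough illustration of the quantization step described above, the sketch below bins contact points (already registered to the object frame) into voxels and turns hit counts into an occupancy score. The count-based probability and the voxel size are illustrative assumptions standing in for the paper's per-voxel occupancy distribution.

        import numpy as np
        from collections import defaultdict

        def build_voxel_map(points, voxel_size=0.005):
            """Quantize registered contact points into a volumetric map.

            points: (N, 3) positions in the object frame (for non-static
            objects, register via the tracked pose first). Returns
            {voxel index: occupancy score in (0, 1]}; the counting model
            is an illustrative stand-in for the paper's distribution.
            """
            counts = defaultdict(int)
            for p in np.asarray(points, dtype=float):
                counts[tuple(np.floor(p / voxel_size).astype(int))] += 1
            max_hits = max(counts.values())
            return {idx: c / max_hits for idx, c in counts.items()}

        # Object-centered frame: subtract the estimated center of mass first,
        # e.g. points = points - points.mean(axis=0)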