
    Driver-automation indirect shared control of highly automated vehicles with intention-aware authority transition

    Shared control is an important approach to avoiding the driver-out-of-the-loop problem caused by imperfect autonomous driving. Steer-by-wire technology allows the mechanical decoupling of the steering wheel from the road wheels. On steer-by-wire vehicles, the automation can join the control loop by correcting the driver's steering input, which forms a new paradigm of shared control. This new framework, in which the driver indirectly controls the vehicle through the automation's input transformation, is called indirect shared control. This paper presents an indirect shared control system that realizes dynamic control authority allocation with respect to the driver's authority intention. Simulation results demonstrate the effectiveness and benefits of the proposed control authority adaptation method.
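
    The abstract describes the automation blending the driver's steering input with its own command according to an inferred authority intention. The sketch below illustrates one way such an intention-aware blending law could look; the intention estimator, function names, and constants are illustrative assumptions, not the paper's formulation.

        def estimate_driver_authority_intention(steer_torque, steer_rate):
            """Map driver effort to an intended authority level in [0, 1].
            Assumption: stronger or faster steering signals a desire for more authority."""
            effort = abs(steer_torque) + 0.5 * abs(steer_rate)
            return min(1.0, effort / 5.0)  # 5.0 is an arbitrary normalising effort

        def indirect_shared_steering(driver_angle, automation_angle, steer_torque, steer_rate):
            """Road-wheel command: the automation transforms the driver's input,
            weighting it by the inferred authority intention."""
            alpha = estimate_driver_authority_intention(steer_torque, steer_rate)
            return alpha * driver_angle + (1.0 - alpha) * automation_angle

        # Example: a firm driver correction (high torque) shifts authority to the driver.
        print(indirect_shared_steering(driver_angle=0.2, automation_angle=0.05,
                                       steer_torque=4.0, steer_rate=1.0))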

    Force, impedance and trajectory learning for contact tooling and haptic identification

    Humans can skilfully use tools and interact with the environment by adapting their movement trajectory, contact force, and impedance. Motivated by this human versatility, we develop a robot controller that concurrently adapts feedforward force, impedance, and reference trajectory when interacting with an unknown environment. In particular, the robot's reference trajectory is adapted to limit the interaction force and maintain it at a desired level, while feedforward force and impedance adaptation compensate for the interaction with the environment. An analysis of the interaction dynamics using Lyapunov theory yields the conditions for convergence of the closed-loop interaction mediated by this controller. Simulations exhibit adaptive properties similar to human motor adaptation. The implementation of this controller for typical interaction tasks, including drilling, cutting, and haptic exploration, shows that it can outperform conventional controllers in contact tooling.
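
    As a rough illustration of the concurrent adaptation described above, the following 1-D sketch adapts a feedforward force, a stiffness term, and the reference position so that the contact force settles near a desired level. The plant model, gains, and update laws are simplified assumptions, not the paper's controller or its Lyapunov-based conditions.

        # Hidden environment: a linear spring contacted at x_wall (unknown to the controller).
        k_env, x_wall = 800.0, 0.02
        f_desired = 5.0                            # desired contact force [N]
        x_ref, f_ff, stiffness = 0.05, 0.0, 100.0
        beta_f, beta_k, beta_r = 0.5, 50.0, 1e-4   # adaptation gains (assumed)
        x = 0.0

        for _ in range(200):
            e = x_ref - x                                  # tracking error
            u = f_ff + stiffness * e                       # feedforward + impedance control
            f_int = max(0.0, k_env * (x - x_wall))         # interaction force from environment
            x += 0.001 * (u - f_int)                       # crude quasi-static plant update
            f_ff += beta_f * e                             # feedforward force adaptation
            stiffness += beta_k * abs(e)                   # impedance (stiffness) adaptation
            x_ref -= beta_r * (f_int - f_desired)          # shift reference to regulate force

        print("final interaction force [N]:", round(max(0.0, k_env * (x - x_wall)), 2))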

    Push to know! -- Visuo-Tactile based Active Object Parameter Inference with Dual Differentiable Filtering

    For robotic systems to interact with objects in dynamic environments, it is essential to perceive the physical properties of the objects, such as shape, friction coefficient, mass, center of mass, and inertia. This not only eases the selection of manipulation actions but also ensures the task is performed as desired. However, estimating the physical properties of novel objects in particular is a challenging problem, using either vision or tactile sensing. In this work, we propose a novel framework to estimate key object parameters through non-prehensile manipulation, using both vision and tactile sensing. Our proposed active dual differentiable filtering (ADDF) approach, as part of our framework, learns the object-robot interaction during non-prehensile object pushing to infer the object's parameters. The proposed method enables the robotic system to employ vision and tactile information to interactively explore a novel object via non-prehensile pushes. The N-step active formulation within the differentiable filtering facilitates efficient learning of the object-robot interaction model and efficient inference by selecting the next best exploratory push actions (where to push, and how to push). We extensively evaluated our framework in simulation and real-robot scenarios, yielding superior performance to the state-of-the-art baseline. Comment: 8 pages. Accepted at IROS 202
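
    To give a flavour of the "where to push, and how to push" selection, the sketch below scores random candidate N-step push sequences by their predicted posterior uncertainty and returns the first action of the best sequence. The candidate parametrisation and the covariance-shrinking placeholder stand in for the learned dual differentiable filter, so all names and numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def predicted_posterior_cov(cov, action):
            """Placeholder one-step filter prediction: informativeness depends on
            the push direction (action[0]) and magnitude (action[1])."""
            gain = 0.1 + 0.9 * abs(np.sin(action[0])) * action[1]
            return cov * (1.0 - 0.5 * gain)

        def select_best_push(cov, n_steps=3, n_candidates=32):
            """Return the first action of the candidate N-step push sequence that
            minimises the predicted posterior uncertainty (trace of covariance)."""
            best_first, best_score = None, np.inf
            for _ in range(n_candidates):
                seq = rng.uniform([-np.pi, 0.1], [np.pi, 1.0], size=(n_steps, 2))
                c = cov.copy()
                for a in seq:
                    c = predicted_posterior_cov(c, a)
                if np.trace(c) < best_score:
                    best_first, best_score = seq[0], np.trace(c)
            return best_first, best_score

        action, score = select_best_push(np.eye(4))  # e.g. mass, CoM x/y, friction
        print("next push (direction, magnitude):", action, "| predicted uncertainty:", round(score, 3))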

    Visuo-Tactile based Predictive Cross Modal Perception for Object Exploration in Robotics

    Autonomously exploring the unknown physical properties of novel objects, such as stiffness, mass, center of mass, friction coefficient, and shape, is crucial for autonomous robotic systems operating continuously in unstructured environments. We introduce a novel visuo-tactile based predictive cross-modal perception framework in which initial visual observations (shape) aid in obtaining an initial prior over the object properties (mass). The initial prior improves the efficiency of the object property estimation, which is autonomously inferred via interactive non-prehensile pushing using a dual filtering approach. The inferred properties are then used to efficiently enhance the predictive capability of the cross-modal function using a human-inspired 'surprise' formulation. We evaluated our proposed framework in a real-robot scenario, demonstrating superior performance.
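
    The 'surprise' formulation above gates when the cross-modal (vision-to-property) prediction should be refined by what interaction reveals. Below is a minimal sketch of that idea, where a crude density parameter maps visual volume to a mass prior and is only updated when the push-based estimate deviates strongly; the map, threshold, and numbers are illustrative assumptions, not the paper's model.

        density_estimate = 500.0          # crude cross-modal parameter (kg/m^3, assumed)

        def cross_modal_prior(volume_from_vision):
            """Predict a mass prior from the visually observed shape (volume)."""
            return density_estimate * volume_from_vision

        def surprise(prior_mass, inferred_mass):
            """Normalised prediction error between the visual prior and the mass
            inferred from interactive pushing (stand-in for the dual filtering step)."""
            return abs(inferred_mass - prior_mass) / max(prior_mass, 1e-6)

        volume = 2e-3                      # 2 litres, from the visual shape estimate
        prior = cross_modal_prior(volume)
        inferred = 1.6                     # mass inferred via pushing (placeholder value)

        if surprise(prior, inferred) > 0.2:        # only large surprises trigger learning
            density_estimate *= inferred / prior   # refine the cross-modal map
        print("updated density estimate:", density_estimate)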

    A Framework to Describe, Analyze and Generate Interactive Motor Behaviors

    While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and to generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and the cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and the switching between roles when performing joint motor tasks.
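
    The central idea that an agent's role follows from the cost function it minimises can be illustrated with a toy 1-D joint tracking task: two agents contribute control inputs, each greedily minimising a quadratic cost trading tracking error against effort, and swapping the weightings swaps leader-like and follower-like roles. The costs, plant, and weights below are assumptions chosen to mirror that idea, not the paper's formulation.

        def agent_action(x, target, partner_u, w_track, w_effort):
            """Greedy minimiser of w_track*(x + u + partner_u - target)**2 + w_effort*u**2."""
            # Setting the derivative with respect to u to zero gives:
            return w_track * (target - x - partner_u) / (w_track + w_effort)

        x, target, u1, u2 = 0.0, 1.0, 0.0, 0.0
        for _ in range(20):
            # Agent 1 cares strongly about the task (leader-like); agent 2 mostly
            # avoids effort (follower-like). Swapping the weights swaps the roles.
            u1 = agent_action(x, target, u2, w_track=10.0, w_effort=1.0)
            u2 = agent_action(x, target, u1, w_track=1.0, w_effort=10.0)
            x = x + u1 + u2

        print("state:", round(x, 3), "| last efforts:", round(u1, 3), round(u2, 3))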

    Predictive Visuo-Tactile Interactive Perception Framework for Object Properties Inference

    Interactive exploration of unknown objects' properties, such as stiffness, mass, center of mass, friction coefficient, and shape, is crucial for autonomous robotic systems operating in unstructured environments. Precise identification of these properties is essential for stable and controlled object manipulation and for anticipating the outcomes of (prehensile or non-prehensile) manipulation actions, such as pushing, pulling, and lifting. Our study focuses on autonomously inferring the physical properties of a diverse set of homogeneous, heterogeneous, and articulated objects using a robotic system equipped with vision and tactile sensors. We propose a novel predictive perception framework to identify object properties by leveraging versatile exploratory actions: non-prehensile pushing and prehensile pulling. A key component of our framework is a novel active shape perception mechanism that seamlessly initiates exploration. In addition, our dual differentiable filtering with graph neural networks learns the object-robot interaction and enables consistent inference of indirectly observable, time-invariant object properties. Finally, we develop an N-step information gain approach to select the most informative actions for efficient learning and inference. Extensive real-robot experiments with planar objects show that our predictive perception framework outperforms state-of-the-art baselines, and we showcase it in three major applications: object tracking, a goal-driven task, and environmental change detection.
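
    As an illustration of how an N-step information-gain criterion can score candidate exploratory actions, the sketch below accumulates the reduction in Gaussian posterior entropy over the covariances a filter predicts for an action sequence. The entropy formula is standard; the covariance predictions stand in for the learned dual differentiable filter with graph neural networks, so the values and names are assumptions.

        import numpy as np

        def gaussian_entropy(cov):
            """Differential entropy of a Gaussian with covariance `cov`."""
            d = cov.shape[0]
            return 0.5 * (d * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(cov)))

        def n_step_information_gain(prior_cov, predicted_covs):
            """Cumulative entropy reduction over a candidate N-step action sequence,
            given the filter-predicted covariances after each step."""
            gain, h_prev = 0.0, gaussian_entropy(prior_cov)
            for c in predicted_covs:
                h = gaussian_entropy(c)
                gain += h_prev - h
                h_prev = h
            return gain

        prior_cov = np.diag([1.0, 1.0, 0.5])          # e.g. mass, friction, CoM offset
        candidate = [prior_cov * 0.8, prior_cov * 0.55, prior_cov * 0.4]
        print("predicted information gain:", round(n_step_information_gain(prior_cov, candidate), 3))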
