
    Push to know! -- Visuo-Tactile based Active Object Parameter Inference with Dual Differentiable Filtering

    For robotic systems to interact with objects in dynamic environments, it is essential to perceive physical properties of the objects, such as shape, friction coefficient, mass, center of mass, and inertia. This not only eases the selection of manipulation actions but also ensures that the task is performed as desired. However, estimating the physical properties of objects, especially novel ones, is a challenging problem with either vision or tactile sensing. In this work, we propose a novel framework to estimate key object parameters through non-prehensile manipulation with vision and tactile sensing. The active dual differentiable filtering (ADDF) approach at the core of our framework learns the object-robot interaction during non-prehensile object pushes to infer the object's parameters, enabling the robotic system to use vision and tactile information to interactively explore a novel object. The proposed N-step active formulation within the differentiable filter facilitates efficient learning of the object-robot interaction model and, during inference, selects the next best exploratory push actions (where to push, and how to push). We extensively evaluated our framework in simulated and real-robot scenarios, yielding superior performance over the state-of-the-art baseline.
    Comment: 8 pages. Accepted at IROS 202
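
    The abstract does not give the filter equations; purely as an illustration of the active-push idea (the Jacobian/noise callback and the greedy trace criterion below are assumptions for this sketch, not the paper's learned ADDF models), a Kalman-style parameter filter can be rolled out N steps for each candidate push, keeping the push with the lowest predicted parameter uncertainty:

    import numpy as np

    def predicted_cov(P, F, Q):
        # One-step covariance prediction of a Kalman-style filter.
        return F @ P @ F.T + Q

    def updated_cov(P_pred, H, R):
        # Covariance after fusing one visuo-tactile measurement.
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        return (np.eye(P_pred.shape[0]) - K @ H) @ P_pred

    def select_push(P, candidate_pushes, linearize, n_steps=3):
        # Greedy N-step lookahead: simulate the filter for each candidate
        # push and keep the one minimizing the final uncertainty (trace).
        best_action, best_score = None, np.inf
        for action in candidate_pushes:
            P_roll = P.copy()
            for _ in range(n_steps):
                F, H, Q, R = linearize(action)   # model Jacobians and noise
                P_roll = updated_cov(predicted_cov(P_roll, F, Q), H, R)
            score = np.trace(P_roll)
            if score < best_score:
                best_action, best_score = action, score
        return best_action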

    A Framework to Describe, Analyze and Generate Interactive Motor Behaviors

    While motor interaction between a robot and a human, or between two humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and to generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which classifies tasks and the cost functions that represent the way each agent interacts. The role of an agent during a motor task can be explained directly from the cost function this agent is minimizing and from the task constraints. The novel framework is used to interpret and classify previous work on human-robot motor interaction. Its generative power is demonstrated by simulating representative interactions of two humans, and it enables us to interpret and explain the role distribution and the switching between roles when performing joint motor tasks.
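
    As a rough, hedged illustration of reading roles from cost functions (the quadratic form, the weights, and the 10x role threshold below are invented for this example, not the paper's taxonomy), each agent's cost can be written as a weighted sum of its task error, its own effort, and the interaction force, with the relative weights indicating whether it acts as a leader, a follower, or a collaborator:

    import numpy as np

    def agent_cost(x, u, f_int, target, w_task, w_effort, w_int):
        # Quadratic cost of one agent in a joint motor task: its own task
        # error, its own control effort, and the interaction force.
        return (w_task * np.sum((x - target) ** 2)
                + w_effort * np.sum(u ** 2)
                + w_int * np.sum(f_int ** 2))

    def nominal_role(w_task, w_int):
        # Toy readout: penalizing task error far more than interaction
        # force suggests a leader; the converse suggests a follower.
        if w_task > 10 * w_int:
            return "leader"
        if w_int > 10 * w_task:
            return "follower"
        return "collaborator"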

    Open-Set Object Recognition Using Mechanical Properties During Interaction

    While most tactile robots operate in closed-set conditions, it is challenging for them to operate in open-set conditions, where test objects lie beyond the robots' knowledge. We propose an open-set recognition framework that uses mechanical properties to recognise known objects and incrementally label novel ones. The main contribution is a clustering algorithm that exploits knowledge of known objects to estimate cluster centres and sizes, unlike typical algorithms that select them randomly. The framework is validated with mechanical properties estimated from a real object during interaction. The results show that the framework recognises objects better than alternative methods, an improvement contributed by the novelty detector. Importantly, our clustering algorithm yields better clustering performance than other methods. Furthermore, hyperparameter studies show that the cluster size strongly affects the clustering results and needs to be tuned properly.
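
    A minimal sketch of the seeded clustering idea described above (the 1.5x radius margin and the initial size given to novel clusters are heuristics invented for this example, not the paper's algorithm): cluster centres and sizes are initialized from the mechanical-property features of known objects, and samples falling outside every cluster open new, incrementally labelled novel-object clusters.

    import numpy as np

    class SeededIncrementalClusterer:
        def __init__(self, known_features, known_labels, margin=1.5):
            feats = np.asarray(known_features, dtype=float)
            labs = np.asarray(known_labels)
            self.centres, self.radii, self.labels = [], [], []
            for label in np.unique(labs):
                pts = feats[labs == label]
                centre = pts.mean(axis=0)
                # Cluster size from the spread of that object's known
                # samples, instead of a random initialization.
                radius = margin * np.linalg.norm(pts - centre, axis=1).max()
                self.centres.append(centre)
                self.radii.append(radius)
                self.labels.append(label)
            self._novel_id = 0

        def assign(self, x):
            # Label x with the nearest covering cluster, or open a new
            # cluster for a novel object.
            x = np.asarray(x, dtype=float)
            dists = [np.linalg.norm(x - c) for c in self.centres]
            i = int(np.argmin(dists))
            if dists[i] <= self.radii[i]:
                return self.labels[i]
            novel = f"novel_{self._novel_id}"
            self._novel_id += 1
            self.centres.append(x)
            self.radii.append(float(np.mean(self.radii)))  # heuristic size
            self.labels.append(novel)
            return novel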

    Facing the partner influences exchanges in force

    Takagi A, Bagnato C, Burdet E. Facing the partner influences exchanges in force. Scientific Reports. 2016;6(1): 35397.

    Bimanual coordination during a physically coupled task in unilateral spastic cerebral palsy children

    Mutalib SA, Mace M, Burdet E. Bimanual coordination during a physically coupled task in unilateral spastic cerebral palsy children. Journal of NeuroEngineering and Rehabilitation. 2019;16(1): 1.

    Coupled pairs do not necessarily interact

    Previous studies that examined paired sensorimotor interaction suggested that rigidly coupled partners negotiate roles through the coupling force [1-3]. As a result, several human-robot interaction strategies have been developed with such explicit role distribution [4-6]. However, the evidence for role formation in human pairs is missing. To understand how rigidly coupled pairs negotiate roles through the coupling, we systematically examined rigidly coupled pairs making point-to-point reaching movements. Our results reveal that the coupling force is consistent throughout the movement, from the very beginning of the interaction. Do partners somehow negotiate roles prior to interacting? A more likely explanation is that the coupling force is a by-product of two people who independently planned their reaching movements. We developed a computational model of two independent motion planners, which explains the inter-pair variability of the coupling force (see the sketch below the references). We demonstrate that the coupling force alone is an unreliable measure of interaction, and that coupled reaching is not a suitable task for examining sensorimotor interaction between humans.
    [1] Reed KB, Peshkin M (2008), IEEE Trans Haptics 1: 108-20.
    [2] Stefanov N, Peer A, Buss M (2009), Proc WorldHaptics: 51-6.
    [3] van der Wel RPRD, Knoblich G, Sebanz N (2011), J Exp Psychol 37: 1420-31.
    [4] Evrard P, Kheddar A (2009), Proc WorldHaptics: 45-50.
    [5] Oguz S, Kucukyilmaz A, Sezgin T, Basdogan C (2010), Proc WorldHaptics: 371-8.
    [6] Mörtl A, Lawitzky M, Kucukyilmaz A, Sezgin M, Basdogan C, Hirche S (2012), Int J of Robotics Research 31(13): 1656-74.
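
    A toy Python version of the two-independent-planners account (the stiffness, amplitude, and duration values are arbitrary illustration choices): each partner follows their own minimum-jerk reach, and the coupling force is simply the rigid spring acting on the small difference between the two plans.

    import numpy as np

    def minimum_jerk(t, T, start, end):
        # Minimum-jerk position profile from start to end over duration T.
        s = np.clip(t / T, 0.0, 1.0)
        return start + (end - start) * (10 * s**3 - 15 * s**4 + 6 * s**5)

    t = np.linspace(0.0, 1.0, 200)
    # Each partner independently plans the same nominal reach, with
    # slightly different durations (inter-individual variability).
    xa = minimum_jerk(t, T=0.95, start=0.0, end=0.15)   # partner A, m
    xb = minimum_jerk(t, T=1.05, start=0.0, end=0.15)   # partner B, m

    k = 2000.0                       # assumed coupling stiffness, N/m
    coupling_force = k * (xa - xb)   # force transmitted by the coupling

    print(f"peak coupling force: {np.abs(coupling_force).max():.2f} N")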

    Integrated canopy, building energy and radiosity model for 3D urban design

    We present an integrated, three-dimensional model of urban canopy, building energy, and radiosity for early-stage urban design, and test it on four urban morphologies. All sub-models share a common description of the urban morphology, similar to 3D urban design master plans, and have simple parameters. The canopy model is a multilayer model with a new discrete-layer approach that does not rely on simplified geometry such as canyons or regular arrays. The building energy model is a simplified RC-equivalent model with no assumptions about internal zoning or wall composition. We use the CitySim software for the radiosity model. We study the effects of convexity, the number of buildings, and building height at constant density and thermal characteristics. Our results suggest that careful three-dimensional morphology design can reduce heat demand by a factor of 2, especially by improving the insolation of lower levels. The most energy-efficient morphology in our simulations has both the highest surface-to-volume ratio and the largest impact on the urban climate.
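
    For intuition about a simplified RC-equivalent building energy model of the kind mentioned above (the single-zone structure and parameter values below are assumptions for illustration, not the paper's model), a lumped resistance-capacitance zone can be stepped forward with an ideal heater that balances conductive losses:

    import numpy as np

    R = 0.005      # envelope thermal resistance to outdoors, K/W (assumed)
    C = 5e7        # lumped zone thermal capacitance, J/K (assumed)
    dt = 3600.0    # time step, s (1 hour)

    T_in = 20.0    # initial indoor temperature, degC
    T_out = 5.0 + 5.0 * np.sin(2 * np.pi * np.arange(48) / 24)  # outdoor, degC

    heat_demand_kwh = 0.0
    for T_o in T_out:
        loss = (T_in - T_o) / R            # conductive loss to outdoors, W
        heating = max(loss, 0.0)           # ideal heater; no active cooling
        T_in += dt * (heating - loss) / C  # explicit Euler step on zone temp
        heat_demand_kwh += heating * dt / 3.6e6

    print(f"48 h heat demand: {heat_demand_kwh:.1f} kWh")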