
    A Certified-Complete Bimanual Manipulation Planner

    Planning motions for two robot arms to move an object collaboratively is a difficult problem, mainly because of the closed-chain constraint, which arises whenever two robot hands simultaneously grasp a single rigid object. In this paper, we propose a manipulation planning algorithm to bring an object from an initial stable placement (position and orientation of the object on the support surface) to a goal stable placement. The key feature of our algorithm is that it is certified-complete: for a given object and a given environment, we provide a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. Moreover, the certificate is constructive: at run-time, it can be used to quickly find a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture. Comment: 12 pages, 7 figures, 1 table
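    To make the idea of a constructive certificate concrete, here is a minimal sketch (in Python, not the paper's code) of one way such a scheme could be realized: precompute a roadmap over discrete placement and grasp classes, check its connectivity offline as the certificate, and answer queries by path search at run-time. The names placements, grasps and transfer_feasible are hypothetical stand-ins; transfer_feasible abstracts the expensive closed-chain motion check, and regrasps at a stable placement are assumed feasible for simplicity.

        import itertools
        import networkx as nx

        def build_roadmap(placements, grasps, transfer_feasible):
            # Offline: nodes are (placement class, grasp class) pairs.
            g = nx.Graph()
            g.add_nodes_from((p, k) for p in placements for k in grasps)
            for p in placements:
                for k1, k2 in itertools.combinations(grasps, 2):
                    g.add_edge((p, k1), (p, k2))    # regrasp at a stable placement
            for p, q in itertools.combinations(placements, 2):
                for k in grasps:
                    if transfer_feasible(p, q, k):  # closed-chain transfer check
                        g.add_edge((p, k), (q, k))
            return g

        def certificate_holds(g, placements, grasps):
            # Certificate: every pair of stable placements is connected.
            return all(nx.has_path(g, (p, grasps[0]), (q, grasps[0]))
                       for p, q in itertools.combinations(placements, 2))

        # Run-time: a query is answered by shortest-path search on g, e.g.
        # nx.shortest_path(g, (start, grasps[0]), (goal, grasps[0]))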

    Bimanual prehension to a solitary target

    Grasping and functionally interacting with a relatively large or awkwardly shaped object requires the independent and cooperative coordination of both limbs. Acknowledging the vital role of visual information in successfully executing any prehensile movement, the present study aimed to clarify how well existing bimanual coordination models (Kelso et al., 1979; Marteniuk & Mackenzie, 1980) can account for bimanual prehension movements targeting a single end-point under varying visual conditions. We therefore employed two experiments in which vision of the target object and limbs was available or unavailable during a bimanual movement, in order to determine the effects of visually guided versus memory-guided control (i.e., feedback vs. feedforward) on limb coordination.

    Ten right-handed participants (mean age = 24.5) performed a specific bimanual prehension movement targeting a solitary, static object under both visual closed-loop (CL) and open-loop 2 s delay (OL2) conditions. Target location was varied while target amplitude remained constant. Kinematic data (bimanual coupling variables) indicated that, regardless of target location, participants employed one of two highly successful movement execution strategies depending on visual feedback availability. During visual (CL) conditions, participants employed a ‘dominant-hand initiation’ strategy characterized by a significantly faster right-hand (RH) reaction time and simultaneous hand contact with the target. In contrast, when no visual feedback was available (OL2), participants utilized a ‘search and follow’ strategy characterized by limb coupling at movement onset and a reliance on the dominant RH to contact the target ~62 ms before the left.

    In conclusion, the common goal parameters of targeting a single object with both hands are maintained and successfully achieved regardless of visual condition. Furthermore, independent programming of each limb is clearly evident in the observed behaviours, providing support for the neural cross-talk theory of bimanual coordination (Marteniuk & Mackenzie, 1980). Whether movement execution is visually guided (CL) or memory-guided (OL2), there is a clear preference for RH utilization, possibly due to its dynamic and/or hemispheric advantages in controlling complex motor behaviours (Gonzalez et al., 2006). We therefore propose that bimanual grasping to a solitary target is governed globally by a higher-level structure, with successful execution achieved via independent spinal-pathway modulation of each limb.
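    As a hedged illustration of the kind of coupling variables reported above (a sketch only, not the study's analysis code; the sampling interval and onset threshold are assumptions), reaction-time asymmetry and onset coupling can be computed from wrist-speed profiles:

        import numpy as np

        def reaction_time(speed, dt=0.004, thresh=0.05):
            # Time of the first sample where wrist speed (m/s) exceeds the
            # onset threshold; assumes a movement does occur in the trial.
            return int(np.argmax(speed > thresh)) * dt

        def coupling_measures(left_speed, right_speed, dt=0.004):
            rt_left = reaction_time(left_speed, dt)
            rt_right = reaction_time(right_speed, dt)
            return {
                "rt_asymmetry_s": rt_left - rt_right,           # > 0: RH leads
                "onset_coupled": abs(rt_left - rt_right) < 0.02,
            }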

    Stabilize to Act: Learning to Coordinate for Bimanual Manipulation

    Key to rich, dexterous manipulation in the real world is the ability to coordinate control across two hands. However, while the promise afforded by bimanual robotic systems is immense, constructing control policies for dual-arm autonomous systems brings inherent difficulties. One such difficulty is the high dimensionality of the bimanual action space, which adds complexity to both model-based and data-driven methods. We counteract this challenge by drawing inspiration from humans to propose a novel role assignment framework: a stabilizing arm holds an object in place to simplify the environment while an acting arm executes the task. We instantiate this framework with BimanUal Dexterity from Stabilization (BUDS), which uses a learned restabilizing classifier to alternate between updating a learned stabilization position to keep the environment unchanged, and accomplishing the task with an acting policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of varying complexity on real-world robots, such as zipping jackets and cutting vegetables. Given only 20 demonstrations, BUDS achieves 76.9% task success across our task suite, and generalizes to out-of-distribution objects within a class with a 52.7% success rate. Owing to the precision these complex tasks require, BUDS is 56.0% more successful than an unstructured baseline that instead learns a BC stabilizing policy. Supplementary material and videos can be found at https://sites.google.com/view/stabilizetoact. Comment: Conference on Robot Learning, 2023
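    A minimal sketch of the alternation the abstract describes (not the BUDS implementation; the classifier, policies and environment API below are hypothetical stand-ins):

        def buds_control_loop(env, restabilize, stabilization_policy,
                              acting_policy, max_steps=500):
            # One arm holds a learned stabilization pose; the other acts.
            obs = env.reset()
            hold_pose = stabilization_policy(obs)       # initial hold point
            for _ in range(max_steps):
                if restabilize(obs):                    # environment drifted?
                    hold_pose = stabilization_policy(obs)
                stab_action = {"type": "hold", "pose": hold_pose}
                act_action = acting_policy(obs)         # learned from demos
                obs, done = env.step(stab_action, act_action)
                if done:
                    break
            return obs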

    Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargeting Human Commands to Feasible Robot Control References

    This article presents a robust and reliable human–robot collaboration (HRC) framework for bimanual manipulation. We propose an optimal motion adaptation method to retarget arbitrary human commands to feasible robot pose references while maintaining payload stability. The framework comprises three modules: 1) a task-space sequential equilibrium and inverse kinematics optimization (task-space SEIKO) for retargeting human commands and enforcing feasibility constraints, 2) an admittance controller to facilitate compliant human–robot physical interactions, and 3) a low-level controller that improves stability during physical interactions. Experimental results show that the proposed framework successfully adapted infeasible and dangerous human commands into continuous motions within safe boundaries, and achieved stable grasping and maneuvering of large and heavy objects on a real dual-arm robot via teleoperation and physical interaction. Furthermore, the framework demonstrated its capability in a building-block assembly task and an industrial power-connector insertion task.
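    For module 2), a generic admittance law illustrates the idea (a sketch under assumed gains and a simplified translational case, not the article's controller): measured interaction forces are mapped to a compliant offset on the retargeted pose reference.

        import numpy as np

        class AdmittanceController:
            # Offset dynamics per axis: M*a + D*v + K*x = f_ext
            def __init__(self, m=2.0, d=20.0, k=50.0, dt=0.002):
                self.m, self.d, self.k, self.dt = m, d, k, dt
                self.x = np.zeros(3)    # translational offset
                self.v = np.zeros(3)

            def step(self, f_ext):
                f = np.asarray(f_ext, dtype=float)
                a = (f - self.d * self.v - self.k * self.x) / self.m
                self.v += a * self.dt
                self.x += self.v * self.dt
                return self.x           # add to the retargeted reference pose

    The compliant command sent to the low-level controller would then be the retargeted reference shifted by this offset.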

    Using movement kinematics to understand the motor side of Autism Spectrum Disorder

    Besides core deficits in social interaction and communication, atypical motor patterns have often been reported in people with Autism Spectrum Disorder (ASD). It has recently been speculated that part of these sensorimotor abnormalities could be better explained by considering prospective motor control (i.e., the ability to plan actions toward future events or consider future task demands), which has been hypothesized to be crucial for higher mind functions (e.g., understanding the intentions of other people) (Trevarthen and Delafield-Butt 2013). The aim of the current dissertation was to tackle the motor ‘side’ of ASD by exploring whether and how prospective motor control might be atypical in children with a diagnosis of autism, given that actions are directed into the future and their control is based on knowledge of what is going to happen next (von Hofsten and Rosander 2012). To do this, an integrative approach based on neuropsychological assessment, behavioural paradigms and machine-learning modelling of kinematics recorded with motion-capture techniques was applied to typically developing children and children with ASD without accompanying intellectual impairment.
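    As a hedged sketch of what machine-learning modelling of reach kinematics can look like in practice (illustrative only; the features, data shapes and classifier are assumptions, not the dissertation's pipeline):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def kinematic_features(trajectory, dt=0.01):
            # trajectory: (T, 3) wrist positions from motion capture
            speed = np.linalg.norm(np.diff(trajectory, axis=0), axis=1) / dt
            return np.array([
                speed.max(),             # peak velocity
                speed.argmax() * dt,     # time to peak velocity
                speed.sum() * dt,        # path length
            ])

        # X: one feature row per trial; y: 1 = ASD, 0 = typically developing
        # scores = cross_val_score(RandomForestClassifier(), X, y, cv=5)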

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the ‘experimenter’, and Mary, the ‘computational modeller’. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modelling and, conversely, the impact that computational modelling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Trying to Grasp a Sketch of a Brain for Grasping

    Ritter H, Haschke R, Steil JJ. Trying to Grasp a Sketch of a Brain for Grasping. In: Sendhoff B, ed. Creating Brain-Like Intelligence. Lecture Notes in Artificial Intelligence; 5436. Berlin, Heidelberg: Springer; 2009: 84-102.

    An evaluation of asymmetric interfaces for bimanual virtual assembly with haptics

    Immersive computing technology provides a human–computer interface that supports natural human interaction with digital data and models. One application for this technology is product assembly methods planning and validation. This paper presents the results of a user study which explores the effectiveness of various bimanual interaction device configurations for virtual assembly tasks. Participants completed two assembly tasks with two device configurations in five randomized bimanual treatment conditions (within subjects). A Phantom Omni®, with and without haptics enabled, and a 5DT Data Glove were used. Participant performance, measured as time to assemble, was the evaluation metric. The results revealed no significant difference in performance between the five treatment conditions. However, half of the participants chose the 5DT Data Glove paired with the haptics-enabled Phantom Omni® as their preferred device configuration. In addition, qualitative comments supported both a preference for haptics during the assembly process and the predictions of Guiard’s kinematic chain model.

    Gamification of assembly planning in virtual environment

    Purpose: The purpose of this paper is to study the effect of the gamification of virtual assembly planning on user performance, user experience and engagement.

    Design/methodology/approach: A multi-touch table was used to manipulate virtual parts, and gamification features were integrated into the virtual assembly environment. An experiment was conducted in two conditions: a gamified and a non-gamified virtual environment. Subjects had to assemble a virtual pump. User performance was evaluated in terms of the number of errors, the feasibility of the generated assembly sequence and the user feedback.

    Findings: The gamification reduced the number of errors and increased the score representing the number of right decisions. The results of the subjective and objective analyses showed that the number of errors decreased with engagement in the gamified assembly, and that the increase in overall user experience reduced the number of errors. The subjective evaluation showed a significant difference between the gamified and the non-gamified assembly in terms of the level of engagement, the learning usability and the overall experience.

    Research limitations/implications: Effective learning retention after training has not been tested, and longitudinal studies are necessary. The effect of the gamification elements used has been evaluated as a whole; further work could isolate the most beneficial features and add other elements that might be more beneficial for learning.

    Originality/value: The research reported in this paper provides valuable insights into the gamification of virtual assembly using a low-cost multi-touch interface. The results are promising for training operators to assemble a product at the design stage.