32,865 research outputs found

    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consists of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending, and hand pressure) generated by the action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features. The primitive features can in turn be used as an abstract representation of the multi-modal signal and employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image-classification deep convolutional neural network and are subsequently used to train a classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that both approaches produce comparable performance. This implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, in order to provide training data from which the robot can learn how to perform object manipulation actions, multi-modal data provides a better alternative.
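
    Below is a minimal sketch of the second approach, assuming a ResNet-18 backbone from torchvision as the pre-trained CNN and an SVM as the downstream classifier; the abstract does not name the exact architecture or classifier, so these choices are illustrative only.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import SVC

    # Pre-trained backbone with its classification head removed, used purely as a
    # frame-level feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # keep the 512-d pooled features
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def visual_features(frames):
        """Map a list of PIL video frames to one 512-d feature vector per frame."""
        batch = torch.stack([preprocess(f) for f in frames])
        return backbone(batch).numpy()

    def train_action_classifier(X, y):
        """X: per-clip features, e.g. frame features averaged over a clip, optionally
        concatenated with primitive features from the other modalities; y: labels."""
        clf = SVC(kernel="rbf")  # hypothetical classifier choice
        clf.fit(X, y)
        return clf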

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has succeeded in producing highly capable robot motor systems, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has achieved great progress, but still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme which goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic, time-critical maneuvers, complex contact control, and the handling of objects that are partly soft and partly rigid. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.
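
    As a structural illustration of such a scheme, the sketch below combines the three inputs named above: human definition supplies task phases and acceptance criteria, demonstrations initialize a per-phase motion model, and human evaluation gates policy updates. All names and the update rule are hypothetical simplifications, not the paper's formulation.

    from dataclasses import dataclass, field
    import numpy as np

    def average_trajectory(demos, phase):
        """Placeholder demonstration model: mean of the demo segments for a phase."""
        return np.mean([d[phase] for d in demos], axis=0)

    @dataclass
    class CompositeLearner:
        """Hypothetical learner combining definition, demonstration, and evaluation."""
        constraints: dict                      # human *definition*: phases, accept score
        policy: dict = field(default_factory=dict)

        def learn_from_demonstration(self, demos):
            # Initialize one motion model per task phase from human demonstrations.
            for phase in self.constraints["phases"]:
                self.policy[phase] = average_trajectory(demos, phase)

        def refine_from_evaluation(self, candidate_policy, human_score):
            # Human *evaluation* acts as a sparse reward: adopt a perturbed policy
            # only when the human rates the resulting rollout highly enough.
            if human_score >= self.constraints["accept_score"]:
                self.policy = candidate_policy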

    A study of event traffic during the shared manipulation of objects within a collaborative virtual environment

    Event management must balance consistency and responsiveness against the requirements of shared object interaction within a Collaborative Virtual Environment (CVE) system. An understanding of the event traffic during collaborative tasks helps in the design of all aspects of a CVE system. The application, user activity, the display interface, and the network resources all play a part in determining the characteristics of event management. Linked cubic displays lend themselves well to supporting natural social human communication between remote users. To allow users to communicate naturally and subconsciously, continuous and detailed tracking is necessary. This, however, is hard to balance with the real-time consistency constraints of general shared object interaction. This paper aims to explain these issues through a detailed examination of the event traffic produced by a typical CVE, using both immersive and desktop displays, while supporting a variety of collaborative activities. We analyze event traffic during a highly collaborative task requiring various forms of shared object manipulation, including the concurrent manipulation of a shared object. Event sources are categorized, and the influence of both the form of object sharing and the display device interface is detailed. With these findings, the paper aims to aid the design of future systems.
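
    A minimal sketch of the kind of traffic analysis described: events from a session log are grouped by source category and summarized as per-second rates. The Event type and the category names are illustrative assumptions, not the paper's actual schema.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Event:
        timestamp: float   # seconds since session start
        source: str        # e.g., "head_tracking", "hand_tracking", "object_update"

    def event_rates(events, duration_s):
        """Events per second, broken down by source category."""
        counts = Counter(e.source for e in events)
        return {src: n / duration_s for src, n in counts.items()}

    # Example: continuous tracking typically dominates shared-object updates.
    log = [Event(0.1, "head_tracking"), Event(0.2, "head_tracking"),
           Event(0.5, "object_update")]
    print(event_rates(log, duration_s=1.0))  # {'head_tracking': 2.0, 'object_update': 1.0}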

    Dexterous Manipulation Graphs

    We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and to reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot that move the object held by the end-effector. We use a dual-arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.
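
    A minimal sketch of planning over such a graph, assuming nodes are discrete in-hand configurations and edges are labeled with manipulation primitives; breadth-first search then yields a shortest primitive sequence. The encoding is an illustrative simplification of the paper's Dexterous Manipulation Graph.

    from collections import deque

    def plan_primitives(graph, start, goal):
        """graph: {config: [(primitive_name, next_config), ...]}.
        Returns the shortest list of primitives from start to goal, or None."""
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            config, plan = queue.popleft()
            if config == goal:
                return plan
            for primitive, nxt in graph.get(config, []):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, plan + [primitive]))
        return None

    # Toy example: slide to a regrasp, then pivot to reach the desired end pose.
    g = {"grasp_A": [("slide", "grasp_B")],
         "grasp_B": [("pivot", "goal_pose")]}
    print(plan_primitives(g, "grasp_A", "goal_pose"))  # ['slide', 'pivot']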

    Compound droplet manipulations on fiber arrays

    Recent works demonstrated that fiber arrays may constitute the basis of an open digital microfluidics. Various processes, such as droplet motion, fragmentation, trapping, release, mixing, and encapsulation, may be achieved on fiber arrays. However, handling a large number of tiny droplets resulting from the mixing of several liquid components is still a challenge for developing microreactors, smart sensors, or microemulsifying drugs. Here, we show that the manipulation of tiny droplets on fiber networks allows for creating compound droplets with a high level of complexity. Moreover, this cost-effective and flexible method may also be implemented with optical fibers in order to develop fluorescence-based biosensors.

    Next generation space robot

    Recent research efforts on next-generation space robots are presented. The goals of this research are to develop the fundamental technologies and to acquire the design parameters of the next-generation space robot. Visual sensing and perception, dexterous manipulation, man-machine interfaces, and artificial intelligence techniques such as task planning are identified as the key technologies.

    Learning Dynamic Robot-to-Human Object Handover from Human Feedback

    Object handover is a basic, but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as a collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge here is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and enable the robot to adapt to human preferences and object characteristics, such as shape, weight, and surface texture.
    Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
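
    A minimal sketch of contextual policy search with noisy human feedback, in the spirit of the formulation above: the robot conditions handover parameters on a context (e.g., runner speed), samples perturbed parameters, and refits its policy toward trials the human rated well. The linear-Gaussian policy and the reward-weighted update are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    W = np.zeros((2, 1))   # maps a 1-d context to 2-d handover parameters
    sigma = 0.3            # exploration noise

    def sample_params(context):
        """Sample handover parameters (e.g., release timing, arm speed) for a context."""
        mean = W @ context
        return mean + sigma * rng.standard_normal(mean.shape)

    def update(params, contexts, ratings):
        """Reward-weighted regression: refit W toward highly rated trials.
        Ratings are noisy human scores standing in for the latent reward."""
        global W
        w = np.exp(np.asarray(ratings) - max(ratings))   # soft preference weights
        C = np.hstack(contexts)                          # shape (1, n)
        P = np.hstack(params)                            # shape (2, n)
        W = (P * w) @ C.T @ np.linalg.inv((C * w) @ C.T + 1e-6 * np.eye(1))

    # One learning iteration over three hypothetical handover trials.
    ctxs = [np.array([[1.0]]), np.array([[1.5]]), np.array([[2.0]])]  # runner speeds
    thetas = [sample_params(c) for c in ctxs]
    scores = [0.2, 0.9, 0.4]   # noisy human scores for each attempt
    update(thetas, ctxs, scores)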