
    Human Arm simulation for interactive constrained environment design

    During the conceptual and prototype design stages of an industrial product, it is crucial to take assembly/disassembly and maintenance operations into account in advance. A well-designed system should give operating manipulators relatively easy access in the constrained environment and reduce the risk of musculoskeletal disorders during manual handling operations. Trajectory planning is an important issue for assembly and maintenance operations in a constrained environment, since it determines accessibility and other ergonomics factors, such as muscle effort and the fatigue it causes. In this paper, a customer-oriented interactive approach is proposed to partially address ergonomics-related issues encountered during the design stage of a constrained system, for the operator's convenience. Based on a single-objective optimization method, trajectory planning for different operators can be generated automatically. Meanwhile, a motion-capture-based method lets the operator guide the trajectory planning interactively whenever the single-objective optimization reaches a local minimum, or whenever the operator prefers to guide the virtual human manually. In addition, a physics engine is integrated into the approach to provide physically realistic simulation in real time, so that collision-free paths and the related dynamic information can be computed to determine muscle fatigue and assess the accessibility of a product design.
    Comment: International Journal on Interactive Design and Manufacturing (IJIDeM) (2012) 1-12. arXiv admin note: substantial text overlap with arXiv:1012.432
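    As a rough illustration of the single-objective trajectory-optimization idea described above, the sketch below plans joint-space waypoints for a toy 2-DoF arm by minimizing a smoothness cost under joint limits. The model, cost term, dimensions, and limits are illustrative assumptions only; the paper's actual approach plans for a full virtual human, uses ergonomic cost terms such as muscle effort, and relies on a physics engine for collision checking.

        # Minimal sketch, assuming a planar 2-DoF arm and a smoothness cost
        # as a stand-in for the paper's effort/fatigue-related objective.
        import numpy as np
        from scipy.optimize import minimize

        N_WAYPOINTS = 10                   # discretised trajectory length (assumed)
        Q_START = np.array([0.0, 0.0])     # start joint configuration (assumed)
        Q_GOAL = np.array([1.2, -0.5])     # goal joint configuration (assumed)
        Q_MIN, Q_MAX = -np.pi, np.pi       # joint limits (assumed)

        def cost(x):
            """Single objective: sum of squared joint steps (smoothness)."""
            q = np.vstack([Q_START, x.reshape(-1, 2), Q_GOAL])
            return np.sum(np.diff(q, axis=0) ** 2)

        # Initialise interior waypoints on a straight line in joint space.
        x0 = np.linspace(Q_START, Q_GOAL, N_WAYPOINTS + 2)[1:-1].ravel()
        bounds = [(Q_MIN, Q_MAX)] * x0.size
        res = minimize(cost, x0, bounds=bounds)
        trajectory = np.vstack([Q_START, res.x.reshape(-1, 2), Q_GOAL])
        print(trajectory.round(3))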

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    In this paper, we aim to improve human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as early as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a workbench, in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. A motion-intention prediction level and a motion-direction prediction level have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data, following an evidence accumulation approach, to predict the reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The outputs of the two predictors are used as external inputs to a Finite State Machine (FSM) that controls the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% at 160.0 ± 80.0 ms after movement onset.
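    As a hedged sketch of the GMM-plus-evidence-accumulation scheme described above, the code below trains one Gaussian mixture per reaching direction and accumulates per-class log-likelihoods over a streaming feature sequence, stopping as soon as one class leads by a margin. The feature layout, class count, synthetic data, and fixed margin are all assumptions; the paper fuses real IMU and EMG features and uses dynamic stopping criteria tuned per application.

        # Illustrative sketch under assumed class/feature dimensions.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        N_DIRECTIONS, N_FEATURES = 4, 6    # assumed reach targets / feature size

        # Train one GMM per direction on synthetic stand-ins for IMU+EMG features.
        models = []
        for k in range(N_DIRECTIONS):
            X = rng.normal(loc=k, scale=1.0, size=(200, N_FEATURES))
            models.append(GaussianMixture(n_components=2, random_state=0).fit(X))

        def predict_streaming(samples, margin=5.0):
            """Accumulate per-class log-likelihood sample by sample; stop
            once the best class leads the runner-up by `margin`."""
            evidence = np.zeros(N_DIRECTIONS)
            for t, x in enumerate(samples):
                evidence += [m.score_samples(x[None, :])[0] for m in models]
                ranked = np.sort(evidence)
                if ranked[-1] - ranked[-2] > margin:
                    return int(np.argmax(evidence)), t + 1  # class, samples used
            return int(np.argmax(evidence)), len(samples)

        stream = rng.normal(loc=2, scale=1.0, size=(50, N_FEATURES))
        print(predict_streaming(stream))

    Raising the margin trades later decisions for higher accuracy, which mirrors the anticipation/accuracy trade-off the abstract describes.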

    The Whole World in Your Hand: Active and Interactive Segmentation

    Object segmentation is a fundamental problem in computer vision and a powerful resource for development. This paper presents three embodied approaches to the visual segmentation of objects. Each approach is aided by the presence of a hand or arm in the proximity of the object to be segmented. The first approach is suitable for a robotic system, where the robot can use its arm to evoke object motion. The second method operates on a wearable system, viewing the world from a human's perspective, with instrumentation to help detect and segment objects that are held in the wearer's hand. The third method applies when observing a human teacher, locating periodic motion (finger/arm/object waving or tapping) and using it as a seed for segmentation. We show that object segmentation can serve as a key resource for development by demonstrating methods that exploit high-quality object segmentations to develop both low-level vision capabilities (specialized feature detectors) and high-level vision capabilities (object recognition and localization).
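    The third approach above seeds segmentation from periodic motion. A minimal sketch of that idea, under assumed frame sizes and an illustrative frequency band and threshold, is to flag pixels whose temporal intensity spectrum concentrates energy at plausible waving/tapping rates; the paper's actual detector and the downstream segmentation are not reproduced here.

        # Sketch: mark pixels whose intensity oscillates periodically.
        import numpy as np

        def periodic_seed_mask(frames, fps=30.0, f_lo=1.0, f_hi=5.0, thresh=0.5):
            """frames: (T, H, W) grayscale stack.
            Returns a boolean (H, W) mask of strongly periodic pixels."""
            T = frames.shape[0]
            spectrum = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0))
            freqs = np.fft.rfftfreq(T, d=1.0 / fps)
            band = (freqs >= f_lo) & (freqs <= f_hi)    # plausible waving rates
            band_energy = spectrum[band].sum(axis=0)
            total_energy = spectrum[1:].sum(axis=0) + 1e-9  # skip the DC term
            return band_energy / total_energy > thresh

        # Synthetic demo: a 10x10 patch oscillating at 2 Hz in a noisy scene.
        T, H, W = 90, 64, 64
        t = np.arange(T) / 30.0
        frames = np.random.rand(T, H, W) * 0.1
        frames[:, 20:30, 20:30] += 0.5 * np.sin(2 * np.pi * 2.0 * t)[:, None, None]
        print(periodic_seed_mask(frames).sum(), "candidate seed pixels")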

    Goal Set Inverse Optimal Control and Iterative Re-planning for Predicting Human Reaching Motions in Shared Workspaces

    To enable safe and efficient human-robot collaboration in shared workspaces, it is important for the robot to predict how a human will move when performing a task. While predicting human motion for tasks not known a priori is very challenging, we argue that single-arm reaching motions for known tasks in collaborative settings (which are especially relevant for manufacturing) are indeed predictable. Two hypotheses underlie our approach for predicting such motions: first, that the trajectory the human performs is optimal with respect to an unknown cost function, and second, that human adaptation to a partner's motion can be captured well through iterative re-planning with that cost function. The key to our approach is thus to learn a cost function that "explains" the motion of the human. To do this, we gather example trajectories from pairs of participants performing a collaborative assembly task using motion capture. We then use Inverse Optimal Control to learn a cost function from these trajectories. Finally, we predict reaching motions from the human's current configuration to a task-space goal region by iteratively re-planning a trajectory using the learned cost function. Our planning algorithm is based on the trajectory optimizer STOMP; it plans for a 23-DoF human kinematic model and accounts for the presence of a moving collaborator and obstacles in the environment. Our results suggest that in most cases our method outperforms baseline methods when predicting motions. We also show that our method outperforms baselines for predicting human motion when a human and a robot share the workspace.
    Comment: 12 pages. Accepted for publication in IEEE Transactions on Robotics 201
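    As an illustrative sketch of iterative re-planning under a learned cost, the code below re-optimizes the remaining path at every step as a simulated collaborator moves. The linear cost weights stand in for those that Inverse Optimal Control would learn, and a 2-D point replaces the paper's 23-DoF human model and STOMP optimizer; every name and value here is an assumption, not the authors' implementation.

        # Sketch: re-plan under a fixed, made-up linear cost each timestep.
        import numpy as np
        from scipy.optimize import minimize

        w = np.array([1.0, 4.0])           # [smoothness, clearance] weights (assumed)
        GOAL = np.array([1.0, 1.0])        # task-space goal (assumed)
        N_PTS = 8                          # interior waypoints per plan

        def plan(start, partner_pos):
            def cost(x):
                path = np.vstack([start, x.reshape(-1, 2), GOAL])
                smooth = np.sum(np.diff(path, axis=0) ** 2)
                # Penalise proximity to the moving collaborator.
                near = np.exp(-np.sum((path - partner_pos) ** 2, axis=1))
                return w[0] * smooth + w[1] * np.sum(near)
            x0 = np.linspace(start, GOAL, N_PTS + 2)[1:-1].ravel()
            return minimize(cost, x0).x.reshape(-1, 2)

        # Re-plan as the predicted human executes and the partner moves.
        pos = np.zeros(2)
        for step in range(5):
            partner = np.array([0.5, 0.2 + 0.1 * step])  # simulated partner motion
            path = plan(pos, partner)
            pos = path[0]                                 # execute first waypoint
            print(step, pos.round(3))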