2,483 research outputs found

    Anticipation in Human-Robot Cooperation: A Recurrent Neural Network Approach for Multiple Action Sequences Prediction

    Full text link
    Close human-robot cooperation is a key enabler for new developments in advanced manufacturing and assistive applications. Close cooperation requires robots that can predict human actions and intent, and understand human non-verbal cues. Recent approaches based on neural networks have led to encouraging results on the human action prediction problem in both continuous and discrete spaces. Our approach extends the research in this direction. Our contributions are three-fold. First, we validate the use of gaze and body pose cues as a means of predicting human action through a feature selection method. Next, we address two shortcomings of the existing literature: predicting multiple and variable-length action sequences. This is achieved by introducing an encoder-decoder recurrent neural network topology to the discrete action prediction problem. In addition, we theoretically demonstrate the importance of predicting multiple action sequences as a means of estimating the stochastic reward in a human-robot cooperation scenario. Finally, we show the ability to effectively train the prediction model on an action prediction dataset involving human motion data, and explore the influence of the model's parameters on its performance. Source code repository: https://github.com/pschydlo/ActionAnticipation
    Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018, Accepted
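    The abstract names an encoder-decoder recurrent topology that decodes variable-length discrete action sequences; below is a minimal sketch of that idea in PyTorch. The GRU choice, layer sizes, and the `ActionSeqModel` name are illustrative assumptions, not the authors' implementation (see their repository for that).

    ```python
    import torch
    import torch.nn as nn

    class ActionSeqModel(nn.Module):
        """Hypothetical encoder-decoder RNN for discrete action sequences.

        The encoder summarizes a window of observed cues (e.g. gaze/pose
        features); the decoder unrolls one action token at a time, which
        allows variable-length predictions.
        """

        def __init__(self, feat_dim, n_actions, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
            self.embed = nn.Embedding(n_actions, hidden)
            self.decoder = nn.GRUCell(hidden, hidden)
            self.out = nn.Linear(hidden, n_actions)

        def forward(self, cues, max_len=10, sos_token=0):
            # Encode the observed cue sequence into a single hidden state.
            _, h = self.encoder(cues)          # h: (1, batch, hidden)
            h = h.squeeze(0)
            token = torch.full((cues.size(0),), sos_token, dtype=torch.long)
            logits = []
            for _ in range(max_len):
                # Feed the previous predicted action back in, step by step.
                h = self.decoder(self.embed(token), h)
                step = self.out(h)
                logits.append(step)
                token = step.argmax(dim=-1)    # greedy decoding for the sketch
            return torch.stack(logits, dim=1)  # (batch, max_len, n_actions)
    ```

    Decoding several candidate sequences instead of the single greedy one (e.g. via beam search) is what would let a robot estimate the stochastic reward over possible human action futures, as the abstract argues.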

    Investigating the Usability of Collaborative Robot control through Hands-Free Operation using Eye gaze and Augmented Reality

    Full text link
    This paper proposes a novel method for controlling a mobile robot using a head-mounted device. Conventionally, robots are operated using computers or a joystick, which limits usability and flexibility because the control equipment has to be carried by hand. This lack of flexibility may prevent workers from multitasking or carrying objects while operating the robot. To address this limitation, we propose a hands-free method to operate the mobile robot with human gaze in an Augmented Reality (AR) environment. The proposed work is demonstrated using the HoloLens 2 to control the mobile robot, Robotnik Summit-XL, through eye gaze in AR. Stable speed control and navigation of the mobile robot were achieved through admittance control computed from the gaze position. An experiment was conducted to compare the usability of the joystick and the proposed operation, and the results were validated through surveys (i.e., SUS, SEQ). The survey results from the participants after the experiments showed that the wearer of the HoloLens accurately operated the mobile robot in a collaborative manner. Both the joystick and the HoloLens were rated as easy to use with above-average usability. This suggests that the HoloLens can be used as a replacement for the joystick to allow hands-free robot operation and has the potential to increase the efficiency of human-robot collaboration in situations where hands-free controls are needed.
    Comment: Accepted for publication in the Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), 6 pages
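    As a rough illustration of admittance control driven by gaze: the gaze offset is treated as a virtual force F, and the commanded velocity follows the admittance law M·dv/dt + D·v = F. The gains, the gaze-to-force mapping, and the 2D setup below are assumptions for the sketch, not the paper's parameters.

    ```python
    import numpy as np

    M, D, K_GAZE = 2.0, 4.0, 1.5   # virtual mass, damping, gaze-offset gain
    DT = 0.02                      # control period [s]

    def admittance_step(v, gaze_point, robot_pos):
        """One control tick: treat the gaze offset as a virtual force and
        integrate M*dv/dt + D*v = F to get the next velocity command."""
        force = K_GAZE * (np.asarray(gaze_point) - np.asarray(robot_pos))
        return v + (DT / M) * (force - D * v)

    v = np.zeros(2)
    v = admittance_step(v, gaze_point=[1.0, 0.5], robot_pos=[0.0, 0.0])
    ```

    The first-order dynamics smooth out gaze jitter, which is one plausible reason an admittance law yields the "stable speed control" the abstract reports.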

    Human Movement Direction Classification using Virtual Reality and Eye Tracking

    Get PDF
    Collaborative robots are becoming increasingly popular in industry, providing flexibility and increased productivity for complex tasks. However, the robots are not yet truly interactive, since they cannot interpret humans and adapt to their behaviour, mainly due to limited sensory input. Rapidly expanding research fields that could make collaborative robots smarter through an understanding of the operator's intentions are virtual reality, eye tracking, big data, and artificial intelligence. Prediction of human movement intentions could be one way to improve these robots. This can be broken down into three stages, Stage One: Movement Direction Classification, Stage Two: Movement Phase Classification, and Stage Three: Movement Intention Prediction. This paper defines these stages and presents a solution to Stage One that shows that it is possible to collect gaze data and use it to classify a person's movement direction. The next step is naturally to develop the remaining two stages.
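    A minimal sketch of what Stage One could look like: classify movement direction from short windows of gaze samples. The window length, the summary features, and the random-forest classifier are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    WINDOW = 30  # gaze samples per window (~0.3 s at 100 Hz, assumed)

    def gaze_features(window):
        """Flatten a (WINDOW, 2) array of gaze (x, y) points into simple
        summary features: mean position and net displacement."""
        w = np.asarray(window)
        return np.concatenate([w.mean(axis=0), w[-1] - w[0]])

    # X: one feature row per window; y: direction label (e.g. 8 sectors).
    # Random placeholder data stands in for the collected gaze recordings.
    rng = np.random.default_rng(0)
    X = np.stack([gaze_features(rng.normal(size=(WINDOW, 2)))
                  for _ in range(200)])
    y = rng.integers(0, 8, size=200)
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    pred = clf.predict(X[:1])
    ```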

    Human Movement Direction Prediction using Virtual Reality and Eye Tracking

    Get PDF
    One way of potentially improving the use of robots in a collaborative environment is through prediction of human intention, which would give the robots insight into how the operators are about to behave. An important part of human behaviour is arm movement, and this paper presents a method to predict arm movement based on the operator's eye gaze. A test scenario was designed in order to gather coordinate-based hand movement data in a virtual reality environment. The results show that the eye gaze data can successfully be used to train an artificial neural network that is able to predict the direction of movement ~500 ms ahead of time.
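    As a rough illustration of the kind of network the abstract describes, here is a small feedforward classifier mapping a flattened gaze window to the direction the hand moves shortly afterwards. The layer sizes, window length, and number of direction classes are assumptions, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    WINDOW, N_DIRS = 30, 8  # assumed gaze window length and direction classes

    net = nn.Sequential(
        nn.Linear(WINDOW * 2, 64), nn.ReLU(),
        nn.Linear(64, N_DIRS),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(gaze_windows, direction_labels):
        """One gradient step on a batch of (batch, WINDOW, 2) gaze windows,
        labelled with the direction the hand moved ~500 ms later."""
        logits = net(gaze_windows.flatten(1))
        loss = loss_fn(logits, direction_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Example with random placeholder data:
    # train_step(torch.randn(32, WINDOW, 2), torch.randint(0, N_DIRS, (32,)))
    ```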

    How can human motion prediction increase transparency?

    Get PDF
    A major issue in the field of human-robot interaction for assistance to manipulation is transparency. This basic feature qualifies the capacity of a robot to follow human movements without any human-perceptible resistive forces. In this paper we address the issue of human motion prediction in order to increase the transparency of a robotic manipulator. Our aim is not to predict the motion itself, but to study how this prediction can be used to improve the robot's transparency. For this purpose, we have designed a setup for performing basic planar manipulation tasks involving movements that are demanded of the subject and are thus easily predictable. Moreover, we have developed a general controller which takes a predicted trajectory (recorded from offline free-motion experiments) as an input and feeds the robot motors with a weighted sum of three controllers: torque feedforward, variable stiffness control, and force feedback control. Subjects were then asked to perform the same task with or without the robot assistance (which was not visible to the subject), and with several sets of gains for the controller tuning. First results seem to indicate that when a predictive controller with open-loop torque feedforward is used, in conjunction with force-feedback control, the interaction forces are minimized. Therefore, the transparency is increased.
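    Schematically, the described controller blends the three terms into one motor torque. The sketch below shows that weighted sum; the gain values, the constant stiffness, and the sign conventions are assumptions for illustration (the paper tunes such gains experimentally).

    ```python
    import numpy as np

    def control_torque(q, q_pred, qdd_pred, f_meas,
                       w_ff=1.0, w_k=0.5, w_f=0.8, inertia=1.0):
        """Weighted sum of torque feedforward, (variable) stiffness control,
        and force feedback, following the predicted trajectory."""
        tau_ff = inertia * np.asarray(qdd_pred)   # feedforward from prediction
        stiffness = 10.0                          # "variable" would modulate this
        tau_k = stiffness * (np.asarray(q_pred) - np.asarray(q))
        tau_f = -np.asarray(f_meas)               # oppose interaction forces
        return w_ff * tau_ff + w_k * tau_k + w_f * tau_f
    ```

    Setting w_f high while tracking the predicted trajectory feedforward is what plausibly drives the human-perceptible resistive forces toward zero, i.e. higher transparency.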

    Intended Human Arm Movement Direction Prediction using Eye Tracking

    Get PDF
    Collaborative robots are becoming increasingly popular in industry, providing flexibility and increased productivity for complex tasks. However, the robots are still not interactive enough, since they cannot yet interpret humans and adapt to their behaviour, mainly due to limited sensory input. Prediction of human movement intentions could be one way to improve these robots. This paper presents a system that uses a recurrent neural network to predict the intended human arm movement direction, based solely on eye gaze, utilizing the notion of uncertainty to determine whether to trust a prediction or not. The network was trained with eye tracking data gathered in a virtual reality environment. The presented deep learning solution makes predictions on continuously incoming data and reaches an accuracy of 70.7% for predictions with high certainty, and correctly classifies 67.89% of the movements at least once. In 99% of the cases the movements are correctly predicted the first time, before the hand reaches the target, and in 75% of the cases more than 24% ahead of time. This means that a robot could receive warnings regarding the direction in which an operator is likely to move and adjust its behaviour accordingly.
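    One simple reading of "utilizing the notion of uncertainty to determine whether to trust a prediction" is confidence gating: only act when the prediction clears a threshold. The sketch below shows that rule; the 0.9 threshold and the softmax-confidence measure are assumptions, and the paper's uncertainty estimate may be computed differently.

    ```python
    import torch

    CONF_THRESHOLD = 0.9  # assumed cut-off for a "high certainty" prediction

    def gated_prediction(logits):
        """Return (direction, True) for a confident prediction, else (None, False)."""
        probs = torch.softmax(logits, dim=-1)
        conf, direction = probs.max(dim=-1)
        if conf.item() >= CONF_THRESHOLD:
            return direction.item(), True
        return None, False  # too uncertain: the robot keeps its current plan

    # Example: gated_prediction(torch.tensor([0.1, 3.2, 0.3]))
    ```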

    Context change and triggers for human intention recognition

    Get PDF
    In human-robot interaction, understanding human intention is important for smooth interaction between humans and robots. Proactive human-robot interactions, which rely on recognising human intentions to complete tasks, are a growing trend. The reasoning is based on the current human state, the environment and context, and the recognition and prediction of human intention. Many factors may affect human intention, including clues that are difficult to recognise directly from the action but may be perceived from a change in the environment or context. The changes that affect human intention are triggers, and they serve as strong evidence for identifying human intention. Therefore, detecting such changes and identifying such triggers is a promising approach to assist human intention recognition. This paper discusses the current state of the art in human intention recognition in human-computer interaction and illustrates the importance of context change and triggers for human intention recognition through a variety of examples.
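    To make the trigger idea concrete, a toy sketch: compare consecutive context snapshots and report what changed, so the change can feed an intention recogniser as evidence. The context keys and the example interpretation are invented purely for illustration.

    ```python
    def detect_triggers(prev_context, curr_context):
        """Return the set of context keys whose values changed between snapshots."""
        return {k for k in curr_context
                if curr_context[k] != prev_context.get(k)}

    prev = {"door": "closed", "cup_on_table": True, "lights": "on"}
    curr = {"door": "open", "cup_on_table": True, "lights": "on"}
    triggers = detect_triggers(prev, curr)  # {'door'}: possible evidence that
                                            # the person intends to leave the room
    ```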