
    Human Movement Direction Classification using Virtual Reality and Eye Tracking

    Collaborative robots are becoming increasingly popular in industries, providing flexibility and increased productivity for complex tasks. However, the robots are not yet truly interactive, since they cannot interpret humans and adapt to their behaviour, mainly due to limited sensory input. Rapidly expanding research fields that could make collaborative robots smarter through an understanding of the operator's intentions are: virtual reality, eye tracking, big data, and artificial intelligence. Prediction of human movement intentions could be one way to improve these robots. This can be broken down into three stages, Stage One: Movement Direction Classification, Stage Two: Movement Phase Classification, and Stage Three: Movement Intention Prediction. This paper defines these stages and presents a solution to Stage One that shows that it is possible to collect gaze data and use that to classify a person's movement direction. The next step is naturally to develop the remaining two stages.
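
    As a rough illustration of what a Stage One pipeline could look like, the sketch below windows a logged gaze stream and fits a simple classifier to movement-direction labels. The file names, sample rate, window size, and label scheme are assumptions for illustration, not the paper's actual setup.

    # Hypothetical sketch: windowing logged gaze data for direction classification.
    # Field names and window sizes are illustrative assumptions, not from the paper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    WINDOW = 30  # ~0.33 s of gaze samples at an assumed 90 Hz headset rate

    def windows(gaze, labels, size=WINDOW):
        """Slice a gaze stream (N, 3) into flat feature windows with one label each."""
        X, y = [], []
        for start in range(0, len(gaze) - size, size):
            X.append(gaze[start:start + size].ravel())
            y.append(labels[start + size - 1])  # direction at the window's end
        return np.array(X), np.array(y)

    # gaze: unit gaze-direction vectors; labels: integer movement-direction classes
    gaze = np.load("gaze.npy")       # shape (N, 3), hypothetical recording
    labels = np.load("labels.npy")   # shape (N,), hypothetical annotation
    X, y = windows(gaze, labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("direction accuracy:", clf.score(X_te, y_te))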

    Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction

    Collaborative robots have gained popularity in industries, providing flexibility and increased productivity for complex tasks. However, their ability to interact with humans and adapt to their behavior is still limited. Prediction of human movement intentions is one way to improve the robots' adaptation. This paper investigates the performance of using Transformer- and MLP-Mixer-based neural networks to predict the intended human arm movement direction, based on gaze data obtained in a virtual reality environment, and compares the results to using an LSTM network. The comparison evaluates the networks on several accuracy metrics, time ahead of movement completion, and execution time. It is shown in the paper that there exist several network configurations and architectures that achieve comparable accuracy scores. The best-performing Transformer encoder presented in this paper achieved an accuracy of 82.74%, for predictions with high certainty, on continuous data and correctly classifies 80.06% of the movements at least once. The movements are, in 99% of the cases, correctly predicted the first time, before the hand reaches the target, and more than 19% ahead of movement completion in 75% of the cases. The results show that there are multiple ways to utilize neural networks to perform gaze-based arm movement intention prediction, and it is a promising step toward enabling efficient human-robot collaboration.
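
    A minimal sketch of a Transformer-encoder sequence classifier of the kind compared in the paper, written in PyTorch; the layer sizes, sequence length, and number of direction classes are illustrative assumptions, not the paper's configuration.

    # Hypothetical sketch of a Transformer-encoder gaze classifier; sizes are
    # illustrative, not the paper's settings.
    import torch
    import torch.nn as nn

    class GazeTransformer(nn.Module):
        def __init__(self, n_classes=8, d_model=64, n_heads=4, n_layers=2, seq_len=30):
            super().__init__()
            self.embed = nn.Linear(3, d_model)            # gaze vector -> model dim
            self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, n_classes)     # movement-direction logits

        def forward(self, x):                             # x: (batch, seq_len, 3)
            h = self.encoder(self.embed(x) + self.pos)
            return self.head(h.mean(dim=1))               # pool over time

    model = GazeTransformer()
    logits = model(torch.randn(16, 30, 3))                # dummy batch of gaze windows
    print(logits.shape)                                   # torch.Size([16, 8])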

    A standardization approach to Virtual Commissioning strategies in complex production environments

    The ongoing industrial revolution places high demands on component manufacturers and suppliers, who must meet the tough requirements set by the developing industries and keep pace with the technological advancement of highly digitalized factories, with future-oriented applications such as Virtual Commissioning for cyber-physical systems. This paper provides a production system lifecycle assessment of the technical specification strategies that use Virtual Commissioning for implementation and integration of new systems or plants, and of its predicted future challenges. With the use of standards and a common language practice in purchaser/contractor procurement situations and across the different technical disciplines, internally and externally, the implementation strategies are reiterated to achieve a new sustainable business model. The paper investigates different types of production systems and how a defined classification framework of different levels of Virtual Commissioning can connect the implementation requirements to a desired solution. This strategy includes aspects of standardization, communication, process lifecycle, and predicted cost parameters.

    Human Movement Direction Prediction using Virtual Reality and Eye Tracking

    One way of potentially improving the use of robots in a collaborative environment is through prediction of human intention, which would give the robots insight into how the operators are about to behave. An important part of human behaviour is arm movement, and this paper presents a method to predict arm movement based on the operator's eye gaze. A test scenario has been designed in order to gather coordinate-based hand movement data in a virtual reality environment. The results show that the eye gaze data can successfully be used to train an artificial neural network that is able to predict the direction of movement ~500 ms ahead of time.
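
    A minimal sketch, assuming a 90 Hz gaze stream and per-sample direction labels (both hypothetical), of how a lookahead label shift turns this into a ~500 ms-ahead prediction task:

    # Hypothetical sketch: pair each gaze window with a label shifted LOOKAHEAD
    # samples into the future. Rates, sizes, and file names are assumptions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    RATE = 90                      # assumed gaze sample rate (Hz)
    LOOKAHEAD = RATE // 2          # ~500 ms ahead
    WINDOW = 20

    gaze = np.load("gaze.npy")     # (N, 3) gaze-direction vectors, hypothetical file
    labels = np.load("labels.npy") # (N,) direction of the ongoing hand movement

    X, y = [], []
    for t in range(WINDOW, len(gaze) - LOOKAHEAD):
        X.append(gaze[t - WINDOW:t].ravel())   # recent gaze history
        y.append(labels[t + LOOKAHEAD])        # direction half a second later
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(np.array(X), np.array(y))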

    Intended Human Arm Movement Direction Prediction using Eye Tracking

    Collaborative robots are becoming increasingly popular in industries, providing flexibility and increased productivity for complex tasks. However, the robots are still not interactive enough, since they cannot yet interpret humans and adapt to their behaviour, mainly due to limited sensory input. Prediction of human movement intentions could be one way to improve these robots. This paper presents a system that uses a recurrent neural network to predict the intended human arm movement direction, based solely on eye gaze, utilizing the notion of uncertainty to determine whether to trust a prediction or not. The network was trained with eye tracking data gathered in a virtual reality environment. The presented deep learning solution makes predictions on continuously incoming data and reaches an accuracy of 70.7%, for predictions with high certainty, and correctly classifies 67.89% of the movements at least once. The movements are, in 99% of the cases, correctly predicted the first time, before the hand reaches the target, and more than 24% ahead of time in 75% of the cases. This means that a robot could receive warnings about the direction in which an operator is likely to move and adjust its behaviour accordingly.
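
    One common way to realize the "trust a prediction or not" gating is to threshold the softmax confidence of a recurrent classifier. The sketch below shows that pattern in PyTorch; the 0.9 threshold, network sizes, and class count are illustrative assumptions, not the paper's values.

    # Hypothetical sketch of certainty gating: only act on a prediction when its
    # softmax confidence clears a threshold. Sizes and threshold are assumptions.
    import torch
    import torch.nn as nn

    class GazeLSTM(nn.Module):
        def __init__(self, n_classes=8, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(3, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                    # x: (batch, time, 3)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])         # logits at the last time step

    def trusted_prediction(model, window, threshold=0.9):
        """Return the predicted class, or None if the network is too uncertain."""
        probs = torch.softmax(model(window), dim=-1)
        conf, cls = probs.max(dim=-1)
        return cls.item() if conf.item() >= threshold else None

    model = GazeLSTM()
    print(trusted_prediction(model, torch.randn(1, 30, 3)))  # likely None, untrained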

    Event-driven industrial robot control architecture for the Adept V+ platform

    Modern industrial robotic systems are highly interconnected. They operate in a distributed environment and communicate with sensors, computer vision systems, mechatronic devices, and computational components. On the fundamental level, communication and coordination between all parties in such a distributed system are characterized by discrete event behavior. The latter is largely attributed to the specifics of communication over the network, which, in turn, facilitates asynchronous programming and explicit event handling. In addition, on the conceptual level, events are an important building block for realizing reactivity and coordination. Event-driven architecture has demonstrated its effectiveness for building loosely coupled systems based on publish-subscribe middleware, either general-purpose or robotics-oriented. Despite all the advances in middleware, industrial robots remain difficult to program in the context of distributed systems, to a large extent due to the limitations of the native robot platforms. This paper proposes an architecture for flexible event-based control of industrial robots based on the Adept V+ platform. The architecture is based on the robot controller providing a TCP/IP server and a collection of robot skills, and a high-level control module deployed to a dedicated computing device. The control module has bidirectional communication with the robot controller and publish/subscribe messaging with external systems. It is programmed in an asynchronous style using pyadept, a Python library based on Python coroutines, the AsyncIO event loop, and ZeroMQ middleware. The proposed solution facilitates the integration of Adept robots into distributed environments and the building of more flexible robotic solutions with event-based logic.
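
    A minimal sketch of the communication pattern described here: an AsyncIO coroutine that talks TCP to the robot controller and publishes events over ZeroMQ. The host, ports, skill command, and message format are assumptions for illustration and do not reproduce the pyadept API.

    # Hypothetical sketch: asyncio TCP client to the controller plus a ZeroMQ
    # publisher for external systems. Addresses and commands are assumptions.
    import asyncio
    import zmq
    import zmq.asyncio

    async def control_loop(robot_host="192.168.0.10", robot_port=43000):
        # Connect to the TCP/IP server exposed by the V+ controller (address assumed)
        reader, writer = await asyncio.open_connection(robot_host, robot_port)
        ctx = zmq.asyncio.Context()
        pub = ctx.socket(zmq.PUB)
        pub.bind("tcp://*:5556")                  # publish robot events to subscribers
        writer.write(b"move_to_home\n")           # hypothetical skill invocation
        await writer.drain()
        reply = await reader.readline()           # controller acknowledges the skill
        await pub.send_multipart([b"robot.events", reply.strip()])
        writer.close()
        await writer.wait_closed()

    asyncio.run(control_loop())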

    Automatic generation: A way of ensuring PLC and HMI standards

    Preparing an automatic production system takes a lot of time, and to decrease this time, virtual simulation studies are used more and more frequently. However, even if more work is performed in a virtual environment, a problem remains: the same work is done more than once in different software tools, due to the lack of integration between them. The present paper presents a case study that investigates how a newly developed tool called SIMATIC Automation Designer can be used in order to close the gap between the mechanical design and the electrical design. SIMATIC Automation Designer is a Siemens software tool that can generate PLC code and HMI screens. The results show that by generating PLC code and HMI screens automatically, it is possible to get the same structure and naming standard in every PLC and HMI project. This ensures a corporate standard and provides quality assurance of the PLC code and HMI screens.
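
    The gains from generation can be illustrated with a toy sketch (not SIMATIC Automation Designer's API): once tag and screen names are derived from a single structured plant description, every generated project follows the same naming standard by construction.

    # Illustrative sketch only: derive PLC tag and HMI screen names from one
    # structured plant description, enforcing a uniform naming standard.
    PLANT = [
        {"station": "ST010", "device": "Cylinder", "index": 1},
        {"station": "ST010", "device": "Gripper",  "index": 1},
        {"station": "ST020", "device": "Conveyor", "index": 2},
    ]

    def plc_tag(item):
        return f'{item["station"]}_{item["device"][:4].upper()}{item["index"]:02d}'

    def hmi_screen(item):
        return f'Scr_{item["station"]}_{item["device"]}'

    for item in PLANT:
        print(plc_tag(item), "->", hmi_screen(item))
    # e.g. ST010_CYLI01 -> Scr_ST010_Cylinder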

    A ROS2 based communication architecture for control in collaborative and intelligent automation systems

    Collaborative robots are becoming part of intelligent automation systems in modern industry. Development and control of such systems differ from traditional automation methods and consequently lead to new challenges. Thankfully, the Robot Operating System (ROS) provides a communication platform and a vast variety of tools and utilities that can aid that development. However, it is hard to use ROS in large-scale automation systems due to communication issues in a distributed setup, hence the development of ROS2. In this paper, a ROS2 based communication architecture is presented together with an industrial use case of a collaborative and intelligent automation system. (Comment: 9 pages, 4 figures, 3 tables; to be published in the proceedings of the 29th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM2019), June 2019.)
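
    For reference, the publish/subscribe pattern that such an architecture builds on looks as follows in ROS2's Python client library rclpy; the node, topic names, and message handling here are placeholders, not the paper's actual interfaces.

    # Minimal rclpy publish/subscribe sketch; topics and reactions are placeholders.
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class CellController(Node):
        def __init__(self):
            super().__init__("cell_controller")
            self.pub = self.create_publisher(String, "robot_commands", 10)
            self.create_subscription(String, "robot_state", self.on_state, 10)

        def on_state(self, msg):
            self.get_logger().info(f"state: {msg.data}")
            cmd = String()
            cmd.data = "next_operation"          # hypothetical reaction to the state
            self.pub.publish(cmd)

    def main():
        rclpy.init()
        rclpy.spin(CellController())
        rclpy.shutdown()

    if __name__ == "__main__":
        main()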

    Application of the sequence planner control framework to an intelligent automation system with a focus on error handling

    Future automation systems are likely to include devices with a varying degree of autonomy, as well as advanced algorithms for perception and control. Human operators will be expected to work side by side with both collaborative robots performing assembly tasks and roaming robots that handle material transport. To maintain the flexibility provided by human operators when introducing such robots, these autonomous robots need to be intelligently coordinated, i.e., they need to be supported by an intelligent automation system. One challenge in developing intelligent automation systems is handling the large number of possible error situations that can arise due to the volatile and sometimes unpredictable nature of the environment. Sequence Planner is a control framework that supports the development of intelligent automation systems. This paper describes Sequence Planner and tests its ability to handle errors that arise during the execution of an intelligent automation system. An automation system, developed using Sequence Planner, is subjected to a number of scenarios where errors occur. The error scenarios and experimental results are presented, along with a discussion of the experience gained in trying to achieve robust intelligent automation.
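
    As a generic illustration of the kind of error handling evaluated here (a hypothetical sketch, not Sequence Planner code), an operation can be wrapped with retry and recovery logic so that a failed step triggers a safe recovery action before the sequence is retried or escalated:

    # Hypothetical sketch: retry a failed operation after a recovery action,
    # escalating to the operator once retries are exhausted.
    def run_with_recovery(operation, recover, max_retries=3):
        """Execute an operation, recovering and retrying on each error."""
        for attempt in range(1, max_retries + 1):
            try:
                return operation()
            except RuntimeError as err:
                print(f"attempt {attempt} failed: {err}; recovering")
                recover()
        raise RuntimeError("operation abandoned after retries")

    def pick_part():
        raise RuntimeError("part not detected")   # simulated sensor error

    def return_to_safe_pose():
        print("robot returning to safe pose")

    try:
        run_with_recovery(pick_part, return_to_safe_pose)
    except RuntimeError as err:
        print("escalating to operator:", err)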