4 research outputs found

    Robot control based on qualitative representation of human trajectories

    A major challenge for future social robots is the high-level interpretation of human motion and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e. coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of the qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and directions for future research are outlined.
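
    As a rough illustration of the quantitative-to-qualitative step described above, the sketch below derives the basic QTC relation for two agents from consecutive position samples: each agent receives '-' if it moves towards the other, '+' if it moves away, and '0' if it is (near-)stable. The function names, the eps threshold, and the discrete approximation are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def qtc_b_symbol(p_prev, p_next, q_ref, eps=1e-3):
            """Qualitative symbol for one agent: '-' if it moves towards the
            reference point q_ref, '+' if away, '0' if (near-)stable."""
            d_before = np.linalg.norm(p_prev - q_ref)
            d_after = np.linalg.norm(p_next - q_ref)
            if d_after < d_before - eps:
                return '-'
            if d_after > d_before + eps:
                return '+'
            return '0'

        def qtc_b_state(k_prev, k_next, l_prev, l_next, eps=1e-3):
            """QTC-Basic relation for agents k and l over one time step."""
            return (qtc_b_symbol(k_prev, k_next, l_prev, eps),
                    qtc_b_symbol(l_prev, l_next, k_prev, eps))

        # Example: human k walks towards a standing robot l -> ('-', '0')
        k0, k1 = np.array([0.0, 0.0]), np.array([0.5, 0.0])
        l0, l1 = np.array([5.0, 0.0]), np.array([5.0, 0.0])
        print(qtc_b_state(k0, k1, l0, l1))

    A time series of such states is the kind of qualitative input that an inference engine like the one described could consume.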

    QTC3D: extending the qualitative trajectory calculus to three dimensions

    Spatial interactions between agents (humans, animals, or machines) carry information of high value to human or electronic observers. However, not all the information contained in a pair of continuous trajectories is important, hence the need for qualitative descriptions of interaction trajectories. The Qualitative Trajectory Calculus (QTC) (Van de Weghe, 2004) is a promising development towards this goal. Numerous variants of QTC have been proposed and applied to the analysis of various interaction domains. However, an inherent limitation of the QTC variants that deal with lateral movements is that they are restricted to two-dimensional motion; complex three-dimensional interactions, such as those occurring between flying planes or birds, cannot be captured. To this end, this paper presents QTC3D: a novel qualitative trajectory calculus that can deal with full three-dimensional interactions. QTC3D is based on transformations of the Frenet-Serret frames accompanying the trajectories of the moving objects. Apart from the theoretical exposition, including definitions, properties, and computational aspects, we also present an application of QTC3D to modeling bird flight. Thus, the power of QTC is extended to the full dimensionality of physical space, enabling succinct yet rich representations of spatial interactions between agents.
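
    Since QTC3D is built on the Frenet-Serret frames accompanying each trajectory, the sketch below shows one way such frames (tangent T, normal N, binormal B) can be estimated from sampled 3D positions. The gradient-based discretization and the helper name are illustrative choices, not necessarily the authors'; straight segments, where curvature vanishes and the normal is undefined, would need special handling.

        import numpy as np

        def frenet_frames(points):
            """Approximate Frenet-Serret frames along a sampled 3D trajectory
            given as an (n, 3) array of positions. Assumes nonzero curvature."""
            v = np.gradient(points, axis=0)                     # velocity estimate
            T = v / np.linalg.norm(v, axis=1, keepdims=True)    # unit tangent
            dT = np.gradient(T, axis=0)
            N = dT / np.linalg.norm(dT, axis=1, keepdims=True)  # unit normal
            B = np.cross(T, N)                                  # binormal completes the frame
            return T, N, B

        # Example: a helix, whose frames rotate at a constant rate
        t = np.linspace(0, 4 * np.pi, 200)
        helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
        T, N, B = frenet_frames(helix)

    Qualitative three-dimensional relations can then be read off from how one agent's frame transforms relative to the other's.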

    Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human-Robot Collaboration

    Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort.
    Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction.
    Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires.
    Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot.
    Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction.
    Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration.
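
    For concreteness, the toy sketch below computes fluency metrics of this kind from per-frame boolean activity traces; the definitions are straightforward readings of the metric names, not the paper's exact formulas.

        import numpy as np

        def fluency_metrics(human_active, robot_active):
            """Rough team-fluency metrics from boolean per-frame activity traces."""
            h = np.asarray(human_active, dtype=bool)
            r = np.asarray(robot_active, dtype=bool)
            return {
                'concurrent_motion': np.mean(h & r),  # both agents moving at once
                'human_idle': np.mean(~h),            # fraction of time the human waits
                'robot_idle': np.mean(~r),            # fraction of time the robot waits
            }

        # Example: a 10-frame trace of a shared task
        h = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0]
        r = [0, 1, 1, 1, 1, 1, 0, 0, 1, 1]
        print(fluency_metrics(h, r))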

    Development of a hierarchical control structure for a people-tracking mobile robot based on robust stereo robotic vision

    This doctoral thesis concerns the development and implementation of a hierarchical control structure in which the algorithms are executed at the high control level. By applying stochastic robotic-vision methods, these algorithms detect people, estimate their positions, follow them, and recognize their actions, so that the robot can carry out tasks as a human collaborator. The thesis offers solutions that represent a step forward in addressing the problems faced by a robotic vision system that must provide reliable inputs to the control module of a mobile collaborative robot. A robust vision module for human tracking was developed; it can be used in various applications where robots must work together with humans and can be deployed on different types of mobile robots. Special attention is devoted to the integration, testing, and experimental verification of stochastic human-tracking algorithms such as Kalman and particle filters, and to a comparative analysis of algorithms for robotic people tracking. Part of the research presented in this thesis is based on a scientific collaboration between researchers from the Faculty of Mechanical Engineering in Niš and researchers from the Institute of Automation (IAT) of the University of Bremen. The stereo vision module for human detection developed at the IAT was used to test the tracking module developed in this thesis. In addition to that detection system, human-detection systems based on 3D sensors, such as the Asus Xtion PRO LIVE, were used. The main focus of the thesis was the development of a simulation environment and its control system, together with modules for human tracking, human position estimation, and human behavior recognition. The simulation environment supports the development and implementation of the real-world control system and, with appropriate modifications, can easily be reused for other mobile robots. The developed algorithms were evaluated at the Faculty of Mechanical Engineering, University of Niš, as part of the doctoral dissertation. Advanced hierarchical control was implemented to control the mobile robot DaNI, developed by National Instruments, using the Asus Xtion PRO LIVE 3D sensor, which served as the robotic vision sensor for the detection and human-tracking modules in the laboratory experiments. In addition, a vision module consisting of two sub-modules was implemented at the IAT: stereo vision for human detection and a tracking module based on the Kalman filter developed in this doctoral thesis.
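
    As an illustration of the tracking sub-module's core idea, below is a minimal constant-velocity Kalman filter over noisy 2D position detections. The state layout, noise levels, and class name are illustrative placeholders, not the thesis implementation.

        import numpy as np

        class ConstantVelocityKF:
            """Minimal 2D constant-velocity Kalman filter for human tracking.
            State: [x, y, vx, vy]; measurement: [x, y]."""

            def __init__(self, dt=0.1, q=0.05, r=0.1):
                self.F = np.eye(4)
                self.F[0, 2] = self.F[1, 3] = dt       # position += velocity * dt
                self.H = np.eye(2, 4)                  # observe position only
                self.Q = q * np.eye(4)                 # process noise (placeholder)
                self.R = r * np.eye(2)                 # measurement noise (placeholder)
                self.x = np.zeros(4)
                self.P = np.eye(4)

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q

            def update(self, z):
                y = np.asarray(z) - self.H @ self.x            # innovation
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P

        # Example: track a person walking along x at ~1 m/s from noisy detections
        kf = ConstantVelocityKF(dt=0.1)
        for i in range(50):
            z = [0.1 * i + np.random.normal(0, 0.1), np.random.normal(0, 0.1)]
            kf.predict()
            kf.update(z)
        print(kf.x[:2])  # estimated position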
