    Situation-Specific Intention Recognition for Human-Robot-Cooperation

    Recognizing human intentions is part of the decision process in many technical devices. In order to achieve natural interaction, the required estimation quality and the computation time used need to be balanced. This becomes challenging if the number of sensors is high and the measurement systems are complex. In this paper, a model predictive approach to this problem based on online switching of small, situation-specific Dynamic Bayesian Networks is proposed. The contributions are efficient modeling and inference of situations and a greedy model predictive switching algorithm that maximizes the mutual information of predicted situations. The achievable accuracy and computational savings are demonstrated for a household scenario using an extended range telepresence system.
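
    As a rough illustration of the greedy, information-driven model switching described above, the sketch below scores candidate situation-specific models by the mutual information of their predicted situation/observation joint distributions and selects the highest-scoring one. It is a minimal sketch with made-up model names and toy probability tables, not the paper's actual algorithm or inference machinery.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(S; Z) of a joint probability table p(s, z)."""
    joint = joint / joint.sum()
    p_s = joint.sum(axis=1, keepdims=True)   # marginal over situations
    p_z = joint.sum(axis=0, keepdims=True)   # marginal over observations
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log(joint / (p_s * p_z))
    return float(np.nansum(terms))

def greedy_model_switch(candidate_joints):
    """Greedily pick the situation-specific model whose predicted
    situation/observation joint distribution is most informative."""
    scores = {name: mutual_information(j) for name, j in candidate_joints.items()}
    return max(scores, key=scores.get), scores

# Toy predicted joints p(situation, observation) for two hypothetical models.
candidates = {
    "kitchen_dbn": np.array([[0.30, 0.05], [0.05, 0.60]]),  # observations informative
    "hallway_dbn": np.array([[0.25, 0.25], [0.25, 0.25]]),  # observations uninformative
}
best, scores = greedy_model_switch(candidates)
print(best, scores)   # kitchen_dbn has the higher mutual information
```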

    On inferring intentions in shared tasks for industrial collaborative robots

    Inferring human operators' actions in shared collaborative tasks plays a crucial role in enhancing the cognitive capabilities of industrial robots. In these emerging collaborative robotic applications, humans and robots must share not only space but also forces and the execution of a task. In this article, we present a robotic system that is able to identify different human intentions and adapt its behavior accordingly, using force data alone. To accomplish this aim, three major contributions are presented: (a) force-based recognition of the operator's intent, (b) a force-based dataset of physical human-robot interaction, and (c) validation of the whole system in a scenario inspired by a realistic industrial application. This work is an important step towards a more natural and user-friendly manner of physical human-robot interaction in scenarios where humans and robots collaborate in the accomplishment of a task.
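
    The force-based intent recognition described above can be pictured as a standard classification pipeline over windows of force/torque readings. The sketch below is a minimal illustration with synthetic data, invented intention labels, and simple hand-crafted features; the paper's actual features, dataset, and classifier may differ.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(wrench_window):
    """Summarize a window of 6-axis force/torque samples (N x 6) with simple
    statistics: per-axis mean, standard deviation, and peak magnitude."""
    return np.concatenate([wrench_window.mean(axis=0),
                           wrench_window.std(axis=0),
                           np.abs(wrench_window).max(axis=0)])

# Hypothetical training data: windows of force/torque samples with illustrative
# intention labels such as "hand_over", "hold_still", "push_away".
rng = np.random.default_rng(0)
windows = [rng.normal(loc=i % 3, scale=1.0, size=(50, 6)) for i in range(90)]
labels = [("hand_over", "hold_still", "push_away")[i % 3] for i in range(90)]

X = np.stack([window_features(w) for w in windows])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)

# Classify a new force/torque window.
new_window = rng.normal(loc=1, scale=1.0, size=(50, 6))
print(clf.predict(window_features(new_window)[None, :]))
```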

    Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach

    We explore the probabilistic foundations of shared control in complex dynamic environments. In order to do this, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, autonomy, and crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot that we call "agreeability"; in combination with the methods developed in~\cite{trautman-ijrr-2015}, we extend a cooperative collision avoidance autonomy to shared control. We thereby quantify the notion of simultaneously optimizing over agreeability (between the operator and autonomy) as well as safety and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Additionally, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy. In turn, these restrictions raise questions about the flexibility and applicability of the linear blending framework. We also present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as~\cite{dragan-ijrr-2013}) and show not only that this is also a restrictive special case of our probabilistic approach, but, more importantly, that it is statistically unsound and thus mathematically unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees that data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
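
    The claim that linear blending corresponds to a restricted, unimodal special case can be illustrated with a small sketch: fusing two Gaussian beliefs over the command by multiplying their densities yields a precision-weighted average, i.e. a linear blend whose weight is fixed by the relative variances. This is only an illustrative reading under a Gaussian assumption, not the paper's full graphical-model formulation.

```python
import numpy as np

def linear_blend(u_operator, u_autonomy, alpha):
    """Classical linear blending: a fixed convex combination of the two commands."""
    return alpha * u_operator + (1.0 - alpha) * u_autonomy

def gaussian_fusion(mu_op, var_op, mu_aut, var_aut):
    """Fuse two unimodal (Gaussian) beliefs over the command by multiplying
    their densities; the posterior mean is a precision-weighted average,
    i.e. a linear blend whose weight is set by the relative variances."""
    w_op = (1.0 / var_op) / (1.0 / var_op + 1.0 / var_aut)
    return w_op * mu_op + (1.0 - w_op) * mu_aut, w_op

u_op, u_aut = np.array([1.0, 0.0]), np.array([0.0, 1.0])
fused, alpha = gaussian_fusion(u_op, var_op=0.5, mu_aut=u_aut, var_aut=1.0)
print(np.allclose(fused, linear_blend(u_op, u_aut, alpha)))  # True
```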

    The development of test action bank for active robot learning

    A thesis submitted to the University of Bedfordshire, in fulfilment of the requirements for the degree of Master of Science by research. In the rapidly expanding service robotics research area, interactions between robots and humans become increasingly common as more and more jobs will require cooperation between robots and their human users. It is therefore important to address cooperation between a robot and its user. Active Robot Learning (ARL) is a promising approach that enables a robot to develop high-order beliefs by actively performing test actions and inferring its user's intention from the responses to those actions. Test actions are crucial to ARL. This study carried out primary research on developing a Test Action Bank (TAB) to provide test actions for ARL. In this study, a verb-based task classifier was developed to extract tasks from the user's commands. Taught tasks and their corresponding test actions were proposed and stored in a database to establish the TAB. A backward test-action retrieval method was used to locate a task in a task tree and retrieve its test actions from the TAB. A simulation environment was set up with a service robot model and a user model to test the TAB and demonstrate some test actions. Simulations were also performed in this study; the results showed that the TAB can successfully provide test actions for different tasks and that the proposed service robot model can demonstrate test actions.
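
    A minimal sketch of the Test Action Bank idea described above: a verb-based lookup classifies the user's command into a task, and the TAB maps each task to candidate test actions. All verbs, task names, and test actions below are hypothetical placeholders, not the thesis's actual implementation or database schema.

```python
from typing import List, Optional

# Hypothetical verb -> task mapping used by the verb-based task classifier.
TASK_BY_VERB = {
    "bring": "fetch_object",
    "fetch": "fetch_object",
    "clean": "tidy_area",
    "open": "open_container",
}

# Hypothetical Test Action Bank: task -> candidate test actions.
TEST_ACTION_BANK = {
    "fetch_object": ["point_at_candidate_object", "approach_and_pause"],
    "tidy_area": ["gesture_towards_area", "pick_up_one_item_and_wait"],
    "open_container": ["touch_lid_and_wait"],
}

def classify_task(command: str) -> Optional[str]:
    """Return the first task whose trigger verb appears in the command."""
    words = command.lower().split()
    for verb, task in TASK_BY_VERB.items():
        if verb in words:
            return task
    return None

def retrieve_test_actions(command: str) -> List[str]:
    """Look the command's task up in the TAB and return its test actions."""
    task = classify_task(command)
    return TEST_ACTION_BANK.get(task, [])

print(retrieve_test_actions("please bring me the cup"))
```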