2 research outputs found

    Distributed Dynamic Hierarchical Task Assignment for Human-Robot Teams

    This work implements a joint task architecture for human-robot collaborative task execution using a hierarchical task planner. The architecture allows humans and robots to work together as teammates in the same environment while respecting three types of task constraints: 1) sequential, 2) non-sequential, and 3) alternative execution constraints. Both the robot and the human are aware of each other's current state and allocate their next task based on the task tree. On-table tasks, such as setting up a tea table or playing a color-sequence matching game, validate the architecture. The robot maintains an updated representation of its human teammate's task; using this knowledge, it can continuously detect the teammate's intention towards each sub-task and coordinate with the teammate. While performing a joint task, the teammates' task selections may or may not overlap, so we designed a dialogue-based conversation between human and robot to resolve conflicts when tasks overlap.

    After validating the task architecture, the next concern is evaluating it. Trust and trustworthiness are among the most critical metrics to explore. We conducted a study between humans and robots to create a homophily situation; homophily occurs when a person feels biased towards another person because of social similarities. The study was designed to determine whether humans can form a homophilic relationship with robots and whether there is a connection between homophily and trust. We found a correlation between homophily and trust in human-robot interaction.

    Furthermore, we designed a pipeline by which the robot learns a task by observing the human teammate's hand movements while conversing, and then constructs the task tree by itself using a GA learning framework. This removes the need for a programmer to manually revise or update the task tree each time, making the architecture more flexible, realistic, efficient, and dynamic. Additionally, our architecture allows the robot to comprehend the context of a situation by conversing with a human teammate and observing the surroundings: using an ontology approach, the robot can link the situational context to the surrounding objects and perform the desired task accordingly. In summary, we propose a distributed human-robot joint task management architecture that addresses design, improvement, and evaluation under multiple constraints.
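    The constraint-aware task tree described above can be made concrete with a small sketch. The code below is a minimal illustration, not the authors' implementation; every class, method, and task name is an assumption made for the example. It shows how a tree whose inner nodes carry sequential, non-sequential, and alternative constraints can yield the set of leaf tasks a teammate may claim next.

        # Minimal sketch (assumed, not the authors' code) of a hierarchical
        # task tree with the three constraint types named in the abstract.
        from dataclasses import dataclass, field
        from enum import Enum, auto

        class Constraint(Enum):
            SEQUENTIAL = auto()      # children must run in listed order
            NON_SEQUENTIAL = auto()  # children may run in any order
            ALTERNATIVE = auto()     # completing any one child suffices

        @dataclass
        class Node:
            name: str
            constraint: Constraint = Constraint.SEQUENTIAL
            children: list["Node"] = field(default_factory=list)

            def next_tasks(self, done: set[str]) -> list[str]:
                """Leaf tasks an agent may claim next, given completed task names."""
                if not self.children:                      # leaf task
                    return [] if self.name in done else [self.name]
                if self.constraint is Constraint.SEQUENTIAL:
                    for child in self.children:            # first unfinished child only
                        pending = child.next_tasks(done)
                        if pending:
                            return pending
                    return []
                if self.constraint is Constraint.ALTERNATIVE:
                    # simplified: satisfied as soon as any branch is complete
                    if any(not c.next_tasks(done) for c in self.children):
                        return []
                # NON_SEQUENTIAL (and unresolved ALTERNATIVE): all pending leaves
                return [t for c in self.children for t in c.next_tasks(done)]

        # Toy "set up a tea table" tree: boil the kettle before pouring,
        # place the cups in any order.
        tree = Node("tea_table", Constraint.SEQUENTIAL, [
            Node("boil_kettle"),
            Node("place_cups", Constraint.NON_SEQUENTIAL,
                 [Node("cup_human"), Node("cup_robot")]),
            Node("pour_tea"),
        ])
        print(tree.next_tasks(set()))            # ['boil_kettle']
        print(tree.next_tasks({"boil_kettle"}))  # ['cup_human', 'cup_robot']

    In a distributed setting, both teammates would query next_tasks against a shared record of completed sub-tasks, which is what lets the human and the robot allocate their next task from the same tree.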

    Cognitive Approach to Hierarchical Task Selection for Human-Robot Interaction in Dynamic Environments

    In an efficient and flexible human-robot collaborative work environment, a robot team member must be able to recognize both explicit requests and implied actions from human users. Identifying "what to do" in such cases requires an agent to construct associations between objects, their actions, and the effects of those actions on the environment. To this end, we introduce semantic memory to understand explicit cues and their relationships with the available objects and the skills required to make "tea" and a "sandwich". We have extended our previous hierarchical robot control architecture to execute the most appropriate task based on both feedback from the user and the environmental context. To validate the system, two types of skills were implemented in the hierarchical task tree: 1) tea-making skills and 2) sandwich-making skills. During the conversation between the robot and the human, the robot determined the hidden context using an ontology and began to act accordingly. For instance, if the person says "I am thirsty" or "It is cold outside", the robot starts to perform the tea-making skill; in contrast, if the person says "I am hungry" or "I need something to eat", the robot makes the sandwich. A Baxter humanoid robot was used for this experiment. We tested three scenarios per skill, with objects at different positions on the table, and observed that in all cases the robot used only objects relevant to the skill.

    Comment: To appear in the International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, Oct 2023
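    As a rough illustration of the cue-to-skill selection described above, the sketch below maps verbal cues to a hidden context, the context to a skill, and the skill to the object classes it may use. This is an assumed toy encoding, not the paper's ontology; all dictionary keys, skill names, and object names are illustrative.

        # Minimal sketch (assumed, not the paper's implementation) of
        # ontology-style cue -> context -> skill -> relevant-object lookup.
        CUE_TO_CONTEXT = {
            "thirsty": "wants_drink",
            "cold outside": "wants_drink",
            "hungry": "wants_food",
            "something to eat": "wants_food",
        }
        CONTEXT_TO_SKILL = {
            "wants_drink": "make_tea",
            "wants_food": "make_sandwich",
        }
        SKILL_TO_OBJECTS = {
            "make_tea": {"kettle", "cup", "tea_bag"},
            "make_sandwich": {"bread", "knife", "cheese"},
        }

        def select_skill(utterance: str, objects_on_table: set[str]):
            """Infer the skill implied by an utterance; keep only relevant objects."""
            utterance = utterance.lower()
            for cue, context in CUE_TO_CONTEXT.items():
                if cue in utterance:
                    skill = CONTEXT_TO_SKILL[context]
                    # the robot uses only objects relevant to the selected skill
                    return skill, SKILL_TO_OBJECTS[skill] & objects_on_table
            return None, set()

        print(select_skill("I am thirsty", {"kettle", "cup", "bread"}))
        # -> ('make_tea', {'kettle', 'cup'})  (set ordering may vary)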