1,557 research outputs found

    A Distributed Control Architecture for Collaborative Multi-Robot Task Allocation

    This thesis addresses the problem of task allocation for multi-robot systems that perform tasks with complex, hierarchical representations containing different types of ordering constraints and multiple paths of execution. We propose a distributed multi-robot control architecture that addresses these challenges and makes the following contributions: i) it allows for online, dynamic allocation of robots to the various steps of the task, ii) it ensures that the collaborative robot system obeys all of the task constraints, and iii) it allows for opportunistic, flexible task execution under different environmental conditions. The architecture uses a distributed messaging system to allow the robots to communicate. Each robot uses its own state and its team members' states to track progress on a given task and to identify which sub-tasks to perform next using an activation spreading mechanism. We demonstrate the proposed architecture on a team of two humanoid robots (a Baxter and a PR2) performing hierarchical tasks.
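
    As a rough illustration of the idea described above, the sketch below shows how a robot might pick its next sub-task by spreading activation from the root goal down a hierarchical task tree while respecting ordering constraints and teammates' claims. It is a minimal sketch, not the thesis architecture: the TaskNode fields, the precondition encoding, and the claimed_by_teammates set are assumptions made for the example.

        # Minimal sketch (not the authors' implementation) of choosing the next
        # sub-task via activation spreading over a hierarchical task tree.

        class TaskNode:
            def __init__(self, name, children=None, preconditions=None):
                self.name = name
                self.children = children or []            # sub-tasks (empty => executable leaf)
                self.preconditions = preconditions or []  # names that must finish first
                self.done = False
                self.activation = 0.0

        def spread_activation(node, incoming):
            """Push activation from the root goal down toward executable leaves."""
            node.activation = incoming
            if node.children:
                share = incoming / len(node.children)
                for child in node.children:
                    spread_activation(child, share)

        def next_subtask(root, completed, claimed_by_teammates, goal_activation=1.0):
            """Pick the highest-activation leaf whose ordering constraints are met
            and which no teammate has already claimed."""
            spread_activation(root, goal_activation)
            candidates = []
            stack = [root]
            while stack:
                node = stack.pop()
                stack.extend(node.children)
                if node.children or node.done or node.name in claimed_by_teammates:
                    continue
                if all(p in completed for p in node.preconditions):
                    candidates.append(node)
            return max(candidates, key=lambda n: n.activation, default=None)

    In the architecture itself, the completed and claimed sub-tasks would be kept in sync through the robots' state messages; here they are plain Python sets.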

    Cognitive Approach to Hierarchical Task Selection for Human-Robot Interaction in Dynamic Environments

    In an efficient and flexible human-robot collaborative work environment, a robot team member must be able to recognize both explicit requests and implied actions from human users. Identifying "what to do" in such cases requires an agent to construct associations between objects, their actions, and the effects of those actions on the environment. In this regard, semantic memory is introduced to understand explicit cues and their relationships with the available objects and the skills required to make "tea" and a "sandwich". We have extended our previous hierarchical robot control architecture to add the capability to execute the most appropriate task based on both feedback from the user and the environmental context. To validate this system, two types of skills were implemented in the hierarchical task tree: 1) tea-making skills and 2) sandwich-making skills. During the conversation between the robot and the human, the robot was able to determine the hidden context using an ontology and began to act accordingly. For instance, if the person says "I am thirsty" or "It is cold outside", the robot will start to perform the tea-making skill. In contrast, if the person says "I am hungry" or "I need something to eat", the robot will make the sandwich. A humanoid robot, Baxter, was used for this experiment. We tested three scenarios with objects at different positions on the table for each skill. We observed that in all cases, the robot used only objects that were relevant to the skill. Comment: To appear in the International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, Oct. 2023.
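
    To make the cue-to-skill idea concrete, here is a toy sketch of a "semantic memory" that links spoken cues to skills and the objects each skill needs; the cue phrases, skill names, and object lists are invented for illustration and are not taken from the paper.

        # Toy semantic memory: cues and required objects per skill (invented values).
        SEMANTIC_MEMORY = {
            "tea_making":      {"cues": {"thirsty", "cold outside"},
                                "objects": {"kettle", "cup", "tea bag"}},
            "sandwich_making": {"cues": {"hungry", "something to eat"},
                                "objects": {"bread", "knife", "cheese"}},
        }

        def infer_skill(utterance, visible_objects):
            """Return the skill whose cues appear in the utterance and whose required
            objects are present, mirroring the idea of resolving hidden context from
            dialogue plus the observed environment."""
            text = utterance.lower()
            for skill, entry in SEMANTIC_MEMORY.items():
                if any(cue in text for cue in entry["cues"]) \
                        and entry["objects"] <= set(visible_objects):
                    return skill
            return None

        print(infer_skill("I am thirsty", ["kettle", "cup", "tea bag", "bread"]))  # tea_making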

    Distributed Dynamic Hierarchical Task Assignment for Human-Robot Teams

    This work implements a joint task architecture for human-robot collaborative task execution using a hierarchical task planner. The architecture allows humans and robots to work together as teammates in the same environment while following several task constraints: 1) sequential, 2) non-sequential, and 3) alternative execution constraints. Both the robot and the human are aware of each other's current state and allocate their next task based on the task tree. On-table tasks, such as setting up a tea table or playing a color-sequence matching game, validate the task architecture. The robot maintains an updated representation of its human teammate's task. Using this knowledge, it is also able to continuously detect the human teammate's intention towards each sub-task and coordinate with the teammate. While performing a joint task, sub-tasks may or may not overlap, so we designed a dialogue-based conversation between the human and the robot to resolve conflicts when tasks overlap. Evaluating the human-robot task architecture is the next concern after validating it, and trust and trustworthiness are among the most critical metrics to explore. A study was conducted between humans and robots to create a homophily situation; homophily is the tendency of a person to favor another person with whom they share social similarities. We conducted this study to determine whether humans can form a homophilic relationship with robots and whether there is a connection between homophily and trust, and we found a correlation between homophily and trust in human-robot interactions. Furthermore, we designed a pipeline by which the robot learns a task by observing the human teammate's hand movements while conversing. The robot then constructs the task tree by itself using a GA learning framework, removing the need for a programmer to manually specify, revise, or update the task tree and making the architecture more flexible, realistic, efficient, and dynamic. Additionally, our architecture allows the robot to comprehend the context of a situation by conversing with a human teammate and observing the surroundings. The robot can link the context of the situation to the surrounding objects using an ontology-based approach and can perform the desired task accordingly. Therefore, we propose a human-robot distributed joint task management architecture that addresses design, improvement, and evaluation under multiple constraints.
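
    The three constraint types can be illustrated with a small sketch of a task-tree node that marks its children as sequential, non-sequential (parallel), or alternative and reports which leaves may legally be started next. The class and field names are assumptions for the example, not the thesis implementation.

        # Assumed semantics: children of a SEQUENTIAL node run in order, children of
        # a PARALLEL (non-sequential) node may run in any order, and only one child
        # of an ALTERNATIVE node needs to run.
        SEQUENTIAL, PARALLEL, ALTERNATIVE = "seq", "par", "alt"

        class Node:
            def __init__(self, name, kind=PARALLEL, children=()):
                self.name, self.kind, self.children = name, kind, list(children)
                self.done = False

        def runnable(node):
            """Yield leaves that either teammate could legally start right now."""
            if node.done:
                return
            if not node.children:
                yield node
                return
            if node.kind == SEQUENTIAL:
                for child in node.children:              # only the first unfinished child
                    if not child.done:
                        yield from runnable(child)
                        return
            elif node.kind == ALTERNATIVE:
                if not any(c.done for c in node.children):   # any one branch may be chosen
                    for child in node.children:
                        yield from runnable(child)
            else:                                        # PARALLEL: all unfinished children
                for child in node.children:
                    yield from runnable(child)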

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the "experimenter", and Mary, the "computational modeller". The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    A Simulation-Based Layered Framework for the Development of Collaborative Autonomous Systems

    The purpose of this thesis is to introduce a simulation-based software framework that facilitates the development of collaborative autonomous systems. Significant commonalities exist in the design approaches of both collaborative and autonomous systems, which mirror the sense-plan-act paradigm and mostly adopt layered architectures. Unfortunately, the development of such systems is intricate and requires low-level interfacing that consumes significant development time. Frameworks for the development of collaborative and autonomous systems exist, but they are not flexible and center on narrow ranges of applications and platforms. The proposed framework uses an expandable layered structure that allows developers to define their own layers and to develop each layer in isolation. The framework provides communication capabilities and allows messages to be defined so that collaborative behavior can be specified across various applications. The framework is designed to be compatible with many robotic platforms and uses robotic middleware to interface with robots; attaching the framework to a different platform only requires changing the middleware. An example Fire Brigade application developed in the framework is presented, highlighting the design process and the use of framework features. The application is simulation-based, relying on kinematic models to simulate physical actions and a virtual environment to provide access to sensor data. While the results demonstrated interesting collaborative behavior, the ease of implementation and the capacity to experiment by swapping layers are particularly noteworthy. The framework retains the advantages of layered architectures and provides greater flexibility, shielding developers from low-level intricacies while providing enough tools to make collaboration easy to implement.
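
    The layered idea can be sketched as follows, under stated assumptions: each layer exposes a small interface, layers are stacked and swapped in isolation, and a middleware adapter is the only piece that changes between robot platforms. All class names here are hypothetical rather than the framework's actual API.

        # Hypothetical sketch of a layered agent built over a swappable middleware.
        from abc import ABC, abstractmethod

        class Middleware(ABC):
            """Platform-specific bridge; one adapter per robotics middleware."""
            @abstractmethod
            def read_sensors(self) -> dict: ...
            @abstractmethod
            def send_command(self, command: dict) -> None: ...

        class Layer(ABC):
            """One stage of the sense-plan-act pipeline, developed and tested alone."""
            @abstractmethod
            def process(self, data: dict) -> dict: ...

        class LayeredAgent:
            def __init__(self, middleware: Middleware, layers: list[Layer]):
                self.middleware = middleware
                self.layers = layers             # e.g. [SenseLayer(), PlanLayer(), ActLayer()]

            def step(self) -> None:
                data = self.middleware.read_sensors()
                for layer in self.layers:        # each layer refines the previous output
                    data = layer.process(data)
                self.middleware.send_command(data)

    Swapping a simulation middleware adapter for a real-robot adapter would then leave the layers themselves untouched.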

    Dynamic networks for robotic control and behaviour selection in interactive environments

    Traditional robotics hard-codes a robot's capabilities and has the robot operate in structured environments (environments that are predefined for a given task). This approach can limit a robot's functionality and how it can interact with its environment. Behaviour networks are reactive systems that are able to function in unstructured, dynamic environments by selecting behaviours to execute based on the current state of the environment. Behaviour networks are made up of nodes that represent behaviours; each node stores an activation value representing the motivation for that behaviour. The nodes receive inputs from a variety of sources and pass proportions of that input to other nodes in the network. Behaviour networks traditionally also have their capabilities predefined. The main aim of this thesis is to expand upon the concepts of traditional robotics by demonstrating the use of distributed behaviours in an environment. The thesis aims to show that distributing object-specific data, such as behaviours and goals, will assist in task planning for a mobile robot. It explores and tests the traditional behaviour network with a variety of experiments. Each experiment showcases particular features of the behaviour network, including flaws that were identified. Proposed solutions to these flaws are then presented and explored. The behaviour network is then tested in a simulated environment with distributed behaviours, and the dynamic behaviour network is defined. The thesis demonstrates that distributed behaviours can expand the capabilities of a mobile robot using a dynamic behaviour network.
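
    A minimal sketch of the activation dynamics described above (not the thesis code) might look like this: each node holds an activation value, receives external input from the environment and goals, passes a proportion of its activation to its successors, and the most activated node above a threshold is selected for execution. The decay term, spread ratio, and threshold are illustrative parameters.

        class BehaviourNode:
            def __init__(self, name, successors=None, spread_ratio=0.5):
                self.name = name
                self.successors = successors or []   # nodes this one passes activation to
                self.spread_ratio = spread_ratio
                self.activation = 0.0

        def update(network, external_input, decay=0.9):
            """One activation-spreading step over the whole network."""
            for node in network:                     # add external input (sensors, goals)
                node.activation = decay * node.activation + external_input.get(node.name, 0.0)
            deltas = {}                              # compute the spread before applying it
            for node in network:
                if node.successors:
                    share = node.spread_ratio * node.activation / len(node.successors)
                    for succ in node.successors:
                        deltas[succ] = deltas.get(succ, 0.0) + share
            for node, extra in deltas.items():
                node.activation += extra

        def select(network, threshold=1.0):
            """Return the most motivated behaviour, if any clears the threshold."""
            best = max(network, key=lambda n: n.activation)
            return best if best.activation >= threshold else None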

    Action intention recognition for proactive human assistance in domestic environments

    This Master's thesis in Automatics, Control and Robotics covers the development and implementation of an action intention recognition algorithm for proactive human assistance in domestic environments. The proposed solution is based on data provided by a real-time RGBD object recognition process that captures object state changes inside a defined region of interest of the domestic environment setup. A background analysis examines state-of-the-art approaches to both real-time RGBD object recognition and action intention recognition. This preliminary analysis serves as the basis for the proposal of a new volume descriptor for object categorization and an improved formalism for activation spreading networks in the context of action intention recognition. Several tests are performed to study the performance of the proposed solution, and their results are analyzed to draw the conclusions of the project and propose future work. Finally, the project budget, environmental impact, and project schedule are presented and briefly discussed.
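
    Purely as an illustration of what a volume descriptor for object categorization could look like (this is not the descriptor proposed in the thesis), the sketch below voxelizes an object's point cloud into a fixed grid and uses the normalized occupancy histogram as a feature vector.

        import numpy as np

        def volume_descriptor(points, grid=(4, 4, 4)):
            """points: (N, 3) array of XYZ coordinates for one segmented object."""
            points = np.asarray(points, dtype=float)
            mins, maxs = points.min(axis=0), points.max(axis=0)
            extent = np.where(maxs - mins > 0, maxs - mins, 1.0)      # avoid divide-by-zero
            idx = ((points - mins) / extent * (np.array(grid) - 1)).astype(int)
            hist = np.zeros(grid)
            for i, j, k in idx:
                hist[i, j, k] += 1
            return (hist / hist.sum()).ravel()       # normalized occupancy histogram

        # Objects can then be compared or classified via distances between descriptors.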

    Affective Motivational Collaboration Theory

    Existing computational theories of collaboration explain some of the important concepts underlying collaboration, e.g., the collaborators' commitments and communication. However, the underlying processes required to dynamically maintain the elements of the collaboration structure are largely unexplained. Our main insight is that in many collaborative situations, acknowledging or ignoring a collaborator's affective state can facilitate or impede the progress of the collaboration. This implies that collaborative agents need to employ affect-related processes that (1) use the collaboration structure to evaluate the status of the collaboration, and (2) influence the collaboration structure when required. This thesis develops a new affect-driven computational framework to achieve these objectives and thus empower agents to be better collaborators. The contributions of this thesis are: (1) Affective Motivational Collaboration (AMC) theory, which incorporates appraisal processes into SharedPlans theory. (2) New computational appraisal algorithms based on the collaboration structure. (3) Algorithms, such as goal management, that use the output of appraisal to maintain the collaboration structure. (4) Implementation of a computational system based on AMC theory. (5) Evaluation of AMC theory via two user studies to a) validate our appraisal algorithms, and b) investigate the overall functionality of our framework within an end-to-end system with a human and a robot.
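
    A loose, illustrative sketch of the appraisal-then-goal-management loop is given below; it is not AMC theory itself, and the appraisal variables, affect labels, and decision rule are invented for the example.

        from dataclasses import dataclass

        @dataclass
        class CollaborationState:
            pending_goals: list
            blocked_goals: list
            partner_affect: str          # e.g. "frustrated", "neutral", "content"

        def appraise(state: CollaborationState) -> dict:
            """Map the collaboration structure and partner affect to appraisal variables
            (invented variables, for illustration only)."""
            return {
                "desirability": -len(state.blocked_goals),
                "urgency": 1.0 if state.partner_affect == "frustrated" else 0.2,
            }

        def manage_goals(state: CollaborationState, appraisal: dict) -> str:
            """Use the appraisal output to decide how to maintain the collaboration."""
            if appraisal["urgency"] > 0.5 and state.blocked_goals:
                return f"offer help with {state.blocked_goals[0]}"
            return f"continue with {state.pending_goals[0]}" if state.pending_goals else "idle"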

    Challenging the Computational Metaphor: Implications for How We Think

    This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes replacing the conventional metaphor of computation as a sequence of steps with the notion of a community of interacting entities, and examines the ramifications of such a shift for these various ways in which we think.