
    A Planning Pipeline for Large Multi-Agent Missions

    In complex multi-agent applications, human operators are often tasked with planning and managing large heterogeneous teams of humans and autonomous vehicles. Although the use of these autonomous vehicles broadens the scope of meaningful applications, many of their systems remain unintuitive and difficult to master for human operators whose expertise lies in the application domain and not at the platform level. Current research focuses on the development of the individual capabilities necessary to plan multi-agent missions of this scope, placing little emphasis on the integration of these components into a full pipeline. This paper presents a complete and user-agnostic planning pipeline for large multi-agent missions known as the HOLII GRAILLE. The system takes a holistic approach to mission planning by integrating capabilities in human-machine interaction, flight path generation, and validation and verification. Component modules of the pipeline are explored individually, as is their integration into a whole system. Lastly, implications for future mission planning are discussed.
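
    To make the notion of an integrated pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how stages for human-machine interaction, flight path generation, and validation and verification might be chained; all class and method names here are hypothetical.

    # Minimal sketch of a staged mission-planning pipeline (hypothetical names).
    from dataclasses import dataclass
    from typing import List, Protocol, Tuple


    @dataclass
    class MissionPlan:
        waypoints: List[Tuple[float, float, float]]  # simplified: one shared list
        agent_ids: List[str]


    class PipelineStage(Protocol):
        def run(self, plan: MissionPlan) -> MissionPlan: ...


    class OperatorIntentCapture:
        """Stand-in for the human-machine interaction front end."""
        def run(self, plan: MissionPlan) -> MissionPlan:
            # A real stage would translate operator input into mission goals.
            return plan


    class FlightPathGenerator:
        def run(self, plan: MissionPlan) -> MissionPlan:
            # Placeholder: a planner would expand goals into feasible trajectories.
            return plan


    class PlanVerifier:
        def run(self, plan: MissionPlan) -> MissionPlan:
            # Placeholder: verification would check separation, fuel, and constraints.
            assert plan.agent_ids, "plan must involve at least one agent"
            return plan


    def run_pipeline(plan: MissionPlan, stages: List[PipelineStage]) -> MissionPlan:
        for stage in stages:  # each stage consumes and refines the plan
            plan = stage.run(plan)
        return plan


    if __name__ == "__main__":
        initial = MissionPlan(waypoints=[(0.0, 0.0, 50.0)], agent_ids=["uav-1"])
        print(run_pipeline(initial, [OperatorIntentCapture(),
                                     FlightPathGenerator(),
                                     PlanVerifier()]))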

    The Underpinnings of Workload in Unmanned Vehicle Systems

    This paper identifies and characterizes factors that contribute to operator workload in unmanned vehicle systems. Our objective is to provide a basis for developing models of workload for use in the design and operation of complex human-machine systems. In 1986, Hart developed a foundational conceptual model of workload, which formed the basis for arguably the most widely used workload measurement technique, the NASA Task Load Index. Since that time, however, there have been many advances in models and factor identification, as well as in workload control measures. Additionally, there is a need to further inventory and describe the factors that contribute to human workload in light of technological advances, including automation and autonomy. Thus, we propose a conceptual framework for the workload construct and present a taxonomy of factors that can contribute to operator workload. These factors, referred to as workload drivers, are associated with a variety of system elements, including the environment, task, equipment, and operator. In addition, we discuss how workload moderators, such as automation and interface design, can be manipulated in order to influence operator workload. We contend that workload drivers, workload moderators, and the interactions among drivers and moderators all need to be accounted for when building complex human-machine systems.
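
    As a rough illustration of the driver/moderator distinction described above, the sketch below encodes the taxonomy as simple data types and a toy workload estimate; the specific driver names, moderator names, and the additive/multiplicative model are illustrative assumptions, not the paper's taxonomy or measurement method.

    # Illustrative encoding of workload drivers and moderators (names and model are examples only).
    from dataclasses import dataclass
    from enum import Enum
    from typing import List


    class DriverCategory(Enum):
        ENVIRONMENT = "environment"
        TASK = "task"
        EQUIPMENT = "equipment"
        OPERATOR = "operator"


    @dataclass
    class WorkloadDriver:
        name: str
        category: DriverCategory
        level: float                 # normalized 0..1 contribution (assumed scale)


    @dataclass
    class WorkloadModerator:
        name: str                    # e.g. "automation", "interface design"
        attenuation: float           # fraction of driver load it removes (assumed model)


    def estimate_workload(drivers: List[WorkloadDriver],
                          moderators: List[WorkloadModerator]) -> float:
        """Toy model: sum driver levels, then apply moderators multiplicatively."""
        raw = sum(d.level for d in drivers)
        for m in moderators:
            raw *= (1.0 - m.attenuation)
        return raw


    if __name__ == "__main__":
        drivers = [WorkloadDriver("turbulence", DriverCategory.ENVIRONMENT, 0.4),
                   WorkloadDriver("multi-vehicle monitoring", DriverCategory.TASK, 0.6)]
        moderators = [WorkloadModerator("automation", 0.3)]
        print(round(estimate_workload(drivers, moderators), 2))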

    Towards human-friendly efficient control of multi-robot teams

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse an unknown terrain in the shortest time. In the experiments, the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends both on the robots' levels of autonomy and on the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchy of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA; specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to the followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
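
    The mapping from gestures to a UGV command language at different group levels of autonomy could look roughly like the sketch below; the gesture names, command strings, and dispatch rules are assumptions for illustration, not the UGVL defined in the paper.

    # Sketch of dispatching gesture-derived commands at different G-LOA (hypothetical UGVL strings).
    from enum import Enum
    from typing import Dict, List, Optional


    class GLOA(Enum):
        INDIVIDUAL = 1      # command each robot separately
        LEADER_CLONE = 2    # command a leader, clone its controls to followers
        GROUP = 3           # command the whole team at once


    # Assumed gesture-to-command mapping; the real UGVL vocabulary may differ.
    GESTURE_TO_UGVL: Dict[str, str] = {
        "point_forward": "MOVE FWD 1.0",
        "raise_hand": "STOP",
        "sweep_left": "TURN LEFT 90",
    }


    def dispatch(gesture: str, team: List[str], mode: GLOA,
                 target: Optional[str] = None) -> Dict[str, str]:
        """Return the per-robot command set implied by one gesture."""
        command = GESTURE_TO_UGVL[gesture]
        if mode is GLOA.INDIVIDUAL:
            assert target is not None, "individual mode needs an explicit robot"
            return {target: command}
        if mode is GLOA.LEADER_CLONE:
            leader, *followers = team
            return {robot: command for robot in [leader, *followers]}  # cloned controls
        return {robot: command for robot in team}                      # group-level command


    if __name__ == "__main__":
        team = ["rover-1", "rover-2", "rover-3"]
        print(dispatch("point_forward", team, GLOA.GROUP))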

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal; multimodality is a representation of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.
    Design/methodology/approach – This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE), and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.
    Findings – Using insights from a balanced cross-section of government, academic, and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A Multimodal IRS (MIRS) for military communication has yet to be deployed.
    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to possess all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM, and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.
    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST, and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible means of communication that readily adapts to virtual training will greatly enhance planning and mission rehearsals.
    Social implications – A multimodal communication system based on interaction, perception, cognition, and visualization is still missing. The ability to communicate, express, and convey information in an HMT setting, with multiple options, suggestions, and recommendations, will enhance military communication, strength, engagement, security, cognition, and perception, as well as the ability to act confidently for a successful mission.
    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communication. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments. At this time, there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces a research proposal for a multimodal interactive robot for military communication.
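
    As a purely illustrative sketch of the kind of multimodal fusion the proposed IRS would need, the snippet below merges speech, gesture, and image events into a single interpreted command; every type, field, and rule here is an assumption, since no such integrated system exists in the surveyed work.

    # Illustrative fusion of multimodal events into one command (all names hypothetical).
    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class ModalEvent:
        modality: str        # "speech", "gesture", "image", ...
        content: str
        confidence: float


    def fuse(events: List[ModalEvent]) -> Optional[str]:
        """Pick the highest-confidence interpretation; a real IRS would fuse modes, not just select one."""
        if not events:
            return None
        best = max(events, key=lambda e: e.confidence)
        return f"{best.modality}:{best.content}"


    if __name__ == "__main__":
        print(fuse([ModalEvent("speech", "advance to checkpoint", 0.8),
                    ModalEvent("gesture", "point_northeast", 0.6)]))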

    A Predictive Model for Human-Unmanned Vehicle Systems : Final Report

    Advances in automation are making it possible for a single operator to control multiple unmanned vehicles (UVs). This capability is desirable in order to reduce the operational costs of human-UV systems (HUVS), extend human capabilities, and improve system effectiveness. However, the high complexity of these systems introduces many significant challenges for system designers. To help understand and overcome these challenges, high-fidelity computational models of the HUVS must be developed. These models should have two capabilities. First, they must be able to describe the behavior of the various entities in the team, including both the human operator and the UVs. Second, they must be able to predict how changes in the HUVS and its mission will alter the performance characteristics of the system. In this report, we describe our work toward developing such a model. Via user studies, we show that our model has the ability to describe the behavior of a HUVS consisting of a single human operator and multiple independent UVs with homogeneous capabilities. We also evaluate the model's ability to predict how changes in team size, the human-UV interface, the UVs' autonomy levels, and operator strategies affect the system's performance. Prepared for MIT Lincoln Laboratory.
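
    A highly simplified stand-in for such a predictive model is sketched below: it estimates operator utilization from team size and per-vehicle interaction and neglect times. The formulation and parameter names are assumptions made for illustration, not the model developed in the report.

    # Toy predictive model of operator utilization in a human-UV system (assumed formulation).
    from dataclasses import dataclass


    @dataclass
    class HUVSConfig:
        num_vehicles: int            # team size
        interaction_time_s: float    # mean operator time per vehicle interaction
        neglect_time_s: float        # mean time a vehicle can run unattended


    def operator_utilization(cfg: HUVSConfig) -> float:
        """Fan-out style approximation: fraction of time the operator is busy."""
        demand = cfg.num_vehicles * cfg.interaction_time_s
        cycle = cfg.interaction_time_s + cfg.neglect_time_s
        return min(1.0, demand / cycle)


    if __name__ == "__main__":
        for n in (2, 4, 8):
            cfg = HUVSConfig(num_vehicles=n, interaction_time_s=10.0, neglect_time_s=50.0)
            print(n, "UVs ->", round(operator_utilization(cfg), 2))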

    A Control Architecture for Unmanned Aerial Vehicles Operating in Human-Robot Team for Service Robotic Tasks

    In this thesis, a control architecture for an Unmanned Aerial Vehicle (UAV) is presented. The aim of the thesis is to address the problem of controlling a flying robot operating in a human-robot team at different levels of abstraction. For this purpose, three different layers were considered in the design of the architecture, namely the high-level, middle-level, and low-level layers. The special case of a UAV operating in service robotics tasks, and in particular in a Search & Rescue mission in an alpine scenario, is considered. Different methodologies for each layer are presented, with simulated or real-world experimental validation.
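
    To show how the three layers might be separated in code, here is a minimal sketch of high-, middle-, and low-level interfaces; the class names and responsibilities are assumptions based only on the layering described above, not the thesis's actual architecture.

    # Minimal sketch of a three-layer UAV control stack (hypothetical interfaces).
    from dataclasses import dataclass
    from typing import List, Tuple


    @dataclass
    class Setpoint:
        position: Tuple[float, float, float]


    class HighLevelPlanner:
        """Mission/task level: e.g. 'search this area', produces waypoints."""
        def plan(self, area: List[Tuple[float, float]]) -> List[Setpoint]:
            return [Setpoint((x, y, 20.0)) for x, y in area]


    class MidLevelController:
        """Trajectory level: turns waypoints into a time-ordered reference."""
        def track(self, waypoints: List[Setpoint]) -> List[Setpoint]:
            return waypoints  # a real implementation would interpolate and smooth


    class LowLevelController:
        """Attitude/thrust level: follows each setpoint via the flight controller."""
        def execute(self, setpoint: Setpoint) -> None:
            print(f"flying to {setpoint.position}")


    if __name__ == "__main__":
        high, mid, low = HighLevelPlanner(), MidLevelController(), LowLevelController()
        for sp in mid.track(high.plan([(0.0, 0.0), (10.0, 0.0)])):
            low.execute(sp)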

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people and of the advances in technology that will make new uses possible, and it provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance. Finally, it discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    AQUA-G: a universal gesture recognition framework

    In this thesis, I describe a software architecture and implementation designed to ease the process of 1) developing gesture-enabled applications and 2) using multiple disparate interaction devices simultaneously to create gestures. Developing gesture-enabled applications from scratch can be a time-consuming process involving obtaining input from novel input devices, processing that input in order to recognize gestures, and connecting this information to the application. Previously, developers have turned to gesture recognition systems to assist them in developing these applications. However, existing systems are limited in flexibility and adaptability. I propose AQUA-G, a universal gesture recognition framework that utilizes a unified event architecture to communicate with a limitless variety of input devices. AQUA-G provides an abstraction of gesture recognition and allows developers to write custom gestures. Its features have been driven in part by previous architectures and are partially based on a needs assessment with a sample of developers. This research contributes a scalable and reliable software system for gesture-enabled application development, which makes developing and prototyping novel interaction styles more accessible to a larger development community.
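
    The unified-event idea can be pictured with the sketch below, in which disparate devices emit a common event type that pluggable gesture recognizers subscribe to; the types, registration API, and the sample gesture are illustrative assumptions, not AQUA-G's actual interfaces.

    # Sketch of a unified input-event bus feeding pluggable gesture recognizers (hypothetical API).
    from dataclasses import dataclass
    from typing import Callable, List, Optional


    @dataclass
    class InputEvent:
        device: str                  # "mouse", "wiimote", "touch", ...
        x: float
        y: float
        timestamp: float


    Recognizer = Callable[[List[InputEvent]], Optional[str]]


    class EventBus:
        def __init__(self) -> None:
            self._events: List[InputEvent] = []
            self._recognizers: List[Recognizer] = []

        def register(self, recognizer: Recognizer) -> None:
            self._recognizers.append(recognizer)

        def publish(self, event: InputEvent) -> List[str]:
            """Store the event, run every recognizer, return any gestures detected."""
            self._events.append(event)
            hits = [r(self._events) for r in self._recognizers]
            return [h for h in hits if h is not None]


    def swipe_right(events: List[InputEvent]) -> Optional[str]:
        """Trivial custom gesture: net rightward motion over the event history."""
        if len(events) >= 2 and events[-1].x - events[0].x > 100:
            return "swipe_right"
        return None


    if __name__ == "__main__":
        bus = EventBus()
        bus.register(swipe_right)
        bus.publish(InputEvent("touch", 0, 0, 0.0))
        print(bus.publish(InputEvent("touch", 150, 5, 0.1)))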

    The Effect of Pilot and Air Traffic Control Experiences & Automation Management Strategies on UAS Mission Task Performance

    Unmanned aircraft are relied on now more than ever to save lives and support troops, as in the recent Operation Enduring Freedom and Operation Iraqi Freedom. Demand for UAS capabilities is also rapidly increasing in the civilian sector. However, UAS operations will not be carried out in the NAS until safety concerns are alleviated. Among these concerns is determining the appropriate level of automation in conjunction with a suitable pilot who exhibits the necessary knowledge, skills, and abilities to safely operate these systems. This research examined two levels of automation: Management by Consent (MBC) and Management by Exception (MBE). User experience was also analyzed in conjunction with both levels of automation while operating an unmanned aircraft simulator. The participants encompassed three groups: pilots, air traffic control (ATC), and human factors. Performance, workload, and situation awareness data were examined, but did not show any significant differences among the groups. Shortfalls and constraints are examined in depth to help pave the way for future research.
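
    The operational difference between the two automation levels can be sketched as follows: under Management by Consent the automation waits for explicit approval, while under Management by Exception it proceeds unless vetoed. The function names and response values below are illustrative assumptions, not the simulator's interface.

    # Illustrative contrast between Management by Consent (MBC) and Management by Exception (MBE).
    from enum import Enum
    from typing import Callable


    class Mode(Enum):
        MBC = "management_by_consent"
        MBE = "management_by_exception"


    def decide(mode: Mode, proposed_action: str,
               operator_response: Callable[[str], str]) -> bool:
        """Return True if the automation may execute the proposed action."""
        answer = operator_response(proposed_action)  # "approve", "veto", or "" (no input)
        if mode is Mode.MBC:
            return answer == "approve"               # must be explicitly approved
        return answer != "veto"                      # proceeds unless explicitly vetoed


    if __name__ == "__main__":
        silent_operator = lambda action: ""          # operator gives no input in time
        print("MBC executes:", decide(Mode.MBC, "reroute", silent_operator))  # False
        print("MBE executes:", decide(Mode.MBE, "reroute", silent_operator))  # True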