
    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand, innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concept of UAM/UAS operations in the National Airspace System (NAS) remains an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test the various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research in UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of the city of San Francisco and a vertical take-off-and-landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing. The system parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for the development and evaluation of the UAM framework.
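    The abstract mentions customizable scenario parameters and recorded flight data for post-processing. The sketch below illustrates, under stated assumptions, what such a scenario configuration and flight log could look like; every class and field name here is hypothetical and is not the testbed's actual API.

```python
# Hypothetical sketch of a scenario configuration and flight-data log for a
# CAVE-style UAM testbed; names and fields are illustrative assumptions,
# not the API of the system described in the abstract.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FlightScenario:
    origin: str = "Downtown San Francisco vertiport"
    destination: str = "San Francisco International Airport"
    control_mode: str = "autonomous"        # or "manual" (single-pilot joystick)
    cruise_speed_mps: float = 60.0          # customizable per scenario

@dataclass
class FlightSample:
    t: float                                # simulation time, s
    position: Tuple[float, float, float]    # x, y, z in the virtual city frame
    heading_deg: float
    speed_mps: float

@dataclass
class FlightLog:
    scenario: FlightScenario
    samples: List[FlightSample] = field(default_factory=list)

    def record(self, sample: FlightSample) -> None:
        """Append one sample for post-processing after the run."""
        self.samples.append(sample)

# Example: record a single sample during a simulated run.
log = FlightLog(FlightScenario(control_mode="manual"))
log.record(FlightSample(t=0.5, position=(0.0, 120.0, 35.0), heading_deg=92.0, speed_mps=48.2))
```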

    An Augmented Reality Human-Robot Collaboration System

    This article discusses an experimental comparison of three user interface techniques for interaction with a remotely located robot. A typical interface for such a situation is to teleoperate the robot using a camera that displays the robot's view of its work environment. However, the operator often has a difficult time maintaining situation awareness due to this single egocentric view. Hence, a multimodal system was developed that enables the human operator to view the robot in its remote work environment through an augmented reality interface: the augmented reality human-robot collaboration (AR-HRC) system. The operator uses spoken dialogue, reaches into the 3D representation of the remote work environment, and discusses the robot's intended actions. The result of the comparison was that the AR-HRC interface was found to be the most effective, increasing accuracy by 30% while reducing the number of close calls in operating the robot by a factor of roughly three. It thus provides the means to maintain spatial awareness and gives users the feeling of working in a true collaborative environment.
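    The core idea of pairing a spoken command with a selection made in the 3D scene can be illustrated with a minimal fusion sketch; the data structures and the `fuse` function below are assumptions for illustration only, not the AR-HRC system's actual design.

```python
# Illustrative sketch of fusing a spoken command with a pointed 3D location,
# in the spirit of an AR human-robot collaboration interface; the structures
# and function below are assumptions, not the AR-HRC system's actual design.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SpokenCommand:
    verb: str                              # e.g. "move", "pick up"
    transcript: str

@dataclass
class GestureTarget:
    position: Tuple[float, float, float]   # point selected in the 3D scene

@dataclass
class RobotIntent:
    action: str
    target: Tuple[float, float, float]

def fuse(speech: SpokenCommand, gesture: GestureTarget) -> RobotIntent:
    """Combine 'what to do' (speech) with 'where to do it' (gesture)."""
    return RobotIntent(action=speech.verb, target=gesture.position)

# Example: "move over there" plus a reach into the 3D scene.
intent = fuse(SpokenCommand("move", "move over there"), GestureTarget((1.2, 0.4, 0.0)))
print(intent)
```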

    The Underpinnings of Workload in Unmanned Vehicle Systems

    This paper identifies and characterizes factors that contribute to operator workload in unmanned vehicle systems. Our objective is to provide a basis for developing models of workload for use in the design and operation of complex human-machine systems. In 1986, Hart developed a foundational conceptual model of workload, which formed the basis for arguably the most widely used workload measurement technique, the NASA Task Load Index. Since that time, however, there have been many advances in models and factor identification, as well as in workload control measures. Additionally, there is a need to further inventory and describe the factors that contribute to human workload in light of technological advances, including automation and autonomy. Thus, we propose a conceptual framework for the workload construct and present a taxonomy of factors that can contribute to operator workload. These factors, referred to as workload drivers, are associated with a variety of system elements, including the environment, task, equipment, and operator. In addition, we discuss how workload moderators, such as automation and interface design, can be manipulated in order to influence operator workload. We contend that workload drivers, workload moderators, and the interactions among drivers and moderators all need to be accounted for when building complex human-machine systems.
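    The driver/moderator taxonomy described above could be encoded as a simple lookup structure, sketched below; the specific driver entries are illustrative assumptions, not the paper's full inventory.

```python
# Hypothetical encoding of the proposed taxonomy: workload drivers grouped by
# system element, plus moderators that can be manipulated at design time.
# The specific entries are illustrative, not the paper's full inventory.
from typing import Dict, List

WORKLOAD_DRIVERS: Dict[str, List[str]] = {
    "environment": ["weather", "airspace density"],
    "task": ["number of concurrent tasks", "time pressure"],
    "equipment": ["display clutter", "control latency"],
    "operator": ["experience", "fatigue"],
}

WORKLOAD_MODERATORS: List[str] = ["level of automation", "interface design"]

def drivers_for(element: str) -> List[str]:
    """Look up the drivers associated with one system element."""
    return WORKLOAD_DRIVERS.get(element, [])

# Example: which drivers are tied to the equipment element?
print(drivers_for("equipment"))   # ['display clutter', 'control latency']
```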

    Drone methodologies: Taking flight in human and physical geography

    The world of late seems oversaturated with stories about drones. These suddenly pervasive machines straddle a divide in geography, being simultaneously an important tool for proximal sensing in physical geography and a technology with military origins that human geographers have critically engaged. This paper, a collaboration between a physical and a human geographer, is an exploration of the epistemological nexus that a critical drone methodology offers the discipline, and which we suggest provides a new opportunity for collaborative human/physical geography. Drawing on our own research with drones and that of others, we demonstrate how recent scholarship on vertical geographies and longstanding remote-sensing frameworks are challenged by drone methodologies in which social, environmental, and technological concerns are entangled with the politics of access to proximal airspace and, in doing so, define a new conceptual atmospheric zone within the Earth's atmospheric boundary layer – the "Nephosphere" – where drone experimentation occurs. We argue that engagement with non-military uses of drones is crucial for the discipline, now that we are entering an uncertain aerial future that will be replete with flying robots, and suggest that drones are reconfiguring geographic imaginations. In short, we call on geographers to participate actively in the shaping of new drone methodologies in which the values and perils of the technology can be critically debated from the starting point of the experiential, rather than the speculative.

    Towards the use of unmanned aerial systems for providing sustainable services in smart cities

    Sustainability is at the heart of many application fields where the use of Unmanned Aerial Systems (UAS) is becoming more and more important (e.g., agriculture, fire detection and prediction, environmental surveillance, mapping, etc.). However, their usage and evolution are highly conditioned by the specific application field they are designed for, and thus they cannot be easily reused across different application fields. From this point of view, since they are not multipurpose, we can say that they are not fully sustainable. Bearing this in mind, the objective of this paper is two-fold: on the one hand, to identify the whole set of features that must be provided by a UAS to be considered sustainable and to show that there is no UAS satisfying all these features; on the other hand, to present an open and sustainable UAS architecture that may be used to build UAS on demand to provide the features needed in each application field. Since this architecture is mainly based on software and hardware adaptability, it contributes to the technical sustainability of cities. This work was partially supported by the Ministerio de Economía y Competitividad and FEDER funds (project TIN2015-69957-R, I+D+i), by the Junta de Extremadura and the Fondo Europeo de Desarrollo Regional (grants GR15098 and IB16055), and by the Interreg V-A España-Portugal (POCTEP) 2014-2020 program (project 0045-4IE-4-P).
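    The "build UAS on demand" idea can be pictured as composing a system from interchangeable software components per application field. The sketch below is a minimal illustration of that composition pattern only; the `UASBuilder` class, component names, and registry API are assumptions and do not describe the architecture proposed in the paper.

```python
# Minimal sketch of composing a UAS from interchangeable software components
# per application field. Component names and the registry API are assumptions
# for illustration only.
from typing import Callable, Dict, List

class UASBuilder:
    def __init__(self) -> None:
        self._components: Dict[str, Callable[[], str]] = {}

    def register(self, feature: str, component: Callable[[], str]) -> None:
        """Make a component available for later composition."""
        self._components[feature] = component

    def build(self, features: List[str]) -> List[str]:
        """Instantiate only the components needed by one application field."""
        return [self._components[f]() for f in features if f in self._components]

builder = UASBuilder()
builder.register("mapping", lambda: "photogrammetry payload driver")
builder.register("fire_detection", lambda: "thermal camera + alert module")

# A mapping deployment and a fire-surveillance deployment reuse the same base.
print(builder.build(["mapping"]))
print(builder.build(["fire_detection", "mapping"]))
```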

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal, and multimodality is the combined use of many modes chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities for multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.
    Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE), and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.
    Findings – Using insights from a balanced cross-section of government, academic, and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.
    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to possess all of the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM, and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area, and multimodal human/military robot communication is the ultimate goal of this research.
    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST, and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsal.
    Social implications – A multimodal communication system based on interaction, perception, cognition, and visualization is still missing. Options to communicate, express, and convey information in an HMT setting, with multiple suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition, and perception, as well as the ability to act confidently for a successful mission.
    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communication. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments, but at this time there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot for military communication.

    Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction

    Multi-robot missions are a challenge for operators in terms of workload and situational awareness. Operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can range from inefficiencies to accidents. This work focuses on the study of future operator interfaces for multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities, and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality, and a predictive virtual reality interface. The four interfaces were validated by the performance of twenty-four operators who supervised eight multi-robot fire surveillance and extinguishing missions. The results of the workload and situational awareness tests show that virtual reality improves situational awareness without increasing operator workload, whereas the effects of the predictive components are not significant and depend on their implementation.
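    The kind of short-horizon prediction that a "predictive" interface might overlay on a robot's current state can be sketched with a simple extrapolation; the constant-velocity model and names below are assumptions for illustration, not the prediction method used in the interfaces evaluated in the study.

```python
# Sketch of a short-horizon prediction for a predictive display overlay;
# the constant-velocity model and names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RobotState:
    x: float
    y: float
    vx: float
    vy: float

def predict(state: RobotState, horizon_s: float) -> RobotState:
    """Constant-velocity extrapolation of a robot's planar state."""
    return RobotState(
        x=state.x + state.vx * horizon_s,
        y=state.y + state.vy * horizon_s,
        vx=state.vx,
        vy=state.vy,
    )

# Where will this robot be in 5 s if it keeps its current velocity?
print(predict(RobotState(x=10.0, y=3.0, vx=1.5, vy=-0.5), horizon_s=5.0))
```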