    Exploring Alternative Control Modalities for Unmanned Aerial Vehicles

    Unmanned aerial vehicles (UAVs), commonly known as drones, are defined by the International Civil Aviation Organization (ICAO) as aircraft without a human pilot on board. They are currently utilized primarily in the defense and security sectors but are moving towards the general market in surprisingly powerful and inexpensive forms. While drones are presently restricted to non-commercial recreational use in the USA, it is expected that they will soon be widely adopted for both commercial and consumer use. Potentially, UAVs can revolutionize various business sectors, including private security, agricultural practices, product transport, and perhaps even aerial advertising. Business Insider foresees that 12% of the expected $98 billion cumulative global spending on aerial drones through the following decade will be for business purposes.[28] At the moment, most drones are controlled by some sort of classic joystick or multitouch remote controller. While drone manufacturers have improved the overall controllability of their products, most drones shipped today are still quite challenging for inexperienced users to pilot. To help mitigate these controllability challenges and flatten the learning curve, gesture controls can be employed to improve the piloting of UAVs. The purpose of this study was to develop and evaluate an improved and more intuitive method of flying UAVs by supporting the use of hand gestures and other non-traditional control modalities. The goal was to build and test an end-to-end UAV system that provides an easy-to-use control interface for novice drone users. The expectation was that, with gesture-based navigation, novice users would have an enjoyable and safe experience, quickly learning to navigate a drone with ease while avoiding the loss of or damage to the vehicle during the initial learning period. During the course of this study we learned that while this approach offers considerable promise, a number of technical obstacles make the problem much harder than anticipated. This thesis details our approach to the problem, analyzes the user data we collected, and summarizes the lessons learned.
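
    As a hedged illustration of the kind of gesture-based navigation this thesis evaluates, the sketch below maps recognized hand-gesture labels to UAV velocity setpoints; the gesture names, the VelocityCommand fields, and the hover fallback are assumptions made for illustration, not the thesis's actual implementation.

        # Minimal sketch: routing recognized hand gestures to UAV velocity
        # setpoints. Gesture labels and command fields are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class VelocityCommand:
            forward: float   # m/s, positive = forward
            lateral: float   # m/s, positive = right
            vertical: float  # m/s, positive = up
            yaw_rate: float  # rad/s, positive = clockwise

        GESTURE_TO_COMMAND = {
            "palm_forward": VelocityCommand(0.5, 0.0, 0.0, 0.0),
            "palm_up":      VelocityCommand(0.0, 0.0, 0.3, 0.0),
            "palm_down":    VelocityCommand(0.0, 0.0, -0.3, 0.0),
            "fist":         VelocityCommand(0.0, 0.0, 0.0, 0.0),  # hover
        }

        def command_for(gesture: str) -> VelocityCommand:
            # Unknown or low-confidence gestures fall back to hover: a
            # safety-first default that protects novices from flyaways.
            return GESTURE_TO_COMMAND.get(gesture, GESTURE_TO_COMMAND["fist"])

    Defaulting to hover on unrecognized input is one simple way to address the safety goal the abstract emphasizes for novice pilots.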

    An Exploration Of Unmanned Aerial Vehicle Direct Manipulation Through 3d Spatial Interaction

    We present an exploration that surveys the strengths and weaknesses of various 3D spatial interaction techniques in the context of directly manipulating an Unmanned Aerial Vehicle (UAV). In particular, we provide a study of touch- and device-free interfaces in this domain. 3D spatial interaction can be achieved using hand-held motion control devices such as the Nintendo Wiimote, but computer vision systems offer a different and perhaps more natural method. In general, 3D user interfaces (3DUI) enable a user to interact with a system on a more robust and potentially more meaningful scale. We discuss the design and development of various 3D interaction techniques using commercially available computer vision systems, and explore the effects that these techniques have on the overall user experience in the UAV domain. Specific qualities of the user experience are targeted, including perceived intuitiveness, ease of use, and comfort, among others. We present a complete user study for upper-body gestures, and also discuss preliminary reactions to 3DUI using hand-and-finger gestures. The results provide evidence supporting the use of 3DUI in this domain, as well as the use of certain styles of techniques over others.

    Towards a situated, multimodal interface for multiple UAV control

    Multiple autonomous Unmanned Aerial Vehicles (UAVs) can be used to complement human teams. This paper presents the results of an exploratory study investigating gesture/speech interfaces for situated interaction with robots, along with the development of three iterations of a prototype command set. The command set was compiled by observing users interacting with a simulated interface in a virtual reality environment. We discovered that users find this type of interface intuitive and that their commands tend to group naturally into 'High-Level' and 'Low-Level' instructions. However, as the robots moved further away, the loss of depth perception and direct feedback was inimical to the interaction. In a second experiment we found that simple heads-up display elements could mitigate these issues. ©2010 IEEE
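
    The 'High-Level'/'Low-Level' grouping the study reports could be captured by a two-tier dispatcher along the lines sketched below; the command names and the robot interface are illustrative assumptions, not the paper's actual prototype command set.

        # Sketch of a two-tier command set: high-level goals are delegated
        # to onboard autonomy, while low-level commands map to bounded
        # motion steps. All names here are illustrative.
        HIGH_LEVEL = {"search_area", "follow_me", "return_home"}
        LOW_LEVEL = {"up", "down", "left", "right", "stop"}

        class Robot:
            def run_behavior(self, goal: str) -> None:
                print(f"autonomy layer decomposes goal: {goal}")

            def nudge(self, direction: str) -> None:
                print(f"bounded motion step: {direction}")

        def dispatch(command: str, robot: Robot) -> None:
            if command in HIGH_LEVEL:
                robot.run_behavior(command)
            elif command in LOW_LEVEL:
                robot.nudge(command)
            else:
                print(f"unrecognized command: {command}")

        dispatch("search_area", Robot())  # delegated to autonomy
        dispatch("up", Robot())           # direct teleoperation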

    A Control Architecture for Unmanned Aerial Vehicles Operating in Human-Robot Team for Service Robotic Tasks

    In this thesis a control architecture for an Unmanned Aerial Vehicle (UAV) is presented. The aim of the thesis is to address the problem of controlling a flying robot operating in a human-robot team at different levels of abstraction. For this purpose, three layers were considered in the design of the architecture: the high-level, the middle-level, and the low-level layers. The special case of a UAV operating in service robotics tasks, and in particular in Search & Rescue missions in alpine scenarios, is considered. Different methodologies for each layer are presented, with simulated or real-world experimental validation.
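
    A minimal sketch of the three-layer decomposition described above is given below; the class and method names are assumptions made for illustration, and each layer stands in for the (unspecified here) methodologies the thesis develops.

        # Illustrative three-layer control stack: mission logic (high level)
        # feeds trajectory generation (middle level), which feeds the
        # rate/attitude controller (low level). Interfaces are hypothetical.
        class LowLevel:
            def track(self, setpoint: tuple) -> None:
                pass  # attitude/thrust control would run here at high rate

        class MidLevel:
            def plan(self, waypoint: tuple) -> list:
                return [waypoint]  # placeholder for trajectory generation

        class HighLevel:
            def __init__(self, mid: MidLevel, low: LowLevel) -> None:
                self.mid, self.low = mid, low

            def execute(self, waypoints: list) -> None:
                # Mission logic, e.g. sweeping a Search & Rescue area, is
                # reduced step by step to low-level setpoints.
                for wp in waypoints:
                    for sp in self.mid.plan(wp):
                        self.low.track(sp)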

    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyzes different and innovative techniques for Human-Robot Interaction, with a focus on interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project is Fly4SmartCity, which analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. Then follows an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, in the User’s Flying Organizer project (UFO project). This project aims to develop a flying robot able to project information into the environment by exploiting concepts of Spatial Augmented Reality.

    A Robot is a Smart Tool: Investigating Younger Users' Preferences for the Multimodal Interaction of Domestic Service Robot

    The degree to which domestic service robots are generally accepted depends mainly on the user experience and on the surprise the design brings to people. For robot design to follow the trend of interaction with smart devices, researchers should have insight into young people's acceptance of, and opinions on, emerging new interactions. The main content of this study is a user elicitation through which users' suggestions for commanding a robot in specific contexts are gathered. Accordingly, it sheds light on the features of user preferences for human-robot interaction. This study argues that younger users regard service robots merely as intelligent tools, which is the direct cause of the interaction preferences described above. Keywords: Service robot, Interaction design, User preference

    Natural User Interfaces for Human-Drone Multi-Modal Interaction

    Personal drones are becoming part of everyday life. To fully integrate them into society, it is crucial to design safe and intuitive ways to interact with these aerial systems. Recent advances in User-Centered Design (UCD) applied to Natural User Interfaces (NUIs) aim to make use of innate human capabilities, such as speech, gestures, and vision, to interact with technology the way humans would with one another. In this paper, a Graphical User Interface (GUI) and several NUI methods are studied and implemented, along with computer vision techniques, in a single software framework for aerial robotics called Aerostack, which allows for intuitive and natural human-quadrotor interaction in indoor GPS-denied environments. These strategies include speech, body position, hand gesture, and visual marker interactions used to command tasks directly to the drone. The NUIs presented are based on devices like the Leap Motion Controller, microphones, and small monocular on-board cameras that are unobtrusive to the user. Thanks to this UCD perspective, users can choose the most intuitive and effective type of interaction for their application. Additionally, the proposed strategies allow for multi-modal interaction between multiple users and the drone, since several of these interfaces can be integrated into one single application, as is shown in various real flight experiments performed with non-expert users.
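
    One way to picture the multi-modal integration the paper describes is a set of NUI channels feeding a single command queue, as in the sketch below; the callback names and the queue-based design are assumptions made for illustration, not Aerostack's actual interfaces.

        # Sketch: speech, gesture, and visual-marker channels all feed one
        # consumer, so several users/modalities can command the same drone
        # without issuing conflicting commands simultaneously.
        import queue

        commands: queue.Queue = queue.Queue()

        def on_speech(utterance: str) -> None:
            commands.put(("speech", utterance))      # e.g. "take off"

        def on_hand_gesture(gesture: str) -> None:
            commands.put(("gesture", gesture))       # e.g. "swipe_left"

        def on_visual_marker(marker_id: int) -> None:
            commands.put(("marker", f"goto_{marker_id}"))

        def control_loop(execute) -> None:
            # Single consumer: whichever modality's event arrives first wins.
            while True:
                source, cmd = commands.get()
                execute(cmd)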

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is a representation of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision making. Design/methodology/approach – This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE), and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success. Findings – Using insights from a balanced cross-section of sources from government, academic, and commercial entities that contribute to HRI, a multimodal IRS for military communication is introduced. A Multimodal IRS (MIRS) in military communication has yet to be deployed. Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM, and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area, and multimodal human/military robot communication is the ultimate goal of this research. Practical implications – A multimodal autonomous robot in military communication using speech, images, gestures, VST, and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. The possession of flexible communications that readily adapt to virtual training will greatly enhance planning and mission rehearsals. Social implications – An interaction-, perception-, cognition-, and visualization-based multimodal communication system is still missing. Options to communicate, express, and convey information in an HMT setting, with multiple options, suggestions, and recommendations, will certainly enhance military communication, strength, engagement, security, cognition, and perception, as well as the ability to act confidently for a successful mission. Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities of extension that would support the military in maintaining effective communication using multimodalities. There are separate ongoing efforts, such as in machine-enabled speech, image recognition, tracking, visualizations for situational awareness, and virtual environments; at this time, however, there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The paper briefly introduces a research proposal for a multimodal interactive robot in military communication.