
    Teleoperation of a service robot using a mobile device

    Teleoperation is a concept born of the rapid evolution of technology, with the intuitive meaning "to operate at a distance." The first teleoperation system was created in the mid-1950s and was used to handle chemicals. Remote-controlled systems are present nowadays in various types of applications. This dissertation presents the development of a mobile application to perform the teleoperation of a mobile service robot. The application integrates with a distributed surveillance system (the result of a QREN research project) and led to the development of a communication interface between the robot (the result of another QREN project) and the surveillance system. It was necessary to specify a communication protocol between the two systems, which was implemented over the 0MQ (Zero Message Queue) communication framework. For testing, three prototype applications were developed before performing the tests on the robot
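    The abstract does not publish the protocol itself, so the sketch below only illustrates the general shape of a JSON-encoded teleoperation command that could be framed and sent over a 0MQ socket; the field names (`type`, `linear`, `angular`, `seq`) and the helper functions are hypothetical, not the dissertation's actual protocol.

```python
import json

def encode_command(cmd, linear=0.0, angular=0.0, seq=0):
    """Serialize a teleoperation command into a byte frame suitable
    for sending over a 0MQ socket. Field names are illustrative; the
    dissertation's actual protocol is not public."""
    return json.dumps({
        "type": cmd,        # e.g. "MOVE", "STOP"
        "linear": linear,   # forward velocity, m/s
        "angular": angular, # rotational velocity, rad/s
        "seq": seq,         # sequence number for ordering/duplicate detection
    }).encode("utf-8")

def decode_command(frame):
    """Parse a received byte frame back into a command dict."""
    return json.loads(frame.decode("utf-8"))

frame = encode_command("MOVE", linear=0.5, angular=0.1, seq=1)
msg = decode_command(frame)
```

    In a real deployment the frame would be passed to a 0MQ socket's send method, with the surveillance system and robot each holding one end of the socket pair.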

    The Effects Of Video Frame Delay And Spatial Ability On The Operation Of Multiple Semiautonomous And Tele-operated Robots

    The United States Army has moved into the 21st century with the intent of redesigning not only the force structure but also the methods by which we will fight and win our nation's wars. Fundamental to this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize the exposure of front-line soldiers, the future Army will utilize unmanned assets both for information gathering and, when necessary, for engagements. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that, when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while allocating resources to the semi-autonomous robot only during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results could not confirm that frame delay has a positive effect on operator performance (though statistical power may have been an issue), but they clearly show that spatial ability is a strong predictor of performance in robotic asset control, particularly with aerial vehicles. In operating the UAV, the high-spatial group was, on average, 30% faster, lazed 12% more targets, and made 43% more location reports than the low-spatial group. The implications of this study indicate that system design should judiciously manage workload and capitalize on individual ability to improve performance, and are relevant to system designers, especially in the military community

    An integrated task manager for virtual command and control

    The Task Manager is a desktop/tablet PC interface to the Battlespace research project that provides interactions and displays for supervisory control of unmanned aerial vehicles. Using a north-up map display, the Task Manager provides a direct-manipulation interface to the units involved in an engagement. It operates in two primary modes: a planning/review mode used to generate mission scenarios, and a live-streaming mode that connects to a live Battlespace simulation via a network connection to edit and update path information on the fly. The goal of this research is to combine the precision of 2D mouse- and pen-based interaction with the increased situational awareness provided by 3D battlefield visualizations such as the Battlespace application. Combined use of these interfaces, either by a single operator or by a small team of operators with task-specific roles, is proposed to produce a more favorable ratio of operators to units in field operations, with superior decision-making capabilities due to the specific nature of the interfaces
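    As a rough illustration of the live-streaming mode's ability to "edit and update path information on the fly," the sketch below shows how a unit's 2D waypoint list might be validated and replaced; the data layout, unit identifiers, and function name are assumptions for illustration, not the Task Manager's actual code.

```python
def update_path(units, unit_id, new_waypoints):
    """Replace one unit's waypoint list with a validated new path.

    `units` maps unit identifiers to state dicts; each waypoint is an
    (x, y) position on the north-up map. Names are hypothetical.
    """
    if unit_id not in units:
        raise KeyError(f"unknown unit: {unit_id}")
    for x, y in new_waypoints:
        if not (isinstance(x, (int, float)) and isinstance(y, (int, float))):
            raise ValueError("waypoints must be numeric (x, y) pairs")
    units[unit_id]["path"] = list(new_waypoints)
    return units[unit_id]

units = {"uav-1": {"path": [(0, 0)]}}
updated = update_path(units, "uav-1", [(0, 0), (10, 5), (20, 5)])
```

    In the live-streaming mode, an edit like this would additionally be pushed over the network connection so the running Battlespace simulation reflects the new path.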

    SARSCEST (human factors)

    People interact with the processes and products of contemporary technology. Individuals are affected by these in various ways, and individuals shape them. Such interactions come under the label 'human factors'. To expand the understanding of those to whom the term is relatively unfamiliar: its domain includes both an applied science and applications of knowledge. It means both research and development, with implications of research both for basic science and for development. It encompasses not only design and testing but also training and personnel requirements, even though some unwisely try to split these apart, both by name and institutionally. The territory includes more than performance at work, though concentration on that aspect, epitomized in the derivation of the term ergonomics, has overshadowed human factors interest in interactions between technology and the home, health, safety, consumers, children and later life, the handicapped, sports and recreation, education, and travel. Two aspects of technology considered most significant for work performance, systems and automation, and several approaches to these are discussed

    High latency unmanned ground vehicle teleoperation enhancement by presentation of estimated future through video transformation

    Long-distance, high-latency teleoperation tasks are difficult, highly stressful for teleoperators, and prone to over-corrections, which can lead to loss of control. At higher latencies, or when teleoperating at higher vehicle speeds, the situation becomes progressively worse. To explore potential solutions, this research investigates two 2D visual-feedback-based assistive interfaces (sliding-only and sliding-and-zooming windows) that apply simple but effective video transformations to enhance teleoperation. A teleoperation simulator that can replicate teleoperation scenarios affected by high and adjustable latency was developed to explore the effectiveness of the proposed assistive interfaces, and three image-comparison metrics were used to fine-tune and optimise them. An operator survey was conducted to evaluate and compare performance with and without the assistance. The survey showed that a 900 ms latency increases task completion time by up to 205% for an on-road and 147% for an off-road driving track, and that overcorrection-induced oscillations increase by up to 718% at this level of latency. The sliding-only video transformation reduces task completion time by up to 25.53%, and the sliding-and-zooming transformation by up to 21.82%. The sliding-only interface reduces the oscillation count by up to 66.28%, and the sliding-and-zooming interface by up to 75.58%. Qualitative feedback from the participants also shows that both types of assistive interface offer better visual situational awareness, comfort, and controllability, and significantly reduce the impact of latency and intermittency on the teleoperation task
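    The sliding-only transformation shifts the displayed video to reflect the vehicle's estimated future position. A toy version of that idea, operating on a frame represented as a 2D list of pixel values (the real interface works on live video and predicted vehicle motion, and the offsets would come from the latency-compensating motion estimate), might look like:

```python
def sliding_window(frame, dx, dy):
    """Shift the displayed region of a frame by a predicted offset
    (dx, dy) in pixels, filling exposed borders with zeros. A toy
    sketch of a sliding-window video transformation, not the paper's
    actual implementation."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

frame = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
shifted = sliding_window(frame, 1, 0)  # pan the view one pixel right
```

    A sliding-and-zooming variant would additionally scale the window around the predicted position; in practice such warps are done with an affine transform on the GPU rather than per-pixel loops.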

    Basics of Geomatics


    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems, with the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal; multimodality refers to the use of many modes, chosen from rhetorical aspects for their communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making. Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success. Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed. Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to comprehend all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area, and multimodal human/military-robot communication is the ultimate goal of this research. Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human-machine teaming (HMT) exposure; naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals. Social implications – A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission. Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multiple modalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments; at this time, however, there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be an important task for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera-view designs were tested using a within-subjects design: an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements. Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preference did not affect performance for participants who preferred the mixed condition. This result led to a second finding: participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely because of cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information, in the form of multiple views, to the Mission Specialist interface. Based on this empirical evaluation of multiple camera views, the exocentric view is recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) conducting the first focused HRI study of aerial telemanipulation, ii) development of an evaluative model for telemanipulation performance, iii) creation of new recommendations for aerial telemanipulation interfacing, and iv) contribution of code, hardware designs, and system architectures to the open-source UAV community. The evaluative model provides a detailed framework, complementing the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating-UAV performance

    The use of modern tools for modelling and simulation of UAV with Haptic

    The Unmanned Aerial Vehicle (UAV) is a robotics research field that has been in high demand in recent years, although many questions remain unanswered. In contrast to human-operated aerial vehicles, UAVs are still far less used, owing to the fact that people are dubious about flying in, or flying, an unmanned vehicle. It is fundamentally about giving control to the computer (the artificial intelligence) to make decisions based on the situation, as a human would, but it has not been easy to convince people that this is safe, or to sustain continued enhancement of it. These days many types of UAVs are available on the market for consumer use, in applications ranging from photography and games to route mapping, building monitoring, security, and much more. UAVs are also widely used by the military for surveillance and security. One of the most common consumer products is the quadcopter, or quadrotor. The research carried out used modern tools (i.e., SolidWorks, Java NetBeans and MATLAB/Simulink) to model a control system for a quadcopter UAV, with a haptic control system for operating the quadcopter in a virtual simulation environment and in a real-time environment. A mathematical model for controlling the quadcopter in simulation and real-time environments was introduced, and the design methodology for the quadcopter was defined. This methodology was then extended to develop virtual-simulation and real-time environments for simulations and experiments. Haptic control was then implemented together with the designed control system to operate the quadcopter in virtual-simulation and real-time experiments. Using the mathematical model of the quadcopter, PID and PD control techniques were used to model the control setup for the quadcopter's altitude and motion control as the work progressed. First, the dynamic model was developed using a simple set of equations; it was then refined using a more complex control and mathematical model with precise actuator functions and aerodynamic coefficients (Figures 5-7). The presented results are satisfactory and show that flight experiments and simulations of quadcopter control using haptics are a novel area of research that helps operations be performed more successfully and gives the operator more control when operating in difficult environments. By using haptics, accidents can be minimised and the functional performance of the operator and the UAV significantly enhanced. This concept and area of haptic-control research can be developed further according to the needs of specific applications
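    The thesis's actual controllers are not reproduced in the abstract; a minimal discrete PID sketch of the kind used for quadcopter altitude control, with purely illustrative gains and time step, might look like:

```python
class PID:
    """Discrete PID controller of the kind used for quadcopter altitude
    control. Gains and time step here are illustrative, not the
    thesis's tuned values."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        """Compute one control output from the current altitude error."""
        error = setpoint - measured
        self.integral += error * self.dt                    # accumulate I term
        derivative = (error - self.prev_error) / self.dt    # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.2, ki=0.1, kd=0.3, dt=0.01)
u = pid.step(setpoint=10.0, measured=8.0)  # thrust correction toward 10 m altitude
```

    A PD variant, as also mentioned in the abstract, simply drops the integral term; in a haptic setup, the same error signal could additionally drive force feedback to the operator.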