
    Adoption of vehicular ad hoc networking protocols by networked robots

    This paper focuses on the utilization of wireless networking in the robotics domain. Many researchers have already equipped their robots with wireless communication capabilities, stimulated by the observation that multi-robot systems tend to have several advantages over their single-robot counterparts. Typically, this integration of wireless communication is tackled in a rather pragmatic manner; only a few authors have presented novel Robotic Ad Hoc Network (RANET) protocols that were designed specifically with robotic use cases in mind. This is in sharp contrast with the domain of vehicular ad hoc networks (VANET). This observation is the starting point of this paper: if the results of previous efforts focusing on VANET protocols could be reused in the RANET domain, this could lead to rapid progress in the field of networked robots. To investigate this possibility, this paper provides a thorough overview of the related work in the domains of robotic and vehicular ad hoc networks. Based on this information, an exhaustive list of requirements is defined for both network types. It is concluded that the most significant difference lies in the fact that VANET protocols are oriented towards low-throughput messaging, while RANET protocols also have to support high-throughput media streaming. Although not always with equal importance, all other defined requirements are valid for both protocol families. This leads to the conclusion that cross-fertilization between them is an appealing approach for future RANET research. To support such developments, this paper concludes with the definition of an appropriate working plan.

    Asynchronous displays for multi-UV search tasks

    Synchronous video has long been the preferred mode for controlling remote robots, with other modes such as asynchronous control used only when unavoidable, as in the case of interplanetary robotics. We identify two basic problems for controlling multiple robots using synchronous displays: operator overload and information fusion. Synchronous displays from multiple robots can easily overwhelm an operator who must search video for targets. If targets are plentiful, the operator will likely miss targets that enter and leave unattended views while dealing with others that were noticed. The related fusion problem arises because the robots' multiple fields of view may overlap, forcing the operator to reconcile different views from different perspectives and form an awareness of the environment by "piecing them together". We have conducted a series of experiments investigating the suitability of asynchronous displays for multi-UV search. Our first experiments involved static panoramas in which operators selected locations at which robots halted and panned their cameras to capture a record of what could be seen from that location. A subsequent experiment investigated the hypothesis that the relative performance of the panoramic display would improve as the number of robots was increased, causing greater overload and fusion problems. In a subsequent Image Queue system we used automated path planning and also automated the selection of imagery for presentation through a greedy selection of non-overlapping views. A fourth set of experiments used the SUAVE display, an asynchronous variant of the picture-in-picture technique for video from multiple UAVs. The panoramic displays, which addressed only the overload problem, led to performance similar to synchronous video, while the Image Queue and SUAVE displays, which addressed fusion as well, led to improved performance on a number of measures. In this paper we review our experiences in designing and testing asynchronous displays and discuss challenges to their use, including tracking dynamic targets. © 2012 by the American Institute of Aeronautics and Astronautics, Inc.
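
    The greedy selection of non-overlapping views mentioned above can be pictured with a short sketch. The code below is not the authors' implementation; the grid-cell coverage model, the function name, and the budget parameter are assumptions used only to show the idea of repeatedly picking the frame that adds the most area not yet covered.

```python
# Hedged sketch of greedy non-overlapping view selection for an
# asynchronous display. Each candidate frame is modelled as the set of
# grid cells its field of view covers; this model is an assumption.

def greedy_view_selection(frames, budget):
    """frames: dict mapping frame_id -> set of covered grid cells.
    Returns up to `budget` frame ids, each chosen to maximise the
    area not already covered by frames selected so far."""
    covered, selected = set(), []
    remaining = dict(frames)
    while remaining and len(selected) < budget:
        best_id, best_view = max(remaining.items(),
                                 key=lambda kv: len(kv[1] - covered))
        if not best_view - covered:      # only redundant views remain
            break
        selected.append(best_id)
        covered |= best_view
        del remaining[best_id]
    return selected

# Three overlapping frames; with a budget of two, the most complementary
# pair is chosen.
frames = {"f1": {1, 2, 3, 4}, "f2": {3, 4, 5}, "f3": {6, 7}}
print(greedy_view_selection(frames, budget=2))   # ['f1', 'f3']
```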

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements, through cooperative control, learning, and adaptation. While both component areas, i.e., Robotics and WSN, are very well known and well explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields that are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
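
    As a rough illustration of the robotic-router example mentioned in this abstract, the sketch below places relay robots on the straight line between a sender and a receiver so that every hop stays within radio range. The free-space line-of-sight model, the fixed radio range, and all names are illustrative assumptions rather than anything taken from the surveyed literature.

```python
import math

def place_relay_robots(sender, receiver, radio_range):
    """Hedged sketch: position the minimum number of relay robots on the
    segment between sender and receiver so that consecutive hops stay
    within radio_range. Free-space line-of-sight is assumed."""
    sx, sy = sender
    rx, ry = receiver
    hops = math.ceil(math.hypot(rx - sx, ry - sy) / radio_range)
    return [(sx + (rx - sx) * k / hops, sy + (ry - sy) * k / hops)
            for k in range(1, hops)]

# Sender and receiver 100 m apart with a 30 m radio range -> 3 relays.
print(place_relay_robots((0.0, 0.0), (100.0, 0.0), radio_range=30.0))
```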

    Scalable target detection for large robot teams

    In this paper, we present an asynchronous display method, coined the image queue, which allows operators to search through a large amount of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks, with emphasis on Urban Search and Rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest such as injured victims. It fills the gap for comprehensive and scalable displays to obtain a network-centric perspective for UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operator workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach we can scale up to very large multi-robot systems that gather huge amounts of data, which are then distributed to multiple operators. Copyright 2011 ACM.
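
    The call-center approach described here can be sketched as a shared priority queue from which any idle operator pulls the next most relevant image. The class, the relevance scores, and the frame identifiers below are assumptions for illustration, not the system reported in the paper.

```python
import heapq

class ImageQueue:
    """Hedged sketch of a call-center-style image queue: frames gathered
    by the robot team are scored for relevance and handed out to whichever
    operator asks for work next."""

    def __init__(self):
        self._heap = []                      # max-heap via negated scores

    def add_frame(self, frame_id, relevance):
        heapq.heappush(self._heap, (-relevance, frame_id))

    def next_for_operator(self):
        # Any idle operator pulls the currently most relevant frame.
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[1]

queue = ImageQueue()
queue.add_frame("robot3/frame_0412", relevance=0.9)
queue.add_frame("robot1/frame_0027", relevance=0.4)
print(queue.next_for_operator())             # robot3/frame_0412
```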

    An evolutionary algorithm for online, resource constrained, multi-vehicle sensing mission planning

    Mobile robotic platforms are an indispensable tool for various scientific and industrial applications. Robots are used to undertake missions whose execution is constrained by various factors, such as the allocated time or their remaining energy. Existing solutions for resource-constrained multi-robot sensing mission planning provide optimal plans at a computational complexity that is prohibitive for online application [1],[2],[3]. A heuristic approach exists for online, resource-constrained sensing mission planning for a single vehicle [4]. This work proposes a Genetic Algorithm (GA) based heuristic for the Correlated Team Orienteering Problem (CTOP), which is used for planning sensing and monitoring missions for robotic teams that operate under resource constraints. The heuristic is compared against optimal Mixed Integer Quadratic Programming (MIQP) solutions. Results show that, in the worst case, the quality of the heuristic solution is within 5% of the optimal solution. The heuristic solution proves to be at least 300 times more time-efficient in the worst tested case. The GA heuristic execution required less than a second in the worst case, making it suitable for online execution. Comment: 8 pages, 5 figures, accepted for publication in Robotics and Automation Letters (RA-L).
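
    To make the flavour of such a GA heuristic concrete, the sketch below evolves a visiting order over candidate sensing sites and decodes it into per-vehicle routes under a travel budget. It deliberately ignores the reward correlations that define CTOP, and the encoding, operators, and parameters are assumptions chosen for brevity, not the algorithm evaluated in the paper.

```python
import math
import random

# Hedged sketch of a GA heuristic for budget-constrained multi-vehicle
# sensing. A chromosome is a visiting order of candidate sites; decoding
# assigns sites to vehicles greedily while each vehicle can still return
# to the depot within its travel budget. Reward correlations are ignored.

SITES = {i: (random.uniform(0, 100), random.uniform(0, 100),
             random.uniform(1, 10)) for i in range(20)}   # id -> (x, y, reward)
DEPOT, N_VEHICLES, BUDGET = (0.0, 0.0), 3, 150.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def decode(order):
    """Split one site ordering into N_VEHICLES budget-feasible routes."""
    routes = [[] for _ in range(N_VEHICLES)]
    spent, pos, reward = [0.0] * N_VEHICLES, [DEPOT] * N_VEHICLES, 0.0
    for site in order:
        x, y, r = SITES[site]
        for v in range(N_VEHICLES):
            step = dist(pos[v], (x, y))
            if spent[v] + step + dist((x, y), DEPOT) <= BUDGET:
                routes[v].append(site)
                spent[v] += step
                pos[v] = (x, y)
                reward += r
                break
    return routes, reward

def evolve(pop_size=40, generations=200):
    pop = [random.sample(list(SITES), len(SITES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: decode(ind)[1], reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:             # swap-mutation of each survivor
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: decode(ind)[1])

routes, total_reward = decode(evolve())
print(routes, round(total_reward, 1))
```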

    Asynchronous control with ATR for large robot teams

    In this paper, we discuss and investigate the advantages of an asynchronous display, called the "image queue", tested in an urban search and rescue foraging task. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment by selecting a small number of images that together cover large portions of the area searched. This asynchronous approach allows operators to search through a large amount of data gathered by autonomous robot teams, and allows comprehensive and scalable displays to obtain a network-centric perspective for unmanned ground vehicles (UGVs). In the reported experiment, automatic target recognition (ATR) was used to augment utilities based on visual coverage in selecting imagery for presentation to the operator. In the cued condition, a box was drawn in the region where a possible target was detected. In the no-cue condition, no box was drawn, although the target detection probability continued to play a role in the selection of imagery. We found that operators using the image queue displays missed fewer victims and relied on teleoperation less often than those using streaming video. Image queue users in the no-cue condition did better at avoiding false alarms and reported lower workload than those in the cued condition. Copyright 2011 by the Human Factors and Ergonomics Society, Inc. All rights reserved.
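
    One way to picture how ATR can augment coverage-based utilities, as described above, is a simple weighted score per candidate image. The linear combination, the weight, and the example numbers below are assumptions for illustration only, not the utilities used in the reported experiment.

```python
def image_utility(new_area_covered, detection_probability, atr_weight=0.5):
    """Hedged sketch: score a candidate image by the fresh area it covers,
    boosted by the ATR's confidence that it contains a target. The linear
    form and the default weight are illustrative assumptions."""
    return new_area_covered * (1.0 + atr_weight * detection_probability)

candidates = {
    "frame_a": (120.0, 0.05),   # covers much new area, ATR sees nothing
    "frame_b": (60.0, 0.90),    # covers less, but likely shows a victim
}
best = max(candidates, key=lambda k: image_utility(*candidates[k]))
print(best)   # 'frame_a' with the default weight; a larger atr_weight
              # shifts the selection towards ATR-cued frames
```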

    Future Roles for Autonomous Vertical Lift in Disaster Relief and Emergency Response

    System analysis concepts are applied to the assessment of potential collaborative contributions of autonomous system and vertical lift (a.k.a. rotorcraft, VTOL, powered-lift, etc.) technologies to the important, and perhaps underemphasized, application domain of disaster relief and emergency response. In particular, an analytic framework is outlined whereby system design functional requirements for an application domain can be derived from defined societal-good goals and objectives.

    A Survey of research in Deep Learning for Robotics for Undergraduate research interns

    Over the last several years, use cases for robotics-based solutions have diversified from factory floors to domestic applications. In parallel, Deep Learning approaches are replacing traditional techniques in Computer Vision, Natural Language Processing, Speech Processing, etc., and are delivering robust results. Our goal is to survey a number of research internship projects in the broad area of 'Deep Learning as applied to Robotics' and present a concise view for the benefit of aspiring student interns. In this paper, we survey the research work done by Robotics Institute Summer Scholars (RISS), CMU. We particularly focus on papers that use deep learning to solve core robotic problems, as well as on robotic solutions. We trust this would be useful, particularly for internship aspirants to the Robotics Institute, CMU. Comment: This document is a draft version at this stage and the final version will be created soon.