
    Remote interaction with mobile robots

    This paper describes an architecture for building remote laboratories in which users interact with mobile robots over the Internet using different interaction devices. A supervisory control strategy is used to reduce the required communication data rate and the system's sensitivity to network delays: users interact with the remote system at a more abstract level, through high-level commands. The local robot's autonomy is increased by encapsulating all of the robot's behaviors in different types of skills. The user interfaces are designed with the visual proxy pattern to facilitate future extension and code reuse. The developed remote laboratory has been integrated into an educational environment in the field of indoor mobile robotics. This environment is currently being used as part of an international project to develop a distributed laboratory for autonomous and teleoperated systems (IECAT, 2003).
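The supervisory scheme described above can be sketched as a skill registry: the remote user sends short, abstract commands, and the robot resolves each one to a locally executed behavior. All names below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch of a supervisory command interface: the remote user
# sends short high-level commands; the robot maps each one to a locally
# executed skill, so raw sensor/actuator traffic never crosses the network.
@dataclass
class Robot:
    skills: Dict[str, Callable[["Robot", str], None]]
    log: List[str] = field(default_factory=list)

    def execute(self, command: str, arg: str = "") -> None:
        skill = self.skills.get(command)
        if skill is None:
            # Unknown commands are rejected locally, not forwarded blindly.
            self.log.append(f"rejected: {command}")
            return
        skill(self, arg)

def go_to(robot: Robot, target: str) -> None:
    robot.log.append(f"navigating to {target}")   # local navigation skill

def dock(robot: Robot, _: str) -> None:
    robot.log.append("docking")                   # local docking skill

robot = Robot(skills={"go_to": go_to, "dock": dock})
robot.execute("go_to", "lab-2")   # one abstract command, not a joystick stream
robot.execute("dock")
robot.execute("fly")              # no such skill: rejected locally
print(robot.log)
```

Because each command stands for a whole autonomous behavior, the link only needs to carry a few bytes per command, which is what makes the scheme tolerant of network delays.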

    A Networking Framework for Multi-Robot Coordination

    Autonomous robots operating in real environments need to interact with a dynamic world populated with objects, people, and, in general, other agents. The current generation of autonomous robots, such as Honda's ASIMO or Sony's QRIO, has shown impressive performance in mechanics and control of movement; moreover, recent literature reports encouraging results on the capability of such robots to represent themselves with respect to a dynamic external world, to plan future actions, and to evaluate the resulting situations in order to make new plans. However, when multiple robots are supposed to operate together, coordination and communication issues arise; while noteworthy results have been achieved in the control of a single robot, novel issues emerge when the actions of one robot influence another's behavior. The computational power available to systems nowadays makes it feasible, and even convenient, to organize them into a single distributed computing environment in order to exploit the synergy among different entities. This is especially true for robot teams, where cooperation is the most natural scheme of operation, particularly when robots are required to operate in highly constrained scenarios, such as inhospitable sites, remote sites, or indoor environments where strict constraints on intrusiveness must be respected. In this case, computation is inherently network-centric, and to meet the communication needs of robot collectives, an efficient network infrastructure must be put into place; once a proper communication channel is established, multiple robots can benefit from interacting with each other to achieve a common goal.
    The framework presented in this paper adopts a composite networking architecture in which a hybrid wireless network, composed of commonly available WiFi devices and the more recently developed wireless sensor networks, operates as a whole both to provide a communication backbone for the robots and to extract useful information from the environment. The ad-hoc WiFi backbone allows robots to exchange coordination information among themselves, while also carrying measurements collected from the surrounding environment that are useful for localization or plain data-gathering purposes. The proposed framework, called RoboNet, extends a previously developed robotic tour-guide application (Chella et al., 2007) to a multi-robot setting: the system allows a team of robots to enhance their perceptive capabilities through coordination over a hybrid communication network, and the same infrastructure lets the robots exchange information so as to coordinate their actions toward a global common goal. The working scenario considered in this paper is a museum setting in which guided tours are to be managed automatically. The museum is arranged both chronologically and topographically, but the sequence of findings to be visited can be rearranged depending on user queries, forming a sort of dynamic virtual labyrinth with various itineraries. The robots can therefore guide visitors both on prearranged tours and on interactive tours built along the way, depending on the interaction with the visitor: the robots rebuild the virtual connections between findings and, consequently, the path to be followed. This paper is organized as follows. Section 2 contains background on multi-robot coordination, and Section 3 describes the underlying ideas and the motivation behind the proposed architecture, whose details are presented in Sections 4, 5, and 6. A realistic application scenario is described in Section 7, and finally our conclusions are drawn in Section 8.
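The dynamic-itinerary idea, rebuilding the virtual connection between findings on demand, can be sketched as shortest-path chaining over a museum topology graph. The rooms and edges below are invented for illustration; the paper does not publish its graph.

```python
from collections import deque

# Hypothetical museum topology: rooms as nodes, doorways as edges.
# A tour is rebuilt on the fly: given the exhibits a visitor asks for,
# the robot chains shortest corridor paths between consecutive stops.
topology = {
    "entrance": ["roman"],
    "roman": ["entrance", "greek", "medieval"],
    "greek": ["roman", "modern"],
    "medieval": ["roman", "modern"],
    "modern": ["greek", "medieval"],
}

def shortest_path(start, goal):
    # Plain BFS; edges are unweighted doorways.
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def build_tour(stops):
    # Chain per-leg shortest paths into a single itinerary.
    tour = [stops[0]]
    for a, b in zip(stops, stops[1:]):
        tour += shortest_path(a, b)[1:]
    return tour

print(build_tour(["entrance", "modern", "medieval"]))
```

Reordering the requested stops simply changes which legs are planned, which is how a single topology supports both prearranged and interactive tours.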

    Virtual and Mixed Reality in Telerobotics: A Survey


    Robot Tour Guide

    Our goal was to create a friendly robot that could safely guide a user through a building. While creating our end product, we explored a number of fields, including human-robot interaction and robot navigation. Thanks to Ava Robotics, we used one of their first-generation drive bases as the core of our robot, PAT. We mounted a head containing a Jetson TX2, a camera, and a touch screen for user interaction. PAT can take users through a set of destinations efficiently, driven either by voice commands or by its graphical interface. If PAT is stuck, it can detect obstacles and ask for help to remove an obstacle in order to continue guiding the user. We have accomplished a basic tour guide using Ava Robotics' base model, which creates a stepping stone for future development.
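The stuck-and-ask-for-help behavior described above amounts to a small control loop. A minimal sketch, with hypothetical destination names (not PAT's actual code) and a simulated bystander clearing the obstacle:

```python
# Minimal sketch of the guide loop: drive toward each destination in turn;
# if an obstacle blocks the way, stop and ask a bystander for help before
# resuming the tour.
def run_tour(destinations, blocked):
    events = []
    for dest in destinations:
        if dest in blocked:
            events.append(f"stuck near {dest}: please clear the way")
            blocked.discard(dest)  # assume a bystander removes the obstacle
        events.append(f"arrived at {dest}")
    return events

print(run_tour(["lobby", "lab", "office"], blocked={"lab"}))
```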

    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    The MOLAM Project: The design of a context specific navigation framework for a mobile robot

    Applied project submitted to the Department of Computer Science, Ashesi University College, in partial fulfillment of the Bachelor of Science degree in Computer Science, April 2013.
    Ashesi University College recently acquired a TurtleBot robotic platform to experiment with service robotics on campus. This platform runs the Robot Operating System (ROS) for developing robotics applications. The MOLAM project (Motion, Localization And Mapping) sought to build a fundamental framework that extends ROS functionalities on a TurtleBot so that it can navigate autonomously around the Ashesi campus in Berekuso. The focus of this project was to extend the ROS-enabled functionalities on the TurtleBot for mapping, localization, and motion planning to create a customized navigation framework suited to the Ashesi University campus. With such a foundation in place, other service applications, such as a waiter robotic system, a courier robotic system, or a tour-guide robotic system, can be built on this framework. This paper outlines the various tools and processes used in assembling and configuring this robotic system. It also delineates the challenges encountered in assembling and configuring the system, as well as the “work-arounds” devised to solve them. Finally, this paper proposes design concepts for future work in using the TurtleBot system as a tour-guide robotic system for Ashesi University College.
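As a rough illustration of the motion layer such a navigation framework builds on, here is a hedged sketch (not the MOLAM code, and with invented gains and step sizes) of a bang-bang waypoint follower for a differential-drive base like the TurtleBot: rotate in place until roughly facing the goal, then drive forward.

```python
import math

# Hedged sketch of a simple waypoint follower for a differential-drive
# robot: if the heading error is large, rotate in place; otherwise drive
# forward along the current heading. All gains/limits are illustrative.
def step_toward(pose, goal, lin=0.5, ang=1.0, dt=0.1):
    x, y, theta = pose
    bearing = math.atan2(goal[1] - y, goal[0] - x)
    # Wrap heading error into (-pi, pi].
    err = (bearing - theta + math.pi) % (2 * math.pi) - math.pi
    if abs(err) > 0.1:
        return (x, y, theta + math.copysign(ang * dt, err))  # rotate in place
    return (x + lin * dt * math.cos(theta),
            y + lin * dt * math.sin(theta),
            theta)                                           # drive forward

def drive(pose, goal, tol=0.05, max_steps=500):
    # Iterate the controller until the goal is within tolerance.
    for _ in range(max_steps):
        if math.hypot(goal[0] - pose[0], goal[1] - pose[1]) < tol:
            break
        pose = step_toward(pose, goal)
    return pose

pose = drive((0.0, 0.0, math.pi / 2), (1.0, 0.0))  # start facing +y, goal on +x
print(pose)
```

In a real ROS deployment this role is played by the navigation stack, which adds obstacle avoidance and localization feedback on top of such a loop.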

    Guest Orientation, Assistance, and Telepresence Robot

    The project focused on a mobile research platform equipped with the autonomous navigation components and sensors vital to autonomous interaction with its environment. The goal of this project was to create such a mobile robotic platform, which would in turn be capable of acting as a fully autonomous tour guide for the WPI campus. The project combined the robust capabilities of a Segway Robotic Mobility Platform with the cutting-edge adaptability of the Robot Operating System software framework. The robot will work in conjunction with school staff to provide video tour information as part of an enhanced tour experience. The project is a highly visible representation of WPI's unique MQP program and its ability to prepare engineers capable of solving real-world problems.

    Navigation, Path Planning, and Task Allocation Framework For Mobile Co-Robotic Service Applications in Indoor Building Environments

    Recent advances in computing and robotics offer significant potential for improved autonomy in the operation and utilization of today’s buildings. Examples of such building environment functions that could be improved through automation include: a) building performance monitoring for real-time system control and long-term asset management; and b) assisted indoor navigation for improved accessibility and wayfinding. To enable such autonomy, algorithms related to task allocation, path planning, and navigation are required as fundamental technical capabilities. Existing algorithms in these domains have primarily been developed for outdoor environments. However, key technical challenges that prevent the adoption of such algorithms to indoor environments include: a) the inability of the widely adopted outdoor positioning method (Global Positioning System - GPS) to work indoors; and b) the incompleteness of graph networks formed based on indoor environments due to physical access constraints not encountered outdoors. The objective of this dissertation is to develop general and scalable task allocation, path planning, and navigation algorithms for indoor mobile co-robots that are immune to the aforementioned challenges. The primary contributions of this research are: a) route planning and task allocation algorithms for centrally-located mobile co-robots charged with spatiotemporal tasks in arbitrary built environments; b) path planning algorithms that take preferential and pragmatic constraints (e.g., wheelchair ramps) into consideration to determine optimal accessible paths in building environments; and c) navigation and drift correction algorithms for autonomous mobile robotic data collection in buildings. The developed methods and the resulting computational framework have been validated through several simulated experiments and physical deployments in real building environments. 
    Specifically, a scenario analysis is conducted to compare the performance of existing outdoor methods with the developed approach for indoor multi-robot task allocation and route planning. A simulated case study is performed along with a pilot experiment in an indoor built environment to test the efficiency of the path planning algorithm and the performance of the assisted navigation interface, which was developed with people with physical disabilities (i.e., wheelchair users) in mind as building occupants and visitors. Furthermore, a case study demonstrates an informed retrofit decision-making process using data collected by a robot with intelligent multi-sensor fusion, with the data subsequently used in an EnergyPlus simulation. The results demonstrate the feasibility of the proposed methods in a range of applications involving constraints on both the environment (e.g., path obstructions) and robot capabilities (e.g., maximum travel distance on a single charge). By focusing on the technical capabilities required for safe and efficient indoor robot operation, this dissertation contributes to the fundamental science that will make mobile co-robots ubiquitous in building environments in the near future.
    Ph.D., Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/143969/1/baddu_1.pd
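At a high level, accessible path planning of the kind described above amounts to constrained shortest-path search. A minimal sketch, assuming a hypothetical building graph whose edges are tagged so that pragmatic constraints (e.g., "no stairs" for wheelchair users) can be enforced by filtering edges during Dijkstra's algorithm:

```python
import heapq

# Hypothetical building graph: nodes are locations, edges carry a length
# and a tag (corridor/stairs/ramp). Accessibility is enforced by skipping
# banned edge types during the search.
edges = {
    "lobby":  [("atrium", 10, "corridor"), ("floor2", 5, "stairs")],
    "atrium": [("lobby", 10, "corridor"), ("floor2", 12, "ramp")],
    "floor2": [("lobby", 5, "stairs"), ("atrium", 12, "ramp")],
}

def shortest(start, goal, banned=frozenset()):
    # Standard Dijkstra, with banned edge types filtered out.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w, kind in edges[node]:
            if kind in banned:
                continue
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None  # goal unreachable under these constraints

print(shortest("lobby", "floor2"))                    # stairs allowed: short route
print(shortest("lobby", "floor2", banned={"stairs"})) # wheelchair route via ramp
```

The same filtering hook also expresses robot-side constraints, such as excluding edges longer than the remaining battery range.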