5 research outputs found

    Localization for mobile robots using skyline in panoramic images

    Advances in robotics liberated robots from factory floors by the end of the 20th century, and the use of robots in our daily lives is only expected to grow. While robots relieve us of the burden of tedious, hard, and dangerous tasks, most are still expected to be territorial, i.e., to operate in a predefined or otherwise bounded environment. For enhanced performance, a robot should be familiar with its territory. In this work, the use of skylines extracted from panoramic images is studied as a means of localization. First, a special map is prepared from a series of panoramic images taken across the territory. Images taken in the vicinity of this map at different times and under different weather conditions are then localized on it. Because the proposed method focuses on only a small portion of the panoramic image, it is easier to implement on a variety of hardware platforms (especially those with limited computational capability) than methods that process the whole image.
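
    As a rough illustration of the skyline idea (a sketch under assumed details, not the paper's actual algorithm), the following Python fragment reduces a panoramic image to a one-dimensional skyline signature and matches it against stored map signatures with circular cross-correlation, which makes the comparison independent of the heading at which the panorama was captured. The brightness-threshold sky detector and all names are illustrative assumptions:

        import numpy as np

        def extract_skyline(pano_gray, sky_thresh=0.75):
            """Reduce a (H, W) grayscale panorama (values in [0, 1]) to a 1-D
            signature: for each column, the row where sky ends. The brightness
            threshold is a stand-in for a real sky classifier."""
            sky_mask = pano_gray > sky_thresh
            first_ground = np.argmax(~sky_mask, axis=0)   # first non-sky row per column
            first_ground[np.all(sky_mask, axis=0)] = pano_gray.shape[0]  # all-sky columns
            return first_ground.astype(float)

        def match_skyline(query, map_signatures):
            """Find the map location whose skyline best matches the query, using
            FFT-based circular cross-correlation so the result does not depend on
            the panorama's capture heading. Returns (location index, shift)."""
            q = (query - query.mean()) / (query.std() + 1e-9)
            best_score, best_loc, best_shift = -np.inf, None, None
            for i, sig in enumerate(map_signatures):
                s = (sig - sig.mean()) / (sig.std() + 1e-9)
                # correlation at every circular shift, computed via the FFT
                corr = np.fft.ifft(np.fft.fft(q) * np.conj(np.fft.fft(s))).real
                k = int(np.argmax(corr))
                if corr[k] > best_score:
                    best_score, best_loc, best_shift = corr[k], i, k
            return best_loc, best_shift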

    Stereoscopic Vision in Unmanned Aerial Vehicle Search and Rescue

    Search and rescue operations are challenging due to the hazards they impose on rescue teams. Team ARM IT has developed a virtual reality interface that controls a camera payload mounted on an unmanned aerial vehicle (UAV) through a head-mounted display. This allows rescuers to manipulate a UAV safely and effectively, assisting search and rescue missions through telepresence and enhanced situational awareness. The team tested this approach by prototyping, testing, and refining individual components of the system using flight simulation software and on-site volunteer testing. By providing a realistic sense of the UAV’s environment enhanced with relevant information, Team ARM IT’s project reduces the danger to rescuers and provides cognitively natural situational awareness.
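
    The core of such an interface is the mapping from head motion to gimbal commands. The sketch below, with entirely hypothetical names and limits (the report does not specify its control law), shows one plausible scheme: low-pass filtering the head-mounted-display orientation and rate-limiting the gimbal so the video stream stays stable and watchable:

        class HeadToGimbal:
            """Map head-mounted-display orientation to camera-gimbal commands.
            Angles are in degrees. A low-pass filter smooths sensor jitter and
            max_rate_dps caps the slew rate; both values are assumptions."""

            def __init__(self, max_rate_dps=90.0, alpha=0.2):
                self.alpha = alpha          # smoothing coefficient in (0, 1]
                self.max_rate = max_rate_dps
                self.yaw = 0.0
                self.pitch = 0.0

            def update(self, hmd_yaw, hmd_pitch, dt):
                """One control tick: smooth the raw HMD angles, then clamp the
                commanded change to the gimbal's rate limit."""
                for attr, target in (("yaw", hmd_yaw), ("pitch", hmd_pitch)):
                    current = getattr(self, attr)
                    smoothed = current + self.alpha * (target - current)
                    step = max(-self.max_rate * dt,
                               min(self.max_rate * dt, smoothed - current))
                    setattr(self, attr, current + step)
                # keep pitch inside a typical gimbal's mechanical range
                self.pitch = max(-90.0, min(30.0, self.pitch))
                return self.yaw, self.pitch

        # example tick at 50 Hz:
        # yaw_cmd, pitch_cmd = HeadToGimbal().update(hmd_yaw=15.0, hmd_pitch=-10.0, dt=0.02)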

    Appearance and Geometry Assisted Visual Navigation in Urban Areas

    Navigation is a fundamental task for mobile robots in applications such as exploration, surveillance, and search and rescue. The task involves solving the simultaneous localization and mapping (SLAM) problem, in which a map of the environment is constructed. For this map to be useful for a given application, a suitable scene representation must be defined that allows spatial information sharing between robots and also between humans and robots. High-level scene representations have the benefit of being more robust and more readily exchangeable for interpretation. Aiming at such higher-level scene representation, in this work we explore high-level landmarks and their use, combining geometric and appearance information to assist mobile robot navigation in urban areas.

    In visual SLAM, image registration is a key problem. While feature-based methods such as scale-invariant feature transform (SIFT) matching are popular, they do not exploit appearance information as a whole and perform poorly on low-resolution images. We study appearance-based methods and propose a scale-space integrated Lucas-Kanade method that estimates geometric transformations while taking image appearance into account across different resolutions. We compare our method against state-of-the-art methods and show that it registers images efficiently and with high accuracy.

    In urban areas, planar building facades (PBFs) are basic components of the quasi-rectilinear environment, so segmenting and mapping PBFs can improve a robot’s scene understanding and localization. We propose a vision-based PBF segmentation and mapping technique that combines appearance and geometric constraints to segment out planar regions. Geometric constraints such as reprojection errors, orientation constraints, and coplanarity constraints are then used in an optimization process to improve the mapping of PBFs.

    A major issue in monocular visual SLAM is scale drift. Depth sensors such as lidar are free from scale drift, but they are usually more expensive than cameras. To enable low-cost mobile robots equipped with monocular cameras to obtain accurate position information, we use a 2D lidar map to rectify imprecise visual SLAM results via planar structures. We propose a two-step optimization approach, assisted by a penalty function, that improves on low-quality local minima.

    Robot paths for navigation can be generated automatically by a motion planning algorithm or provided by a human. In both cases, a scene representation of the environment, i.e., a map, is useful for specifying meaningful tasks for the robot. However, SLAM typically produces a sparse scene representation consisting of low-level landmarks, such as point clouds, which are neither convenient nor intuitive for task specification. We present a system that allows users to program mobile robots using high-level landmarks derived from appearance data.
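
    To make the registration discussion concrete, here is a minimal Python sketch of the standard pyramidal (coarse-to-fine) Lucas-Kanade scheme for pure translation; it conveys the scale-space idea, though the dissertation's integrated method combines scales differently. Images are grayscale NumPy arrays and all parameter values are illustrative:

        import numpy as np
        from scipy import ndimage

        def lk_translation(I, J, d0, iters=20):
            """Refine a (row, col) translation d so that J shifted by d matches I,
            using iterative least squares on the linearized brightness-constancy
            constraint (classic Lucas-Kanade)."""
            d = np.asarray(d0, dtype=float)
            for _ in range(iters):
                Jw = ndimage.shift(J, -d, order=1, mode="nearest")  # J(x + d)
                gy, gx = np.gradient(Jw)
                A = np.stack([gy.ravel(), gx.ravel()], axis=1)
                b = (I - Jw).ravel()
                dd, *_ = np.linalg.lstsq(A, b, rcond=None)
                d = d + dd
                if np.linalg.norm(dd) < 1e-3:
                    break
            return d

        def pyramidal_lk(I, J, levels=4):
            """Coarse-to-fine estimation: solve on a blurred, downsampled image
            pair first, then upscale the estimate and refine at finer levels."""
            pyr = [(I.astype(float), J.astype(float))]
            for _ in range(levels - 1):
                Ip, Jp = pyr[-1]
                pyr.append((ndimage.zoom(ndimage.gaussian_filter(Ip, 1.0), 0.5),
                            ndimage.zoom(ndimage.gaussian_filter(Jp, 1.0), 0.5)))
            d = np.zeros(2)
            for level, (Ip, Jp) in enumerate(reversed(pyr)):
                if level > 0:
                    d = d * 2.0  # scale the estimate up to the finer grid
                d = lk_translation(Ip, Jp, d)
            return d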
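
    The two-step, penalty-assisted optimization can likewise be illustrated on a deliberately simplified one-dimensional model: each keyframe observes one planar structure, a per-keyframe scale corrects the drifted visual distance toward the lidar distance, and a smoothness penalty on neighboring scales is applied weakly in the first step and fully in the second, which restarts from the first step's solution. The cost and all names are assumptions, not the dissertation's formulation:

        import numpy as np
        from scipy.optimize import minimize

        def rectify_scale(d_vis, d_lidar, lam=10.0):
            """Toy penalty-assisted scale rectification. d_vis[i] is the
            scale-drifted distance from keyframe i to a matched planar structure
            reported by monocular SLAM; d_lidar[i] is the metric distance to the
            same structure in the 2D lidar map. Solves for per-keyframe scales."""
            def cost(s, weight):
                fit = np.sum((s * d_vis - d_lidar) ** 2)   # agree with lidar map
                smooth = np.sum(np.diff(s) ** 2)           # scale drifts slowly
                return fit + weight * smooth

            s0 = np.ones(len(d_vis))
            # step 1: weak penalty, to avoid settling into a poor solution
            step1 = minimize(cost, s0, args=(0.1 * lam,), method="L-BFGS-B")
            # step 2: full penalty, warm-started from step 1
            step2 = minimize(cost, step1.x, args=(lam,), method="L-BFGS-B")
            return step2.x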

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as virtual reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.