
    View-Invariant Regions and Mobile Robot Self-Localization

    This paper addresses the problem of mobile robot self-localization given a polygonal map and a set of observed edge segments. The standard approach to this problem uses interpretation tree search with pruning heuristics to match observed edges to map edges. Our approach introduces a preprocessing step in which the map is decomposed into 'view-invariant regions' (VIRs). The VIR decomposition captures information about map edge visibility, and can be used for a variety of robot navigation tasks. Basing self-localization search on VIRs greatly reduces the branching factor of the search tree and thereby simplifies the search task. In this paper we define the VIR decomposition and give algorithms for its computation and for self-localization search. We present results of simulations comparing standard and VIR-based search, and discuss the application of the VIR decomposition to other problems in robot navigation.
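
    As a rough illustration of why the VIR decomposition helps, the sketch below (Python, with a hypothetical `Edge` type and length tolerance; none of it comes from the paper) restricts a depth-first interpretation-tree search to only the map edges visible from a single view-invariant region.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Edge:
    length: float  # edge length in metres; orientation/endpoints omitted for brevity

def match_in_region(observed: List[Edge],
                    visible_map_edges: List[Edge],
                    tol: float = 0.25) -> Optional[Dict[int, int]]:
    """Depth-first interpretation-tree search restricted to the map edges
    visible from one view-invariant region. Returns a mapping from
    observed-edge index to map-edge index, or None if no consistent match."""
    def recurse(i: int, assignment: Dict[int, int]) -> Optional[Dict[int, int]]:
        if i == len(observed):
            return dict(assignment)          # every observation explained
        for j, map_edge in enumerate(visible_map_edges):
            if j in assignment.values():
                continue                     # each map edge matched at most once
            # Pruning heuristic: an observed segment cannot be longer than
            # the map edge it is matched to (up to a tolerance).
            if observed[i].length > map_edge.length + tol:
                continue
            assignment[i] = j
            result = recurse(i + 1, assignment)
            if result is not None:
                return result
            del assignment[i]                # backtrack
        return None
    return recurse(0, {})
```

    Because each VIR exposes only a handful of visible edges, the candidate set at every level of the tree, and hence the branching factor, stays small compared with matching each observation against every edge in the map.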

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular choice for localization on many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among the alternative sensing mechanisms, cameras are an attractive choice for an onboard sensor due to the richness of the information captured, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods of time or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to a task rather than a single vehicle. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path-planning frameworks. Although these were developed as two separate components, the ideal application would combine them as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, the dissertation first investigates methods to perform feature-based localization, with the possibility of fusing two types of localization data: one that is computed onboard each MAV, and another that comes from relative measurements between the vehicles. Feature-based methods were preferred over direct methods for vision because of the relative ease with which tangible data packets can be transferred between vehicles, and because feature data requires minimal data transfer compared to full images. Inspired by techniques from multiple-view geometry and structure from motion, this localization algorithm presents a decentralized, full 6-degree-of-freedom pose estimation method, complete with a consistent fusion methodology, that obtains robust estimates at discrete instants and thus does not require constant communication between vehicles. This method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path, but also for making sure that this path allows for successful localization throughout. As joint multi-robot planning can be computationally intractable, planning was divided into two steps from a vision-aware perspective. Because the first step in improving localization performance is having access to a better map of features, a next-best-multi-view algorithm was developed that computes the viewpoints from which multiple vehicles can best improve an existing sparse reconstruction.
This algorithm uses a cost function built from vision-based heuristics that determines the quality of the images expected from any set of viewpoints; this cost is minimized with Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an efficient evolutionary method that can handle very high-dimensional sample spaces. In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed, which includes similar vision heuristics in an information-gain-based framework in order to drive individual vehicles towards areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated using results from simulation.
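
    The minimization step can be pictured with the short sketch below. It uses the publicly available `cma` package as a generic CMA-ES optimizer; the number of drones, the 6-DoF viewpoint parameterization and the `view_quality_cost` surrogate are illustrative assumptions, not the dissertation's actual heuristics.

```python
import numpy as np
import cma  # pip install cma -- reference CMA-ES implementation

N_DRONES = 3       # hypothetical number of MAVs
DIM_PER_VIEW = 6   # x, y, z, roll, pitch, yaw per viewpoint

def view_quality_cost(flat_viewpoints) -> float:
    """Hypothetical surrogate cost. The dissertation's version would score
    expected feature coverage and viewing geometry against the current
    sparse reconstruction; here we merely penalise clustered viewpoints."""
    views = np.asarray(flat_viewpoints).reshape(N_DRONES, DIM_PER_VIEW)
    positions = views[:, :3]
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return float(-dists.sum())   # lower cost = more spread-out viewpoints

x0 = np.zeros(N_DRONES * DIM_PER_VIEW)            # initial guess for all viewpoints
es = cma.CMAEvolutionStrategy(x0, 1.0, {'maxiter': 200, 'verbose': -9})
while not es.stop():
    candidates = es.ask()                          # sample a population
    es.tell(candidates, [view_quality_cost(c) for c in candidates])
best_viewpoints = es.result.xbest.reshape(N_DRONES, DIM_PER_VIEW)
```

    The point of the sketch is only that CMA-ES treats the stacked viewpoint vector of all vehicles as a single high-dimensional sample space, which is what lets one optimizer propose next-best views for the whole team at once.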

    Determining Optimal Locations for Viewpoints Using the Open-Source Whitebox GAT Software

    The most visually exposed parts of a landscape can be determined with a rich set of GIS tools, the main limitation of which is their high computational cost. This study aims to put forward a method for selecting optimal locations for viewpoints attractive to tourists by means of Whitebox GAT, an open-source GIS application. The study area covers the Kolbudzko-Przywidzka Upland in the southern part of the Kashubian Lakeland in Poland. The method presented here is characterized by simplicity and low computational cost. However, it can only be used to analyse views on a local scale, in areas whose spatial extent does not exceed a dozen or so kilometres.
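
    A minimal, hand-rolled viewshed sketch (plain Python/NumPy, not the Whitebox GAT tooling used in the study) makes the computational burden concrete: every candidate viewpoint requires a line-of-sight test against every DEM cell, so the cost grows quickly with the area analysed.

```python
import numpy as np

def visible_cells(dem: np.ndarray, vp_row: int, vp_col: int,
                  observer_height: float = 1.7, samples: int = 64) -> int:
    """Count DEM cells visible from (vp_row, vp_col) using a crude
    sampled line-of-sight test; a stand-in for a GIS viewshed tool."""
    rows, cols = dem.shape
    eye = dem[vp_row, vp_col] + observer_height
    count = 0
    for r in range(rows):
        for c in range(cols):
            if (r, c) == (vp_row, vp_col):
                continue
            # Sample the terrain along the sight line and require that no
            # intermediate cell rises above the straight line of sight.
            ts = np.linspace(0.0, 1.0, samples)[1:-1]
            rr = np.round(vp_row + ts * (r - vp_row)).astype(int)
            cc = np.round(vp_col + ts * (c - vp_col)).astype(int)
            sight_line = eye + ts * (dem[r, c] - eye)
            if np.all(dem[rr, cc] <= sight_line):
                count += 1
    return count

def best_viewpoint(dem: np.ndarray, candidates):
    """Return the candidate (row, col) that sees the most terrain."""
    return max(candidates, key=lambda rc: visible_cells(dem, *rc))
```

    Even this toy version is O(cells x samples) per candidate viewpoint, which is exactly why the study restricts the analysis to a local scale of a dozen or so kilometres.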

    Influence map-based pathfinding algorithms in video games

    Path search algorithms, i.e., pathfinding algorithms, are used by intelligent agents to solve shortest-path problems, in domains ranging from computer games and applications to robotics. Pathfinding is a particular kind of search in which the objective is to find a path between two nodes, where a node is a point in space to which an intelligent agent can travel. Moving agents through physical or virtual worlds is a key part of simulating intelligent behavior: if a game agent cannot navigate its surrounding environment while avoiding obstacles, it does not seem intelligent. This is why pathfinding is among the core tasks of AI in computer games. Pathfinding algorithms work well for single agents navigating an environment. In real-time strategy (RTS) games, potential fields (PF) are used for multi-agent navigation in large and dynamic game environments. Influence maps, on the contrary, are not used in pathfinding. Influence maps are a spatial-reasoning technique that helps bots and players make decisions about the course of the game. An influence map represents game information, e.g., events and faction power distribution, and is ultimately used to give game agents the knowledge to make strategic or tactical decisions. Strategic decisions are based on achieving an overall goal, e.g., capturing an enemy location and winning the game. Tactical decisions are based on small and precise actions, e.g., where to install a turret or where to hide from the enemy. This dissertation focuses on a novel path search method that combines state-of-the-art pathfinding algorithms with influence maps in order to achieve better search-time performance and lower memory consumption, as well as smoother paths.
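
    A minimal sketch of the general idea, assuming a grid world, an influence map normalized to [0, 1] and a simple weighting scheme (all illustrative choices, not the dissertation's formulation): an A*-style search whose step costs are inflated by the local influence value, so paths bend away from contested or dangerous cells.

```python
import heapq
import itertools
import numpy as np

def influence_astar(grid: np.ndarray, influence: np.ndarray,
                    start: tuple, goal: tuple, weight: float = 5.0):
    """A* over a 2-D grid (0 = free, 1 = blocked) where the influence map
    inflates step costs, steering paths away from high-influence cells."""
    rows, cols = grid.shape

    def h(a, b):                                       # Manhattan heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    counter = itertools.count()                        # tie-breaker for the heap
    open_set = [(h(start, goal), next(counter), 0.0, start, None)]
    came_from, g_score = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                                   # already expanded
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:                    # reconstruct path
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == 0:
                step = 1.0 + weight * float(influence[nr, nc])
                ng = g + step
                if ng < g_score.get((nr, nc), float('inf')):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nr, nc), goal), next(counter),
                                    ng, (nr, nc), node))
    return None                                        # no path found
```

    With `weight = 0` the search degenerates to plain shortest-path A*; increasing the weight trades path length for avoidance of high-influence regions, which is the kind of behaviour the dissertation evaluates.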

    Multi-Robot Active Perception for Fast and Efficient Scene Coverage

    The efficiency of path planning in robot navigation is crucial in tasks such as search-and-rescue and disaster surveying, and this is emphasised even more for multi-rotor aerial robots due to their limited battery and flight time. In this spirit, this Master's Thesis proposes an efficient, hierarchical planner to achieve comprehensive visual coverage of large-scale outdoor scenarios with small drones. Following an initial reconnaissance flight, a coarse map of the scene is built in real time. Then, regions of the map that were not appropriately observed are identified and grouped by a novel perception-aware clustering process that enables the generation of continuous trajectories (sweeps) to cover them efficiently. Thanks to this partitioning of the map into a set of tasks, we are able to generalize the planning to an arbitrary number of drones and perform a well-balanced workload distribution among them. We compare our approach to an alternative state-of-the-art exploration method and show the advantages of our pipeline in terms of efficiency for obtaining coverage in large environments.
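
    A loose sketch of the clustering-and-assignment idea, using generic k-means as a stand-in for the thesis's perception-aware clustering and a greedy balance rule for task allocation; the names and interfaces are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans  # generic stand-in for the
                                    # thesis's perception-aware clustering

def plan_coverage_tasks(uncovered_points: np.ndarray,
                        n_clusters: int, n_drones: int):
    """Group poorly observed map points into clusters (candidate sweep
    regions) and assign clusters to drones so that workload, measured here
    simply by cluster size, stays balanced across the team."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(uncovered_points)
    clusters = [uncovered_points[labels == k] for k in range(n_clusters)]

    # Greedy balanced assignment: give the largest remaining cluster to the
    # drone with the smallest accumulated workload.
    assignment = {d: [] for d in range(n_drones)}
    workload = np.zeros(n_drones)
    for idx in sorted(range(n_clusters), key=lambda k: -len(clusters[k])):
        d = int(np.argmin(workload))
        assignment[d].append(idx)
        workload[d] += len(clusters[idx])
    return clusters, assignment
```

    Each assigned cluster would then be covered by a continuous sweep trajectory, which is what allows the planner to scale to an arbitrary number of drones while keeping the workload distribution even.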