
    Autonomous driving support systems suited to a 4-wheel skid-steer robotic platform: perception, motion and simulation

    Mobile robotics competitions play an important role in bringing science and engineering to the general public. They also provide a space dedicated to testing and comparing different strategies and approaches to the many challenges of mobile robotics. Among initiatives of this kind, autonomous driving competitions have attracted the most interest from both promoters and the general public. Typically, Autonomous Driving Competitions (ADCs) attempt to replicate an environment similar to a traditional road structure, in which autonomous systems must respond to a wide variety of challenges, ranging from lane detection to interaction with the distinct elements that make up a typical road structure, and from trajectory planning to localization. The aim of this master's thesis is to document the process of designing and equipping a 4-wheel skid-steer mobile robotic platform to carry out autonomous driving tasks in a structured environment, on a track that replicates a roadway with basic signage and some obstacles. In parallel, the dissertation presents a qualitative comparison between the simulation process and the transfer of the developed algorithms to the physical robotic platform, analysing the differences in performance and behavior.
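    As background on the platform type named in the title (not part of the thesis itself), a 4-wheel skid-steer robot is commonly approximated by differential-drive kinematics, in which the speeds of the left and right wheel pairs determine the forward velocity and yaw rate. A minimal dead-reckoning sketch under that assumption, with an illustrative track width and wheel slip ignored:

```python
import math

TRACK_WIDTH = 0.40  # assumed lateral distance between wheel pairs [m]

def skid_steer_twist(v_left: float, v_right: float) -> tuple[float, float]:
    """Approximate body twist (v, omega) from left/right wheel-pair speeds [m/s]."""
    v = (v_right + v_left) / 2.0               # forward velocity
    omega = (v_right - v_left) / TRACK_WIDTH   # yaw rate (ignores wheel slip)
    return v, omega

def integrate_pose(x, y, theta, v_left, v_right, dt):
    """Dead-reckon the planar pose (x, y, theta) over one time step dt [s]."""
    v, omega = skid_steer_twist(v_left, v_right)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```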

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment in order to accomplish a given task, vision is a highly informative exteroceptive sensory source. Indeed, the richness of visual data makes it possible to build a complete description of the environment, collecting geometric and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows one to consider methods that exploit either the totality of the data (dense approaches) or a reduced set obtained through feature extraction (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems. First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted in the form of geometric primitives in order to implement a visual servoing control scheme that realizes the desired navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, obtaining reliable estimates of quantities that cannot be measured directly is both valuable and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract relevant geometric information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and to provide ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary; they offer no way to actively adapt the input trajectories so as to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimate. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
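    To make the Kalman-based estimation idea concrete (a generic sketch, not the thesis's actual observer or models), a minimal linear Kalman filter with a constant-velocity state and a vision-derived position measurement looks like this; all matrices and noise levels are illustrative assumptions:

```python
import numpy as np

dt = 0.05                                   # time step [s], assumed
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])                  # the camera measures position only
Q = np.diag([1e-4, 1e-3])                   # process noise covariance (assumed)
R = np.array([[1e-2]])                      # measurement noise covariance (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given a visual position measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)          # initial estimate and covariance
x, P = kf_step(x, P, z=0.12)                # fuse one (hypothetical) measurement
```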

    A Large Scale Inertial Aided Visual Simultaneous Localization And Mapping (SLAM) System For Small Mobile Platforms

    In this dissertation we present a robust simultaneous localization and mapping scheme that can be deployed on a computationally limited, small unmanned aerial system. This is achieved by developing a keyframe-based algorithm that leverages the multiprocessing capacity of modern low-power mobile processors. The novelty of the algorithm lies in a design that makes it robust to rapid exploration while keeping computation time to a minimum. The time-critical components of the localization and mapping system are computed in parallel across the multiple cores of the processor. The algorithm uses a scale- and rotation-invariant, state-of-the-art binary descriptor for landmark description, making it suitable for compact large-scale map representation and robust tracking. The same descriptor is also used in loop-closure detection, making the algorithm efficient by eliminating the need for separate descriptors in a Bag-of-Words scheme. The effectiveness of the algorithm is demonstrated by performance evaluation on indoor and large-scale outdoor datasets. We demonstrate its efficiency and robustness through successful six-degree-of-freedom (6 DOF) pose estimation in challenging indoor and outdoor environments. The performance of the algorithm is further validated on a quadcopter with onboard computation.
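    As an illustration of the binary-descriptor pipeline described above (the dissertation does not name the exact descriptor, so ORB from OpenCV is assumed here purely for the sketch), extraction and Hamming-distance matching, reusable for both tracking and loop-closure checks, can look like:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)   # compact 32-byte binary descriptors

def extract(gray_image):
    """Detect keypoints and compute binary descriptors on a grayscale frame."""
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors

def match(desc_a, desc_b, max_hamming=50):
    """Brute-force Hamming matching; the same routine can serve frame-to-frame
    tracking and loop-closure candidate verification."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    return [m for m in matches if m.distance < max_hamming]
```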

    An Approach for Multi-Robot Opportunistic Coexistence in Shared Space

    This thesis considers a situation in which multiple robots operate in the same environment to achieve different tasks. In such a situation, not only the tasks but also the robots themselves are likely to be heterogeneous, i.e., different from each other in morphology, dynamics, sensors, capabilities, etc. As an example, think of a "smart hotel": small wheeled robots are likely to be devoted to cleaning floors, whereas a humanoid robot may be devoted to social interaction, e.g., welcoming guests and providing relevant information upon request. Under these conditions, robots are required not only to coexist, but also to coordinate their activity if they are to exhibit coherent and effective behavior: this may range from mutual avoidance to prevent collisions, to more explicitly coordinated behavior, e.g., task assignment or cooperative localization. These issues have been deeply investigated in the literature. Among the topics that play a crucial role in designing a successful system, this thesis focuses on the following. (i) An integrated approach for path following and obstacle avoidance is applied to unicycle-type robots, by extending an existing algorithm [1], initially developed for the single-robot case, to the multi-robot domain. The approach defines the path to be followed as a curve f(x, y) in space, while obstacles are modeled as Gaussian functions that modify the original function, generating a resulting safe path (see the sketch after this abstract). The appeal of this methodology, which makes it very simple, is that it neither requires computing a projection of the robot position onto the path, nor needs a moving virtual target to be tracked. The performance of the proposed approach is analyzed through a series of experiments in dynamic environments with unicycle-type robots, with the robot position determined by odometry and, separately, by a motion-capture system. (ii) We investigate the problem of multi-robot cooperative localization in dynamic environments. Specifically, we propose an approach in which wheeled robots are localized using the monocular camera embedded in the head of a Pepper humanoid robot, with the aim of minimizing deviations from their paths and avoiding each other during navigation tasks. Position estimation requires a linear relationship between points in the image and points in the world frame: to this end, an Inverse Perspective Mapping (IPM) approach is adopted to transform the acquired image into a bird's-eye view of the environment. The scenario is made more complex by the fact that Pepper's head moves dynamically while tracking the wheeled robots, which requires a different IPM transformation matrix whenever the attitude (pitch and yaw) of the camera changes. Finally, the IPM position estimate returned by Pepper is merged with the estimate returned by the odometry of the wheeled robots through an Extended Kalman Filter. Experiments are shown with multiple robots moving along different paths in a shared space, avoiding each other without onboard sensors, i.e., relying only on mutual positioning information.
Software implementing the theoretical models described above has been developed in ROS and validated in real experiments with two types of robots: (i) a unicycle-type wheeled Roomba robot (commercially available worldwide), and (ii) the Pepper humanoid robot (commercially available in Japan, and as a B2B model in Europe).
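    The path-deformation idea in item (i) can be sketched as follows; this is a hedged illustration of the general mechanism, with a straight nominal path, assumed obstacle positions, and assumed amplitude/width parameters rather than the thesis's actual formulation:

```python
import numpy as np

# Nominal path: the zero level set of f(x, y); here the straight line y = 0.
def f_nominal(x, y):
    return y

# Assumed obstacle positions (x, y) near the path, for illustration only.
obstacles = [(2.0, 0.1), (4.0, -0.2)]

def f_deformed(x, y, amplitude=0.5, sigma=0.4):
    """Nominal field plus one Gaussian bump per obstacle.

    A positive bump near an obstacle lying above the line shifts the zero level
    set downward (and vice versa), bending the safe path away from the obstacle.
    """
    value = f_nominal(x, y)
    for ox, oy in obstacles:
        side = 1.0 if oy >= 0 else -1.0
        d2 = (x - ox) ** 2 + (y - oy) ** 2
        value += side * amplitude * np.exp(-d2 / (2.0 * sigma ** 2))
    return value

# A unicycle controller can then steer so as to drive f_deformed(x, y) to zero,
# without projecting the robot onto the path or tracking a virtual target.
```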

    Mapping and Semantic Perception for Service Robotics

    In order to perform a task, a robot needs to be able to locate itself in the environment. If a robot does not know where it is, it cannot move, reach its goal and complete the task. Simultaneous Localization and Mapping, known as SLAM, is a problem extensively studied in the literature that enables robots to locate themselves in unknown environments. The goal of this thesis is to develop and describe techniques that allow a service robot to understand its environment by incorporating semantic information. This information also improves the localization and navigation of robotic platforms. In addition, we demonstrate how a robot with limited capabilities can reliably and efficiently build the semantic maps needed to perform its everyday tasks. The mapping system presented has the following features. On the map-building side, we propose outsourcing expensive computations to a cloud server. Additionally, we propose methods to register relevant semantic information with respect to the estimated geometric maps. Regarding the reuse of the maps built, we propose a method that combines map building with robot navigation to better explore an environment and obtain a semantic map with the objects relevant to a given mission. Firstly, we develop a semantic visual SLAM algorithm that merges the meaningless estimated map points with known objects. We use a monocular EKF (Extended Kalman Filter) SLAM system that has mainly been focused on producing geometric maps composed simply of points or edges, without any associated meaning or semantic content. The non-annotated map is built using only the information extracted from a monocular image sequence. The semantic or annotated parts of the map (the objects) are estimated using the information in the image sequence together with precomputed object models. As a second step, we improve the EKF SLAM presented previously by designing and implementing a distributed visual SLAM system. The expensive map optimization and storage are allocated as a service in the cloud, while a lightweight camera-tracking client runs on a local computer onboard the robot. The robot's onboard computers are thus freed from most of the computation, the only extra requirement being an internet connection. The next step is to exploit the semantic information we are able to generate in order to improve robot navigation. The contribution here focuses on 3D sensing, which we use to design and implement a semantic mapping system. We then design and implement a visual SLAM system able to perform robustly in populated environments, since service robots work in spaces shared with people. The system is able to mask the image regions occupied by people out of the rigid SLAM pipeline, which boosts the robustness, relocation, accuracy and reusability of the geometric map. In addition, it estimates the full trajectory of each detected person with respect to the global scene map, irrespective of the location of the moving camera when the person was imaged. Finally, we focus our research on rescue and security applications. Deploying a multi-robot team in confined environments poses multiple challenges involving task planning, motion planning, localization and mapping, safe navigation, coordination, and communications among all the robots. The proposed architecture integrates, jointly with all the above-mentioned functionalities, several novel features to achieve real exploration: localization based on semantic-topological features, deployment planning in terms of the semantic features learned and recognized, and map building.
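    A hedged sketch of the people-masking step in a feature-based SLAM front end: the person segmentation is assumed to come from any off-the-shelf detector, and `person_mask` below is a placeholder binary mask, not the thesis's own model.

```python
import cv2

orb = cv2.ORB_create()

def masked_features(gray_image, person_mask):
    """Detect features only outside regions occupied by people.

    person_mask: uint8 array, 255 where a person is present, 0 elsewhere.
    """
    # ORB accepts a detection mask: nonzero pixels are searched, zero pixels skipped.
    static_mask = cv2.bitwise_not(person_mask)
    keypoints, descriptors = orb.detectAndCompute(gray_image, static_mask)
    return keypoints, descriptors
```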

    Embedded visual perception system applied to safe navigation of vehicles

    Supervisors: Douglas Eduardo Zampieri and Isabelle Fantoni-Coichot. Doctoral thesis under a cotutelle agreement between the Universidade Estadual de Campinas (UNICAMP), Faculdade de Engenharia Mecanica, and the Université de Technologie de Compiègne (UTC), Laboratoire HEUDIASYC; defended on 26 August 2011. This thesis addresses the problem of obstacle avoidance for semi-autonomous and autonomous terrestrial platforms in dynamic and unknown environments. Based on monocular vision, it proposes a set of tools that continuously monitor the road ahead of the vehicle, providing appropriate road information in real time. A horizon-finding algorithm was developed for sky removal: it generates the region of interest through a dynamic threshold search, so that only a small portion of the image ahead of the vehicle needs to be investigated for road and obstacle detection. A free navigable area is then represented as a multimodal 2D drivability road image. This multimodal result allows a level of safety to be selected according to the environment and the operational context. To reduce processing time, the thesis also proposes an automatic image-discarding criterion: taking into account the temporal coherence between consecutive frames, a new Dynamic Power Management methodology is proposed and applied to the visual perception system in order to optimize its energy consumption, including a new environment-observer method. These proposals were tested on different types of image texture (road surfaces), covering free-area detection, reactive navigation, and time-to-collision estimation. A remarkable characteristic of these methodologies is their independence from the image acquisition system and from the vehicle itself. The real-time perception system was evaluated on several test benches as well as on real data obtained from two intelligent platforms. In semi-autonomous tasks, tests were conducted at speeds above 100 km/h; autonomous reactive displacements were also carried out successfully in open loop, and the algorithms showed notable robustness.
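    A hedged sketch of the temporal-coherence discarding idea described above; the similarity measure (normalized cross-correlation on a downsampled frame) and the threshold are illustrative assumptions, not the thesis's exact criterion:

```python
import cv2
import numpy as np

def should_discard(prev_gray, curr_gray, threshold=0.98):
    """Skip processing the current frame if it is nearly identical to the previous one."""
    # Downsample both frames, then compute normalized cross-correlation.
    small_prev = cv2.resize(prev_gray, (64, 48)).astype(np.float32)
    small_curr = cv2.resize(curr_gray, (64, 48)).astype(np.float32)
    score = cv2.matchTemplate(small_curr, small_prev, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score > threshold   # high similarity -> redundant frame, discard
```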

    Event-Driven Technologies for Reactive Motion Planning: Neuromorphic Stereo Vision and Robot Path Planning and Their Application on Parallel Hardware

    Robotics is increasingly becoming a key factor in technological progress. Despite impressive advances in recent decades, mammalian brains still outperform even the most powerful machines in vision and motion planning. Industrial robots are very fast and precise, but their planning algorithms are not capable enough for highly dynamic environments such as those required for human-robot collaboration (HRC). Without fast and adaptive motion planning, safe HRC cannot be guaranteed. Neuromorphic technologies, including visual sensors and hardware chips, operate asynchronously and thus process spatio-temporal information very efficiently. Event-based visual sensors in particular are already superior to conventional, synchronous cameras in many applications. Event-based methods therefore have great potential to enable faster and more energy-efficient motion-control algorithms for HRC. This work presents an approach to flexible, reactive motion control of a robot arm, in which exteroception is achieved through event-based stereo vision and path planning is implemented in a neural representation of the configuration space. The multi-view 3D reconstruction is evaluated through a qualitative analysis in simulation and transferred to a stereo rig of event-based cameras. A demonstrator with an industrial robot is used to evaluate the reactive, collision-free online planning, and also serves in a comparative study against sampling-based planners. This is complemented by a benchmark of parallel hardware solutions, with robotic path planning chosen as the test scenario. The results show that the proposed neural solutions are an effective way to realize robot control for dynamic scenarios. This work lays a foundation for neural solutions in adaptive manufacturing processes, including in collaboration with humans, without sacrificing speed or safety. It thus paves the way for integrating brain-inspired hardware and algorithms into industrial robotics and HRC.
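    As background on the event-based data this work builds on (a generic sketch, not the thesis's pipeline): each sensor output is an asynchronous event (x, y, t, polarity), and one common way to hand such data to downstream processing such as stereo matching is an exponentially decayed time surface. The resolution and decay constant below are assumptions.

```python
import numpy as np

WIDTH, HEIGHT = 346, 260            # assumed sensor resolution (DAVIS346-like)

def time_surface(events, t_now, tau=0.05):
    """Exponentially decayed map of the most recent event time at each pixel.

    events: iterable of (x, y, t, polarity) with integer pixel coordinates
    and t in seconds, already time-sorted.
    """
    last_t = np.full((HEIGHT, WIDTH), -np.inf)
    for x, y, t, _pol in events:
        last_t[y, x] = t                       # keep the latest event per pixel
    surface = np.exp((last_t - t_now) / tau)   # recent events -> values near 1
    surface[np.isinf(last_t)] = 0.0            # pixels with no events stay 0
    return surface
```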

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research of authors around the world. The research cases are documented in 32 chapters, organized into the 7 categories described next.

    Low-Resolution Vision for Autonomous Mobile Robots

    The goal of this research is to develop algorithms that use low-resolution images to perceive and understand a typical indoor environment, thereby enabling a mobile robot to navigate such an environment autonomously. We present techniques for three problems: autonomous exploration, corridor classification, and a minimalistic geometric representation of an indoor environment for navigation. First, we present a technique for mobile robot exploration in unknown indoor environments using only a single forward-facing camera. Rather than processing all the data, the method intermittently examines only small 32×24 downsampled grayscale images. We show that for the task of indoor exploration the visual information is highly redundant, allowing successful navigation even when using only a small fraction (0.02%) of the available data. The method keeps the robot centered in the corridor by estimating two state parameters: the orientation within the corridor and the distance to the end of the corridor. The orientation is determined by combining the results of five complementary measures, while the estimated distance to the end combines the results of three complementary measures. These measures, which are predominantly information-theoretic, are analyzed independently, and the combined system is tested in several unknown corridor-style buildings exhibiting a wide variety of appearances, showing the sufficiency of low-resolution visual information for mobile robot exploration. Because the algorithm discards such a large percentage (99.98%) of the information both spatially and temporally, processing occurs at an average of 1000 frames per second, or equivalently consumes only a small fraction of the CPU. Second, we present an algorithm that uses image entropy to detect and classify corridor junctions from low-resolution images. Because entropy can be used to perceive depth, it can detect an open corridor in a set of images recorded while turning a robot through 360 degrees at a junction. Our algorithm detects peaks in the continuously measured entropy values and uses the angular distance between the detected peaks to determine the type of junction recorded (middle, L-junction, T-junction, dead end, or cross junction); a code sketch of this cue follows below. We show that the same algorithm can detect open corridors from both monocular and omnidirectional images. Third, we propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limits). The representation is extracted from low-resolution images using a novel combination of information-theoretic measures and gradient cues. Our study investigates the impact of image resolution on the accuracy of extracting this geometry, showing that the centerline and wall-floor boundaries can be estimated with reasonable accuracy even in texture-poor environments from low-resolution images. In a database of 7 unique corridor sequences, less than 2% additional error in orientation measurements was observed as the image resolution decreased by 99.9%.
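    A hedged sketch of the entropy cue just described: the histogram entropy is standard, while the peak-picking rule and the separation threshold are simplified stand-ins for the dissertation's actual detector. Two peaks roughly 90 degrees apart would, for example, suggest an L-junction.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ang_dist(a, b):
    """Smallest angular distance in degrees between two headings."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def entropy_peaks(entropies, angles_deg, min_separation=45.0):
    """Greedily pick entropy maxima at least `min_separation` degrees apart."""
    order = np.argsort(entropies)[::-1]          # highest entropy first
    peaks = []
    for i in order:
        if all(ang_dist(angles_deg[i], a) >= min_separation for a in peaks):
            peaks.append(float(angles_deg[i]))
    return sorted(peaks)
```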