
    Color constancy for landmark detection in outdoor environments

    European Workshop on Advanced Mobile Robots (EUROBOT), 2001, Lund (Sweden). This work presents an evaluation of three color constancy techniques applied to a landmark detection system designed for a walking robot that has to operate in unknown and unstructured outdoor environments. The first technique is the well-known conversion of the image to a chromaticity space, and the second is based on successive lighting-intensity and illuminant-color normalizations. The third technique, which we propose, is based on color ratios derived from a differential model of color constancy and unifies the processes of color constancy and landmark detection. The approach used to detect potential landmarks, common to all evaluated systems, is based on visual saliency concepts, using multiscale color opponent features to identify salient regions in the images. These regions are selected as landmark candidates and are further characterized by their features for identification and recognition. This work was supported by the project 'Navegación autónoma de robots guiados por objetivos visuales' (070-720). Peer Reviewed
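
    The three techniques are only described at a high level above. As a rough illustration, the sketch below shows, in generic form, a chromaticity-space conversion, an alternating intensity/illuminant normalization, and neighbouring-pixel color ratios; the exact formulations used in the paper may differ, and the function names and constants here are illustrative.

```python
import numpy as np

def to_chromaticity(img):
    """Map an RGB image (H, W, 3, float) to (r, g) chromaticity space,
    discarding intensity: r = R/(R+G+B), g = G/(R+G+B)."""
    s = img.sum(axis=2, keepdims=True) + 1e-8
    return img[..., :2] / s

def comprehensive_normalization(img, iters=5):
    """Alternate per-pixel intensity normalization and per-channel
    (illuminant) normalization until the image stabilizes."""
    x = img.astype(np.float64) + 1e-8
    for _ in range(iters):
        x = x / x.sum(axis=2, keepdims=True)               # remove lighting intensity
        x = x / (x.mean(axis=(0, 1), keepdims=True) * 3)   # remove illuminant color
    return x

def color_ratios(img):
    """Ratios between vertically neighbouring pixels per channel; under a
    diagonal illumination model these are approximately illumination-invariant."""
    x = img.astype(np.float64) + 1e-8
    return x[1:, :, :] / x[:-1, :, :]
```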

    Learning Matchable Image Transformations for Long-term Metric Visual Localization

    Long-term metric self-localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems due to appearance changes caused by lighting, weather, or seasonal variations. While experience-based mapping has proven to be an effective technique for bridging the 'appearance gap,' the number of experiences required for reliable metric localization over days or months can be very large, and methods for reducing the necessary number of experiences are needed for this approach to scale. Taking inspiration from color constancy theory, we learn a nonlinear RGB-to-grayscale mapping that explicitly maximizes the number of inlier feature matches for images captured under different lighting and weather conditions, and use it as a pre-processing step in a conventional single-experience localization pipeline to improve its robustness to appearance change. We train this mapping by approximating the target non-differentiable localization pipeline with a deep neural network, and find that incorporating a learned low-dimensional context feature can further improve cross-appearance feature matching. Using synthetic and real-world datasets, we demonstrate substantial improvements in localization performance across day-night cycles, enabling continuous metric localization over a 30-hour period using a single mapping experience, and allowing experience-based localization to scale to long deployments with dramatically reduced data requirements. Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the IEEE International Conference on Robotics and Automation (ICRA'20), Paris, France, May 31-June 4, 2020
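
    As a rough sketch of the idea of a learned nonlinear RGB-to-grayscale mapping, the following PyTorch snippet defines a small pixel-wise network; the architecture, the `LearnedGrayscale` name, and the training comment are assumptions for illustration, not the authors' actual model or loss.

```python
import torch
import torch.nn as nn

class LearnedGrayscale(nn.Module):
    """Pixel-wise nonlinear RGB -> grayscale mapping (illustrative
    architecture; the paper's actual network may differ)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),  # keep the output in [0, 1] like a grayscale image
        )

    def forward(self, rgb):  # rgb: (B, 3, H, W) in [0, 1]
        return self.net(rgb)

# Training idea (sketch only): a differentiable surrogate of the localization
# pipeline scores how many feature matches survive between the transformed
# day and night images, and the mapping is trained to maximize that score.
model = LearnedGrayscale()
day, night = torch.rand(1, 3, 240, 320), torch.rand(1, 3, 240, 320)
g_day, g_night = model(day), model(night)
```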

    Visual road following using intrinsic images

    We present a real-time vision-based road following method for mobile robots in outdoor environments. The approach combines an image processing method that retrieves illumination-invariant (intrinsic) images with an efficient path following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types under adverse lighting conditions using monocular vision.
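
    One common way to obtain an illumination-invariant (intrinsic) image, which may or may not match the exact formulation used here, is to project log band-ratio chromaticities along a calibrated invariant direction; the sketch below assumes a calibration angle `theta` is already known.

```python
import numpy as np

def illumination_invariant(img, theta):
    """Project log band-ratio chromaticities onto the direction orthogonal to
    the lighting-change direction (angle `theta` from camera calibration).
    `img` is an RGB float image with values in (0, 1]."""
    eps = 1e-6
    r, g, b = img[..., 0] + eps, img[..., 1] + eps, img[..., 2] + eps
    log_rg = np.log(r / g)
    log_bg = np.log(b / g)
    # 1-D invariant image: intensity and illuminant colour are largely removed
    inv = log_rg * np.cos(theta) + log_bg * np.sin(theta)
    # rescale to [0, 1] for display or further processing
    return (inv - inv.min()) / (inv.max() - inv.min() + eps)
```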

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optical flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
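
    As a minimal illustration of the event data format described above (timestamp, pixel location, polarity), the sketch below accumulates events into a simple 2-D frame over a time window; this is just one of many event representations discussed in the survey, and the code is illustrative rather than taken from it.

```python
import numpy as np
from collections import namedtuple

# One event: timestamp (s), pixel location, and polarity (+1 brighter, -1 darker)
Event = namedtuple("Event", ["t", "x", "y", "polarity"])

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate event polarities in [t_start, t_end) into a 2-D histogram.
    This is one of the simplest event representations; alternatives include
    time surfaces, voxel grids, and learned embeddings."""
    frame = np.zeros((height, width), dtype=np.float32)
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y, e.x] += e.polarity
    return frame

# toy usage
evts = [Event(0.001, 10, 5, +1), Event(0.002, 10, 5, -1), Event(0.003, 3, 7, +1)]
img = events_to_frame(evts, height=16, width=16, t_start=0.0, t_end=0.01)
```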

    Color-contrast landmark detection and encoding in outdoor images

    International Conference on Computer Analysis of Images and Patterns (CAIP), 2005, Versailles (France). This paper describes a system to extract salient regions from an outdoor image and match them against a database of previously acquired landmarks. Region saliency is based mainly on color contrast, although intensity and texture orientation are also taken into account. Remarkably, color constancy is embedded in the saliency detection process through a novel color ratio algorithm that makes the system robust to illumination changes, so common in outdoor environments. A region is characterized by a combination of its saliency and its color distribution in chromaticity space. Newly acquired landmarks are compared with those already stored in the database through a quadratic distance metric on their characterizations. Experimentation with a database containing 68 natural landmarks acquired with the system yielded good recognition results, in terms of both recall and rank indices. However, the discrimination between landmarks should be improved to avoid false positives, as suggested by the low precision index. This work was supported by the project 'Sistema reconfigurable para la navegación basada en visión de robots caminantes y rodantes en entornos naturales' (00). Peer Reviewed
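
    The quadratic distance between landmark characterizations mentioned above can be illustrated with a standard quadratic-form (cross-bin) histogram distance; the bin-similarity matrix below is an assumption for the sketch and may differ from the one used in the paper.

```python
import numpy as np

def quadratic_form_distance(h1, h2, similarity):
    """Quadratic (cross-bin) histogram distance:
    d(h1, h2) = sqrt((h1 - h2)^T A (h1 - h2)),
    where A[i, j] encodes the similarity between bins i and j."""
    d = h1 - h2
    q = float(d @ similarity @ d)
    return float(np.sqrt(max(q, 0.0)))

def bin_similarity(centers):
    """Illustrative similarity matrix: A[i, j] = 1 - dist(i, j) / max_dist,
    using bin centres in (r, g) chromaticity space."""
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return 1.0 - dists / (dists.max() + 1e-8)

# toy usage with an 8x8 chromaticity histogram flattened to 64 bins
centers = np.stack(np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)), -1).reshape(-1, 2)
A = bin_similarity(centers)
h1, h2 = np.random.rand(64), np.random.rand(64)
h1, h2 = h1 / h1.sum(), h2 / h2.sum()
print(quadratic_form_distance(h1, h2, A))
```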

    Ubiquitous Positioning: A Taxonomy for Location Determination on Mobile Navigation System

    Determining location in obstructed areas can be very challenging, especially if Global Positioning System (GPS) signals are blocked. Users find it difficult to navigate directly on-site under such conditions, for example in indoor car parks or other obstructed environments. Positioning often needs to be combined with other sensors and positioning methods in order to determine location more intelligently, reliably, and ubiquitously. Ubiquitous positioning in a mobile navigation system is a promising location technique, since the mobile phone is a familiar personal electronic device for many people. However, as research on ubiquitous positioning systems goes beyond basic methods, there is an increasing need for better comparison of proposed systems. System developers also lack good frameworks for understanding the different options when building ubiquitous positioning systems. This paper proposes a taxonomy to address both of these problems. The taxonomy has been constructed from a literature study of papers and articles on position estimation methods that can be used to determine location anywhere in a mobile navigation system. For researchers, the taxonomy can also be used as an aid for scoping out future research in the area of ubiquitous positioning. Comment: 15 pages, 3 figures

    Understanding a Dynamic World: Dynamic Motion Estimation for Autonomous Driving Using LIDAR

    In a society that is heavily reliant on personal transportation, autonomous vehicles present an increasingly intriguing technology. They have the potential to save lives, promote efficiency, and enable mobility. However, before this vision becomes a reality, there are a number of challenges that must be solved. One key challenge involves problems in dynamic motion estimation, as it is critical for an autonomous vehicle to have an understanding of the dynamics in its environment for it to operate safely on the road. Accordingly, this thesis presents several algorithms for dynamic motion estimation for autonomous vehicles. We focus on methods using light detection and ranging (LIDAR), a prevalent sensing modality used by autonomous vehicle platforms, due to its advantages over other sensors, such as cameras, including lighting invariance and fidelity of 3D geometric data. First, we propose a dynamic object tracking algorithm. The proposed method takes as input a stream of LIDAR data from a moving object collected by a multi-sensor platform. It generates an estimate of its trajectory over time and a point cloud model of its shape. We formulate the problem similarly to simultaneous localization and mapping (SLAM), allowing us to leverage existing techniques. Unlike prior work, we properly handle a stream of sensor measurements observed over time by deriving our algorithm using a continuous-time estimation framework. We evaluate our proposed method on a real-world dataset that we collect. Second, we present a method for scene flow estimation from a stream of LIDAR data. Inspired by optical flow and scene flow from the computer vision community, our framework can estimate dynamic motion in the scene without relying on segmentation and data association while still rivaling the results of state-of-the-art object tracking methods. We design our algorithms to exploit a graphics processing unit (GPU), enabling real-time performance. Third, we leverage deep learning tools to build a feature learning framework that allows us to train an encoding network to estimate features from a LIDAR occupancy grid. The learned feature space describes the geometric and semantic structure of any location observed by the LIDAR data. We formulate the training process so that distances in this learned feature space are meaningful in comparing the similarity of different locations. Accordingly, we demonstrate that using this feature space improves our estimate of the dynamic motion in the environment over time. In summary, this thesis presents three methods to aid in understanding a dynamic world for autonomous vehicle applications with LIDAR. These methods include a novel object tracking algorithm, a real-time scene flow estimation method, and a feature learning framework to aid in dynamic motion estimation. Furthermore, we demonstrate the performance of all our proposed methods on a collection of real-world datasets. PhD. Computer Science & Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147587/1/aushani_1.pd
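
    As a rough sketch of the third contribution, the feature-learning idea (embedding LIDAR occupancy-grid patches so that distances in the embedding space reflect location similarity) could look like the following; the encoder architecture and the triplet loss are illustrative assumptions, not the thesis's actual network or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyGridEncoder(nn.Module):
    """Encode a local 2-D LIDAR occupancy-grid patch into an embedding
    (illustrative architecture; the thesis's actual network may differ)."""
    def __init__(self, embed_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(16 * 8 * 8, embed_dim)

    def forward(self, grid):                   # grid: (B, 1, 32, 32)
        z = self.conv(grid).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-norm embedding

# Train so that embedding distances reflect location similarity, e.g. with a
# triplet loss over (anchor, same-location, different-location) patches.
enc = OccupancyGridEncoder()
anchor, pos, neg = (torch.rand(4, 1, 32, 32) for _ in range(3))
loss = F.triplet_margin_loss(enc(anchor), enc(pos), enc(neg), margin=0.2)
```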

    Electronic Image Stabilization for Mobile Robotic Vision Systems

    When a camera is affixed to a dynamic mobile robot, image stabilization is the first step towards more complex analysis of the video feed. This thesis presents a novel electronic image stabilization (EIS) algorithm for small, inexpensive, highly dynamic mobile robotic platforms with onboard camera systems. The algorithm combines optical flow motion parameter estimation with angular rate data provided by a strapdown inertial measurement unit (IMU). A discrete Kalman filter in feedforward configuration is used for optimal fusion of the two data sources. Performance is evaluated using a simulated video truth model (capturing the effects of image translation, rotation, blurring, and moving objects) and live test data. Live data were collected from a camera and IMU affixed to the DAGSI Whegs™ mobile robotic platform as it navigated through a hallway. Template matching, feature detection, optical flow, and inertial measurement techniques are compared and analyzed to determine the most suitable algorithm for this specific type of image stabilization. Pyramidal Lucas-Kanade optical flow using Shi-Tomasi good features, in combination with inertial measurement, is found to be the superior EIS algorithm. In the presence of moving objects, fusing inertial measurements reduces the root-mean-squared (RMS) error of the optical flow motion parameter estimates by 40%. No previous image stabilization algorithm directly fuses optical flow estimation with inertial measurement by way of Kalman filtering.
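
    A minimal sketch of the fusion idea follows, assuming OpenCV's Shi-Tomasi corners with pyramidal Lucas-Kanade for the optical-flow motion estimate and a one-state discrete Kalman update that blends it with a gyro rate; the thesis's actual feedforward filter and motion model are more involved, and the noise parameters here are placeholders.

```python
import cv2
import numpy as np

def flow_shift(prev_gray, curr_gray):
    """Median inter-frame translation from pyramidal Lucas-Kanade optical flow
    on Shi-Tomasi corners (a rough motion-parameter estimate)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    return np.median((nxt[good] - pts[good]).reshape(-1, 2), axis=0)

def kalman_fuse(x, P, z_flow, z_gyro, q=1e-3, r_flow=1e-1, r_gyro=1e-2):
    """One discrete Kalman step on a scalar motion state x (e.g. a rotation
    rate), fusing the optical-flow measurement and the IMU gyro measurement."""
    P = P + q                                    # predict (random-walk model)
    for z, r in ((z_flow, r_flow), (z_gyro, r_gyro)):
        k = P / (P + r)                          # Kalman gain
        x = x + k * (z - x)                      # measurement update
        P = (1 - k) * P
    return x, P
```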