45 research outputs found

    Methods for multi-spectral image fusion: identifying stable and repeatable information across the visible and infrared spectra

    Fusion of images captured from different viewpoints is a well-known challenge in computer vision with many established approaches and applications; however, if the observations are captured by sensors also separated by wavelength, this challenge is compounded significantly. This dissertation presents an investigation into the fusion of visible and thermal image information from two front-facing sensors mounted side-by-side. The primary focus of this work is the development of methods that enable us to map and overlay multi-spectral information; the goal is to establish a combined image in which each pixel contains both colour and thermal information. Pixel-level fusion of these distinct modalities is approached using computational stereo methods; the focus is on the viewpoint alignment and correspondence search/matching stages of processing. Frequency domain analysis is performed using a method called phase congruency. An extensive investigation of this method is carried out with two major objectives: to identify predictable relationships between the elements extracted from each modality, and to establish a stable representation of the common information captured by both sensors. Phase congruency is shown to be a stable edge detector and repeatable spatial similarity measure for multi-spectral information; this result forms the basis for the methods developed in the subsequent chapters of this work. The feasibility of automatic alignment with sparse feature-correspondence methods is investigated. It is found that conventional methods fail to match inter-spectrum correspondences, motivating the development of an edge orientation histogram (EOH) descriptor which incorporates elements of the phase congruency process. A cost function, which incorporates the outputs of the phase congruency process and the mutual information similarity measure, is developed for computational stereo correspondence matching. An evaluation of the proposed cost function shows it to be an effective similarity measure for multi-spectral information.
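The cost function described above combines phase congruency with mutual information. As a rough illustration of the second ingredient, a histogram-based mutual-information similarity between two image patches can be sketched as follows; the bin count and plug-in estimator are assumptions for illustration, not the dissertation's exact formulation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in mutual information between two equally sized image patches,
    estimated from their joint intensity histogram (in nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Well-aligned multi-spectral patches share structure even when intensities differ, so their mutual information is higher than that of misaligned patches, which is what makes it usable as a matching cost.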

    PHROG: A Multimodal Feature for Place Recognition

    Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes must be made with information coming from different visual sources, particularly different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing standard feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method dedicated to improving repeatability across spectral ranges. We evaluate feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes and spectral ranges) with a Bag-of-Words approach. The tests we perform demonstrate that our method brings a significant improvement on the image retrieval issue in a visual place recognition context, particularly when there is a need to associate images from various spectral ranges such as infrared and visible: we have evaluated our approach using visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR) and Long Wavelength InfraRed (LWIR) images.
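The Bag-of-Words retrieval step used in the evaluation above can be sketched as follows. This is a generic illustration assuming a precomputed visual vocabulary (e.g. from k-means over training descriptors); the function names and vocabulary size are placeholders, not the paper's implementation:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a visual vocabulary and
    return an L2-normalised Bag-of-Words histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)              # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def retrieve(query_hist, database_hists):
    """Index of the database image whose histogram has the highest
    cosine similarity with the query."""
    return int(np.argmax([query_hist @ h for h in database_hists]))
```

The paper's contribution sits one level below this pipeline: a feature extractor whose descriptors quantize to similar words whether the image is visible, NIR, SWIR or LWIR.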

    Pedestrian detection in far infrared images

    Detection of people in images is a relatively new field of research, but one that has gained wide acceptance. The applications are numerous, including automatic labeling of large databases, security systems and pedestrian detection in intelligent transportation systems. In the latter, the purpose of a pedestrian detector on a moving vehicle is to detect the presence of people in the path of the vehicle, with the ultimate goal of avoiding a collision between the two. This thesis is framed within advanced driver assistance systems: passive safety systems that warn the driver of potentially adverse conditions. An advanced driving assistance module, aimed at warning the driver about the presence of pedestrians using computer vision on thermal images, is presented in this thesis. Such sensors are particularly useful under conditions of low illumination. The document is divided following the usual parts of a pedestrian detection system: development of descriptors that define the appearance of people in this kind of image, the application of these descriptors to full-sized images, and temporal tracking of the pedestrians found. As part of the work developed in this thesis, a database of pedestrians in the far infrared spectrum is presented. This database has been used both to evaluate pedestrian detection systems and to develop new descriptors. These descriptors use techniques for the systematic description of the pedestrian's shape, as well as methods to achieve invariance to contrast, illumination or ambient temperature. The descriptors are analyzed and modified to improve their performance in a detection problem, where potential candidates are searched for in full-size images. Finally, a method for tracking the detected pedestrians is proposed to reduce the number of missed detections that occurred at earlier stages of the algorithm.
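The final tracking stage, which recovers detections missed by earlier stages, can be sketched as a greedy IoU-based tracker that lets unmatched tracks coast for a few frames. The threshold and coasting policy here are illustrative assumptions, not the thesis's actual method:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def track(frames, iou_thresh=0.3, max_age=1):
    """Greedy tracker: match each frame's detections to existing tracks by
    IoU; unmatched tracks coast (age) for up to max_age frames before dying.
    Returns (frame_index, track_id) pairs for every confirmed detection."""
    tracks, next_id, out = [], 0, []
    for f, dets in enumerate(frames):
        used = set()
        for t in tracks:
            cand = [(iou(t["box"], d), i) for i, d in enumerate(dets) if i not in used]
            score, best = max(cand, default=(0.0, None))
            if best is not None and score >= iou_thresh:
                t["box"], t["age"] = dets[best], 0
                used.add(best)
                out.append((f, t["id"]))
            else:
                t["age"] += 1          # coast through a missed detection
        tracks = [t for t in tracks if t["age"] <= max_age]
        for i, d in enumerate(dets):
            if i not in used:          # spawn a new track
                tracks.append({"box": d, "id": next_id, "age": 0})
                out.append((f, next_id))
                next_id += 1
    return out
```

A pedestrian dropped by the detector for one frame keeps its identity when it reappears nearby, which is exactly the kind of miss this stage is meant to absorb.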

    Online Mutual Foreground Segmentation for Multispectral Stereo Videos

    The segmentation of video sequences into foreground and background regions is a low-level process commonly used in video content analysis and smart surveillance applications. Using a multispectral camera setup can improve this process by providing more diverse data to help identify objects despite adverse imaging conditions. The registration of several data sources is, however, not trivial if the appearance of objects produced by each sensor differs substantially. This problem is further complicated when parallax effects cannot be ignored, as with close-range stereo pairs. In this work, we present a new method to simultaneously tackle multispectral segmentation and stereo registration. Using an iterative procedure, we estimate the labeling result for one problem using the provisional result of the other. Our approach is based on the alternating minimization of two energy functions that are linked through the use of dynamic priors. We rely on the integration of shape and appearance cues to find proper multispectral correspondences, and to properly segment objects in low contrast regions. We also formulate our model as a frame processing pipeline using higher order terms to improve the temporal coherence of our results. Our method is evaluated under different configurations on multiple multispectral datasets, and our implementation is available online. (Preprint accepted for publication in IJCV, December 2018.)
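The alternating-minimization scheme, where each subproblem is solved using the provisional result of the other, can be illustrated on a toy coupled energy. The quadratic energy below is purely illustrative (the paper's energies are over segmentation labels and disparities, not scalars):

```python
def alternate_minimize(iters=50):
    """Toy alternating minimization of
        E(x, y) = (x - 1)^2 + (y - 2)^2 + (x - y)^2,
    where the (x - y)^2 term couples the two subproblems, analogous to
    the dynamic priors linking segmentation and registration.
    Each step solves one variable exactly while the other is held fixed."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (1.0 + y) / 2.0   # argmin over x with y fixed
        y = (2.0 + x) / 2.0   # argmin over y with x fixed
    return x, y
```

Each half-step can only decrease the total energy, so the iteration converges to the joint minimizer (x, y) = (4/3, 5/3); the same monotone-descent argument is what makes alternating schemes attractive for coupled labeling problems.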

    Robust multispectral image-based localisation solutions for autonomous systems

    With the recent increase of interest in multispectral imaging, new image-based localisation solutions have emerged. However, their application to visual odometry remains overlooked. Most localisation techniques are still developed with visible cameras only, because of the portability they offer and the wide variety of cameras available. Yet other modalities have great potential for navigation purposes. Infrared imaging, for example, provides different information about the scene and is already used to enhance visible images. This is especially true of far-infrared cameras, which can produce images at night and see hot objects such as other cars, animals or pedestrians. Therefore, the aim of this thesis is to tackle the lack of research in multispectral localisation and to explore new ways of performing visual odometry accurately with visible and thermal images. First, a new calibration pattern made of LED lights is presented in Chapter 3. Emitting both visible and thermal radiation, it can easily be seen by infrared and visible cameras. Thanks to its distinctive shape, the whole pattern can be moved around the cameras and automatically identified in the different images recorded. Monocular and stereo calibration are then performed to precisely estimate the camera parameters. Then, a multispectral monocular visual odometry algorithm is proposed in Chapter 4. This generic technique is able to operate on infrared and visible modalities, regardless of the nature of the images. Incoming images are processed at a high frame rate with a 2D-to-2D unscaled motion estimation method, while specific keyframes are carefully selected to avoid degenerate cases and a bundle adjustment optimisation is performed on a sliding window to refine the initial estimation. The advantage of visible-thermal odometry is shown on a scenario with extreme illumination conditions, where the limitation of each modality is reached.
    The simultaneous combination of visible and thermal images for visual odometry is also explored. In Chapter 5, two feature matching techniques are presented and tested in a multispectral stereo visual odometry framework. One method matches features between stereo pairs independently, while the other estimates unscaled motion first, before matching the features altogether. Even though these techniques require more processing power to overcome the dissimilarities between multimodal images, they have the benefit of estimating scaled transformations. Finally, the camera pose estimates obtained with multispectral stereo odometry are fused with inertial data to create a robust localisation solution, which is detailed in Chapter 6. The full state of the system is estimated, including position, velocity, orientation and IMU biases. It is shown that multispectral visual odometry can correct drifting IMU measurements effectively. Furthermore, it is demonstrated that such multi-sensor setups can be beneficial in challenging situations where features cannot be extracted or tracked. In that case, inertial data can be integrated to provide a state estimate while visual odometry cannot.
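The claim that visual odometry can correct drifting IMU integration can be illustrated with a minimal one-dimensional constant-gain observer. This is a deliberately simplified sketch; the thesis estimates a full state (position, velocity, orientation, IMU biases), and the gain value here is an arbitrary assumption:

```python
def fuse_positions(imu_deltas, vo_positions, gain=0.3):
    """Constant-gain observer on a 1-D position: propagate with (possibly
    biased) inertial increments, then correct toward the visual odometry
    position fix at each step."""
    x = 0.0
    est = []
    for d, z in zip(imu_deltas, vo_positions):
        x += d                 # predict: integrate the inertial increment
        x += gain * (z - x)    # correct: pull toward the visual estimate
        est.append(x)
    return est
```

With a stationary platform whose IMU carries a constant bias, pure integration drifts without bound, while the corrected estimate settles at a small bounded offset; this is the behaviour exploited when fusing the two sensors.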

    Geometric and photometric affine invariant image registration

    This thesis aims to present a solution to the correspondence problem for the registration of wide-baseline images taken from uncalibrated cameras. We propose an affine invariant descriptor that combines the geometry and photometry of the scene to find correspondences between both views. The geometric affine invariant component of the descriptor is based on the affine arc-length metric, whereas the photometry is analysed by invariant colour moments. A graph structure represents the spatial distribution of the primitive features; i.e. nodes correspond to detected high-curvature points, whereas arcs represent connectivities given by extracted contours. After matching, we refine the search for correspondences using a maximum-likelihood robust algorithm. We have evaluated the system on synthetic and real data. The method is susceptible to the propagation of errors introduced by approximations in the system.
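The photometric half of the descriptor relies on invariant colour moments. A common building block for such invariants is the generalized colour moment, a sketch of which follows; the specific invariant combinations used in the thesis are not reproduced here, and the function name is illustrative:

```python
import numpy as np

def colour_moment(img, p, q, a, b, c):
    """Generalized colour moment M_pq^{abc} of an RGB patch:
    sum over pixels of x^p * y^q * R^a * G^b * B^c.
    Ratios and products of such moments can be combined into quantities
    invariant to affine geometric and photometric changes."""
    img = np.asarray(img, dtype=float)
    h, w, _ = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)   # pixel coordinates
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    return float((x**p * y**q * R**a * G**b * B**c).sum())
```

For example, M_00^{000} is simply the patch area in pixels, and M_00^{100} is the total red intensity; invariants are then formed from algebraic combinations of several such moments.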

    Thermal Cameras and Applications: A Survey

    Get PDF

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.