
    Data Driven Multispectral Image Registration Framework

    Multispectral imaging is widely used in remote sensing applications from UAVs and ground-based platforms. Multispectral cameras often use a physically separate camera for each wavelength, causing misalignment between the images for different imaging bands. This misalignment must be corrected prior to concurrent multi-band image analysis. The traditional approach to multispectral image registration is to select a target channel and register all other image channels to it, but there is no objective, evidence-based method for selecting the target channel. The possibility of registering to an intermediate channel before registering to the target is not usually considered, yet it can be beneficial when no single target channel yields low-error direct registration for every other channel. In this paper, we propose an automatic data-driven multispectral image registration framework that determines a target channel and possible intermediate registration steps, based on two assumptions: 1) some reasonable minimum number of control-point correspondences between two channels is needed to ensure low-error registration; 2) a greater number of such correspondences generally results in higher registration performance. Our prototype is tested on five multispectral datasets captured with UAV-mounted multispectral cameras. The output of the prototype is a registration scheme in the form of a directed acyclic graph (in practice, a tree) that specifies the target channel and the order in which the other image channels are registered. The resulting registration schemes had more control-point correspondences on average than the traditional register-all-to-one-target-channel approach, and consistently showed low back-projection error across all image channel pairs in most of the experiments. The framework selects the best control-point extraction algorithm for each image channel pair, is dataset independent, and works on datasets with any number of image channels. Given the growing use of remote sensing and the lack of an evidence-based method for registering multispectral image channels, a data-driven registration framework is an essential tool for image registration and multispectral imaging.
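The scheme construction described above can be pictured as a maximum spanning tree over pairwise correspondence counts. The sketch below is an illustrative reading of the abstract, not the paper's exact procedure: the channel names, correspondence counts, and the rule of picking as target the channel with the most correspondences overall are all assumptions.

```python
# Hypothetical correspondence counts between channel pairs, e.g. from
# feature matching; the real framework measures these from the data.
counts = {
    ("blue", "green"): 420, ("blue", "red"): 150, ("blue", "nir"): 60,
    ("green", "red"): 380, ("green", "nir"): 110, ("red", "nir"): 300,
}
channels = {c for pair in counts for c in pair}

def weight(a, b):
    """Correspondence count for an unordered channel pair."""
    return counts.get((a, b)) or counts.get((b, a), 0)

def registration_tree(root):
    """Maximum spanning tree grown from `root` (Prim's algorithm):
    greedily attach the channel with the strongest link to the tree.
    Each edge (parent, child, w) means 'register child to parent'."""
    tree, in_tree = [], {root}
    while in_tree != channels:
        a, b = max(((u, v) for u in in_tree for v in channels - in_tree),
                   key=lambda e: weight(*e))
        tree.append((a, b, weight(a, b)))
        in_tree.add(b)
    return tree

# Assumed target rule: the channel with the most correspondences overall.
target = max(channels, key=lambda c: sum(weight(c, o) for o in channels - {c}))
scheme = registration_tree(target)
```

With these toy counts the scheme routes `nir` through `red` before reaching the `green` target, illustrating the intermediate-channel step the abstract argues for.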

    Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry

    Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among others, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent and fast-developing research in computational fields; however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAV) and sensors able to capture massive datasets with high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey presents a summary of the most relevant contributions to the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. Surveyed applications are focused on agriculture and forestry, since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, some open issues and future research directions are presented. Funding: European Commission; Junta de Andalucía; Instituto de Estudios Giennenses; Spanish Government; Portuguese Foundation for Science and Technology; DATI-Digital Agriculture Technologies (grants 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, UIDB/04033/2020, FPU19/0010).

    PHROG: A Multimodal Feature for Place Recognition

    Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes has to be made with information coming from different visual sources, particularly with different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing common feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method dedicated to improving repeatability across spectral ranges. We evaluate feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes and spectral ranges) with a Bag-of-Words approach. Our tests demonstrate that the method brings a significant improvement to image retrieval in a visual place recognition context, particularly when images from various spectral ranges must be associated: we evaluate our approach using visible, Near-InfraRed (NIR), Short-Wavelength InfraRed (SWIR) and Long-Wavelength InfraRed (LWIR) imagery.
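The Bag-of-Words retrieval step mentioned above can be sketched as follows. This is a generic BoW pipeline, not the PHROG descriptor itself: the vocabulary size, descriptor dimension, and random "descriptors" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visual vocabulary: k word centres in descriptor space,
# e.g. learned offline with k-means over local feature descriptors.
k, dim = 8, 16
vocab = rng.normal(size=(k, dim))

def bow_histogram(descriptors):
    """Quantise each local descriptor to its nearest visual word and
    return an L2-normalised word-frequency histogram."""
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(axis=1), minlength=k).astype(float)
    return hist / (np.linalg.norm(hist) or 1.0)

def similarity(desc_a, desc_b):
    """Cosine similarity between two images' BoW histograms; retrieval
    ranks database images by this score against the query image."""
    return float(bow_histogram(desc_a) @ bow_histogram(desc_b))
```

Cross-spectral robustness then hinges entirely on whether the underlying descriptors are repeatable between, say, visible and LWIR views of the same place, which is the property the paper's feature is designed for.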

    Target detection, tracking, and localization using multi-spectral image fusion and RF Doppler differentials

    It is critical for defense and security applications to have a high probability of detection and a low false alarm rate while operating over a wide variety of conditions. Sensor fusion, the process of combining data from two or more sensors, has been used to improve system performance by exploiting the strengths of each sensor. This dissertation presents algorithms that fuse multi-sensor data to improve system performance by increasing detection rates, lowering false alarms, and improving track performance. It also presents a framework for comparing algorithm error in image registration, a critical pre-processing step for multi-spectral image fusion. First, I present an algorithm to improve detection and tracking performance for moving targets in a cluttered urban environment by fusing foreground maps from multi-spectral imagery. Most research in image fusion considers the visible and long-wave infrared bands; I examine these bands along with near infrared and mid-wave infrared. To localize and track a particular target of interest, I present an algorithm that fuses output from the multi-spectral image tracker with a constellation of RF sensors measuring a specific cellular emanation. The fusion algorithm matches the Doppler differential from the RF sensors with the theoretical Doppler differential of the video tracker output by selecting the sensor pair that minimizes the absolute difference or root-mean-square difference. Finally, a framework to quantify shift-estimation error for both area- and feature-based algorithms is presented. Using synthetically generated visible and long-wave infrared imagery, error metrics are computed and compared for a number of area- and feature-based shift-estimation algorithms. Several key results are presented. The multi-spectral image tracker improves localization accuracy while increasing the detection rate and lowering false alarms for most spectral bands. All 12 moving targets were tracked through the video sequence, with only one lost track that was later recovered. Targets from the multi-spectral tracking algorithm were correctly associated with their corresponding cellular emanation for all targets at lower measurement uncertainty using the root-mean-square difference, while also maintaining a high confidence ratio for selecting the true target from background targets. For the area-based algorithms and the synthetic airfield image pair, the DFT and ECC algorithms produce sub-pixel shift-estimation error in regions such as shadows and high-contrast painted lines. The edge-orientation feature descriptors increase the number of sub-field estimates while improving the shift-estimation error compared to the Lowe descriptor.
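The Doppler-differential association step can be sketched directly from its definition: a moving emitter produces a different Doppler shift at each RF sensor, and the difference between two sensors' shifts can be predicted from a video track's position and velocity. The carrier frequency, geometry, and track values below are hypothetical; only the minimum-absolute-difference criterion is taken from the abstract.

```python
import numpy as np

C = 3.0e8    # speed of light, m/s
F0 = 1.9e9   # assumed cellular carrier frequency, Hz

def doppler(pos, vel, sensor):
    """Doppler shift seen at a stationary sensor from a moving emitter:
    carrier frequency scaled by closing speed along the line of sight."""
    los = sensor - pos
    radial = np.dot(vel, los) / np.linalg.norm(los)
    return F0 * radial / C

def doppler_differential(pos, vel, s_a, s_b):
    """Difference of the Doppler shifts at two sensors."""
    return doppler(pos, vel, s_a) - doppler(pos, vel, s_b)

def associate(measured_diff, tracks, s_a, s_b):
    """Pick the video track whose predicted Doppler differential best
    matches the RF measurement (minimum absolute difference, one of the
    two criteria mentioned in the abstract)."""
    errs = [abs(doppler_differential(p, v, s_a, s_b) - measured_diff)
            for p, v in tracks]
    return int(np.argmin(errs))

# Two hypothetical tracks (position, velocity) and two RF sensors.
s_a, s_b = np.array([0.0, 0.0]), np.array([500.0, 0.0])
tracks = [(np.array([100.0, 200.0]), np.array([10.0, 0.0])),
          (np.array([300.0, 50.0]),  np.array([-5.0, 5.0]))]
measured = doppler_differential(*tracks[1], s_a, s_b)  # emitter is track 1
```

The root-mean-square variant simply replaces the absolute difference with an RMS over several sensor pairs or time steps.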

    Methods for multi-spectral image fusion: identifying stable and repeatable information across the visible and infrared spectra

    Fusion of images captured from different viewpoints is a well-known challenge in computer vision with many established approaches and applications; however, if the observations are captured by sensors also separated by wavelength, this challenge is compounded significantly. This dissertation presents an investigation into the fusion of visible and thermal image information from two front-facing sensors mounted side-by-side. The primary focus of this work is the development of methods that enable us to map and overlay multi-spectral information; the goal is to establish a combined image in which each pixel contains both colour and thermal information. Pixel-level fusion of these distinct modalities is approached using computational stereo methods; the focus is on the viewpoint alignment and correspondence search/matching stages of processing. Frequency domain analysis is performed using a method called phase congruency. An extensive investigation of this method is carried out with two major objectives: to identify predictable relationships between the elements extracted from each modality, and to establish a stable representation of the common information captured by both sensors. Phase congruency is shown to be a stable edge detector and repeatable spatial similarity measure for multi-spectral information; this result forms the basis for the methods developed in the subsequent chapters of this work. The feasibility of automatic alignment with sparse feature-correspondence methods is investigated. It is found that conventional methods fail to match inter-spectrum correspondences, motivating the development of an edge orientation histogram (EOH) descriptor which incorporates elements of the phase congruency process. A cost function, which incorporates the outputs of the phase congruency process and the mutual information similarity measure, is developed for computational stereo correspondence matching. 
An evaluation of the proposed cost function shows it to be an effective similarity measure for multi-spectral information.
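The mutual-information component of the cost function above can be sketched from its textbook definition; the phase-congruency term is omitted here, and the patch data is synthetic, so this is an illustration of why MI suits cross-spectral matching rather than the dissertation's exact cost function.

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Mutual information between two co-located patches, estimated from
    a joint intensity histogram. MI stays high under the non-linear, even
    contrast-reversing, intensity mappings that relate visible and thermal
    imagery, which is why it suits multi-spectral correspondence search."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0): only cells with mass contribute
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((32, 32))
inverted = 1.0 - a              # crude stand-in for a contrast-reversed band
unrelated = rng.random((32, 32))
```

A simple correlation coefficient would score the inverted patch at -1, while MI correctly ranks it as highly informative about the original.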

    Evaluating the use of digital imaging to assess the growth of young cassava

    The world population is growing and is expected to exceed 9 billion in about 30 years. Climate change is also widely expected to worsen famines in certain regions of the world, which will drastically increase global food demand. Food security efforts should therefore be geared towards promoting food crops that can thrive in these regions and can withstand the conditions likely to be brought about by a changing climate. Cassava is a typical example of such a crop. This study investigated the use of digital images to estimate growth parameters of young cassava plants. Cassava was cultivated in pots at the University of Helsinki greenhouse at Viikki. The plants were given different water-level (100%, 60% and 30% saturation) and potassium (0.1, 1.0, 4.0, 16.0 and 32.0 mM) treatments. Digital red-green-blue (RGB) and multispectral images were taken every other week, five consecutive times. The images were processed to obtain leaf area, the Normalized Difference Vegetation Index (NDVI), and the Crop Senescence Index (CSI), which were correlated with directly measured growth parameters of the young cassava crops. Leaf area computed from the images and NDVI computed from the multispectral images showed significant positive correlations with the measured growth parameters, i.e., actual leaf area, chlorophyll content, and plant biomass. CSI, however, showed only a weak correlation with the growth parameters. Image-derived leaf area and NDVI were then used to identify changes in the effects of the water and potassium treatments.
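The NDVI computation used above follows the standard definition, (NIR - Red) / (NIR + Red), and image-based leaf area can be approximated by counting vegetated pixels. The reflectance values and the 0.4 vegetation threshold below are hypothetical, and the CSI is omitted because its formulation varies between studies.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalized Difference Vegetation Index from NIR and red
    reflectance bands; eps guards against division by zero."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def leaf_area_px(ndvi_map, threshold=0.4):
    """Crude image-based leaf area: count pixels whose NDVI exceeds a
    vegetation threshold (0.4 is an assumed, not measured, cut-off)."""
    return int((ndvi_map > threshold).sum())

# Toy 2x2 scene: one vegetated pixel (high NIR, low red), rest soil.
nir = np.array([[0.8, 0.3], [0.2, 0.25]])
red = np.array([[0.1, 0.3], [0.25, 0.3]])
m = ndvi(nir, red)
```

Scaling the pixel count by the ground area covered per pixel then gives leaf area in physical units, which is what gets correlated against the directly measured leaf area.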