    Fusing Multiple Multiband Images

    We consider the problem of fusing an arbitrary number of multiband images, i.e., panchromatic, multispectral, or hyperspectral images, belonging to the same scene. We use the well-known forward observation and linear mixture models with Gaussian perturbations to formulate the maximum-likelihood estimator of the endmember abundance matrix of the fused image. We calculate the Fisher information matrix for this estimator and examine the conditions for its uniqueness. We use a vector total-variation penalty term together with nonnegativity and sum-to-one constraints on the endmember abundances to regularize the derived maximum-likelihood estimation problem. The regularization exploits the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, and copes with the potential ill-posedness of the fusion problem. We solve the resultant convex optimization problem using the alternating direction method of multipliers. We utilize the circular convolution theorem in conjunction with the fast Fourier transform to alleviate the computational complexity of the proposed algorithm. Experiments with multiband images constructed from real hyperspectral datasets reveal the superior performance of the proposed algorithm in comparison with state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images.
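
    The abstract's key computational trick is replacing large blurring matrices with FFT-domain multiplications via the circular convolution theorem. Below is a minimal Python sketch of that step, assuming a spatially invariant blur; the kernel, image size, and all function names are illustrative, not the authors' code.

    import numpy as np

    def psf_to_otf(psf, shape):
        """Zero-pad a point-spread function to the image size, center it
        at the origin, and return its 2-D DFT (the transfer function)."""
        pad = np.zeros(shape)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        # circularly shift so the PSF center sits at pixel (0, 0)
        pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))
        return np.fft.fft2(pad)

    def circular_blur(image, otf):
        """Apply the blur as a circular convolution in O(N log N)."""
        return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

    # usage: blur one 256x256 band with a 9x9 uniform kernel
    band = np.random.rand(256, 256)
    otf = psf_to_otf(np.ones((9, 9)) / 81.0, band.shape)
    blurred = circular_blur(band, otf)

    Inside an ADMM solver, this diagonalization lets each linear subproblem be solved exactly in the Fourier domain instead of by expensive matrix inversion.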

    Hierarchical fusion using vector quantization for visualization of hyperspectral images

    Visualizing hyperspectral images by combining the data from multiple sensors is a major challenge because of the huge size of the data set. Efficient image fusion is a key first step for this task. To make the approach computationally efficient and to accommodate a large number of image bands, we propose a hierarchical fusion scheme based on vector quantization and bilateral filtering. The consecutive image bands in the hyperspectral data cube exhibit a high degree of feature similarity due to the contiguous and narrow nature of the hyperspectral sensor bands. Exploiting this redundancy in the data, we fuse neighboring images at every level of the hierarchy. As the redundancy between the images is very high at the first level, we use a powerful compression tool, vector quantization, to fuse each group. From the second level onwards, each group is fused using bilateral filtering. While vector quantization removes redundancy, the bilateral filter retains even the minor details that exist in the individual images. The hierarchical fusion scheme helps in accommodating a large number of hyperspectral image bands. It also facilitates mid-band visualization of a subset of the hyperspectral image cube. A quantitative performance analysis shows the effectiveness of the proposed method.
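
    The hierarchy described above is easy to picture as pairwise fusion repeated level by level. The following Python sketch fuses neighboring bands with a bilateral-filter-based detail weight at every level; the paper's first-level vector-quantization step is not reproduced, and OpenCV's bilateralFilter together with all parameter values are assumptions for illustration.

    import numpy as np
    import cv2

    def fuse_pair(a, b, d=9, sigma_color=0.1, sigma_space=5.0):
        """Fuse two bands, weighting each pixel by its local detail
        (band minus its bilateral-smoothed version)."""
        wa = np.abs(a - cv2.bilateralFilter(a, d, sigma_color, sigma_space)) + 1e-6
        wb = np.abs(b - cv2.bilateralFilter(b, d, sigma_color, sigma_space)) + 1e-6
        return (wa * a + wb * b) / (wa + wb)

    def hierarchical_fuse(bands):
        """Fuse a list of float32 bands pairwise, level by level."""
        while len(bands) > 1:
            bands = [fuse_pair(bands[i], bands[i + 1])
                     if i + 1 < len(bands) else bands[i]
                     for i in range(0, len(bands), 2)]
        return bands[0]

    # usage: 16 synthetic 128x128 bands -> one grayscale visualization
    cube = [np.random.rand(128, 128).astype(np.float32) for _ in range(16)]
    vis = hierarchical_fuse(cube)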

    Robust fusion of multi-band images with different spatial and spectral resolutions for change detection

    Archetypal scenarios for change detection generally consider two images acquired through sensors of the same modality. However, in some specific cases such as emergency situations, the only images available may be those acquired through different kinds of sensors. More precisely, this paper addresses the problem of detecting changes between two multiband optical images characterized by different spatial and spectral resolutions. This sensor dissimilarity introduces additional issues in the context of operational change detection. To alleviate these issues, classical change detection methods are applied after independent preprocessing steps (e.g., resampling) used to bring the pair of observed images to the same spatial and spectral resolutions. Nevertheless, these preprocessing steps tend to throw away relevant information. Conversely, in this paper, we propose a method that more effectively uses the available information by modeling the two observed images as spatial and spectral versions of two (unobserved) latent images characterized by the same high spatial and high spectral resolutions. As they cover the same scene, these latent images are expected to be globally similar except for possible changes at sparse spatial locations. Thus, the change detection task is envisioned through a robust multiband image fusion method, which enforces the differences between the estimated latent images to be spatially sparse. This robust fusion problem is formulated as an inverse problem, which is iteratively solved using an efficient block-coordinate descent algorithm. The proposed method is applied to real panchromatic, multispectral, and hyperspectral images with simulated realistic and real changes. A comparison with state-of-the-art change detection methods demonstrates the accuracy of the proposed strategy.
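
    A minimal Python sketch of the sparsity mechanism in this model: the difference between the two latent images is soft-thresholded, so only genuinely changed pixels survive into the change map. The full block-coordinate descent also re-estimates both latent images from the observed data at each iteration; that step is omitted here, and all names and the threshold value are assumptions.

    import numpy as np

    def soft_threshold(x, tau):
        """Proximal operator of tau * ||x||_1 (element-wise shrinkage)."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def sparse_change_map(latent1, latent2, tau=0.05):
        """Per-pixel change energy after sparsifying the latent difference."""
        delta = soft_threshold(latent1 - latent2, tau)  # bands x H x W
        return np.sqrt((delta ** 2).sum(axis=0))        # H x W map

    # usage: two synthetic latent cubes differing in one small region
    x1 = np.random.rand(4, 64, 64) * 0.01
    x2 = x1.copy()
    x2[:, 20:30, 20:30] += 0.5          # a simulated change
    change = sparse_change_map(x1, x2)  # large only in the changed block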

    Real-Time Full Color Multiband Night Vision


    High resolution whole brain diffusion imaging at 7 T for the Human Connectome Project

    Mapping structural connectivity in healthy adults for the Human Connectome Project (HCP) benefits from high quality, high resolution, multiband (MB)-accelerated whole brain diffusion MRI (dMRI). Acquiring such data at ultrahigh fields (7 T and above) can improve the intrinsic signal-to-noise ratio (SNR), but suffers from shorter T2 and T2* relaxation times, increased B1+ inhomogeneity (resulting in signal loss in cerebellar and temporal lobe regions), and increased power deposition (i.e., specific absorption rate (SAR)), thereby limiting our ability to reduce the repetition time (TR). Here, we present recent developments and optimizations in 7 T image acquisitions for the HCP that allow us to efficiently obtain high quality, high resolution whole brain in-vivo dMRI data at 7 T. These data show spatial details typically seen only in ex-vivo studies and complement the already very high quality 3 T HCP data acquired in the same subjects. The advances are the result of intensive pilot studies aimed at mitigating the limitations of dMRI at 7 T. The data quality and methods described here are representative of the datasets that will be made freely available to the community in 2015.

    Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding

    Classifying single image patches is important in many different applications, such as road detection or scene understanding. In this paper, we present convolutional patch networks, which are convolutional networks learned to distinguish different image patches and which can be used for pixel-wise labeling. We also show how to incorporate spatial information of the patch as an input to the network, which allows for learning spatial priors for certain categories jointly with an appearance model. In particular, we focus on road detection and urban scene understanding, two application areas where we are able to achieve state-of-the-art results on the KITTI as well as on the LabelMeFacade dataset. Furthermore, our paper offers a guideline for people working in the area and desperately wandering through all the painstaking details that render training CNs on image patches extremely difficult. (VISAPP 2015 paper)
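
    A minimal PyTorch sketch, not the authors' architecture, of the paper's central idea: the patch's normalized (x, y) position in the full image is concatenated to the flattened convolutional features, so the classifier can learn a spatial prior (e.g., road pixels tend to lie low in the frame) jointly with appearance. Layer sizes and the two-class output are illustrative assumptions.

    import torch
    import torch.nn as nn

    class PatchNetWithSpatialPrior(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 12
                nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 5
            )
            # 32 * 5 * 5 conv features + 2 spatial coordinates
            self.classifier = nn.Sequential(
                nn.Linear(32 * 5 * 5 + 2, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, patches, xy):
            """patches: (B, 3, 28, 28); xy: (B, 2) positions in [0, 1]."""
            f = self.features(patches).flatten(1)
            return self.classifier(torch.cat([f, xy], dim=1))

    # usage: a batch of 8 patches with their normalized image coordinates
    net = PatchNetWithSpatialPrior()
    logits = net(torch.randn(8, 3, 28, 28), torch.rand(8, 2))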

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    The application of multi-sensor fusion technology has drawn considerable industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets by considering multiple sensors, and they can achieve a detailed description of the environment and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Each article emphasizes one or more of three facets: architectures, algorithms, and applications. The published papers range from fundamental theoretical analyses to demonstrations of their application to real-world problems.

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, it is possible to integrate the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for the information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have expanded along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, supplying sufficient detail and references.