2,282 research outputs found

    Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps

    Hyperspectral cameras provide unique spectral signatures that can consistently distinguish materials, which makes them useful for surveillance tasks. In this paper, we propose a novel real-time hyperspectral likelihood maps-aided tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving-object tracking system generally consists of registration, object detection, and tracking modules. We focus on the target detection part and remove the need to build offline classifiers or tune a large number of hyperparameters; instead, we learn a generative target model online for hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method combines likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that HLT not only outperforms all established fusion methods but is also on par with current state-of-the-art hyperspectral target tracking frameworks.
    Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 201
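
    To make the band-weighting idea concrete, below is a minimal sketch (not the authors' exact HLT method) that fuses per-band likelihood maps by weighting each band according to the foreground/background mean separation it achieves; the band count, map sizes, known foreground mask, and the specific weighting rule are illustrative assumptions.

    # Minimal sketch of adaptive likelihood-map fusion: weight each spectral
    # band by how well it separates mean foreground and background likelihoods.
    import numpy as np

    def fuse_likelihood_maps(maps, fg_mask):
        """maps: (B, H, W) per-band likelihood maps; fg_mask: (H, W) bool."""
        margins = np.array([m[fg_mask].mean() - m[~fg_mask].mean() for m in maps])
        w = np.maximum(margins, 0.0)                    # drop unhelpful bands
        w = w / w.sum() if w.sum() > 0 else np.full(len(maps), 1.0 / len(maps))
        return np.tensordot(w, maps, axes=1)            # (H, W) fused map

    # Toy usage: 4 bands of 32x32 maps with a square foreground region; one
    # informative band should dominate the fused map.
    rng = np.random.default_rng(0)
    maps = rng.random((4, 32, 32))
    fg = np.zeros((32, 32), dtype=bool)
    fg[10:20, 10:20] = True
    maps[2][fg] += 0.5                                  # band 2 is informative
    fused = fuse_likelihood_maps(maps, fg)
    print(fused[fg].mean() - fused[~fg].mean())         # enlarged fg/bg margin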

    Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net

    Hyperspectral imaging can help characterize the properties of different materials better than traditional imaging systems. In practice, however, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate. In this paper, we propose a model-based deep learning approach that merges an HrMS image and an LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically, we construct a novel MS/HS fusion model that takes into account the observation models of the low-resolution images and the low-rankness of the HrHS image along its spectral mode. We then design an iterative algorithm to solve the model using the proximal gradient method and, by unfolding this algorithm, construct a deep network, called MS/HS Fusion Net, in which the proximal operators and model parameters are learned by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method, both visually and quantitatively, over state-of-the-art methods along this line of research.
    Comment: 10 pages, 7 figures
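
    The unrolling idea is easiest to see in the plain iteration that the network is built from. The sketch below runs the un-unfolded proximal gradient recursion for a simplified version of the fusion model; in the paper the proximal operator and model parameters are learned by CNNs, whereas here a soft-threshold stands in for the proximal operator, and the spectral response R and spatial downsampling D are assumed known.

    # Proximal gradient iteration for a simplified MS/HS fusion model:
    # minimize ||X R - Ym||_F^2 + ||D X - Yh||_F^2 + a sparsity surrogate.
    import numpy as np

    def soft_threshold(X, tau):                          # stand-in prox
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def fuse(Ym, Yh, R, D, steps=200, lr=0.1, tau=1e-3):
        """Ym: (N, b) HrMS; Yh: (n, B) LrHS; R: (B, b); D: (n, N) -> (N, B)."""
        X = D.T @ Yh                                     # crude initialization
        for _ in range(steps):
            grad = (X @ R - Ym) @ R.T + D.T @ (D @ X - Yh)
            X = soft_threshold(X - lr * grad, lr * tau)  # gradient + prox step
        return X

    # Toy usage: N=64 pixels, B=8 bands, b=3 MS channels, 4x downsampling.
    rng = np.random.default_rng(1)
    N, n, B, b = 64, 16, 8, 3
    X_true = rng.random((N, B))
    R = rng.random((B, b))
    D = np.kron(np.eye(n), np.full((1, N // n), n / N))  # block-averaging
    Xh = fuse(X_true @ R, D @ X_true, R, D)
    print(np.linalg.norm(D @ Xh - D @ X_true) / np.linalg.norm(D @ X_true))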

    Quantum-inspired computational imaging

    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurements are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable, and robust data processing, has spurred activity with notable results in the domain of low-light-flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and the data analysis tools.
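
    As a toy illustration of the photon-starved regime surveyed here (not an example from the paper): each frame of a low-flux sensor records Poisson photon counts, and averaging K frames gives the per-pixel maximum-likelihood flux estimate, with error shrinking like 1/sqrt(K). The synthetic scene and flux level below are assumptions.

    # Poisson photon counting: frame averaging as the per-pixel flux MLE.
    import numpy as np

    rng = np.random.default_rng(2)
    scene = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32)) / 2
    flux = 0.2 * scene                  # ~0.1 photons/pixel/frame on average
    for K in (1, 10, 100, 1000):
        frames = rng.poisson(flux, size=(K, *flux.shape))
        est = frames.mean(axis=0)       # maximum-likelihood flux estimate
        rmse = np.sqrt(((est - flux) ** 2).mean())
        print(f"K={K:5d} frames  RMSE={rmse:.4f}")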

    An Analysis of multimodal sensor fusion for target detection in an urban environment

    This work makes a compelling case for simulation as a tool for designing cutting-edge remote sensing systems, since it can generate the sheer volume of data required for a reasonable trade study. The generalized approach presented here allows multimodal system designers to tailor target and sensor parameters for their particular scenarios of interest via synthetic image generation tools, ensuring that resources are best allocated while sensors are still in the design phase. Additionally, sensor operators can use the customizable process showcased here to optimize image collection parameters for existing sensors. In the remote sensing community, polarimetric capabilities are often seen as a tool without a widely accepted mission. This study proposes incorporating a polarimetric and spectral sensor in a multimodal architecture to improve target detection performance in an urban environment. Two novel multimodal fusion algorithms are proposed: one at the pixel level and another at the decision level. A synthetic urban scene is rendered for 355 unique combinations of illumination condition and sensor viewing geometry with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, then validated to ensure the presence of sufficient background clutter. The utility of polarimetric information is shown to vary with the sun-target-sensor geometry, and the decision-level fusion algorithm is shown to generally outperform the pixel-level algorithm. The results suggest that polarimetric information can be leveraged to restore the capabilities of a spectral sensor forced to image under less-than-ideal circumstances.
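
    The distinction between the two fusion levels can be sketched compactly. The toy example below (not the thesis's actual algorithms) contrasts pixel-level fusion, which stacks spectral and polarimetric features before a single detector, with decision-level fusion, which runs a detector per modality and then combines the normalized scores; the feature dimensions and the whitened matched-filter detector are illustrative assumptions.

    # Pixel-level vs. decision-level fusion of spectral + polarimetric features.
    import numpy as np

    def matched_filter(cube, target):
        """Whitened matched-filter scores for an (N, d) pixel-feature matrix."""
        mu = cube.mean(axis=0)
        cov = np.cov(cube, rowvar=False) + 1e-6 * np.eye(cube.shape[1])
        return (cube - mu) @ np.linalg.solve(cov, target - mu)

    def pixel_level(spec, pol, t_s, t_p):
        stacked = np.hstack([spec, pol])                 # fuse features first
        return matched_filter(stacked, np.concatenate([t_s, t_p]))

    def decision_level(spec, pol, t_s, t_p, alpha=0.5):
        s, p = matched_filter(spec, t_s), matched_filter(pol, t_p)
        s, p = (s - s.mean()) / s.std(), (p - p.mean()) / p.std()
        return alpha * s + (1 - alpha) * p               # combine decisions

    # Toy scene: 500 background pixels plus 5 target pixels per modality.
    rng = np.random.default_rng(3)
    t_s, t_p = np.full(6, 2.0), np.full(2, 2.0)
    spec = np.vstack([rng.normal(size=(500, 6)), rng.normal(t_s, 0.3, (5, 6))])
    pol = np.vstack([rng.normal(size=(500, 2)), rng.normal(t_p, 0.3, (5, 2))])
    for name, sc in (("pixel   ", pixel_level(spec, pol, t_s, t_p)),
                     ("decision", decision_level(spec, pol, t_s, t_p))):
        print(name, "targets in top-5:", int((np.argsort(sc)[-5:] >= 500).sum()))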

    Fusion of multispectral and hyperspectral images based on sparse representation

    This paper presents an algorithm based on sparse representation for fusing hyperspectral and multispectral images. The observed images are assumed to be obtained by spectral or spatial degradations of the high-resolution hyperspectral image to be recovered. Based on this forward model, the fusion process is formulated as an inverse problem whose solution is determined by optimizing an appropriate criterion. To incorporate additional spatial information within the objective criterion, a regularization term is carefully designed, relying on a sparse decomposition of the scene on a set of dictionaries. The dictionaries and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved by iteratively optimizing with respect to the target image (using the alternating direction method of multipliers, ADMM) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed fusion method when compared with the state of the art.
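
    A minimal sketch of this alternation is given below, on a 1-D toy signal rather than a hyperspectral cube, and with the ADMM image update replaced by a direct linear solve of the same quadratic subproblem; the downsampling operator H, the DCT dictionary, and the fixed support of active coefficients are illustrative assumptions.

    # Alternating optimization for sparse-representation fusion (1-D toy):
    # minimize ||y - H x||^2 + lam * ||x - D_s a||^2 over image x and codes a.
    import numpy as np

    n, m, k = 64, 16, 5
    rng = np.random.default_rng(4)
    D = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)
    D /= np.linalg.norm(D, axis=0)               # normalized DCT dictionary
    support = np.arange(k)                       # fixed support of active atoms
    x_true = D[:, support] @ rng.normal(size=k)  # ground truth is truly sparse
    H = np.kron(np.eye(m), np.full((1, n // m), m / n))  # 4x downsampling
    y = H @ x_true                               # observed low-resolution image

    lam, x = 1.0, H.T @ y                        # crude initialization
    Ds = D[:, support]
    for _ in range(50):
        a = np.linalg.lstsq(Ds, x, rcond=None)[0]        # code x on the support
        x = np.linalg.solve(H.T @ H + lam * np.eye(n),   # image update
                            H.T @ y + lam * (Ds @ a))
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))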