
    Visible and NIR Image Fusion Algorithm Based on Information Complementarity

    Visible and near-infrared (NIR) band sensors provide images that capture complementary spectral radiation from a scene, and visible-NIR fusion aims to exploit their spectral properties to enhance image quality. However, current visible-NIR fusion algorithms neither take full advantage of these spectral properties nor preserve information complementarity, which results in color distortion and artifacts. This paper therefore designs a complementary fusion model at the level of physical signals. First, to distinguish noise from useful information, two layers obtained with a weight-guided filter and a guided filter yield texture and edge layers, respectively. Second, to generate the initial visible-NIR complementarity weight map, the difference maps of the visible and NIR images are filtered by an extended-DoG filter. The salient region of the NIR night-time compensation then guides the initial complementarity weight map through an arctan function. Finally, the fused images are generated from the complementarity weight maps of the visible and NIR images, respectively. Experimental results demonstrate that the proposed algorithm exploits both the spectral properties and the information complementarity, avoids color distortion while maintaining naturalness, and outperforms the state of the art.
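The weight-map construction described in this abstract (difference map of the two modalities, DoG-style band-pass filtering, arctan squashing) can be illustrated with a minimal NumPy sketch. All function names are illustrative, a separable box blur stands in for the paper's guided and extended-DoG filters, and the two-layer texture/edge decomposition and night-time compensation are omitted:

```python
import numpy as np

def box_blur(img, r):
    # separable box blur standing in for Gaussian/guided smoothing
    k = 2 * r + 1
    ker = np.ones(k) / k
    p = np.pad(img, r, mode='edge')
    t = np.apply_along_axis(lambda v: np.convolve(v, ker, mode='valid'), 0, p)
    return np.apply_along_axis(lambda v: np.convolve(v, ker, mode='valid'), 1, t)

def complementarity_weight(vis, nir, r_small=1, r_large=4):
    # band-pass (DoG-like) response of the visible-NIR difference map
    diff = vis.astype(float) - nir.astype(float)
    dog = box_blur(diff, r_small) - box_blur(diff, r_large)
    # arctan squashes the response into a soft weight in (0, 1)
    return 0.5 + np.arctan(dog) / np.pi

def fuse(vis, nir):
    w = complementarity_weight(vis, nir)
    return w * vis + (1.0 - w) * nir
```

With identical inputs the weight settles at 0.5 and the fusion returns the input unchanged, which is a convenient sanity check.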

    Fast filtering image fusion

    © 2017 SPIE and IS&T. Image fusion aims to exploit complementary information in multimodal images to create a single composite image with extended information content. An image fusion framework based on fast spatial-domain filtering is proposed for different types of multimodal images. First, the image gradient magnitude is used to detect contrast and image sharpness. Second, a fast morphological closing operation is performed on the gradient magnitude to bridge gaps and fill holes. Third, a weight map is obtained from the multimodal gradient magnitudes and refined by a fast structure-preserving filter. Finally, the fused image is composed using a weighted-sum rule. Experimental results on several groups of images show that the proposed fast fusion method outperforms state-of-the-art methods while running up to four times faster than the fastest baseline algorithm.
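The four-step framework above (gradient magnitude, morphological closing, weight map, weighted sum) maps fairly directly onto array operations. The sketch below is a simplified miniature, not the paper's implementation: a naive sliding-window closing replaces the fast morphological operator, and the structure-preserving refinement of the weight map is folded into the closing for brevity:

```python
import numpy as np

def grad_mag(img):
    # image gradient magnitude as a contrast/sharpness measure
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def _filt(img, size, fn):
    # naive square-window min/max filter (edge-padded)
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    windows = [p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(size) for j in range(size)]
    return fn(np.stack(windows), axis=0)

def closing(img, size=3):
    # grayscale morphological closing: dilation followed by erosion
    return _filt(_filt(img, size, np.max), size, np.min)

def fuse(img_a, img_b, size=3, eps=1e-12):
    wa = closing(grad_mag(img_a), size)
    wb = closing(grad_mag(img_b), size)
    w = wa / (wa + wb + eps)          # normalized weight map
    return w * img_a + (1.0 - w) * img_b   # weighted-sum rule
```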

    A mask-based approach for the geometric calibration of thermal-infrared cameras

    Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp, and it is comparatively inaccurate and difficult to execute. Additionally, the software toolkits provided for calibration are either unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast, requiring no flood lamp, is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm built on the maximally stable extremal region (MSER) detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that the proposed mask achieves a mean reprojection error up to 78% lower than that of a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera, multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
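The quality metric quoted above, mean reprojection error, is straightforward to compute once intrinsics and extrinsics are estimated. A minimal pinhole-model sketch follows; names are illustrative, lens distortion is omitted, and the paper's MSER-based point detection is out of scope here:

```python
import numpy as np

def project(points_3d, K, R, t):
    # pinhole projection x = K (R X + t); no lens distortion in this sketch
    cam = points_3d @ R.T + t
    uv = cam[:, :2] / cam[:, 2:3]          # perspective divide
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.stack([fx * uv[:, 0] + cx, fy * uv[:, 1] + cy], axis=1)

def mean_reprojection_error(observed, points_3d, K, R, t):
    # average Euclidean distance between detected and reprojected points
    proj = project(points_3d, K, R, t)
    return float(np.mean(np.linalg.norm(proj - observed, axis=1)))
```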

    A Low-cost Depth Imaging Mobile Platform for Canola Phenotyping

    To meet the high demand for supporting and accelerating progress in the breeding of novel traits, plant scientists and breeders have to measure a large number of plants and their characteristics accurately. A variety of imaging methodologies are being deployed to acquire data for quantitative studies of complex traits. When applied to a large number of plants such as canola, however, building a complete three-dimensional (3D) model is time-consuming and expensive for high-throughput phenotyping and produces an enormous amount of data. In some contexts, a full rebuild of entire plants may not be necessary. In recent years, many 3D plant phenotyping techniques requiring high cost and large-scale facilities have been introduced to extract plant phenotypic traits, but such applications may be limited by research budgets and varying environments. This thesis proposes a low-cost, depth-based, high-throughput phenotyping mobile platform to measure canola plant traits across environments. Methods included detecting and counting canola branches and seedpods, monitoring canola growth stages, and fusing color images to improve image resolution and achieve higher accuracy. Canola plant traits were examined in both controlled-environment and field scenarios. These methodologies were enhanced by different imaging techniques. Results revealed that this phenotyping mobile platform can be used to investigate canola plant traits across environments with high accuracy. The results also show that the algorithms for counting canola branches and seedpods enable crop researchers to analyze the relationship between canola genotypes and phenotypes and to estimate crop yields. Beyond the counting algorithms, the fusion techniques give plant breeders more convenient access to plant characteristics by improving the definition and resolution of color images. These findings add value to automated, low-cost, depth-based, high-throughput phenotyping for canola plants.
    These findings also contribute a novel multi-focus image fusion method, based on visual saliency maps and a gradient-domain fast guided filter, whose performance is competitive with, and in some cases exceeds, other state-of-the-art methods. The proposed platform and counting algorithms can be applied not only to canola but also to other closely related species, and the proposed fusion technique can be extended to other fields, such as remote sensing and medical image fusion.
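A choose-max multi-focus fusion along these lines can be sketched with an absolute-Laplacian focus measure standing in for the visual saliency maps; the gradient-domain fast guided filter refinement of the decision map is omitted, so treat this as a skeletal illustration only:

```python
import numpy as np

def laplacian_abs(img):
    # |Laplacian| as a simple per-pixel focus/saliency measure
    p = np.pad(img.astype(float), 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def fuse_multifocus(a, b):
    # per pixel, keep the source whose neighborhood looks more in focus
    take_a = laplacian_abs(a) >= laplacian_abs(b)
    return np.where(take_a, a, b)
```

In a full pipeline the boolean decision map would be smoothed (e.g. by a guided filter) before blending, to avoid blocky seams at focus boundaries.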

    Improved depth recovery in consumer depth cameras via disparity space fusion within cross-spectral stereo.

    We address the issue of improving depth coverage in consumer depth cameras based on the combined use of cross-spectral stereo and near infra-red structured light sensing. Specifically, we show that fusing disparity over these modalities within the disparity space image, prior to disparity optimization, facilitates the recovery of scene depth information in regions where structured light sensing fails. This joint approach, leveraging disparity information from both structured light and cross-spectral sensing, facilitates the recovery of global scene depth comprising both texture-less object depth, where conventional stereo otherwise fails, and highly reflective object depth, where structured light (and similar active sensing) commonly fails. The proposed solution is illustrated using dense gradient feature matching and shown to outperform prior approaches that use late-stage fusion of cross-spectral stereo depth as a facet of improved sensing for consumer depth cameras.
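Fusing disparity-space images (DSIs) before optimization, as described above, can be illustrated with a toy absolute-difference cost volume and winner-takes-all selection in place of the paper's dense gradient feature matching and global optimization; names and parameters are illustrative:

```python
import numpy as np

def disparity_space(left, right, max_disp):
    # DSI: per-disparity absolute-difference matching cost
    h, w = left.shape
    dsi = np.full((max_disp, h, w), np.inf)
    dsi[0] = np.abs(left - right)
    for d in range(1, max_disp):
        dsi[d, :, d:] = np.abs(left[:, d:] - right[:, :-d])
    return dsi

def fused_disparity(dsi_a, dsi_b):
    # element-wise minimum fuses the two modalities' cost volumes,
    # then a plain winner-takes-all picks the disparity per pixel
    fused = np.minimum(dsi_a, dsi_b)
    return np.argmin(fused, axis=0)
```

The key point is that the minimum is taken in disparity space: where one modality has no evidence (infinite cost), the other's costs carry the decision.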

    Two-stage fusion of thermal hyperspectral and visible RGB image by PCA and guided filter

    Nowadays, advanced remote sensing technology allows us to acquire multi-sensor and multi-resolution data of the same region. Fusing these data sources for classification remains a challenging problem. In this paper, we propose a novel algorithm for hyperspectral (HS) image pansharpening with two-stage guided filtering in the PCA (principal component analysis) domain. In the first stage, we downsample the high-resolution RGB image to the spatial resolution of the original low-resolution HS image and use the guided filter to transfer image details (e.g. edges) from the downsampled RGB image to the original HS image in the PCA domain. In the second stage, we upsample the resulting HS image from the first stage, again using the original high-resolution RGB image and the guided filter in the PCA domain. This yields a clear improvement over an earlier approach with one-stage guided filtering in the PCA domain. Experimental results on the fusion of a low spatial-resolution thermal-infrared HS image and a high spatial-resolution visible RGB image from the 2014 IEEE GRSS Data Fusion Contest are very encouraging.
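The core operation in both stages, guided filtering a principal component of the HS cube against an RGB-derived guide, can be sketched in NumPy. This is a single-stage miniature under simplifying assumptions (box-filter-based guided filter, a single grayscale guide, no resampling between resolutions), not the paper's two-stage method:

```python
import numpy as np

def _box(img, r):
    # separable box mean, the building block of the guided filter
    k = 2 * r + 1
    ker = np.ones(k) / k
    p = np.pad(img, r, mode='edge')
    t = np.apply_along_axis(lambda v: np.convolve(v, ker, mode='valid'), 0, p)
    return np.apply_along_axis(lambda v: np.convolve(v, ker, mode='valid'), 1, t)

def guided_filter(guide, src, r=2, eps=1e-4):
    # He et al.-style guided filter: src locally modeled as a*guide + b
    mI, mp = _box(guide, r), _box(src, r)
    cov = _box(guide * src, r) - mI * mp
    var = _box(guide * guide, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return _box(a, r) * guide + _box(b, r)

def transfer_details(hs_cube, guide):
    # project HS bands onto principal components, sharpen PC1 with the
    # guided filter driven by the RGB-derived guide, then invert the PCA
    h, w, bands = hs_cube.shape
    flat = hs_cube.reshape(-1, bands)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    pcs = (flat - mean) @ vt.T                 # scores, PC1 first
    pc1 = pcs[:, 0].reshape(h, w)
    pcs[:, 0] = guided_filter(guide, pc1).ravel()
    return (pcs @ vt + mean).reshape(h, w, bands)
```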

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are undertaken: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and applied to spectral narrow-band face images for comparison with conventional images. Physics-based weighted fusion and illumination-adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The performance of the selected bands exceeds that of conventional images by up to 15%.
    The significant performance improvement from distance-based band selection and complexity-guided distance-based band selection shows that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms prove independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image processing algorithms open a new avenue toward automatic face recognition systems with better recognition performance than conventional peer systems under varying illumination.
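Distance-based band selection can be illustrated generically. The sketch below assumes a plain Euclidean distance between band images and a greedy max-min selection rule; the dissertation defines its own quantitative metric and selection criteria, so this shows only the shape of the idea:

```python
import numpy as np

def select_bands(cube, k):
    # cube: (H, W, B) narrow-band stack; greedily keep the k bands that
    # are most dissimilar from those already selected (max-min criterion)
    b = cube.shape[-1]
    flat = cube.reshape(-1, b).T                    # one row per band
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    selected = [int(np.argmax(dist.sum(axis=1)))]   # seed: most distinctive band
    while len(selected) < k:
        rest = [i for i in range(b) if i not in selected]
        gains = [dist[i, selected].min() for i in rest]
        selected.append(rest[int(np.argmax(gains))])
    return sorted(selected)
```

Redundant bands (near-duplicates of an already-selected band) receive low gains and are skipped, which is the redundancy-removal behavior the abstract describes.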