
    Robust fusion of multi-band images with different spatial and spectral resolutions for change detection

    Archetypal scenarios for change detection generally consider two images acquired through sensors of the same modality. However, in some specific cases, such as emergency situations, the only images available may be those acquired through different kinds of sensors. More precisely, this paper addresses the problem of detecting changes between two multiband optical images characterized by different spatial and spectral resolutions. This sensor dissimilarity introduces additional issues in the context of operational change detection. To alleviate these issues, classical change detection methods are applied after independent preprocessing steps (e.g., resampling) used to bring the pair of observed images to the same spatial and spectral resolutions. Nevertheless, these preprocessing steps tend to throw away relevant information. Conversely, in this paper, we propose a method that more effectively uses the available information by modeling the two observed images as spatially and spectrally degraded versions of two (unobserved) latent images characterized by the same high spatial and high spectral resolutions. As they cover the same scene, these latent images are expected to be globally similar except for possible changes at sparse spatial locations. Thus, the change detection task is envisioned through a robust multiband image fusion method, which enforces the differences between the estimated latent images to be spatially sparse. This robust fusion problem is formulated as an inverse problem, which is iteratively solved using an efficient block-coordinate descent algorithm. The proposed method is applied to real panchromatic, multispectral, and hyperspectral images with simulated realistic and real changes. A comparison with state-of-the-art change detection methods demonstrates the accuracy of the proposed strategy.
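    The spatial-sparsity constraint at the heart of this approach can be illustrated with a minimal sketch (assuming NumPy; the toy latent images, the threshold value, and all variable names below are hypothetical illustrations, not the authors' implementation): the pixel-wise difference between the two estimated latent images is soft-thresholded so that only genuinely changed locations survive in the change map.

```python
import numpy as np

def soft_threshold(d, lam):
    """Pixel-wise soft-thresholding: shrinks small differences to zero,
    enforcing spatial sparsity of the estimated change map."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

# Toy latent images (hypothetical): identical except for one changed pixel.
x1 = np.zeros((4, 4))
x2 = x1.copy()
x2[2, 2] = 5.0

change = soft_threshold(x2 - x1, lam=1.0)
mask = change != 0  # sparse change map: True only where a change survived
```

Small residual differences (noise, slight misregistration) fall below the threshold and are zeroed out, which is the intuition behind penalizing the difference image with a sparsity-promoting term.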

    Fusing Multiple Multiband Images

    We consider the problem of fusing an arbitrary number of multiband images, i.e., panchromatic, multispectral, or hyperspectral images, belonging to the same scene. We use the well-known forward observation and linear mixture models with Gaussian perturbations to formulate the maximum-likelihood estimator of the endmember abundance matrix of the fused image. We calculate the Fisher information matrix for this estimator and examine the conditions for its uniqueness. We use a vector total-variation penalty term together with nonnegativity and sum-to-one constraints on the endmember abundances to regularize the derived maximum-likelihood estimation problem. The regularization facilitates exploiting the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, and helps cope with the potential ill-posedness of the fusion problem. We solve the resultant convex optimization problem using the alternating direction method of multipliers. We utilize the circular convolution theorem in conjunction with the fast Fourier transform to alleviate the computational complexity of the proposed algorithm. Experiments with multiband images constructed from real hyperspectral datasets reveal the superior performance of the proposed algorithm in comparison with state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images.
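    The circular convolution theorem mentioned above can be sketched as follows (a minimal illustration assuming NumPy; the signal and kernel are toy values, not the paper's data): a circular convolution becomes an element-wise product in the Fourier domain, replacing an O(n²) sum with O(n log n) FFTs.

```python
import numpy as np

def circ_conv_fft(x, h):
    """Circular convolution via the circular convolution theorem:
    multiply the two spectra element-wise, then inverse-transform."""
    n = len(x)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))

def circ_conv_direct(x, h):
    """Direct O(n^2) circular convolution, for comparison."""
    n = len(x)
    return np.array([sum(x[k] * h[(m - k) % n] for k in range(n))
                     for m in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5, 0.0, 0.0])  # toy blurring kernel
y = circ_conv_fft(x, h)             # matches circ_conv_direct(x, h)
```

In the fusion setting this trick lets the spatially invariant blur operators be applied and inverted cheaply in the frequency domain at every ADMM iteration.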

    Analyzing the phenologic dynamics of kudzu (Pueraria montana) infestations using remote sensing and the normalized difference vegetation index.

    Non-native invasive species are one of the major threats to worldwide ecosystems. Kudzu (Pueraria montana) is a fast-growing vine native to Asia that has invaded regions of the United States, making management of this species an important issue. Normalized difference vegetation index (NDVI) values for the years 2000 to 2015 were estimated from data collected by the Landsat and MODIS platforms for three infestation sites in Kentucky. The STARFM image-fusion algorithm was used to combine Landsat- and MODIS-derived NDVI into time series with 30 m spatial resolution and 16-day temporal resolution. The fused time series was decomposed using the Breaks for Additive Season and Trend (BFAST) algorithm. Results showed that fused NDVI could be estimated for the three sites but could not detect changes over time. Combining this method with field data collection and other types of analyses may be useful for kudzu monitoring and management.
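    NDVI itself is a simple band ratio, (NIR − Red) / (NIR + Red), bounded in [−1, 1]; a minimal sketch (assuming NumPy; the reflectance values are illustrative, not taken from the study):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    Dense green vegetation reflects NIR strongly and absorbs red."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

veg = ndvi(0.5, 0.08)   # vigorous vegetation, roughly 0.72
bare = ndvi(0.3, 0.25)  # bare soil, close to 0
```

The same function applies band-wise to whole Landsat or MODIS reflectance arrays, which is what makes per-pixel NDVI time series straightforward to build once the two sensors are fused to a common grid.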

    Enhanced urban landcover classification for operational change detection study using very high resolution remote sensing data

    This study presents an operational case of advancements in urban land cover classification and change detection using very high resolution spatial and multispectral information from a 4-band QuickBird (QB) and 8-band WorldView-2 (WV-2) image sequence. Our study accentuates a quantitative, pixel-based, image difference approach for operational change detection using very high resolution pansharpened QB and WV-2 images captured over the city of San Francisco, California, USA (37° 44' 30" N, 122° 31' 30" W and 37° 41' 30" N, 122° 20' 30" W). In addition to the standard QB image, we compiled three multiband images from the eight pansharpened WV-2 bands: (1) a multiband image from the four traditional spectral bands, i.e., Blue, Green, Red and near-infrared 1 (NIR1) (henceforth referred to as "QB equivalent WV-2"), (2) a multiband image from the four new spectral bands, i.e., Coastal, Yellow, Red Edge and NIR2 (henceforth referred to as "new band WV-2"), and (3) a multiband image consisting of all four traditional and four new bands (henceforth referred to as "standard WV-2"). All four multiband images were classified using a support vector machine (SVM) classifier into the four most abundant land cover classes, viz. hard surface, vegetation, water and shadow. Classification accuracy was assessed using a random selection of 356 test points. Land cover classifications on the "standard QB" image (kappa coefficient, κ = 0.93), the "QB equivalent WV-2" image (κ = 0.97), and the "new band WV-2" image (κ = 0.97) yielded overall accuracies of 96.31%, 98.03% and 98.31%, respectively, while the "standard WV-2" image (κ = 0.99) yielded an improved overall accuracy of 99.18%. It is concluded that the addition of the four new spectral bands to the four traditional bands improved the discrimination of land cover targets, owing to the richer spectral characteristics of the WV-2 satellite.
Consequently, to test whether the improved classification carries over to an operational change detection application, a comparative assessment of the transitions of the various land cover classes in the three WV-2 images relative to the "standard QB" image was carried out using the image difference method. For the waterbody class, no significant transition was observed in any of the three WorldView-2 images, whereas the hard surface class showed the lowest transition in the "standard WV-2" image and the highest in the "new band WV-2" image. The most significant transition occurred in the vegetation class in all three images, showing a considerable positive change (increase) in the "standard WV-2" image (0.31 sq. km) and negative changes (decreases) in the other two images (-0.12 sq. km for the "QB equivalent WV-2" image and -31.15 sq. km for the "new band WV-2" image). A similar pattern was observed for the shadow class, except that the transition from shadow to other classes was negative in all three WV-2 images. This can be attributed to the fact that the "standard QB" image had more shadow area (owing to acquisition time and sun position) than the WV-2 image, meaning that all the WV-2 band combinations succeeded in extracting features hidden below the shadow in the "standard QB" image. These trends indicate that the overall band-wise transition in land cover classes is more precise for the "standard WV-2" image than for the other two. We note that although the "QB equivalent WV-2" image had narrower bandwidths than the "standard QB" image, the observed vegetation change was not as prominent as in the other two images; at the same time, transitions in hard surface and waterbody were discerned more efficiently than in the "new band WV-2" image.
The addition of the new bands in WV-2 enabled more effective vegetation analysis, so the vegetation transition results from the "new band WV-2" image were on par with those of the "standard WV-2" image, with comparatively lower transitions in the other classes, underscoring the importance of the newly added bands in WV-2 imagery. In a nutshell, the incorporation of the new bands, along with the narrower Red, Green, Blue and Near Infrared-1 bands, enhances the potential of WV-2 imagery for change detection and other feature extraction studies.
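The kappa coefficients quoted above can be derived from a classification confusion matrix; a minimal sketch (assuming NumPy; the two-class confusion matrix below is hypothetical, not the study's actual error matrix):

```python
import numpy as np

def cohen_kappa(confusion):
    """Cohen's kappa from a square confusion matrix:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # agreement expected by chance
    return (po - pe) / (1.0 - pe)

# Hypothetical 2-class matrix (rows: reference labels, cols: predictions).
kappa = cohen_kappa([[45, 5], [5, 45]])  # 90% accuracy -> kappa = 0.8
```

Unlike raw overall accuracy, kappa discounts agreement that would arise by chance from the class proportions, which is why the study reports both figures side by side.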

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.

    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential, encompassing a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst the various approaches proposed to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    The application of multi-sensor fusion technology has drawn a great deal of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets by considering multiple sensors, and can achieve a detailed environment description and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrate their application to real-world problems.