
    A subpixel target detection algorithm for hyperspectral imagery

    The goal of this research is to develop a new algorithm for detecting subpixel-scale target materials in hyperspectral imagery. Signal decision theory is typically used to decide whether a target signal is present in random noise, which means the detection problem can be formalized mathematically as a statistical hypothesis test. In particular, since any target signature provided by airborne/spaceborne sensors is embedded in structured noise, such as background or clutter signatures, as well as broadband unstructured noise, the problem becomes more complicated, especially when the noise structure is unknown. The approach is based on the statistical hypothesis method known as the Generalized Likelihood Ratio Test (GLRT). The GLRT requires estimating the unknown parameters and assumes prior knowledge of two subspaces describing target variation and background variation, respectively. This research therefore consists of two parts: the implementation of the GLRT and the characterization of the two subspaces through new approaches. Results obtained from computer simulation, HYDICE imagery, and AVIRIS imagery show that this approach is feasible.
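    The GLRT machinery described above can be sketched numerically. The toy below builds synthetic target and background subspaces and compares a pixel's residual energy under the background-only and target-plus-background hypotheses; all matrices, dimensions, and names are illustrative assumptions, not the implementation developed in this research.

```python
import numpy as np

rng = np.random.default_rng(0)

L, p, q = 50, 3, 5              # bands, target-subspace dim, background-subspace dim
T = rng.normal(size=(L, p))     # target-variation subspace basis (synthetic)
B = rng.normal(size=(L, q))     # background/clutter subspace basis (synthetic)

def proj(A):
    """Orthogonal projector onto the column space of A."""
    return A @ np.linalg.pinv(A)

P_B = proj(B)                    # projector onto the background subspace
P_TB = proj(np.hstack([T, B]))   # projector onto the combined subspace
I = np.eye(L)

def glrt(x):
    """GLRT-style statistic: ratio of residual energies under H0 and H1."""
    e0 = x @ (I - P_B) @ x       # residual energy, background only (H0)
    e1 = x @ (I - P_TB) @ x      # residual energy, target + background (H1)
    return e0 / e1               # large values favor target presence

# A pixel containing a target component scores higher than a background pixel.
x_bg = B @ rng.normal(size=q) + 0.1 * rng.normal(size=L)
x_tgt = x_bg + T @ rng.normal(size=p)
print(glrt(x_tgt) > glrt(x_bg))
```

    Because the background subspace is nested in the combined subspace, the statistic is never below one; thresholding it yields the detection decision.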

    An efficient methodology to simulate mixed spectral signatures of land covers through Field Radiometry data

    An efficient methodology is proposed for simulating mixed spectral signatures of land covers from endmember data, using linear statistical modelling based on the least squares estimation approach. The optimal set of endmembers was obtained by in situ measurements with a GER 1500 field spectroradiometer. The use of new sub-pixel methods, based on statistics and certain "units of sampling" applied to the landscape, is also proposed. The resulting point estimates for these new units serve as the "observations", and all of them play a special role in simulating the final spectral signature. This methodology is used to simulate spectral signatures of a Mediterranean forest landscape near Madrid (Spain). Furthermore, the spectral signature model obtained through field radiometry data is correlated with corrected image data of the same zone provided by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor. The correlation studies support the efficiency of the methodology, and at the same time the results open new research directions.
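    The linear least-squares mixing model at the heart of this methodology can be illustrated with a small sketch. The endmember spectra, abundances, and noise level below are synthetic stand-ins for field-measured data, not the GER 1500 measurements themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic endmember matrix: each column is one endmember's spectrum
# over six bands (illustrative values, not field measurements).
E = np.array([[0.9, 0.1, 0.2],
              [0.8, 0.2, 0.3],
              [0.3, 0.7, 0.2],
              [0.2, 0.8, 0.4],
              [0.1, 0.3, 0.9],
              [0.2, 0.2, 0.8]])

# A mixed signature is a linear combination of endmembers (abundances sum to 1),
# plus a small noise term standing in for measurement error.
true_abund = np.array([0.5, 0.3, 0.2])
mixed = E @ true_abund + 0.005 * rng.normal(size=E.shape[0])

# Ordinary least squares recovers abundance estimates from the mixed signature.
est, *_ = np.linalg.lstsq(E, mixed, rcond=None)
print(np.round(est, 2))  # close to [0.5, 0.3, 0.2]
```

    Running the model forward (endmembers times abundances) simulates a mixed signature; the inverse least-squares step shows the same model supports unmixing.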

    Using Lidar to geometrically-constrain signature spaces for physics-based target detection

    A fundamental task when performing target detection on spectral imagery is ensuring that the target signature is in the same metric domain as the measured spectral data. Remotely sensed data are typically collected in digital counts and calibrated to radiance; that is, calibrated data have units of spectral radiance, while target signatures in the visible regime are commonly characterized in units of reflectance. A necessary precursor to running a target detection algorithm is converting the measured scene data and the target signature to the same domain. Atmospheric inversion, or compensation, is a well-known method for transforming measured scene radiance values into the reflectance domain. This method is mathematically trivial and computationally attractive, and it is most effective when illumination conditions are constant across a scene. However, when illumination conditions are not constant for a given scene, significant error may be introduced by applying the same linear inversion globally. In contrast to the inversion methodology, physics-based forward modeling approaches aim to predict the possible ways a target might appear in a scene using atmospheric and radiometric models. To fully encompass the possible target variability due to changing illumination levels, a target vector space is created. In addition to accounting for varying illumination, physics-based modeling approaches have a distinct advantage in that they can also incorporate target variability due to a variety of other sources, including adjacency effects, target orientation, and mixed pixels. Increasing the variability of the target vector space may be beneficial in a global sense, as it may allow for the detection of difficult targets, such as shadowed or partially concealed ones. However, expanding the target space may also introduce unnecessary confusion for a given pixel.
Furthermore, traditional physics-based approaches make certain assumptions that may be prudent only when passive spectral data for a scene are available. Common examples include the assumptions of a flat ground plane and pure target pixels. Many of these assumptions can be attributed to the lack of three-dimensional (3D) spatial information for the scene. If 3D spatial information were available, certain assumptions could be relaxed, allowing accurate geometric information to be fed to the physics-based model on a pixel-by-pixel basis. Doing so may effectively constrain the physics-based model, resulting in a pixel-specific target space with optimized variability and minimized confusion. This body of work explores using spatial information from a topographic Light Detection and Ranging (Lidar) system as a means to enhance the fidelity of physics-based models for spectral target detection. The incorporation of subpixel spatial information, relative to a hyperspectral image (HSI) pixel, provides valuable insight into plausible geometric configurations of the target, background, and illumination sources within a scene. Methods for estimating local geometry on a per-pixel basis are introduced; this spatial information is then fed into a physics-based model for the forward prediction of a target in radiance space. Target detection performance based on this spatially enhanced spectral target space is assessed relative to current state-of-the-art spectral algorithms.
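    The per-pixel local geometry estimation mentioned above can be illustrated with a common building block: a PCA plane fit that recovers a surface normal from a lidar point neighborhood. The synthetic point cloud and the function below are illustrative assumptions, not the method developed in this work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic lidar neighborhood: points on a tilted plane z = 0.5x + 0.2y,
# perturbed by small ranging noise.
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 0.005 * rng.normal(size=200)
pts = np.column_stack([xy, z])

def surface_normal(points):
    """Estimate the local surface normal as the direction of least variance
    of the centered point cloud (PCA plane fit via SVD)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]                     # right singular vector of least variance
    return n if n[2] > 0 else -n   # orient the normal upward

n = surface_normal(pts)

# Analytic normal of z = 0.5x + 0.2y, for comparison.
true_n = np.array([-0.5, -0.2, 1.0])
true_n /= np.linalg.norm(true_n)
print(np.degrees(np.arccos(np.clip(n @ true_n, -1.0, 1.0))))  # small angular error
```

    A normal estimated this way, per pixel footprint, is the kind of geometric input that could replace a flat-ground-plane assumption in a forward model.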

    Does independent component analysis play a role in unmixing hyperspectral data?


    Hyperspectral Endmember Extraction Techniques

    Hyperspectral data processing and analysis plays a vital role in the detection, identification, discrimination, and estimation of earth surface materials. It involves atmospheric correction, dimensionality reduction, endmember extraction, spectral unmixing, and classification phases. One of the ultimate aims of hyperspectral data processing and analysis is to achieve high classification accuracy, and that accuracy depends largely on image-derived endmembers. Ideally, an endmember is defined as a spectrally unique, idealized, pure signature of a surface material. Extracting consistent endmembers is one of the important criteria for achieving high accuracy in hyperspectral data classification and spectral unmixing. Several methods, strategies, and algorithms have been proposed by various researchers to extract endmembers from hyperspectral imagery. Most of these techniques and algorithms depend significantly on user-defined input parameters, which is problematic because there is no standard for specifying them; this leads to inconsistencies in overall endmember extraction. Resolving these problems requires a systematic, generic, robust, and automated mechanism for endmember extraction. This chapter presents a generic approach to endmember extraction and highlights the limitations and challenges of popular algorithms.
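    As a concrete example of the family of algorithms this chapter surveys, the sketch below implements a Pixel Purity Index-style extreme-point search: pixels that repeatedly land at the extremes of random 1-D projections are endmember candidates. The scene, the planted pure pixels, and the projection count (a user-defined parameter of exactly the kind criticized above) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic scene: three endmember spectra over four bands (illustrative).
E = np.array([[0.9, 0.1, 0.1],
              [0.1, 0.9, 0.1],
              [0.1, 0.1, 0.9],
              [0.5, 0.4, 0.1]])
abund = rng.dirichlet(np.ones(3), size=500)  # abundances sum to 1
pixels = abund @ E.T                         # 500 mixed pixels

# Plant three pure pixels so the data cloud's extremes are true endmembers.
pixels[:3] = E.T

def ppi_scores(X, n_proj=2000, rng=rng):
    """PPI-style scoring: count how often each pixel is the extreme of a
    random 1-D projection of the data."""
    scores = np.zeros(len(X), dtype=int)
    for _ in range(n_proj):
        d = rng.normal(size=X.shape[1])   # random projection direction
        p = X @ d
        scores[np.argmax(p)] += 1
        scores[np.argmin(p)] += 1
    return scores

scores = ppi_scores(pixels)
candidates = np.argsort(scores)[-3:]      # three highest-scoring pixels
print(sorted(int(i) for i in candidates))  # recovers the planted pure pixels
```

    Because mixed pixels are convex combinations of the pure ones, only the pure pixels can maximize or minimize a linear projection, so the scores concentrate on them; the choice of `n_proj` is exactly the sort of input parameter whose tuning the chapter identifies as a source of inconsistency.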

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing/processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry and randomness; complexity/accuracy tradeoffs in numerical methods; sparsity? what's next?; sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Band Selection for Hyperspectral Images Using Non-Negativity Constraints

    This paper presents a new factorization technique for hyperspectral signal processing based on a constrained singular value decomposition (SVD) approach. Hyperspectral images typically have a large number of contiguous bands that are highly correlated. Likewise, the field of view typically contains a limited number of materials, so the spectra are also correlated. Only a selected number of bands, the extreme bands that include the spectral signatures of the dominant materials, are needed to express the data, and factorization can provide a means for interpretation and compression of the spectral data. Hyperspectral images are represented as non-negative matrices by graphic concatenation, with the pixels arranged into columns and each row corresponding to a spectral band. SVD and principal component analysis enjoy a broad range of applications, including rank estimation, noise reduction, classification, and compression, with the resulting singular vectors forming orthogonal basis sets for subspace projection techniques. A key property of non-negative matrices is that their columns/rows form non-negative cones, with any non-negative linear combination of the columns/rows belonging to the cone. Data sets of spectral images and time series reside in non-negative orthants, and while subspaces spanned by SVD extend over all orthants, SVD projections can be constrained to the non-negative orthants. In this paper we utilize constraint sets that confine projections of SVD singular vectors to lie within the cones formed by the spectral data. The extreme vectors of the cone are found, and these vectors form a basis for the factorization of the data. The approach is illustrated in an application to hyperspectral data of a mining area collected by an airborne sensor.
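    One simple way to picture constraining singular directions to the data cone is to replace each leading singular vector with the actual non-negative data column closest to it in angle. This is only a hedged sketch of the idea on synthetic data, not the paper's constrained factorization.

```python
import numpy as np

rng = np.random.default_rng(4)

# Non-negative data matrix: pixels as columns, rows as spectral bands
# (synthetic stand-in for a hyperspectral cube).
n_bands, n_pixels, k = 20, 300, 3
basis = np.abs(rng.normal(size=(n_bands, k)))        # latent material spectra
X = basis @ rng.dirichlet(np.ones(k), size=n_pixels).T

# Unconstrained SVD: singular vectors need not be non-negative.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Constrain each leading singular direction to the non-negative cone of the
# data by picking the actual column (pixel spectrum) closest to it in angle.
Xn = X / np.linalg.norm(X, axis=0)                   # unit-norm columns
extreme = [int(np.argmax(np.abs(u @ Xn))) for u in U[:, :k].T]
V_cone = Xn[:, extreme]                              # cone-constrained basis

# Unlike raw singular vectors, the constrained basis stays non-negative.
print(np.all(V_cone >= 0))
```

    The selected columns lie inside the data cone by construction, which is the qualitative property the paper's constraint sets enforce; the paper's actual method finds the cone's extreme vectors rather than nearest columns.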