
    Robust Estimation of Mahalanobis Distance in Hyperspectral Images

    This dissertation develops new estimation methods that fit Johnson distributions and generalized Pareto distributions to hyperspectral Mahalanobis distances. The Johnson distribution fit is optimized using a new method that monitors the second-derivative behavior of the exceedance probability to mitigate potential outlier effects. This univariate distribution is then used to derive an elliptically contoured multivariate density model for the pixel data. The generalized Pareto distribution models are optimized by a new two-pass method that estimates the tail-index parameter. This method minimizes the mean squared fitting error by correcting parameter values using data distance information from an initial pass. A unique method for estimating the posterior density of the tail-index parameter for generalized Pareto models is also developed. Both the Johnson and Pareto distribution models are shown to reduce fitting error and to increase computational efficiency compared to previous models.
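The peaks-over-threshold idea behind the generalized Pareto tail model can be sketched as follows. This is a minimal illustration, not the dissertation's two-pass estimator: it uses synthetic Gaussian "pixels", an arbitrary 90th-percentile threshold, and scipy's maximum-likelihood fit in place of the author's corrected tail-index estimation.

```python
# Sketch: fit a generalized Pareto distribution (GPD) to the upper tail of
# Mahalanobis distances, peaks-over-threshold style. Data, threshold choice,
# and the MLE fit are illustrative assumptions, not the dissertation's method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))          # synthetic 5-band "pixel" spectra

# Squared Mahalanobis distance of each pixel from the sample mean.
mu = X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Model exceedances above the 90th percentile with a GPD (loc pinned at 0).
u = np.quantile(d2, 0.90)
exceedances = d2[d2 > u] - u
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)

def tail_prob(x):
    """P(D^2 > x) for x >= u, combining the empirical 10% exceedance rate
    with the fitted GPD survival function."""
    return 0.10 * stats.genpareto.sf(x - u, shape, loc=0.0, scale=scale)
```

The fitted `tail_prob` replaces the empirical survival function above the threshold, which is exactly where outliers make empirical estimates unstable.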

    Hyperspectral Imagery Target Detection Using Improved Anomaly Detection and Signature Matching Methods

    This research extends the field of hyperspectral target detection by developing autonomous anomaly detection and signature matching methodologies that reduce false alarms relative to existing benchmark detectors and are practical for use in an operational environment. The proposed anomaly detection methodology adapts multivariate outlier detection algorithms for use with hyperspectral datasets containing tens of thousands of non-homogeneous, high-dimensional spectral signatures. In so doing, the limitations of existing, non-robust anomaly detectors are identified, an autonomous clustering methodology is developed to divide an image into homogeneous background materials, and competing multivariate outlier detection methods are evaluated for their ability to uncover hyperspectral anomalies. To arrive at a final detection algorithm, robust parameter design methods are employed to determine parameter settings that achieve good detection performance over a range of hyperspectral images and targets, thereby removing the burden of these decisions from the user. The final anomaly detection algorithm is tested against existing local and global anomaly detectors, and is shown to achieve superior detection accuracy when applied to a diverse set of hyperspectral images. The proposed signature matching methodology employs image-based atmospheric correction techniques in an automated process to transform a target reflectance signature library into a set of image signatures. This set of signatures is combined with an existing linear filter to form a target detector that is shown to perform as well as or better than detectors that rely on complicated, information-intensive atmospheric correction schemes. The performance of the proposed methodology is assessed using a range of target materials in both woodland and desert hyperspectral scenes.
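The kind of robust multivariate outlier detection this abstract adapts can be sketched with the Minimum Covariance Determinant estimator, which downweights anomalies when estimating background statistics. This is an illustrative stand-in, not the dissertation's algorithm: the data, the injected targets, and the chi-square threshold are all assumptions.

```python
# Sketch: robust Mahalanobis-distance anomaly detection via the Minimum
# Covariance Determinant (MCD). Synthetic background and targets are
# placeholders; the dissertation's clustering and parameter design are omitted.
import numpy as np
from sklearn.covariance import MinCovDet
from scipy.stats import chi2

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(500, 4))   # homogeneous background
targets = rng.normal(6.0, 0.5, size=(10, 4))       # injected anomalies
pixels = np.vstack([background, targets])

# MCD estimates mean/covariance from the "cleanest" half of the data, so the
# injected anomalies do not inflate the background model.
mcd = MinCovDet(random_state=0).fit(pixels)
d2 = mcd.mahalanobis(pixels)                       # robust squared distances

# Flag pixels beyond a chi-square quantile for 4 bands (Gaussian background).
threshold = chi2.ppf(0.999, df=4)
flags = d2 > threshold
```

With a non-robust covariance estimate the anomalies would contaminate the background model and mask themselves; the robust fit is what makes the chi-square threshold meaningful.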

    Matched filter stochastic background characterization for hyperspectral target detection

    Algorithms exploiting hyperspectral imagery for target detection have continually evolved to provide improved detection results. Adaptive matched filters, which can be derived in many different scientific fields, locate spectral targets by modeling the scene background either as structured (geometric), with a set of endmembers (basis vectors), or as unstructured (stochastic), with a covariance matrix. In unstructured-background research, various methods of calculating the background covariance matrix have been developed, each involving either the removal of target signatures from the background model or the segmenting of image data into spatial or spectral subsets. The objective of these methods is to derive a background that matches the source of mixture interference for the detection of subpixel targets, or matches the source of false alarms in the scene for the detection of fully resolved targets. In addition, these techniques increase the multivariate normality of the data from which the background is characterized, increasing adherence to the normality assumption inherent in the matched filter and ultimately improving target detection results. Such techniques for improved background characterization are widely practiced but not well documented or compared. This thesis establishes a strong theoretical foundation, describing the necessary preprocessing of hyperspectral imagery, deriving the spectral matched filter, and capturing current methods of unstructured background characterization. Extensive experimentation allows a comparative evaluation of several current unstructured background characterization methods, as well as some new methods that improve stochastic modeling of the background.
    The results show that consistent improvements over scene-wide statistics can be achieved through spatial or spectral subsetting, and analysis of the results provides insight into the tradespaces of matching the interference, background multivariate normality, and target exclusion for these techniques.
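The unstructured (stochastic) matched filter described above can be sketched in a few lines: whiten with the background covariance and project onto the mean-removed target signature. The synthetic data, six-band setup, and unit target signature are illustrative assumptions; the normalization (unit response on the target) is one common convention.

```python
# Sketch: spectral matched filter with an unstructured (covariance-based)
# background model. Synthetic scene-wide statistics stand in for real imagery.
import numpy as np

rng = np.random.default_rng(2)
bands = 6
background = rng.normal(size=(4000, bands))        # scene-wide background pixels

mu = background.mean(axis=0)
cov = np.cov(background, rowvar=False)
cov_inv = np.linalg.inv(cov)

t = np.ones(bands)                                 # assumed target signature
w = cov_inv @ (t - mu)                             # matched-filter weights
w /= (t - mu) @ cov_inv @ (t - mu)                 # normalise: score(t) == 1

def mf_score(x):
    """Matched-filter detection statistic for one pixel spectrum."""
    return w @ (x - mu)

scores_bg = (background - mu) @ w                  # scores over the background
```

The background-subsetting methods the thesis compares all feed into this same formula; what changes is which pixels are used to estimate `mu` and `cov`.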

    Robust Target Detection in Hyperspectral Imagery

    Hyperspectral imaging (HSI) stems from the fact that, for any given material, the amount of emitted radiation varies with wavelength. HSI sensors measure the radiance of the materials within each pixel area at a very large number of contiguous spectral bands and provide image data containing both spatial and spectral information. Classical adaptive detection schemes assume that the background is Gaussian with a zero or known mean vector that can be exploited. However, when the mean vector is unknown, as is the case for hyperspectral imaging, it has to be included in the detection process. We propose in this work an extension of classical detection methods to the case where both the covariance matrix and the mean vector are unknown. However, the actual multivariate distribution of the background pixels may differ from the commonly used Gaussian hypothesis. The class of elliptical distributions has already been popularized for background characterization in HSI. Although these non-Gaussian models have been exploited for background modeling and detection schemes, parameter estimation (covariance matrix, mean vector) is usually still performed using classical Gaussian-based estimators. We analyze here robust estimation procedures (M-estimators of location and scale) that are better suited to these non-Gaussian distributions. Used jointly with M-estimators, the new detectors enhance target detection performance in non-Gaussian environments while keeping the same performance as the classical detectors in Gaussian environments. They therefore provide a unified framework for target detection and anomaly detection in HSI.
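One widely used M-estimator of scatter for elliptical backgrounds is Tyler's fixed-point estimator, which can stand in here as a sketch of the robust estimation step. The heavy-tailed synthetic data (Gaussian vectors scaled by random textures), the zero-mean assumption, and the trace normalization are illustrative choices, not the thesis's exact setup.

```python
# Sketch: Tyler's fixed-point M-estimator of scatter, robust to heavy-tailed
# elliptical backgrounds. Zero mean is assumed; data and tolerances are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
p, n = 4, 1000
# Compound-Gaussian samples: Gaussian vectors scaled by exponential textures,
# with true scatter equal to the identity.
g = rng.normal(size=(n, p))
tau = rng.gamma(shape=1.0, scale=1.0, size=(n, 1))
X = g * np.sqrt(tau)

sigma = np.eye(p)
for _ in range(100):
    inv = np.linalg.inv(sigma)
    d2 = np.einsum("ij,jk,ik->i", X, inv, X)       # current Mahalanobis d^2
    # Fixed-point update: each sample weighted by p / d2, killing the textures.
    sigma_new = p * (X.T * (1.0 / d2)) @ X / n
    sigma_new *= p / np.trace(sigma_new)           # fix the scale: trace = p
    done = np.linalg.norm(sigma_new - sigma) < 1e-8
    sigma = sigma_new
    if done:
        break
```

Because each sample is normalized by its own distance, the texture variable `tau` cancels out of the update, which is what makes the estimator distribution-free over the elliptical class.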

    An enhanced sequential exception technique for semantic-based text anomaly detection

    The detection of semantic-based text anomaly is an interesting research area that has gained considerable attention from the data mining community. Text anomaly detection identifies information that deviates from the general information contained in documents. Text data are characterized by problems of ambiguity, high dimensionality, sparsity, and text representation. If these challenges are not properly resolved, identifying semantic-based text anomalies will be less accurate. This study proposes an Enhanced Sequential Exception Technique (ESET) to detect semantic-based text anomaly by achieving five objectives: (1) modify the Sequential Exception Technique (SET) to process unstructured text; (2) optimize Cosine Similarity for identifying similar and dissimilar text data; (3) hybridize the modified SET with Latent Semantic Analysis (LSA); (4) integrate the Lesk and Selectional Preference algorithms for disambiguating senses and identifying the canonical form of text; and (5) represent semantic-based text anomalies using First Order Logic (FOL) and Concept Network Graphs (CNG). ESET performs text anomaly detection by employing optimized Cosine Similarity, hybridizing LSA with the modified SET, and integrating it with Word Sense Disambiguation algorithms, specifically Lesk and Selectional Preference. FOL and CNG are then used to represent the detected semantic-based text anomalies. To demonstrate the feasibility of the technique, experiments were run on four datasets: NIPS, ENRON, Daily Kos blog, and 20Newsgroups. The evaluation revealed that ESET significantly improves the accuracy of detecting semantic-based text anomalies in documents. Compared with existing measures, ESET achieved improved F1-scores on all datasets: NIPS 0.75, ENRON 0.82, Daily Kos blog 0.93, and 20Newsgroups 0.97.
    The results generated by ESET proved significant and support a growing notion of semantic-based text anomaly that is increasingly evident in the existing literature. Practically, this study contributes to topic modelling and concept coherence for the purposes of visualizing information, knowledge sharing, and optimized decision making.
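The cosine-similarity step can be illustrated in isolation: score each document by its dissimilarity to the corpus centroid in TF-IDF space, so semantically deviating text stands out. This is a generic sketch, not ESET itself; the tiny corpus, the centroid comparison, and the plain (unoptimized) cosine measure are all assumptions for illustration.

```python
# Sketch: flag the most semantically deviating document via cosine
# dissimilarity to the corpus centroid in TF-IDF space. Corpus is a toy
# placeholder; ESET's SET/LSA/WSD machinery is not reproduced here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "neural network models learn from data",
    "deep learning models train on data",
    "gradient descent updates network models",
    "the recipe needs flour sugar and butter",   # deviating document
]
tfidf = TfidfVectorizer().fit_transform(docs)

# Centroid of the corpus in TF-IDF space; deviation = 1 - cosine similarity.
centroid = np.asarray(tfidf.mean(axis=0))
sims = cosine_similarity(tfidf, centroid).ravel()
anomaly_scores = 1.0 - sims
most_deviating = int(anomaly_scores.argmax())
```

The recipe document shares no content terms with the other three, so its similarity to the centroid comes only from its own contribution, making it the clear outlier.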

    The Journal of Conventional Weapons Destruction Issue 21.1 (2017)

    Feature: Improvised Explosive Devices (IEDs) and Pressure Plate IEDs; Spotlight: Bosnia and Herzegovina 2- Years Later; Field Notes; Research and Development

    Error characterization of spectral products using a factorial designed experiment

    The main objective of any imaging system is to collect information. Information is conveyed in remotely sensed imagery by the spatial and spectral distribution of the energy reflected or emitted from the earth. This energy is subsequently captured by an overhead imaging system. Post-processing algorithms, which rely on this spectral and spatial energy distribution, allow us to extract useful information from the collected data. Typically, spectral processing algorithms include such procedures as target detection, thematic mapping, and spectral pixel unmixing. The final spectral products from these algorithms include detection maps, classification maps, and endmember fraction maps. The spatial resolution, spectral sampling, and signal-to-noise characteristics of a spectral imaging system share a strong relationship with one another based on the law of conservation of energy. If any one of these initial image collection parameters were changed, we would expect the accuracy of the information derived from the spectral processing algorithms to change as well. The goal of this thesis study was to investigate the accuracy and effectiveness of spectral processing algorithms under different levels of spectral resolution, spatial resolution, and noise. To fulfill this goal, a tool was developed that degrades hyperspectral images spatially and spectrally and adds spectrally correlated noise. These degraded images were then subjected to several spectral processing algorithms. The information utility and error characterization of the degraded spectral products are assessed using algorithm-specific metrics. By adopting a factorial designed experimental approach, the joint effects of spatial resolution, spectral sampling, and signal-to-noise ratio on algorithm performance were also studied. Finally, a quantitative performance comparison of the tested spectral processing algorithms was made.
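The degradation tool described above can be sketched as block averaging in the spatial and spectral dimensions followed by noise injection. This is a simplified stand-in: the cube is synthetic, the degradation factors are arbitrary, and white Gaussian noise at a target SNR replaces the thesis's spectrally correlated noise model.

```python
# Sketch: degrade a hyperspectral cube spatially and spectrally by block
# averaging, then add Gaussian noise at a requested SNR (dB). White noise
# stands in for the spectrally correlated noise used in the thesis.
import numpy as np

rng = np.random.default_rng(4)
cube = rng.uniform(0.0, 1.0, size=(64, 64, 32))    # rows x cols x bands

def degrade(cube, spatial=2, spectral=2, snr_db=30.0, rng=rng):
    r, c, b = cube.shape
    # Spatial block averaging: coarser ground sample distance.
    out = cube.reshape(r // spatial, spatial,
                       c // spatial, spatial, b).mean(axis=(1, 3))
    # Spectral binning: coarser band sampling.
    out = out.reshape(out.shape[0], out.shape[1],
                      b // spectral, spectral).mean(axis=3)
    # Additive noise scaled so that signal power / noise power = 10^(SNR/10).
    signal_power = np.mean(out ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return out + rng.normal(0.0, np.sqrt(noise_power), size=out.shape)

degraded = degrade(cube)
```

Running the same detection or unmixing algorithm on `degrade(cube, ...)` at different factor settings is exactly the kind of factorial treatment structure the abstract describes.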

    High throughput instruments, methods, and informatics for systems biology.


    Bayesian image restoration and bacteria detection in optical endomicroscopy

    Optical microscopy systems can be used to obtain high-resolution microscopic images of tissue cultures and ex vivo tissue samples. This imaging technique can be translated to in vivo, in situ applications by using optical fibres and miniature optics. Fibred optical endomicroscopy (OEM) can enable optical biopsy in organs inaccessible to any other imaging system, and hence can provide rapid and accurate diagnosis. The raw data the system produces are difficult to interpret, as they are modulated by the fibre-bundle pattern, producing what is called the "honeycomb effect". The data are further degraded by cross coupling between fibre cores. In addition, there is an unmet clinical need for automatic tools that can help clinicians detect fluorescently labelled bacteria in distal lung images. The aim of this thesis is to develop advanced image processing algorithms that address these problems. First, we provide a statistical model for fibre-core cross coupling and for the sparse sampling imposed by imaging fibre bundles (the honeycomb artefact), which are formulated here as a restoration problem for the first time in the literature. We then introduce a non-linear interpolation method, based on Gaussian process regression, to recover an interpretable scene from the deconvolved data. Second, we develop two bacteria detection algorithms, each with different characteristics. The first is a joint formulation of the sparse coding and anomaly detection problems; the anomalies are treated as candidate bacteria and annotated with the help of a trained clinician. Although this approach provides good detection performance and outperforms existing methods in the literature, the user has to tune some crucial model parameters carefully. We therefore propose a more adaptive approach within a Bayesian framework.
    This approach not only outperforms the proposed supervised approach and existing methods in the literature, but also offers computation times that compete with optimization-based methods.
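The non-linear interpolation idea can be sketched with Gaussian-process regression on irregularly placed samples, as if each sample sat at a fibre-core centre. The scene function, core positions, RBF kernel, and noise level are all illustrative placeholders, not the thesis's model.

```python
# Sketch: Gaussian-process regression recovers a full image from irregular
# samples (stand-ins for fibre-core centres). Scene, kernel, and noise are
# illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def scene(xy):
    """Toy ground-truth intensity field over the unit square."""
    return np.sin(2 * np.pi * xy[:, 0]) * np.cos(2 * np.pi * xy[:, 1])

cores = rng.uniform(0.0, 1.0, size=(200, 2))       # irregular core positions
samples = scene(cores) + 0.01 * rng.normal(size=200)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
gp.fit(cores, samples)

# Reconstruct the scene on a regular 16x16 grid between the cores.
gx, gy = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
grid = np.column_stack([gx.ravel(), gy.ravel()])
recon = gp.predict(grid).reshape(16, 16)
```

Unlike linear interpolation over the core lattice, the GP posterior gives a principled fill-in between cores and, if needed, an uncertainty map alongside the reconstruction.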