
    A Sensitivity Study of L-Band Synthetic Aperture Radar Measurements to the Internal Variations and Evolving Nature of Oil Slicks

    This thesis focuses on the use of multi-polarization synthetic aperture radar (SAR) for the characterization of marine oil spills. In particular, the potential for detecting internal zones within oil slicks in SAR scenes is investigated by a direct within-slick segmentation scheme, along with a sensitivity study of SAR measurements to the evolving nature of oil slicks. A simple k-means clustering algorithm and a Gaussian mixture model are applied separately, giving rise to a comparative study of the internal class structures obtained with the two strategies. As no optical imagery is available for verification, the within-slick segmentations are evaluated with respect to the behavior of a set of selected polarimetric features, the prevailing wind conditions, and the weathering processes. In addition, a fake zone detection scheme is established to help determine whether the class structures obtained reflect actual internal variations within the slicks. Further, the evolving nature of oil slicks is studied based on the temporal development of a set of selected geometric region descriptors. Two data sets are available for the investigation presented in this thesis, both captured by a full-polarization L-band airborne SAR system with high spatial and temporal resolution. The results obtained with the zone detection scheme support the hypothesis that detectable zones exist within oil spills in SAR scenes. Additionally, the method established for studying the evolving nature of oil slicks is found convenient for assessing the general behavior of the slicks and simplifies interpretation.
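
    For illustration, a minimal sketch of the two segmentation strategies compared, hard k-means clustering versus a Gaussian mixture model, applied to a pixels-by-features matrix of polarimetric features; the array contents, the feature count, and the number of zones below are placeholders, not values from the thesis.

```python
# Sketch of the two within-slick segmentation strategies: k-means vs. GMM.
# `features` stands for an (n_pixels, n_features) array of polarimetric features
# extracted from the oil-covered pixels; contents and zone count are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 4))        # placeholder polarimetric features
n_zones = 3                                  # assumed number of internal zones

# Hard partitioning with k-means.
km_labels = KMeans(n_clusters=n_zones, n_init=10, random_state=0).fit_predict(features)

# Soft (probabilistic) partitioning with a GMM; argmax gives comparable hard labels.
gmm = GaussianMixture(n_components=n_zones, covariance_type="full", random_state=0)
gmm_labels = gmm.fit(features).predict(features)

# The two class structures can then be compared zone by zone, e.g. via a contingency table.
print(np.bincount(km_labels), np.bincount(gmm_labels))
```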

    Adaptive text mining: Inferring structure from sequences

    Text mining is about inferring structure from sequences representing natural language text, and may be defined as the process of analyzing text to extract information that is useful for particular purposes. Although hand-crafted heuristics are a common practical approach for extracting information from text, a general, and generalizable, approach requires adaptive techniques. This paper studies the way in which the adaptive techniques used in text compression can be applied to text mining. It develops several examples: extraction of hierarchical phrase structures from text, identification of keyphrases in documents, locating proper names and quantities of interest in a piece of text, text categorization, word segmentation, acronym extraction, and structure recognition. We conclude that compression forms a sound unifying principle that allows many text mining problems to be tackled adaptively.
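
    As an illustration of compression as a unifying principle, the sketch below categorizes a document with the normalized compression distance (NCD) computed via zlib; the compressor choice and the sample category texts are assumptions for illustration and are not taken from the paper.

```python
# Compression as a basis for text mining: categorize a document by its normalized
# compression distance (NCD) to sample texts of each category. zlib stands in for
# the adaptive compressor; the sample texts are placeholders.
import zlib

def c(s: str) -> int:
    """Compressed size of a string in bytes."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

categories = {
    "weather": "rain wind storm temperature forecast cloud sunshine",
    "finance": "stock market share price dividend interest rate bond",
}

def categorize(doc: str) -> str:
    # Assign the category whose sample text compresses best together with the document.
    return min(categories, key=lambda k: ncd(doc, categories[k]))

print(categorize("the forecast warns of storm and heavy rain"))   # expected: weather
```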

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with high-complexity data, such as remote sensing images with metric resolution over large areas, an innovative, fast and robust image processing system is presented. The modeling of increasing levels of information is used to extract, represent and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application to the enhancement and regularization of digital elevation models based on information collected from RS images.

    Image segmentation evaluation using an integrated framework

    In this paper we present a general framework we have developed for running and evaluating automatic image and video segmentation algorithms. This framework was designed to allow effortless integration of existing and forthcoming image segmentation algorithms, and allows researchers to focus more on the development and evaluation of segmentation methods, relying on the framework for encoding/decoding and visualization. We then utilize this framework to automatically evaluate four distinct segmentation algorithms, and present and discuss the results and statistical findings of the experiment

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications, e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1) - 1) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as required by traditional GA algorithms. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning for feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
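
    For concreteness, here is a small sketch of the discrete Choquet integral for N = 3 inputs, computed by sorting the inputs and weighting them with increments of the fuzzy measure; the measure and input values are illustrative placeholders, not parameters learned by the proposed method.

```python
# Discrete Choquet integral (ChI) with respect to a fuzzy measure (FM).
# The FM assigns a value to every subset of the N inputs (2^N values,
# g(empty)=0, g(all)=1, monotone in set inclusion). Values below are illustrative.

def choquet(h, g):
    """Choquet integral of inputs h (dict source->value) w.r.t. fuzzy measure g
    (dict frozenset->value)."""
    # Sort sources by decreasing input value: h(x_(1)) >= h(x_(2)) >= ...
    order = sorted(h, key=h.get, reverse=True)
    total, prev = 0.0, 0.0
    subset = frozenset()
    for src in order:
        subset = subset | {src}
        total += h[src] * (g[subset] - prev)   # weight = increment of the measure
        prev = g[subset]
    return total

g = {frozenset(): 0.0,
     frozenset({"s1"}): 0.4, frozenset({"s2"}): 0.3, frozenset({"s3"}): 0.2,
     frozenset({"s1", "s2"}): 0.8, frozenset({"s1", "s3"}): 0.6,
     frozenset({"s2", "s3"}): 0.5, frozenset({"s1", "s2", "s3"}): 1.0}

h = {"s1": 0.7, "s2": 0.9, "s3": 0.4}   # three input confidences to aggregate
print(choquet(h, g))                    # 0.70, between min and max of the inputs
```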

    Robust Mote-Scale Classification of Noisy Data via Machine Learning


    Study of retrofitted system for Intelligent Compaction Analyzer, a machine learning approach for Quality Control of Asphalt Pavement during Construction

    Asphalt pavements play a vital role in transportation infrastructure, but their performance can suffer from the subpar quality that results from improper construction practices. To tackle this issue, we introduce the Retrofit Intelligent Compaction Analyzer (RICA), a real-time compaction density estimation system for asphalt pavements during construction. RICA uses machine learning to predict compaction density from the vibratory patterns received at different compaction levels. By leveraging the roller's spatial location and analyzing these vibration patterns, RICA delivers density estimates. In this study, we gathered data from actual construction sites, implementing RICA on a Caterpillar CB-10 rotary-dialed dual-drum vibratory compactor. The density estimates from RICA were validated against densities measured from roadway cores extracted at random locations on the compacted pavement. Our findings affirm the efficacy of RICA in providing reliable density estimates for asphalt pavements. The ability of RICA to provide real-time, nondestructive compaction information to the roller operator establishes its value as a quality control tool during asphalt pavement construction. By ensuring proper compaction, RICA contributes to the construction of durable, high-quality roads while reducing the financial and environmental costs associated with construction and maintenance. The validation of RICA's estimates against percent within limits (PWL) calculations based on roadway cores further attests to its effectiveness as a quality assurance tool.
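
    As an illustration only, the sketch below shows the general shape of such a pipeline: spectral features extracted from windows of drum vibration are mapped to a density estimate by a regressor. The synthetic signal, the FFT-magnitude features and the random-forest model are assumptions; the abstract does not specify RICA's actual estimator.

```python
# Illustrative vibration-to-density pipeline: FFT-magnitude features of drum
# acceleration windows feed a regressor that estimates compaction density.
# Everything below is synthetic and assumed; it is not RICA's actual design.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
fs, win = 1000, 1000                 # sample rate (Hz) and window length (1 s)

def vibration_features(window: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Coarse FFT-magnitude features of one acceleration window."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])

# Synthetic training data: vibration windows labeled with core density (%).
X, y = [], []
for _ in range(200):
    density = rng.uniform(88, 96)                       # percent of max density
    t = np.arange(win) / fs
    signal = np.sin(2 * np.pi * 30 * t) + 0.02 * density * np.sin(2 * np.pi * 60 * t)
    signal += 0.1 * rng.normal(size=win)                # measurement noise
    X.append(vibration_features(signal))
    y.append(density)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([X[0]])[0], y[0])                   # estimate vs. reference
```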

    Remote Sensing for Non‐Technical Survey

    This chapter describes the research activities of the Royal Military Academy on remote sensing applied to mine action. Remote sensing can be used to detect specific features that could raise suspicion of the presence, or absence, of mines. Work on the automatic detection of trenches and craters is presented here. Land cover can be extracted and is quite useful in support of mine action; we present a classification method based on Gabor filters. The relief of a region helps analysts to understand where mines could have been laid, and methods to derive a digital terrain model from a digital surface model are explained. The special case of multi-spectral classification is also addressed in this chapter, and a discussion of data fusion is given. Hyper-spectral data are addressed with a change detection method. Synthetic aperture radar data and their fusion with optical data have been studied. Radar interferometry and polarimetry are also addressed.
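
    A minimal sketch of Gabor-filter-based texture classification of the kind mentioned above: a small filter bank provides per-pixel features that a standard classifier labels. The filter parameters, the synthetic image and the k-NN classifier are illustrative assumptions, not the chapter's settings.

```python
# Gabor-filter-bank land-cover classification sketch: per-pixel magnitude
# responses of a small bank of Gabor filters serve as texture features.
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(image: np.ndarray) -> np.ndarray:
    """Stack magnitude responses of a small Gabor bank as per-pixel features."""
    responses = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):   # 4 orientations
        for frequency in (0.1, 0.3):                          # 2 spatial frequencies
            kernel = gabor_kernel(frequency, theta=theta)
            real = ndimage.convolve(image, np.real(kernel), mode="reflect")
            imag = ndimage.convolve(image, np.imag(kernel), mode="reflect")
            responses.append(np.hypot(real, imag))
    return np.stack(responses, axis=-1).reshape(-1, len(responses))

rng = np.random.default_rng(2)
image = rng.random((64, 64))                      # placeholder single-band image
labels = (image > 0.5).astype(int).ravel()        # placeholder training labels

features = gabor_features(image)
clf = KNeighborsClassifier(n_neighbors=5).fit(features, labels)
land_cover = clf.predict(features).reshape(image.shape)   # per-pixel class map
```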

    Similarity-based methods for machine diagnosis

    This work presents a data-driven condition-based maintenance system based on similarity-based modeling (SBM) for automatic machinery fault diagnosis. The proposed system provides information about the equipment's current state (degree of anomaly) and returns a set of exemplars that describe the current state in a sparse fashion, which the operator can examine to decide what action to take. The system is modular and data-agnostic, enabling its use with different equipment and data sources with only small modifications. The main contributions of this work are: an extensive study of the proposed multiclass SBM and its use on different databases, either as a stand-alone classification method or in combination with an off-the-shelf classifier; novel methods for selecting prototypes for the SBM models; the use of new similarity functions; and a new production-ready fault detection service. These contributions achieved the goal of increasing the performance of SBM models in a fault classification scenario while reducing their computational complexity. The proposed system was evaluated on three different databases, achieving performance higher than or similar to previous works on the same databases. Comparisons with other methods are shown for the recently developed Machinery Fault Database (MaFaulDa) and for the Case Western Reserve University (CWRU) bearing database. The proposed techniques increase the generalization power of the similarity model and of the associated classifier, reaching accuracies of 98.5% on MaFaulDa and 98.9% on the CWRU database. These results indicate that the proposed approach based on SBM is worth further investigation.
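
    A minimal sketch of the similarity-based-modeling idea in its auto-associative form, assuming a Gaussian similarity function: a query is reconstructed as a similarity-weighted combination of stored exemplars, and the reconstruction residual serves as the degree of anomaly. The similarity function, bandwidth and synthetic data are illustrative; the thesis investigates several similarity functions and prototype-selection strategies.

```python
# Similarity-based modeling (SBM) sketch: reconstruct a query from exemplars of
# healthy operation and score the residual as a degree of anomaly.
import numpy as np

def gaussian_similarity(x, prototypes, bandwidth=1.0):
    """Similarity of query x to each exemplar row of `prototypes` (assumed kernel)."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def sbm_reconstruct(x, prototypes, bandwidth=1.0):
    """Similarity-weighted reconstruction of x from the exemplars."""
    w = gaussian_similarity(x, prototypes, bandwidth)
    return w @ prototypes / (w.sum() + 1e-12)

def anomaly_score(x, prototypes, bandwidth=1.0):
    """Residual norm between the query and its SBM reconstruction."""
    return np.linalg.norm(x - sbm_reconstruct(x, prototypes, bandwidth))

rng = np.random.default_rng(3)
normal_prototypes = rng.normal(0.0, 1.0, size=(50, 8))   # exemplars of healthy operation

healthy_query = rng.normal(0.0, 1.0, size=8)
faulty_query = rng.normal(4.0, 1.0, size=8)              # shifted operating condition

print(anomaly_score(healthy_query, normal_prototypes))    # comparatively low residual
print(anomaly_score(faulty_query, normal_prototypes))     # comparatively high residual
```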