106 research outputs found

    Automated Remote Sensing Image Interpretation with Limited Labeled Training Data

    Get PDF
    Automated remote sensing image interpretation has been investigated for more than a decade. In the early years, most work assumed that sufficient labeled samples were available for training. However, ground-truth collection is a tedious, time-consuming, and often expensive task, especially in remote sensing, which usually relies on field surveys to collect ground truth. In recent years, with the development of advanced machine learning techniques, remote sensing image interpretation with limited ground truth has caught the attention of researchers in both remote sensing and computer science. Three approaches that focus on different aspects of the interpretation process, i.e., feature extraction, classification, and segmentation, are proposed to deal with the limited ground-truth problem. First, feature extraction techniques, which usually serve as a pre-processing step for remote sensing image classification, are explored. Instead of focusing on feature extraction alone, a joint feature extraction and classification framework is proposed based on ensemble local manifold learning. Second, classifiers for the case of limited labeled training data are investigated, and an enhanced ensemble learning method that outperforms state-of-the-art classification methods is proposed. Third, image segmentation techniques are investigated with the aid of unlabeled samples and spatial information. A semi-supervised self-training method is proposed, which is capable of expanding the number of training samples on its own and hence improving classification performance iteratively. Experiments show that the proposed approaches outperform state-of-the-art techniques in terms of classification accuracy on benchmark remote sensing datasets.
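    The self-training idea described above can be sketched in a few lines. This is an illustrative minimal loop, not the abstract's actual method: a nearest-centroid classifier stands in for the proposed ensemble, and the confidence threshold and iteration count are placeholder values.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, n_iter=5, conf_thresh=0.8):
    """Iteratively move confidently classified unlabeled samples into
    the training set (nearest-centroid base classifier as a stand-in)."""
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(n_iter):
        if len(X_unlab) == 0:
            break
        classes = np.unique(y_lab)
        centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
        # Distance of each unlabeled sample to each class centroid.
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        # Softmax over negative distances as a crude confidence score.
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        keep = p.max(axis=1) >= conf_thresh
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])      # grow training set
        y_lab = np.concatenate([y_lab, pred[keep]])    # with pseudo-labels
        X_unlab = X_unlab[~keep]
    return X_lab, y_lab
```

    Each iteration pseudo-labels the unlabeled samples it is most confident about and adds them to the training set, so later iterations fit on progressively more data, which is the mechanism by which classification performance improves iteratively.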

    Sea-Ice Detection from RADARSAT Images by Gamma-based Bilateral Filtering

    Get PDF
    Spaceborne Synthetic Aperture Radar (SAR) is commonly considered a powerful sensor for detecting sea ice. Unfortunately, sea-ice types in SAR images are difficult to interpret due to speckle noise. SAR image denoising therefore becomes a critical step in SAR sea-ice image processing and analysis. In this study, a two-phase approach is designed and implemented for SAR sea-ice image segmentation. In the first phase, a Gamma-based bilateral filter is introduced and applied for SAR image denoising in the local domain. It not only inherits the conventional bilateral filter's capacity to smooth SAR sea-ice imagery while preserving edges, but also enhances it based on the homogeneity of local areas and the Gamma distribution of speckle noise. The Gamma-based bilateral filter outperforms other widely used filters, such as the Frost filter and the conventional bilateral filter. In the second phase, the K-means clustering algorithm, whose initial centroids are optimized, is adopted to obtain better segmentation results. The proposed approach is tested on both simulated and real SAR images and compared with several existing algorithms, including K-means, K-means on Frost-filtered images, and K-means on conventionally bilateral-filtered images. The F1 scores of the simulated results demonstrate the effectiveness and robustness of the proposed approach, whose overall accuracies remain above 90% as the noise variance ranges from 0.1 to 0.5. For the real SAR images, the proposed approach outperforms the others with an average overall accuracy of 95%.
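    The filtering step can be illustrated with a small sketch. This is a generic bilateral filter whose range kernel acts on log-intensity ratios, which suits multiplicative speckle; the paper's Gamma-based variant, which additionally adapts to local homogeneity, is not reproduced here, and all parameter values are placeholders.

```python
import numpy as np

def speckle_bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Bilateral filter with a range kernel on log-intensity ratios
    (illustrative stand-in for the Gamma-based, homogeneity-adaptive
    filter described in the abstract)."""
    eps = 1e-12
    H, W = img.shape
    pad = np.pad(img.astype(float), radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))  # spatial kernel
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel on the intensity ratio: multiplicative noise
            # makes ratios, not differences, the natural similarity.
            ratio = (patch + eps) / (img[i, j] + eps)
            w_r = np.exp(-np.log(ratio) ** 2 / (2.0 * sigma_r**2))
            w = w_s * w_r
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

    On a homogeneous area the range weights are all close to one and the filter averages strongly; across an edge the ratio kernel suppresses contributions from the other side, which is the edge-preserving behaviour the abstract refers to.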

    A VISION-BASED QUALITY INSPECTION SYSTEM FOR FABRIC DEFECT DETECTION AND CLASSIFICATION

    Get PDF
    Published Thesis. Quality inspection of textile products is an important issue for fabric manufacturers. It is desirable to produce the highest-quality goods in the shortest amount of time possible. Fabric faults or defects are responsible for nearly 85% of the defects found by the garment industry, and manufacturers recover only 45 to 65% of their profits from second- or off-quality goods. There is a need for reliable automated woven fabric inspection methods in the textile industry. Numerous methods have been proposed for detecting defects in textiles. These methods are generally grouped into three main categories according to the techniques they use for texture feature extraction: statistical approaches, spectral approaches, and model-based approaches. In this thesis, we study one method from each category and propose combinations of them in order to improve fabric defect detection and classification accuracy. The three chosen methods are the grey level co-occurrence matrix (GLCM) from the statistical category, the wavelet transform from the spectral category, and the Markov random field (MRF) from the model-based category. We identify the most effective texture features for each of these methods and for different fabric types in order to combine them. Using the GLCM, we identify the optimal number of features, the optimal quantisation level of the original image, and the optimal intersample distance to use. We identify the optimal GLCM features for different types of fabrics and for three different classifiers. Using the wavelet transform, we compare the defect detection and classification performance of features derived from the undecimated discrete wavelet transform with those derived from the dual-tree complex wavelet transform, and identify the best features for different types of fabrics.
    Using the Markov random field, we study the fabric defect detection and classification performance of features derived from Gaussian Markov random field models of order 1 through 9. For each fabric type we identify the best model order. Finally, we propose three combination schemes of the best features identified from the three methods and study their defect detection and classification performance. They generally lead to improved performance compared to the individual methods, but two of them need further improvement.
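    As an illustration of the statistical branch, a minimal GLCM feature extractor might look as follows. The quantisation level, the intersample offset, and the two classic Haralick features shown (contrast and energy) are placeholder choices, not the optima identified in the thesis.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Grey level co-occurrence matrix plus two classic texture features.
    `levels` (quantisation) and (dx, dy) (intersample offset) are the
    tuning knobs the thesis studies; values here are placeholders."""
    q = (img.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    H, W = q.shape
    for i in range(H - dy):
        for j in range(W - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1  # co-occurrence count
    glcm /= glcm.sum()                             # joint probability
    ii, jj = np.mgrid[0:levels, 0:levels]
    contrast = ((ii - jj) ** 2 * glcm).sum()  # local intensity variation
    energy = (glcm ** 2).sum()                # texture uniformity
    return glcm, contrast, energy
```

    A defect typically disturbs the regular weave texture, shifting features such as contrast and energy away from the values measured on defect-free fabric, which is what a downstream classifier exploits.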

    Multimodal Image Fusion and Its Applications.

    Full text link
    Image fusion integrates images of different modalities to provide comprehensive information about the image content, increasing interpretation capabilities and producing more reliable results. Combining multi-modal images has several advantages, including improved geometric corrections, complementary data for better classification, and enhanced features for analysis. This thesis develops the image fusion idea in the context of two domains: material microscopy and biomedical imaging. The proposed methods include image modeling, image indexing, image segmentation, and image registration. The common theme behind all proposed methods is the use of complementary information from multi-modal images to achieve better registration, feature extraction, and detection performance. In material microscopy, we propose an anomaly-driven image fusion framework to perform material microscopy image analysis and anomaly detection. This framework is based on a probabilistic model that enables us to index, process, and characterize the data with systematic and well-developed statistical tools. In biomedical imaging, we focus on the multi-modal registration problem for functional MRI (fMRI) brain images, which improves the performance of brain activation detection. PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120701/1/yuhuic_1.pd
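    Multi-modal registration is commonly driven by maximising mutual information between the two images, since intensities in different modalities are related statistically rather than linearly. The sketch below computes mutual information from a joint intensity histogram; it is a generic illustration, not the thesis's actual objective function, and the bin count is a placeholder.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information I(A; B) estimated from the joint intensity
    histogram of two images; registration would maximise this score
    over candidate spatial transforms."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()               # joint distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of B
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    When the images are well aligned, knowing one intensity strongly predicts the other and the score is high; misalignment scrambles the joint histogram and drives the score toward zero.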

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    Get PDF
    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, used to propose regions of interest where objects may be found, and recursive Bayesian filtering, used to integrate observations over time. The proposal is evaluated on six virtual indoor environments, covering the detection of nine object classes over a total of ~7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and a precision loss (0.92).
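    The two ingredients named above can be sketched in a few lines. These are generic illustrations under the stated assumptions (a planar homography relating consecutive frames, and per-class detection scores treated as likelihoods), not the authors' implementation.

```python
import numpy as np

def propagate_box(box, H):
    """Warp an axis-aligned box (x1, y1, x2, y2) from frame t to frame
    t+1 with a 3x3 planar homography H, then re-fit an axis-aligned box
    around the warped corners (a region-of-interest proposal)."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1.0], [x2, y1, 1.0],
                        [x2, y2, 1.0], [x1, y2, 1.0]]).T
    proj = H @ corners
    proj = proj[:2] / proj[2]  # back from homogeneous coordinates
    return (proj[0].min(), proj[1].min(), proj[0].max(), proj[1].max())

def bayes_update(prior, likelihood):
    """One recursive Bayesian step: fuse the class belief carried over
    from previous frames with the current frame's detection scores."""
    post = prior * likelihood
    return post / post.sum()
```

    Repeating the Bayesian update across frames concentrates the class belief on the consistently detected category, which is how integrating observations over time reduces categorization entropy.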

    Analysis and Processing of Partially Polarized Signals: A Synthesis of Research Work Toward the Habilitation à Diriger des Recherches Degree

    Get PDF
    Summarizing roughly ten years of scientific activity is an opportunity to take stock of the research strategy I have pursued. From my PhD thesis in seismics to my current work in RADAR imaging and statistical optics, the common thread is taking the polarization of signals into account in their analysis and processing. My scientific motivation is to show that a rigorous analysis of polarimetric signals contributes to the development of processing adapted to these data and can help in the design of acquisition systems. The methodological developments presented aim to characterize the information contained in polarimetric data by relying on statistical tools and by taking the analysis of the physical phenomena into account. In writing this document, it seemed worthwhile to begin with an introductory chapter on polarization. In that chapter, I explain why I became interested in polarization during my doctorate on the analysis of seismic signals, and I give a brief history of polarization in optics along with the main concepts involved in analyzing polarization properties in optics and in synthetic aperture RADAR imaging. The second chapter deals with the analysis of the coherence of partially polarized light. Since 2003, this problem has motivated much work in statistical optics. When I arrived at the Institut Fresnel in November 2005, Philippe Réfrégier quickly involved me in his work on this subject. Contrary to what one might think, the coherence properties of partially polarized light have been relatively little explored.
    Indeed, even though polarimetric analysis has undergone very significant development in recent years, and the coherence of fully polarized waves has been exploited for a great many years, the combination of these two characteristics had received little study until now. The third chapter deals with the estimation of vegetation parameters in polarimetric and interferometric synthetic aperture radar imaging. This is a field in which polarization and the partial coherence of waves are exploited for an application with significant societal stakes, namely the study of biomass on a planetary scale. Since 2009, when I began working on this topic, Philippe Réfrégier, Aurélien Arnaubec, Pascale Dubois-Fernandez and I have obtained several results on characterizing the performance of this imaging technique. A polarimetric and interferometric system provides rich data that are nevertheless complex to interpret. Since this type of data became available for the environmental analysis of biomass, most studies have focused either on proposing new processing algorithms for estimating vegetation parameters, or on improving the models describing the backscattering mechanisms. As explained in the third chapter, our contribution is complementary to this work, since it consists of quantifying the precision of the estimation algorithms given the amount of information available in the data and the physical model used to describe them.
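    As a small illustration of the kind of quantity involved in analyzing partially polarized light, the degree of polarization can be computed from the 2x2 coherency (polarization) matrix J via P = sqrt(1 - 4 det(J) / tr(J)^2), where P = 0 for fully unpolarized and P = 1 for fully polarized light. This standard formula is an illustration of the topic, not a result from the manuscript.

```python
import numpy as np

def degree_of_polarization(J):
    """Degree of polarization P of a 2x2 coherency matrix J:
    P = sqrt(1 - 4 det(J) / tr(J)^2), with P in [0, 1]."""
    tr = float(np.trace(J).real)
    det = float(np.linalg.det(J).real)
    # Clamp against tiny negative values from floating-point round-off.
    return np.sqrt(max(0.0, 1.0 - 4.0 * det / tr**2))
```

    Intermediate values of P describe partially polarized light, the regime whose coherence properties the second chapter investigates.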

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    Get PDF
    This reprint focuses on applications that combine synthetic aperture radar with deep learning, aiming to further promote the development of intelligent SAR image interpretation technology. Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.