
    New algorithms for the analysis of live-cell images acquired in phase contrast microscopy

    Automated cell detection and characterization are important in many research fields such as wound healing, embryo development, immune system studies, cancer research, parasite spreading, tissue engineering, stem cell research, and drug research and testing. Studying in vitro cellular behavior via live-cell imaging and high-throughput screening involves thousands of images and vast amounts of data, so automated analysis tools relying on machine vision and on non-intrusive methods such as phase contrast microscopy (PCM) are a necessity. However, challenges remain, since PCM images are difficult to analyze because of the bright halo surrounding the cells and the blurry cell-cell boundaries where cells touch. The goal of this project was to develop image processing algorithms to analyze PCM images in an automated fashion, capable of processing large image datasets to extract information related to cellular viability and morphology. To develop these algorithms, a large dataset of myoblast images acquired by live-cell imaging (in PCM) was created by growing the cells in either a serum-supplemented (SSM) or a serum-free (SFM) medium over several passages.
As a result, algorithms capable of computing the cell-covered surface and cellular morphological features were programmed in Matlab®. The cell-covered surface was estimated using a range filter, a threshold, and a minimum cut size in order to study cellular growth kinetics. Results showed that the cells grew at similar paces in both media, but that their growth rate decreased linearly with passage number. The undecimated wavelet transform multivariate image analysis (UWT-MIA) method was developed and used to estimate distributions of cellular morphological features (major axis, minor axis, orientation, and roundness) on a very large PCM image dataset using the Gabor continuous wavelet transform. Multivariate data analysis performed on the whole database (around 1 million PCM images) showed in a quantitative manner that myoblasts grown in SFM were more elongated and smaller than cells grown in SSM. The algorithms developed through this project could be used in the future on other cellular phenotypes for high-throughput screening and cell culture control applications.
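    As an illustration, here is a minimal Python sketch of the cell-covered-surface estimation described above. The thesis implementation was in Matlab®; the window size, threshold, and minimum object size below are assumed values for demonstration, not the thesis settings.

```python
# Estimate the cell-covered surface fraction of a phase contrast image with a
# range filter, a global threshold, and a minimum object size (illustrative
# sketch; parameter values are assumptions, not from the thesis).
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from skimage.morphology import remove_small_objects

def cell_covered_fraction(img, window=5, range_thresh=0.05, min_size=64):
    """Return the fraction of pixels classified as cell-covered."""
    img = img.astype(float) / img.max()            # normalize to [0, 1]
    # Range filter: local max minus local min. Textured (cell) regions have a
    # high local intensity range, while the background is nearly flat.
    local_range = maximum_filter(img, window) - minimum_filter(img, window)
    mask = local_range > range_thresh              # threshold the range image
    mask = remove_small_objects(mask, min_size)    # drop small spurious blobs
    return mask.mean()
```

    Tracking this fraction frame by frame over a time-lapse sequence yields the growth kinetics curve from which a growth rate can be fitted.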

    A review on multiple-feature-based adaptive sparse representation (MFASR) and other classification types

    A new technique, multiple-feature-based adaptive sparse representation (MFASR), has been demonstrated for the classification of hyperspectral images (HSIs). The method proceeds in four steps. First, four different features are extracted to capture the spectral and spatial information of the original hyperspectral image. Second, a shape-adaptive (SA) spatial region is obtained around each pixel. Third, a sparse representation algorithm is applied to each shape-adaptive region to obtain a matrix of sparse coefficients over the multiple features. Finally, the class label of each test pixel is determined from the obtained coefficients. MFASR achieves much better classification results than other classifiers in both quantitative and qualitative terms because it exploits the strong correlations among the different extracted features, making effective use of the features and of adaptive sparse representation. Very high classification performance was thus achieved with the MFASR technique.
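    The last two steps can be illustrated with a simplified, single-feature sparse-representation classifier. This is a hedged sketch, not the authors' MFASR code: it omits the shape-adaptive regions and the multi-feature adaptive weighting, and the dictionary `D`, `labels`, and sparsity level are illustrative.

```python
# Sparse-representation classification of one test pixel: code the pixel
# against a dictionary of labeled training pixels, then assign the class whose
# atoms reconstruct it with the smallest residual (simplified illustration).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_label(x, D, labels, n_nonzero=5):
    """x: (bands,) test pixel; D: (bands, atoms) dictionary; labels: (atoms,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)                                  # sparse code with <= n_nonzero atoms
    alpha = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        keep = np.where(labels == c, alpha, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ keep)
    return min(residuals, key=residuals.get)       # class with the smallest residual
```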

    Tensor singular spectral analysis for 3D feature extraction in hyperspectral images.

    Due to the cubic structure of a hyperspectral image (HSI), characterizing its spectral and spatial properties in three dimensions is challenging. Conventional spectral-spatial methods usually extract spectral and spatial information separately, ignoring their intrinsic correlations. Recently, some 3D feature extraction methods have been developed to extract spectral and spatial features simultaneously, but they rely on local spatial-spectral regions and thus ignore global spectral similarity and spatial consistency. Moreover, some of these methods contain huge numbers of model parameters and therefore require large numbers of training samples. In this paper, a novel Tensor Singular Spectral Analysis (TensorSSA) method is proposed to extract global and low-rank features of an HSI. In TensorSSA, an adaptive embedding operation is first proposed to construct a trajectory tensor corresponding to the entire HSI, which takes full advantage of spatial similarity and enables an adequate representation of the global low-rank properties of the HSI. The obtained trajectory tensor, which contains the global and local spatial and spectral information of the HSI, is then decomposed by the tensor singular value decomposition (t-SVD) to explore its low-rank intrinsic features. Finally, the efficacy of the extracted features is evaluated using the accuracy of image classification with a support vector machine (SVM) classifier. Experimental results on three publicly available datasets fully demonstrate the superiority of the proposed TensorSSA over several state-of-the-art 2D/3D feature extraction and deep learning algorithms, even with a limited number of training samples.
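    To make the decomposition step concrete, here is a minimal Python sketch of a truncated t-SVD (FFT along the third mode, per-slice matrix SVD, inverse FFT). The trajectory-tensor construction and the choice of rank `k` are assumptions for illustration, not the paper's implementation.

```python
# Rank-k t-SVD approximation of a 3-way tensor: transform the third mode to
# the Fourier domain, truncate the SVD of each frontal slice, transform back.
import numpy as np

def tsvd_lowrank(A, k):
    """A: real tensor of shape (n1, n2, n3); returns its rank-k t-SVD approximation."""
    Af = np.fft.fft(A, axis=2)                     # third mode -> Fourier domain
    out = np.empty_like(Af)
    for i in range(A.shape[2]):                    # SVD of each frontal slice
        U, s, Vt = np.linalg.svd(Af[:, :, i], full_matrices=False)
        out[:, :, i] = (U[:, :k] * s[:k]) @ Vt[:k] # keep the top-k singular triplets
    return np.fft.ifft(out, axis=2).real           # back to the original domain
```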

    Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods

    Hyperspectral images show statistical properties similar to those of natural grayscale or color photographic images. However, the classification of hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning. These peculiarities lead to particular signal processing problems, mainly characterized by indetermination and complex manifolds. The framework of statistical learning has gained popularity in the last decade. New methods have been presented to account for the spatial homogeneity of images, to include user interaction via active learning, to take advantage of the manifold structure with semisupervised learning, to extract and encode invariances, or to adapt classifiers and image representations to unseen yet similar scenes. This tutorial reviews the main advances in hyperspectral remote sensing image classification through illustrative examples. Comment: IEEE Signal Processing Magazine, 201

    Sparse Coding Based Feature Representation Method for Remote Sensing Images

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary from sub-bands extracted from the spectral representation of a pixel. In the encoding step, we use a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary can be as effective as a dictionary learned by optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources, and it does not include the spatial information of the HSI data. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernel SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP), and image fusion and recursive filtering (IFRF). The results showed that the proposed method achieves better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further verify the power of the new feature representation method, we applied it to a pan-sharpened image to detect seafloor scars in shallow waters. Propeller scars are formed when boat propellers strike and break apart seagrass beds, resulting in habitat loss. We developed a robust identification system by incorporating morphological filters to detect and map the scars. Our results showed that the proposed method can be used on a regular basis to monitor changes in the habitat characteristics of coastal waters.
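    A minimal sketch of the soft-threshold encoding step described above, assuming a random unit-norm dictionary and an illustrative threshold `lam` (this is not the dissertation's code):

```python
# Soft-threshold sparse encoding: project a pixel's spectrum onto a dictionary
# and shrink small correlations to zero, avoiding any convex optimization at
# encode time (illustrative sketch).
import numpy as np

def soft_threshold_encode(x, D, lam=0.1):
    """x: (bands,) pixel spectrum; D: (bands, atoms) dictionary."""
    z = D.T @ x                                            # correlations with atoms
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)   # shrink toward zero

# A randomly selected dictionary can be as effective as a learned one here:
rng = np.random.default_rng(0)
D = rng.standard_normal((200, 64))
D /= np.linalg.norm(D, axis=0)                             # unit-norm atoms
feature = soft_threshold_encode(rng.standard_normal(200), D)
```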

    Automatic detection of geospatial objects using multiple hierarchical segmentations

    The object-based analysis of remotely sensed imagery provides valuable spatial and structural information that is complementary to pixel-based spectral information in classification. In this paper, we present novel methods for automatic object detection in high-resolution images by combining spectral information with structural information exploited through image segmentation. The proposed segmentation algorithm applies morphological operations to individual spectral bands using structuring elements of increasing sizes. These operations produce a set of connected components forming a hierarchy of segments for each band. A generic algorithm is designed to select meaningful segments that maximize a measure combining spectral homogeneity and neighborhood connectivity. Given the observation that different structures appear more clearly at different scales in different spectral bands, we describe a new algorithm for unsupervised grouping of candidate segments belonging to multiple hierarchical segmentations to find coherent sets of segments that correspond to actual objects. The segments are modeled using their spectral and textural content, and the grouping problem is solved with the probabilistic latent semantic analysis algorithm, which builds object models by learning the object-conditional probability distributions. A segment is automatically labeled by computing the similarity of its feature distribution to the distributions of the learned object models using the Kullback–Leibler divergence. The performance of the unsupervised segmentation and object detection algorithms is evaluated qualitatively and quantitatively on three different data sets with comparative experiments, and the results show that the proposed methods are able to automatically detect, group, and label segments belonging to the same object classes.
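    A hedged sketch of the first stage, per band, assuming morphological closings and Otsu thresholding as stand-ins for the paper's exact operations (the structuring-element radii are illustrative):

```python
# One level of a segment hierarchy per structuring-element size: smooth a
# spectral band with a morphological closing, binarize it, and split it into
# connected components (illustrative stand-in for the paper's pipeline).
import numpy as np
from skimage.morphology import closing, disk
from skimage.measure import label
from skimage.filters import threshold_otsu

def band_hierarchy(band, radii=(1, 2, 4, 8)):
    """Return a list of labeled segment maps, one per structuring-element size."""
    levels = []
    for r in radii:
        smoothed = closing(band, disk(r))           # larger SE -> coarser structures
        mask = smoothed > threshold_otsu(smoothed)  # binarize the smoothed band
        levels.append(label(mask))                  # connected components = segments
    return levels
```

    Running this over every band yields the multiple hierarchical segmentations from which meaningful segments are then selected and grouped.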

    Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery

    Deep-learning frameworks have made remarkable progress thanks to the creation of large annotated datasets such as ImageNet, which has over one million training images. Although this works well for color (RGB) imagery, labeled datasets for other sensor modalities (e.g., multispectral and hyperspectral) are minuscule in comparison. This is because annotated datasets are expensive and labor-intensive to complete; and since this would be impractical to accomplish for each type of sensor, current state-of-the-art approaches in computer vision are not ideal for remote sensing problems. The shortage of annotated remote sensing imagery beyond the visual spectrum has forced researchers to embrace unsupervised feature extracting frameworks. These features are learned on a per-image basis, so they tend not to generalize well across other datasets. In this dissertation, we propose three new strategies for learning feature extracting frameworks with only a small quantity of annotated image data: 1) self-taught feature learning, 2) domain adaptation with synthetic imagery, and 3) semi-supervised classification. "Self-taught" feature learning frameworks are trained with large quantities of unlabeled imagery, and these networks then extract spatial-spectral features from annotated data for supervised classification. Synthetic remote sensing imagery can be used to bootstrap a deep convolutional neural network, which is then fine-tuned with real imagery. Semi-supervised classifiers prevent overfitting by jointly optimizing the supervised classification task alongside one or more unsupervised learning tasks (i.e., reconstruction). Although obtaining large quantities of annotated image data would be ideal, our work shows that we can make do with less cost-prohibitive methods that are more practical for the end-user.
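    A minimal sketch of the first strategy (self-taught feature learning), using off-the-shelf scikit-learn components as stand-ins for the dissertation's networks; all sizes, models, and the synthetic data here are illustrative assumptions:

```python
# Self-taught learning in two stages: (1) learn a feature coder from plentiful
# unlabeled spectra, (2) encode the few labeled spectra and train a supervised
# classifier on the resulting codes (illustrative sketch only).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_unlabeled = rng.standard_normal((2000, 100))     # many unlabeled spectra
X_labeled = rng.standard_normal((40, 100))         # only a few annotated spectra
y_labeled = rng.integers(0, 2, 40)

coder = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="lasso_lars")
coder.fit(X_unlabeled)                             # unsupervised feature learning
clf = LinearSVC().fit(coder.transform(X_labeled), y_labeled)  # supervised step
```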