    Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery

    PCA is one of the most widely used dimension reduction techniques. A related, easier problem is "subspace learning" or "subspace estimation". Given relatively clean data, both are easily solved via singular value decomposition (SVD). The problem of subspace learning or PCA in the presence of outliers is called robust subspace learning or robust PCA (RPCA). For long data sequences, if one tries to use a single lower-dimensional subspace to represent the data, the required subspace dimension may end up being quite large. For such data, a better model is to assume that it lies in a low-dimensional subspace that can change over time, albeit gradually. The problem of tracking such data (and the subspaces) while being robust to outliers is called robust subspace tracking (RST). This article provides a magazine-style overview of the entire field of robust subspace learning and tracking. In particular, solutions for three problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition (S+LR), RST via S+LR, and "robust subspace recovery" (RSR). RSR assumes that an entire data vector is either an outlier or an inlier. The S+LR formulation instead assumes that outliers occur on only a few data vector indices and hence are well modeled as sparse corruptions. Comment: To appear, IEEE Signal Processing Magazine, July 2018
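
    The S+LR formulation summarized above is most commonly posed as Principal Component Pursuit: decompose the data matrix M = L + S by minimizing ||L||_* + lambda ||S||_1. The sketch below, assuming NumPy, is a generic inexact-ALM/ADMM solver for that convex program; it illustrates the idea only, is not any specific algorithm from the article, and its default lambda and mu are conventional choices rather than prescriptions.

    # Sketch: robust PCA via Principal Component Pursuit (PCP), solved with a
    # generic inexact-ALM / ADMM scheme. Splits a data matrix M into a low-rank
    # part L and a sparse-outlier part S. Illustration of the S+LR idea only,
    # not any specific algorithm from the article.
    import numpy as np

    def shrink(X, tau):
        """Soft thresholding: proximal operator of the l1 norm."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_shrink(X, tau):
        """Singular value thresholding: proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(shrink(s, tau)) @ Vt

    def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
        m, n = M.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))         # standard PCP weight
        mu = mu or (m * n) / (4.0 * np.abs(M).sum())  # penalty parameter
        S = np.zeros_like(M)
        Y = np.zeros_like(M)                          # dual variable
        for _ in range(max_iter):
            L = svd_shrink(M - S + Y / mu, 1.0 / mu)  # low-rank update
            S = shrink(M - L + Y / mu, lam / mu)      # sparse update
            R = M - L - S                             # residual
            Y = Y + mu * R
            if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                break
        return L, S

    # Usage: recover a rank-2 matrix corrupted by 5% sparse outliers.
    rng = np.random.default_rng(0)
    L_true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 80))
    S_true = (rng.random((100, 80)) < 0.05) * 10.0
    L_hat, S_hat = robust_pca(L_true + S_true)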

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
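
    Under the linear mixing model discussed in the overview, each pixel spectrum y is modeled as approximately E a + n, where the columns of E hold the endmember signatures and the abundance vector a is nonnegative and sums to one. The sketch below, assuming NumPy/SciPy and already-known endmembers, illustrates fully constrained least-squares abundance estimation for a single pixel; it is a generic example, not a particular algorithm from the survey.

    # Sketch: abundance estimation under the linear mixing model, with
    # nonnegativity and sum-to-one constraints (fully constrained least squares).
    # Assumes the endmember matrix E is already known or estimated.
    import numpy as np
    from scipy.optimize import minimize

    def unmix_pixel(y, E):
        """Estimate abundances a >= 0 with sum(a) = 1 minimizing ||y - E @ a||^2."""
        p = E.shape[1]                       # number of endmembers
        a0 = np.full(p, 1.0 / p)             # start from uniform abundances
        res = minimize(
            lambda a: 0.5 * np.sum((y - E @ a) ** 2),
            a0,
            jac=lambda a: E.T @ (E @ a - y),
            bounds=[(0.0, 1.0)] * p,
            constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
            method="SLSQP",
        )
        return res.x

    # Usage: a synthetic pixel mixing three hypothetical endmembers.
    rng = np.random.default_rng(1)
    E = np.abs(rng.standard_normal((200, 3)))   # 200 bands, 3 endmembers
    a_true = np.array([0.6, 0.3, 0.1])
    y = E @ a_true + 0.01 * rng.standard_normal(200)
    a_hat = unmix_pixel(y, E)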

    Supervised Classification: Quite a Brief Overview

    The original problem of supervised classification considers the task of automatically assigning objects to their respective classes on the basis of numerical measurements derived from these objects. Classifiers are the tools that implement the actual functional mapping from these measurements---also called features or inputs---to the so-called class label---or output. The fields of pattern recognition and machine learning study ways of constructing such classifiers. The main idea behind supervised methods is that of learning from examples: given a number of example input-output relations, to what extent can the general mapping be learned that takes any new and unseen feature vector to its correct class? This chapter provides a basic introduction to the underlying ideas of how to approach a supervised classification problem. In addition, it provides an overview of some specific classification techniques, delves into the issues of object representation and classifier evaluation, and (very) briefly covers some variations on the basic supervised classification task that may also be of interest to the practitioner.
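
    The learning-from-examples workflow outlined above (fit a mapping from feature vectors to class labels on example input-output pairs, then evaluate it on held-out data) can be illustrated in a few lines; the sketch below uses scikit-learn with synthetic data and logistic regression purely as stand-ins, not as techniques prescribed by the chapter.

    # Sketch: the basic supervised classification workflow of learning from
    # examples and evaluating the learned mapping on unseen data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Numerical measurements (features) paired with class labels.
    X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                               n_classes=3, random_state=0)

    # Hold out unseen examples to estimate how well the mapping generalizes.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=1000)  # any classifier could stand in here
    clf.fit(X_train, y_train)                # learn the feature -> label mapping
    y_pred = clf.predict(X_test)             # assign classes to new feature vectors
    print("test accuracy:", accuracy_score(y_test, y_pred))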

    Lettuce growth stage identification based on phytomorphological variations using coupled color superpixels and multifold watershed transformation

    Identifying a plant's developmental growth stages from the seed leaf onward is crucial to a deep understanding of plant science and cultivation management. An efficient vision-based system for plant growth monitoring entails optimum segmentation and classification algorithms. This study presents coupled color-based superpixels and multifold watershed transformation for segmenting lettuce plants from the complicated background of a smart-farm aquaponic system, and machine learning models for classifying lettuce growth as vegetative, head development, or ready for harvest based on the phytomorphological profile. Morphological computations were employed by extracting the number of leaves, biomass area and perimeter, convex area, convex hull area and perimeter, major and minor axis lengths of the dominant leaf, and length of the plant skeleton. Phytomorphological variations in biomass compactness, convexity, solidity, plant skeleton, and perimeter ratio were included as inputs to the classification network. The Lab color space information extracted from the training image set was overlaid with 1,000 superpixel regions obtained by K-means clustering on each pixel class. A six-level watershed transformation with distance transformation and minima imposition was employed to segment the lettuce plant from other pixel objects. The accuracies of correctly classifying the vegetative, head development, and harvest growth stages are 88.89%, 86.67%, and 79.63%, respectively. The experiment shows that the test accuracy rates of the machine learning models were 60% for LDA, 85% for ANN, and 88.33% for QSVM. Comparative analysis showed that QSVM bested optimized LDA and ANN in classifying lettuce growth stages. This research developed a seamless model for segmenting vegetation pixels and predicting lettuce growth stage, which is essential for plant computational phenotyping and agricultural practice optimization.
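
    The kind of pipeline described above (superpixels computed by K-means in Lab color space, watershed segmentation over a distance transform, then morphological feature extraction feeding a classifier) is sketched below with scikit-image; all parameter values, the greenness rule, and the feature set are illustrative assumptions, not the study's actual settings.

    # Sketch: superpixel + watershed segmentation of a plant image, followed by
    # morphological feature extraction for growth-stage classification.
    # Thresholds and parameters are placeholders, not the study's settings.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import color, feature, measure, segmentation

    def segment_plant(rgb):
        lab = color.rgb2lab(rgb)
        # K-means-based superpixels (SLIC); skimage works in Lab space for RGB input.
        sp = segmentation.slic(rgb, n_segments=1000, compactness=10, start_label=1)
        # Crude vegetation mask from the a* channel (greenness); placeholder rule.
        green = lab[..., 1] < -5
        mask = np.zeros(green.shape, dtype=bool)
        for r in measure.regionprops(sp, intensity_image=green.astype(float)):
            if r.mean_intensity > 0.5:                 # mostly-green superpixel
                mask[sp == r.label] = True
        # Watershed on the distance transform to split touching leaf regions.
        dist = ndi.distance_transform_edt(mask)
        peaks = feature.peak_local_max(dist, min_distance=20, labels=mask)
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return segmentation.watershed(-dist, markers, mask=mask)

    def phytomorphological_features(labels):
        # Per-region descriptors akin to biomass area, perimeter, solidity, axes.
        return np.array([[r.area, r.perimeter, r.solidity,
                          r.major_axis_length, r.minor_axis_length]
                         for r in measure.regionprops(labels)])

    # The resulting feature vectors would then be fed to a classifier, e.g.
    # sklearn.svm.SVC with a polynomial kernel as a stand-in for QSVM.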