
    Incremental Linear Discriminant Analysis for Classification of Data Streams

    This paper presents a constructive method for deriving an updated discriminant eigenspace for classification when bursts of data that contain new classes are added to an initial discriminant eigenspace in the form of random chunks. Specifically, we propose an incremental linear discriminant analysis (ILDA) in two forms: a sequential ILDA and a chunk ILDA. In experiments, we have tested ILDA on datasets with a small number of classes and low-dimensional features, as well as datasets with a large number of classes and high-dimensional features. We have compared the proposed ILDA against traditional batch LDA in terms of discriminability, execution time and memory usage as the volume of added data grows. The results show that the proposed ILDA can effectively evolve a discriminant eigenspace over a fast and large data stream, and extract features with superior discriminability in classification compared with other methods. © 2005 IEEE
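    The chunk-wise update can be illustrated with a simplified sufficient-statistics scheme: keep per-class counts, sums, and second moments, and rebuild the scatter matrices and the discriminant eigenspace after each chunk. The sketch below is a hedged Python illustration of that idea, not the paper's exact sequential or chunk ILDA update formulas; the class name ChunkLDA and its methods are hypothetical.

```python
# Minimal sketch of chunk-wise LDA via sufficient statistics. This is NOT the
# authors' ILDA derivation; it simply accumulates per-class counts, sums, and
# second moments so the discriminant eigenspace can be recomputed after each
# chunk without storing the raw data.
import numpy as np


class ChunkLDA:
    def __init__(self, n_features):
        self.d = n_features
        self.counts = {}    # class label -> sample count
        self.sums = {}      # class label -> feature sum
        self.sq_sums = {}   # class label -> sum of outer products x x^T

    def partial_fit(self, X, y):
        """Absorb one chunk of data; new class labels are added on the fly."""
        for label in np.unique(y):
            Xc = X[y == label]
            if label not in self.counts:
                self.counts[label] = 0
                self.sums[label] = np.zeros(self.d)
                self.sq_sums[label] = np.zeros((self.d, self.d))
            self.counts[label] += Xc.shape[0]
            self.sums[label] += Xc.sum(axis=0)
            self.sq_sums[label] += Xc.T @ Xc

    def eigenspace(self, n_components):
        """Recompute the discriminant eigenspace from the stored statistics."""
        n_total = sum(self.counts.values())
        global_mean = sum(self.sums.values()) / n_total
        Sw = np.zeros((self.d, self.d))  # within-class scatter
        Sb = np.zeros((self.d, self.d))  # between-class scatter
        for label, n_c in self.counts.items():
            mean_c = self.sums[label] / n_c
            # sum_i (x_i - m_c)(x_i - m_c)^T = sum_i x_i x_i^T - n_c m_c m_c^T
            Sw += self.sq_sums[label] - n_c * np.outer(mean_c, mean_c)
            diff = mean_c - global_mean
            Sb += n_c * np.outer(diff, diff)
        # Solve the generalized eigenproblem Sb w = lambda Sw w
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(evals.real)[::-1][:n_components]
        return evecs[:, order].real
```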

    Face recognition via incremental 2DPCA

    Recently, the Two-Dimensional Principal Component Analysis (2DPCA) model was proposed and shown to be an efficient approach for face recognition. In this paper, we investigate incremental 2DPCA and develop a new constructive method for incrementally adding observations to the existing eigenspace model. An explicit formula for incremental learning is derived. To illustrate the effectiveness of the proposed approach, we performed typical experiments and show that only the eigenspace of previous images needs to be kept in the face recognition process, while the raw images can be discarded. Furthermore, the proposed incremental approach is faster than the batch method (2DPCA), and its recognition rate and reconstruction accuracy are as good as those obtained by the batch method.
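    A hedged illustration of the incremental idea: because 2DPCA only needs the n x n image covariance G = (1/N) sum_i (A_i - mean)^T (A_i - mean), one can maintain running sums of the images and of A^T A and refresh the eigenvectors on demand, never retaining the raw images. The Python sketch below follows this sufficient-statistics route; it is not the paper's explicit update formula, and the class and method names are illustrative.

```python
# Minimal sketch of incremental 2DPCA: keep running sums of the images and of
# A^T A, so the n x n image covariance (and its eigenvectors) can be refreshed
# after each new observation without storing the raw images.
import numpy as np


class Incremental2DPCA:
    def __init__(self, n_cols):
        self.n = 0
        self.sum_images = None                      # running sum of images (m x n)
        self.sum_ata = np.zeros((n_cols, n_cols))   # running sum of A^T A

    def add(self, A):
        """Absorb one m x n image matrix A."""
        if self.sum_images is None:
            self.sum_images = np.zeros_like(A, dtype=float)
        self.n += 1
        self.sum_images += A
        self.sum_ata += A.T @ A

    def components(self, k):
        """Top-k eigenvectors of the image covariance G."""
        mean_image = self.sum_images / self.n
        # G = (1/N) * sum_i (A_i - mean)^T (A_i - mean)
        #   = (1/N) * sum_i A_i^T A_i  -  mean^T mean
        G = self.sum_ata / self.n - mean_image.T @ mean_image
        evals, evecs = np.linalg.eigh(G)            # ascending eigenvalues
        return evecs[:, ::-1][:, :k]                # keep the k largest

    def project(self, A, k):
        """Feature matrix Y = A X, with X the top-k projection axes."""
        return A @ self.components(k)
```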

    Data reduction for spectral clustering to analyze high throughput flow cytometry data

    Background: Recent biological discoveries have shown that clustering large datasets is essential for better understanding biology in many areas. Spectral clustering in particular has proven to be a powerful tool amenable to many applications. However, it cannot be directly applied to large datasets due to time and memory limitations. To address this issue, we have modified spectral clustering by adding an information-preserving sampling procedure and applying a post-processing stage. We call this entire algorithm SamSPECTRAL.

    Results: We tested our algorithm on flow cytometry data as an example of large, multidimensional data containing potentially hundreds of thousands of data points (i.e., events in flow cytometry, typically corresponding to cells). Compared to two state-of-the-art model-based flow cytometry clustering methods, SamSPECTRAL demonstrates significant advantages in proper identification of populations with non-elliptical shapes, low-density populations close to dense ones, minor subpopulations of a major population, and rare populations.

    Conclusions: This work is the first successful attempt to apply spectral methodology to flow cytometry data. An implementation of our algorithm as an R package is freely available through BioConductor. © 2010 Zare et al; licensee BioMed Central Ltd
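    The overall pipeline can be sketched as: reduce the data to a small set of representatives, run spectral clustering on the representatives only, then propagate labels back to all events. The Python snippet below is a rough stand-in, assuming k-means centroids for the representatives and skipping the paper's faithful (density-preserving) sampling and its component-combining post-processing; the function and parameter names are illustrative.

```python
# Rough sketch of the sample-then-cluster idea behind SamSPECTRAL: cluster a
# small representative set spectrally, then propagate labels to the full data
# by nearest representative. Plain k-means centroids stand in for the paper's
# representatives; its post-processing stage is omitted.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.neighbors import NearestNeighbors


def sample_then_spectral(X, n_reps=500, n_clusters=8, random_state=0):
    # 1) Data reduction: pick representative points (assumption: k-means
    #    centroids approximate the paper's sampled representatives).
    reps = KMeans(n_clusters=n_reps, n_init=4,
                  random_state=random_state).fit(X).cluster_centers_

    # 2) Spectral clustering on the small representative set only.
    rep_labels = SpectralClustering(n_clusters=n_clusters,
                                    affinity="rbf",
                                    random_state=random_state).fit_predict(reps)

    # 3) Propagate: each original point inherits its nearest representative's label.
    nn = NearestNeighbors(n_neighbors=1).fit(reps)
    _, idx = nn.kneighbors(X)
    return rep_labels[idx.ravel()]
```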

    Spectrally approximating large graphs with smaller graphs

    How does coarsening affect the spectrum of a general graph? We provide conditions under which the principal eigenvalues and eigenspaces of the coarsened and original graph Laplacian matrices are close. The achieved approximation is shown to depend on standard graph-theoretic properties, such as the degree and eigenvalue distributions, as well as on the ratio between the coarsened and actual graph sizes. Our results carry implications for learning methods that utilize coarsening. For the particular case of spectral clustering, they imply that coarse eigenvectors can be used to derive good-quality assignments even without refinement; this phenomenon was previously observed, but lacked formal justification.
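    As a toy illustration of coarsening-based spectral clustering, the sketch below contracts a maximal edge matching to obtain a smaller graph, computes Laplacian eigenvectors there, lifts them back through the coarsening map, and clusters the lifted embedding with k-means. This is an assumed, simplified coarsening scheme for illustration, not one of the constructions analyzed in the paper.

```python
# Toy coarsening-based spectral clustering: contract a maximal matching to
# build a smaller graph, compute Laplacian eigenvectors there, lift them back
# through the node -> supernode map, and cluster the lifted embedding.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans


def coarsen_once(G):
    """Contract a maximal matching; return the coarse graph and node->supernode map."""
    matching = nx.maximal_matching(G)
    mapping, coarse_id = {}, 0
    for u, v in matching:
        mapping[u] = mapping[v] = coarse_id
        coarse_id += 1
    for node in G.nodes:                     # unmatched nodes become their own supernode
        if node not in mapping:
            mapping[node] = coarse_id
            coarse_id += 1
    Gc = nx.Graph()
    Gc.add_nodes_from(range(coarse_id))
    for u, v, data in G.edges(data=True):    # accumulate edge weights between supernodes
        cu, cv = mapping[u], mapping[v]
        if cu != cv:
            w = data.get("weight", 1.0)
            if Gc.has_edge(cu, cv):
                Gc[cu][cv]["weight"] += w
            else:
                Gc.add_edge(cu, cv, weight=w)
    return Gc, mapping


def coarse_spectral_clustering(G, k):
    Gc, mapping = coarsen_once(G)
    Lc = nx.laplacian_matrix(Gc, weight="weight").toarray().astype(float)
    evals, evecs = np.linalg.eigh(Lc)
    Uc = evecs[:, 1:k + 1]                          # skip the trivial eigenvector
    nodes = list(G.nodes)
    U = np.array([Uc[mapping[n]] for n in nodes])   # lift coarse eigenvectors
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(U)
    return dict(zip(nodes, labels))
```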