
    Sketch-based subspace clustering of hyperspectral images

    Sparse subspace clustering (SSC) techniques provide the state of the art in clustering of hyperspectral images (HSIs). However, their computational complexity hinders their applicability to large-scale HSIs. In this paper, we propose a large-scale SSC-based method that can effectively process large HSIs while also achieving improved clustering accuracy compared to current SSC methods. We build our approach on the emerging concept of sketched subspace clustering, which, to our knowledge, has not previously been explored in hyperspectral imaging; moreover, results on large-scale SSC approaches for HSI of any kind are scarce. We show that a direct application of sketched SSC does not perform satisfactorily on HSIs, but that it provides an excellent basis for an effective and elegant method, which we build by extending this approach with a spatial prior and deriving the corresponding solver. In particular, a random matrix constructed by the Johnson-Lindenstrauss transform is first used to sketch the self-representation dictionary into a compact dictionary, which significantly reduces the number of sparse coefficients to be solved and thereby the overall complexity. To alleviate the effect of noise and within-class spectral variations in HSIs, we impose a total variation constraint on the coefficient matrix, which accounts for the spatial dependencies among neighbouring pixels. We derive an efficient solver for the resulting optimization problem and prove its convergence under mild conditions. The experimental results on real HSIs show a notable improvement over the traditional SSC-based methods and the state-of-the-art methods for clustering of large-scale images.
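    As a rough illustration of the sketching step, the following numpy sketch draws a Gaussian Johnson-Lindenstrauss matrix and compresses the self-representation dictionary. The function name, the Gaussian construction, and the toy sizes are our own assumptions, not the authors' implementation; the TV-regularized solver itself is only indicated in a comment.

```python
import numpy as np

def sketch_dictionary(X, m, seed=0):
    """Compress the self-representation dictionary X (bands x pixels)
    to bands x m with a Gaussian Johnson-Lindenstrauss matrix.
    A Gaussian draw is one standard JL construction; the paper's
    exact transform and scaling may differ."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)
    return X @ R

# With the compact dictionary D, self-representation solves for an
# m x n coefficient matrix C instead of n x n:
#     min_C ||X - D C||_F^2 + lam * ||C||_1 + beta * TV(C)
# which reduces the number of sparse coefficients from n^2 to m*n.
d, n, m = 200, 10000, 500      # bands, pixels, sketch size (toy numbers)
X = np.random.default_rng(1).random((d, n))
D = sketch_dictionary(X, m)
print(D.shape)                  # (200, 500)
```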

    Simplified Energy Landscape for Modularity Using Total Variation

    Networks capture pairwise interactions between entities and are frequently used in applications such as social networks, food networks, and protein interaction networks, to name a few. Communities, cohesive groups of nodes, often form in these applications, and identifying them gives insight into the overall organization of the network. One common quality function used to identify community structure is modularity. In Hu et al. [SIAM J. Appl. Math., 73(6), 2013], it was shown that modularity optimization is equivalent to minimizing a particular nonconvex total variation (TV) based functional over a discrete domain. They solve this problem, assuming the number of communities is known, using a Merriman-Bence-Osher (MBO) scheme. We show that modularity optimization is equivalent to minimizing a convex TV-based functional over a discrete domain, again assuming the number of communities is known. Furthermore, we show that modularity has no convex relaxation satisfying certain natural conditions. We therefore find a manageable non-convex approximation using a Ginzburg-Landau functional, which provably converges to the correct energy in the limit of a certain parameter. We then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et al. that is 7 times faster at solving the associated diffusion equation, because the underlying discretization is unconditionally stable. Our numerical tests include a hyperspectral video whose associated graph has 2.9 x 10^7 edges, roughly 37 times larger than the graphs handled by Hu et al.
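    For intuition, here is a minimal, hypothetical graph MBO iteration: a few explicit diffusion steps with the graph Laplacian, followed by thresholding each node to its nearest pure community. The paper's scheme differs in both respects (its discretization is unconditionally stable, and the modularity balance term is omitted here), so this is a toy, not the authors' algorithm.

```python
import numpy as np

def mbo_step(U, L, dt, substeps=5):
    """One graph MBO iteration: short diffusion, then threshold.
    U is an n x k one-hot assignment matrix, L the graph Laplacian."""
    for _ in range(substeps):
        U = U - (dt / substeps) * (L @ U)   # approximate diffusion e^{-dt L} U
    hard = np.zeros_like(U)
    hard[np.arange(U.shape[0]), U.argmax(axis=1)] = 1.0  # threshold
    return hard

# Toy graph: two triangles joined by a single edge between nodes 2 and 3.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
U = np.eye(2)[[0, 1, 0, 1, 0, 1]]     # deliberately scrambled initial labels
for _ in range(10):
    U = mbo_step(U, L, dt=0.5)
print(U.argmax(axis=1))                # recovers the two triangles: [0 0 0 1 1 1]
```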

    Sparse Representation of High Dimensional Data for Classification

    In this thesis we propose the use of sparse Principal Component Analysis (PCA) for representing high dimensional data for classification. A sparse transformation reduces the data volume/dimensionality without loss of critical information, so that the data can be processed efficiently and assimilated by a human. We obtained sparse representations of high dimensional datasets using Sparse Principal Component Analysis (SPCA) and the Direct formulation of Sparse Principal Component Analysis (DSPCA). We then performed classification using the k-Nearest Neighbor (kNN) method and compared the results with regular PCA. The experiments were performed on hyperspectral data and various datasets obtained from the University of California, Irvine (UCI) machine learning repository. The results suggest that sparse data representation is desirable because it enhances interpretability. It also improves classification performance with a certain number of features, and in most cases classification performance is similar to regular PCA.
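    The pipeline can be reproduced in spirit with scikit-learn, as in the sketch below. Note that sklearn's SparsePCA is a dictionary-learning formulation that is not identical to the SPCA/DSPCA algorithms used in the thesis, and the digits dataset merely stands in for the hyperspectral and UCI data.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, SparsePCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)            # stand-in for HSI/UCI data
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, model in [("PCA", PCA(n_components=10)),
                    ("SPCA", SparsePCA(n_components=10, alpha=1.0,
                                       random_state=0))]:
    Ztr = model.fit_transform(Xtr)             # project onto 10 components
    Zte = model.transform(Xte)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Ztr, ytr)
    print(name, "accuracy:", knn.score(Zte, yte))
```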

    K-Deep Simplex: Deep Manifold Learning via Local Dictionaries

    We propose K-Deep Simplex (KDS), a unified optimization framework for nonlinear dimensionality reduction that combines the strengths of manifold learning and sparse dictionary learning. Our approach learns local dictionaries that represent each data point with reconstruction coefficients supported on the probability simplex. The dictionaries are learned using algorithm unrolling, an increasingly popular technique for structured deep learning. KDS enjoys tremendous computational advantages over related approaches and is both interpretable and flexible. In particular, KDS is quasilinear in the number of data points, with scaling that depends on intrinsic geometric properties of the data. We apply KDS to the unsupervised clustering problem and prove theoretical performance guarantees. Experiments show that the algorithm is highly efficient and performs competitively on synthetic and real data sets.
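    The key constraint, reconstruction coefficients on the probability simplex, can be illustrated with the standard Euclidean simplex projection (the sorting algorithm of Duchi et al., 2008) and a toy projected-gradient solver. KDS itself unrolls such iterations into trainable network layers with learned dictionaries, which this hypothetical sketch does not reproduce.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    cond = u - css / ind > 0
    theta = css[cond][-1] / ind[cond][-1]
    return np.maximum(v - theta, 0.0)

def simplex_coefficients(x, D, steps=100):
    """Toy projected-gradient reconstruction of x from a local
    dictionary D, with coefficients constrained to the simplex."""
    lr = 1.0 / np.linalg.norm(D, 2) ** 2      # step from spectral norm
    a = np.full(D.shape[1], 1.0 / D.shape[1])
    for _ in range(steps):
        a = project_simplex(a - lr * (D.T @ (D @ a - x)))
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((5, 8))
x = D @ project_simplex(rng.standard_normal(8))
a = simplex_coefficients(x, D)
print(a.sum(), np.linalg.norm(D @ a - x))     # ~1.0, small residual
```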

    Low-Rank Matrices on Graphs: Generalized Recovery & Applications

    Many real world datasets possess a linear or non-linear low-rank structure in a very low-dimensional space. Unfortunately, one often has little or no information about the geometry of that space, resulting in a highly under-determined recovery problem. Under certain circumstances, state-of-the-art algorithms provide exact recovery for linear low-rank structures, but at the expense of poorly scalable algorithms based on the nuclear norm. The case of non-linear structures, however, remains unresolved. We revisit the problem of low-rank recovery from a different perspective, involving graphs which encode pairwise similarity between the data samples and features. Surprisingly, our analysis confirms that it is possible to recover many approximate linear and non-linear low-rank structures, with recovery guarantees, using a set of highly scalable and efficient algorithms. We call such data matrices \textit{low-rank matrices on graphs} and show that many real world datasets satisfy this assumption approximately due to underlying stationarity. Our detailed theoretical and experimental analysis unveils the power of the simple yet novel recovery framework \textit{Fast Robust PCA on Graphs}.
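    A toy version of the idea, in which graph Dirichlet-energy penalties on row and column graphs replace the nuclear norm, might look as follows. The quadratic data term and plain gradient descent are our simplifications: the published Fast Robust PCA on Graphs uses a robust data term and proximal splitting, and real row/column graphs would come from kNN similarities rather than the path graph used here.

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of a path graph; a stand-in for a kNN similarity graph."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def graph_lowrank(X, Lr, Lc, g1=1.0, g2=1.0, steps=500):
    """Gradient descent on
        min_L ||X - L||_F^2 + g1*tr(L^T Lr L) + g2*tr(L Lc L^T),
    a smooth surrogate in which graph smoothness replaces the nuclear norm."""
    L = X.copy()
    step = 1.0 / (2 + 2 * g1 * np.linalg.norm(Lr, 2)
                    + 2 * g2 * np.linalg.norm(Lc, 2))   # Lipschitz bound
    for _ in range(steps):
        grad = 2 * (L - X) + 2 * g1 * (Lr @ L) + 2 * g2 * (L @ Lc)
        L -= step * grad
    return L

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
X0 = np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t))  # smooth, rank 1
X = X0 + 0.3 * rng.standard_normal(X0.shape)                 # noisy observation
Lhat = graph_lowrank(X, path_laplacian(40), path_laplacian(40), g1=2, g2=2)
print(np.linalg.norm(Lhat - X0) / np.linalg.norm(X0))        # relative error
```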