11 research outputs found

    Subspace Clustering via Optimal Direction Search

    This letter presents a new spectral-clustering-based approach to the subspace clustering problem. Underpinning the proposed method is a convex program for optimal direction search, which for each data point d finds an optimal direction in the span of the data that has minimum projection on the other data points and a non-vanishing projection on d. The obtained directions are subsequently leveraged to identify a neighborhood set for each data point. An alternating direction method of multipliers (ADMM) framework is provided to efficiently solve for the optimal directions. The proposed method is shown to notably outperform existing subspace clustering methods, particularly in challenging scenarios involving high levels of noise and close subspaces, and yields state-of-the-art results for the problem of face clustering using subspace segmentation.
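    The sketch below illustrates the kind of per-point direction search this abstract describes: for a chosen data point, a direction constrained to the span of the data is found that keeps a fixed (non-vanishing) projection on that point while minimizing the l1 norm of its projections on the rest, and the largest resulting projections define a neighborhood set. It is a minimal illustration using cvxpy in place of the ADMM solver the letter mentions; the unit-projection constraint, the l1 objective, and the neighborhood size k are assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch, not the letter's implementation: cvxpy replaces the
# ADMM solver, and the unit-projection constraint is an assumed normalization.
import numpy as np
import cvxpy as cp

def direction_search(D, i):
    """Direction in span(D) with minimal l1 projection on the other points
    and a fixed projection on the i-th point (assumed formulation)."""
    n_points = D.shape[1]
    d = D[:, i]
    a = cp.Variable(n_points)          # the direction is c = D @ a, so it lies in span(D)
    c = D @ a
    others = np.delete(np.arange(n_points), i)
    objective = cp.Minimize(cp.norm1(D[:, others].T @ c))   # minimum projection on the other points
    constraints = [d @ c == 1]                               # non-vanishing projection on d
    cp.Problem(objective, constraints).solve()
    return D @ a.value

def neighborhood_sets(D, k=5):
    """Use the obtained directions to pick k neighbors per point (assumed rule)."""
    n_points = D.shape[1]
    neighbors = []
    for i in range(n_points):
        c = direction_search(D, i)
        scores = np.abs(D.T @ c)
        scores[i] = -np.inf            # a point is not its own neighbor
        neighbors.append(np.argsort(scores)[-k:])
    return neighbors
```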

    Innovation Pursuit: A New Approach to Subspace Clustering

    In subspace clustering, a group of data points belonging to a union of subspaces are assigned membership to their respective subspaces. This paper presents a new approach, dubbed Innovation Pursuit (iPursuit), to the problem of subspace clustering, using a new geometrical idea whereby subspaces are identified based on their relative novelties. We present two frameworks in which the idea of innovation pursuit is used to distinguish the subspaces. Underlying the first framework is an iterative method that finds the subspaces consecutively by solving a series of simple linear optimization problems, each searching for a direction of innovation in the span of the data that is potentially orthogonal to all subspaces except the one to be identified in that step of the algorithm. A detailed mathematical analysis is provided, establishing sufficient conditions for iPursuit to correctly cluster the data. The proposed approach can provably yield exact clustering even when the subspaces have significant intersections. It is shown that the complexity of the iterative approach scales only linearly in the number of data points and subspaces, and quadratically in the dimension of the subspaces. The second framework integrates iPursuit with spectral clustering to yield a new variant of spectral-clustering-based algorithms. Numerical simulations with both real and synthetic data demonstrate that iPursuit can often outperform state-of-the-art subspace clustering algorithms, more so for subspaces with significant intersections, and that it significantly improves the state-of-the-art result for subspace-segmentation-based face clustering.
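    A hedged sketch of the first, iterative framework is given below: each round solves a small l1-minimization (equivalent to a linear program) for a direction of innovation in the span of the remaining data, assigns the points that this direction does not annihilate to one subspace, removes them, and repeats. The anchor vector q, the threshold tol, and the handling of leftover points are illustrative assumptions rather than the paper's precise recipe.

```python
# Hedged sketch of the iterative innovation-pursuit framework; the anchor
# vector q, the threshold, and the leftover handling are assumptions.
import numpy as np
import cvxpy as cp

def innovation_direction(D, q):
    """Direction in span(D) with minimal l1 projection on the data and <q, c> = 1."""
    a = cp.Variable(D.shape[1])
    c = D @ a
    cp.Problem(cp.Minimize(cp.norm1(D.T @ c)), [q @ c == 1]).solve()
    return D @ a.value

def ipursuit_iterative(D, n_subspaces, tol=1e-3):
    """Identify the subspaces consecutively, one small convex program per step."""
    remaining = np.arange(D.shape[1])
    labels = np.full(D.shape[1], n_subspaces - 1, dtype=int)
    for k in range(n_subspaces):
        Dk = D[:, remaining]
        q = Dk[:, 0]                       # anchor chosen from the remaining data (assumption)
        c = innovation_direction(Dk, q)
        mask = np.abs(Dk.T @ c) > tol      # points the innovation direction does not annihilate
        labels[remaining[mask]] = k
        remaining = remaining[~mask]
        if remaining.size == 0:
            break
    return labels
```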

    Sparse Subspace Clustering: Algorithm, Theory, and Applications

    In many real-world problems, we are dealing with collections of high-dimensional data, such as images, videos, text and web documents, DNA microarray data, and more. Often, high-dimensional data lie close to low-dimensional structures corresponding to the several classes or categories the data belong to. In this paper, we propose and study an algorithm, called Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of subspaces and the distribution of data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm can be solved efficiently and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal with data nuisances, such as noise, sparse outlying entries, and missing entries, directly by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.
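    A minimal sketch of the SSC pipeline as summarized above: each point is expressed sparsely in terms of the other points via an l1 program, and the magnitudes of the resulting coefficients feed a standard spectral-clustering step. The noise-handling term (a simple least-squares penalty with weight lam) and the parameter values are assumptions for illustration, not the paper's exact program.

```python
# Minimal SSC sketch; lam and the least-squares noise term are assumptions.
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

def sparse_coefficients(X, lam=20.0):
    """Express every column of X sparsely in terms of the other columns."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        c = cp.Variable(n)
        residual = X[:, i] - X @ c
        objective = cp.Minimize(cp.norm1(c) + lam * cp.sum_squares(residual))
        cp.Problem(objective, [c[i] == 0]).solve()   # a point may not represent itself
        C[:, i] = c.value
    return C

def ssc(X, n_clusters, lam=20.0):
    """Sparse representation followed by spectral clustering on |C| + |C|^T."""
    C = sparse_coefficients(X, lam)
    W = np.abs(C) + np.abs(C).T                      # symmetric affinity from the sparse codes
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```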

    A geometric analysis of subspace clustering with outliers

    This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower-dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are, nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009 (2009) 2790-2797. IEEE], which significantly broadens the range of problems where it is provably effective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the effectiveness of these methods. Published at http://dx.doi.org/10.1214/12-AOS1034 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
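    One way the outlier-robust extension described above could look in practice is sketched below: a point that cannot be represented economically by the remaining data (its optimal l1 representation cost is large, or the program is infeasible) is unlikely to lie near any of the subspaces and is flagged as an outlier. The exact-fit constraint and the threshold rule are illustrative assumptions, not the paper's calibrated procedure.

```python
# Illustrative outlier flag built on the SSC program; the exact-fit constraint
# and the threshold are assumptions, not the paper's calibrated test.
import numpy as np
import cvxpy as cp

def l1_representation_cost(X, i):
    """Optimal value of min ||c||_1 s.t. x_i = X c, c_i = 0 (exact-fit variant)."""
    n = X.shape[1]
    c = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.norm1(c)),
                         [X @ c == X[:, i], c[i] == 0])
    problem.solve()
    return problem.value if problem.value is not None else np.inf

def flag_outliers(X, threshold):
    """A point that is expensive (or impossible) to represent is flagged as an outlier."""
    costs = np.array([l1_representation_cost(X, i) for i in range(X.shape[1])])
    return costs > threshold
```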

    Learning Human Poses from Monocular Images

    In this research, we mainly focus on the problem of estimating the 2D human pose from a monocular image and reconstructing the 3D human pose based on the 2D human pose. Here a 3D pose is the locations of the human joints in 3D space and a 2D pose is the projection of a 3D pose onto an image. Unlike many previous works that explicitly use hand-crafted physiological models, both our 2D pose estimation and 3D pose reconstruction approaches implicitly learn the structure of the human body from human pose data. This 3D pose reconstruction is an ill-posed problem without any prior knowledge. In this research, we propose a new approach, namely Pose Locality Constrained Representation (PLCR), to constrain the search space for the underlying 3D human pose and use it to improve 3D human pose reconstruction. In this approach, an over-complete pose dictionary is constructed by hierarchically clustering the 3D pose space into many subspaces. PLCR then utilizes the structure of the over-complete dictionary to constrain the 3D pose solution to a set of highly related subspaces. Finally, PLCR is combined with a matching-pursuit based algorithm for 3D human-pose reconstruction. The 2D human pose used in 3D pose reconstruction can be manually annotated or automatically estimated from a single image. In this research, we develop a new learning-based 2D human pose estimation approach based on a Dual-Source Deep Convolutional Neural Network (DS-CNN). The proposed DS-CNN model learns the appearance of each local body part and the relations between parts simultaneously, while most existing approaches consider them as two separate steps. In our experiments, the proposed DS-CNN model produces performance superior or comparable to the state-of-the-art 2D human-pose estimation approaches based on pose priors learned from hand-crafted models or holistic perspectives. Finally, we use our 2D human pose estimation approach to recognize human attributes by exploiting the strong correspondence between human attributes and human body parts. We then probe whether and when the CNN can find such a correspondence by itself, on human attribute recognition and bird species recognition, and find a direct correlation between the recognition accuracy and the correctness of the correspondence that the CNN finds.
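    The sketch below illustrates the locality-constrained reconstruction idea: the 3D pose is expressed over an over-complete dictionary whose atoms are grouped into subspaces (blocks), and the fit is restricted to the few blocks whose projected atoms best correlate with the observed 2D pose. The weak-perspective projection matrix M, the block-selection rule, and the least-squares fit standing in for matching pursuit are assumptions made for this illustration.

```python
# Illustrative sketch of pose reconstruction restricted to a few dictionary
# blocks; M (weak-perspective projection), the correlation-based block
# selection, and the least-squares fit are assumptions standing in for the
# matching-pursuit based algorithm.
import numpy as np

def select_blocks(pose_2d, dict_3d, blocks, M, n_blocks=3):
    """Rank the subspaces (blocks of atoms) by how well their projections match the 2D pose."""
    projected = M @ dict_3d                                   # project every 3D atom to 2D
    scores = [np.max(np.abs(projected[:, b].T @ pose_2d)) for b in blocks]
    return [blocks[j] for j in np.argsort(scores)[-n_blocks:]]

def reconstruct_3d(pose_2d, dict_3d, blocks, M, n_blocks=3):
    """Fit the 2D observation with atoms from the selected blocks only."""
    active = np.concatenate(select_blocks(pose_2d, dict_3d, blocks, M, n_blocks))
    A = M @ dict_3d[:, active]                                # projected, restricted dictionary
    coef, *_ = np.linalg.lstsq(A, pose_2d, rcond=None)
    return dict_3d[:, active] @ coef                          # 3D pose from the restricted fit
```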

    Improved image analysis by maximised statistical use of geometry-shape constraints

    Identifying the underlying models in a set of data points contaminated by noise and outliers leads to a highly complex multi-model fitting problem. This problem can be posed as a clustering problem by constructing higher-order affinities between data points into a hypergraph, which can then be partitioned using spectral clustering. Calculating the weights of all hyperedges is computationally expensive, hence an approximation is required. In this thesis, the aim is to find an efficient and effective approximation that produces an excellent segmentation outcome. Firstly, the effect of hyperedge size on the speed and accuracy of the clustering is investigated. Almost all previous work on hypergraph clustering in computer vision has considered the smallest possible hyperedge size, due to the lack of research into the potential benefits of large hyperedges and of effective algorithms to generate them. In this thesis, it is shown that large hyperedges are better from both theoretical and empirical standpoints. The efficiency of this technique on various higher-order grouping problems is investigated. In particular, we show that our approach improves the accuracy and efficiency of motion segmentation from dense, long-term trajectories. A shortcoming of the above approach is that the probability of a generated sample being impure increases with the size of the sample. To address this issue, a novel guided sampling strategy for large hyperedges, based on the concept of minimizing the largest residual, is also included. Each sample is guided by optimizing a cost function based on the k-th order statistic of the residuals. Samples are generated using a greedy algorithm coupled with a data sub-sampling strategy. The experimental analysis shows that this proposed step is both accurate and computationally efficient compared to state-of-the-art robust multi-model fitting techniques. However, the optimization method for guiding samples involves hard-to-tune parameters. Thus a sampling method is eventually developed that significantly facilitates solving the segmentation problem, using a new form of Markov Chain Monte Carlo (MCMC) to efficiently sample from the hyperedge distribution. To sample from this distribution effectively, the proposed Markov chain includes new types of long and short jumps to perform exploration and exploitation of all structures. Unlike common sampling methods, this method does not require any specific prior knowledge about the distribution of models. The output set of samples leads to a clustering solution from which the final model parameters for each segment are obtained. The overall method competes favorably with the state of the art, both in terms of computational cost and segmentation accuracy.
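    A minimal, hedged sketch of the sampling-plus-spectral-clustering recipe outlined above: randomly sampled large hyperedges are scored by how well a simple model (here a least-squares subspace fit) explains their members, the scores are accumulated into a pairwise affinity by clique averaging, and spectral clustering partitions the result. The model type, hyperedge size, bandwidth, and the particular order statistic used for scoring are illustrative assumptions, not the thesis's guided or MCMC sampling schemes.

```python
# Hedged sketch: sampled large hyperedges scored by a k-th order residual
# statistic, accumulated into a pairwise affinity, then spectral clustering.
# Model, hyperedge size, bandwidth, and the scoring rule are assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering

def hyperedge_score(X, idx, dim=2, sigma=0.1, k=None):
    """Score a hyperedge by the k-th smallest residual to a subspace fitted to its members."""
    pts = X[idx] - X[idx].mean(axis=0)
    U, _, _ = np.linalg.svd(pts.T, full_matrices=False)
    basis = U[:, :dim]
    resid = np.linalg.norm(pts - pts @ basis @ basis.T, axis=1)
    k = len(idx) // 2 if k is None else k
    return np.exp(-(np.sort(resid)[k] / sigma) ** 2)

def cluster_by_sampled_hyperedges(X, n_clusters, edge_size=12, n_samples=2000, seed=0):
    """Accumulate sampled hyperedge weights into a pairwise affinity and cut it spectrally."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = np.zeros((n, n))
    for _ in range(n_samples):
        idx = rng.choice(n, size=edge_size, replace=False)
        W[np.ix_(idx, idx)] += hyperedge_score(X, idx)   # clique-average the hyperedge weight
    np.fill_diagonal(W, 0.0)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```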