A Survey on Soft Subspace Clustering
Subspace clustering (SC) is a promising clustering technique for identifying clusters based on their associations with subspaces in high-dimensional spaces. SC can be classified into hard subspace clustering (HSC) and soft subspace clustering (SSC). While HSC algorithms have been extensively studied and are well accepted by the scientific community, SSC algorithms are relatively new but have gained increasing attention in recent years due to their better adaptability. In this paper, a comprehensive survey of existing SSC algorithms and recent developments is presented. The SSC algorithms are classified systematically into three main categories: conventional SSC (CSSC), independent SSC (ISSC), and extended SSC (XSSC). The characteristics of these algorithms are highlighted, and the potential future development of SSC is discussed.
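To make the soft subspace idea concrete, the following minimal sketch alternates k-means-style assignment with per-cluster feature weighting, in the spirit of the entropy-weighted SSC family the survey covers. The function name, the regularizer gamma, and the exponential weight update are illustrative assumptions, not a specific algorithm from the survey.

```python
import numpy as np

def soft_subspace_kmeans(X, k, gamma=1.0, n_iter=20, seed=0):
    """Minimal entropy-weighted soft subspace k-means sketch (illustrative).

    Each cluster keeps a weight vector over features; features with small
    within-cluster dispersion receive large weights, so each cluster
    "softly" selects its own subspace.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)].astype(float)
    weights = np.full((k, d), 1.0 / d)  # start with uniform feature weights
    for _ in range(n_iter):
        # Assign each point to the cluster with the smallest weighted distance.
        dist = np.stack([((X - centers[j]) ** 2 * weights[j]).sum(axis=1)
                         for j in range(k)], axis=1)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            centers[j] = members.mean(axis=0)
            # Per-feature dispersion drives the soft subspace weights.
            disp = ((members - centers[j]) ** 2).sum(axis=0)
            w = np.exp(-disp / gamma)  # gamma is an assumed regularizer
            weights[j] = w / w.sum()
    return labels, centers, weights
```

Smaller gamma makes the weighting more selective (closer to hard subspace selection); larger gamma approaches ordinary k-means with uniform weights.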
Protein docking refinement by convex underestimation in the low-dimensional subspace of encounter complexes
We propose a novel stochastic global optimization algorithm with applications to the refinement stage of protein docking prediction methods. Our approach can process conformations sampled from multiple clusters, each roughly corresponding to a different binding energy funnel. These clusters are obtained using a density-based clustering method. In each cluster, we identify a smooth “permissive” subspace which avoids high-energy barriers and then underestimate the binding energy function using general convex polynomials in this subspace. We use the underestimator to bias sampling towards its global minimum. Sampling and subspace underestimation are repeated several times, and the conformations sampled at the last iteration form a refined ensemble. We report computational results on a comprehensive benchmark of 224 protein complexes, establishing that our refined ensemble significantly improves the quality of the conformations of the original set given to the algorithm. We also devise a method to enhance the ensemble from which near-native models are selected.
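To illustrate the underestimation step, here is a hedged sketch that fits a convex quadratic below sampled energy values via a linear program and returns its minimizer as a point towards which further sampling could be biased. Restricting to separable (diagonal) quadratics rather than the paper's general convex polynomials is a simplifying assumption, as are all names.

```python
import numpy as np
from scipy.optimize import linprog

def convex_underestimator(X, f):
    """Fit q(x) = sum_j a_j x_j^2 + b_j x_j + c with a_j >= 0 and
    q(x_i) <= f_i for all samples, maximizing the total fit via an LP.
    A simplified stand-in for the paper's convex polynomial underestimator.
    """
    n, d = X.shape
    # Decision vector z = [a (d), b (d), c (1)].
    A_ub = np.hstack([X ** 2, X, np.ones((n, 1))])  # rows give q(x_i) <= f_i
    obj = -A_ub.sum(axis=0)                         # maximize sum_i q(x_i)
    bounds = [(0, None)] * d + [(None, None)] * (d + 1)
    res = linprog(obj, A_ub=A_ub, b_ub=f, bounds=bounds)
    a, b = res.x[:d], res.x[d:2 * d]
    # Minimizer of the underestimator; flat coordinates (a_j ~ 0) stay at 0.
    return np.where(a > 1e-9, -b / (2 * np.maximum(a, 1e-9)), 0.0)

# Bias further sampling towards the underestimator's minimum, e.g.:
#   samples = x_star + 0.1 * np.random.randn(100, d)
```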
Multi-GCN: Graph Convolutional Networks for Multi-View Networks, with Applications to Global Poverty
With the rapid expansion of mobile phone networks in developing countries,
large-scale graph machine learning has gained sudden relevance in the study of
global poverty. Recent applications range from humanitarian response and
poverty estimation to urban planning and epidemic containment. Yet the vast
majority of computational tools and algorithms used in these applications do
not account for the multi-view nature of social networks: people are related in
myriad ways, but most graph learning models treat relations as binary. In this
paper, we develop a graph-based convolutional network for learning on
multi-view networks. We show that this method outperforms state-of-the-art
semi-supervised learning algorithms on three different prediction tasks using
mobile phone datasets from three different developing countries. We also show
that, while designed specifically for use in poverty research, the algorithm
also outperforms existing benchmarks on a broader set of learning tasks on
multi-view networks, including node labelling in citation networks.
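As a rough illustration of propagation over multi-view graphs, the sketch below runs one GCN-style layer per view and merges the views by averaging. The merge-by-averaging step is an assumption made for brevity, not necessarily the paper's exact combination scheme.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def multi_view_gcn_layer(adjs, X, W):
    """One simplistic multi-view GCN layer: propagate node features X over
    each view's normalized adjacency with shared weights W, then average
    the views and apply a ReLU. Illustrative only.
    """
    views = [normalize_adj(A) @ X @ W for A in adjs]  # one pass per relation type
    return np.maximum(np.mean(views, axis=0), 0.0)
```

Here `adjs` is a list of adjacency matrices, one per relation type (e.g., calls, texts, mobile-money transfers), rather than a single binary graph.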
Stochastic Optimization for Deep CCA via Nonlinear Orthogonal Iterations
Deep CCA is a recently proposed deep neural network extension to the
traditional canonical correlation analysis (CCA), and has been successful for
multi-view representation learning in several domains. However, stochastic
optimization of the deep CCA objective is not straightforward, because it does
not decouple over training examples. Previous optimizers for deep CCA are either batch-based algorithms or stochastic methods that use large minibatches, which can incur high memory consumption. In this paper, we tackle
the problem of stochastic optimization for deep CCA with small minibatches,
based on an iterative solution to the CCA objective, and show that we can
achieve performance as good as that of previous optimizers while alleviating the memory requirement.
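For intuition about the iterative solution referred to above, the following batch sketch computes the top canonical pair by alternating regularized least squares, the classical view behind orthogonal-iteration CCA solvers. The paper's contributions, replacing each solve with small-minibatch stochastic updates and composing with deep networks, are not reproduced here; names and the regularizer are illustrative.

```python
import numpy as np

def cca_top_direction(X, Y, n_iter=50, reg=1e-4, seed=0):
    """Top canonical pair (u, v) via alternating least squares: each step
    regresses one view's projection onto the other and renormalizes.
    This is power iteration on Cxx^{-1} Cxy Cyy^{-1} Cyx.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    v = rng.standard_normal(Y.shape[1])
    for _ in range(n_iter):
        u = np.linalg.solve(Cxx, X.T @ (Y @ v) / n)  # regress Yv on X
        u /= np.sqrt(u @ Cxx @ u)                    # unit variance of Xu
        v = np.linalg.solve(Cyy, Y.T @ (X @ u) / n)  # regress Xu on Y
        v /= np.sqrt(v @ Cyy @ v)
    return u, v
```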
Scalable Image Retrieval by Sparse Product Quantization
Fast approximate nearest neighbor (ANN) search for high-dimensional feature indexing and retrieval is the crux of large-scale image retrieval. A recent promising technique is Product Quantization, which attempts to index high-dimensional image features by decomposing the feature space into a Cartesian product of low-dimensional subspaces and quantizing each of them separately. Despite the promising results reported, its quantization approach follows the typical hard assignment of traditional quantization methods, which may result in large quantization errors and thus inferior search performance. Unlike existing approaches, in this paper we propose a novel approach called Sparse Product Quantization (SPQ) that encodes high-dimensional feature vectors as sparse representations. We optimize the sparse representations of the feature vectors by minimizing their quantization errors, so that the resulting representation is essentially close to the original data in practice. Experiments show that the proposed SPQ technique not only compresses data effectively but also serves as an effective encoding technique. We obtain state-of-the-art results for ANN search on four public image datasets, and the promising results of content-based image retrieval further validate the efficacy of our proposed method.
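To illustrate the contrast with hard assignment, the sketch below encodes each subvector as a sparse combination of its few nearest codewords instead of a single one. The inverse-distance weighting is a stand-in assumption for the paper's error-minimizing optimization, and the function name is illustrative.

```python
import numpy as np

def sparse_pq_encode(x, codebooks, s=2):
    """Product-quantization encoding with a sparse soft assignment.

    codebooks: list of (K, d_sub) arrays, one per subspace; the feature
    dimension is assumed divisible by the number of subspaces.
    Returns a list of (indices, weights) pairs, one per subspace.
    """
    m = len(codebooks)
    subvectors = np.split(x, m)  # Cartesian decomposition of the feature space
    codes = []
    for z, C in zip(subvectors, codebooks):
        dist = np.linalg.norm(C - z, axis=1)
        idx = np.argsort(dist)[:s]            # keep only the s nearest codewords
        w = 1.0 / (dist[idx] + 1e-9)          # stand-in for optimized weights
        codes.append((idx, w / w.sum()))
    return codes
```

With s = 1 this reduces to ordinary hard-assignment product quantization; s > 1 trades a slightly larger code for smaller quantization error.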
Median K-flats for hybrid linear modeling with many outliers
We describe the Median K-Flats (MKF) algorithm, a simple online method for
hybrid linear modeling, i.e., for approximating data by a mixture of flats.
This algorithm simultaneously partitions the data into clusters and finds their corresponding best-approximating l1 d-flats, so that the cumulative l1 error is minimized. The current implementation restricts d-flats to be
d-dimensional linear subspaces. It requires a negligible amount of storage, and
its complexity, when modeling data consisting of N points in D-dimensional
Euclidean space with K d-dimensional linear subspaces, is of order O(nKdD + nd^2D), where n is the number of iterations required for convergence
(empirically on the order of 10^4). Since it is an online algorithm, data can
be supplied to it incrementally and it can incrementally produce the
corresponding output. The performance of the algorithm is carefully evaluated
using synthetic and real data.
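For orientation, the following sketch implements the plain l2 K-subspaces baseline that MKF improves upon: it alternates nearest-subspace assignment with SVD refits. MKF's distinguishing features, the un-squared l1 objective that buys outlier robustness and the online stochastic updates, are described in the abstract but not reproduced here; all names are illustrative.

```python
import numpy as np

def k_subspaces(X, K, d, n_iter=30, seed=0):
    """Batch K-subspaces baseline for hybrid linear modeling: alternate
    assigning points to the nearest d-dimensional linear subspace and
    refitting each subspace by SVD (which minimizes cumulative l2 error).
    """
    rng = np.random.default_rng(seed)
    n, D = X.shape
    # Initialize K random orthonormal bases, each D x d.
    bases = [np.linalg.qr(rng.standard_normal((D, d)))[0] for _ in range(K)]
    for _ in range(n_iter):
        # Distance of each point to each subspace = norm of its residual.
        res = np.stack([np.linalg.norm(X - (X @ B) @ B.T, axis=1)
                        for B in bases], axis=1)
        labels = res.argmin(axis=1)
        for k in range(K):
            pts = X[labels == k]
            if len(pts) >= d:
                # Top-d right singular vectors span the best l2 flat.
                _, _, Vt = np.linalg.svd(pts, full_matrices=False)
                bases[k] = Vt[:d].T
    return labels, bases
```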