12 research outputs found

    Fragmentary multi-instance classification

    No full text
    Abstract: Multi-instance learning (MIL) has been extensively applied to real-world tasks involving objects represented as bags of instances, such as drug activity prediction and image classification. Previous studies on MIL assume that the data are entirely complete. However, in many real tasks, instances are fragmentary. In this article, we present probably the first study of multi-instance classification with fragmentary data. In our proposed framework, called fragmentary multi-instance classification (FIC), the fragmentary data are completed and the multi-instance classifier is learned jointly. To facilitate the integration of completion and classifier learning, FIC establishes a weighting mechanism that measures the importance of different instances. To validate the compatibility of the framework, four typical MIL methods, including multi-instance support vector machine (MI-SVM), expectation-maximization diverse density (EM-DD), citation-K-nearest-neighbors (Citation-KNN), and MIL with discriminative bag mapping (MILDM), are embedded into the framework to obtain the corresponding FIC versions. As an illustration, an efficient solving algorithm is developed for the MI-SVM case, together with a proof of its convergence behavior. Experimental results on various types of real-world datasets demonstrate the effectiveness of the proposed framework.
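    The standard MIL assumption behind methods such as MI-SVM is that a bag is positive if and only if at least one of its instances is positive. A minimal sketch of that idea, using a perceptron-style update on each bag's highest-scoring "witness" instance (a loose simplification of MI-SVM's alternating witness selection, not the FIC algorithm from the paper; all names here are hypothetical):

    ```python
    import numpy as np

    def bag_score(bag, w):
        # A bag's score is its best instance score: under the standard MIL
        # assumption, a bag is positive iff at least one instance is positive.
        return max(float(x @ w) for x in bag)

    def predict(bag, w):
        return 1 if bag_score(bag, w) > 0 else -1

    def train_mil_perceptron(bags, labels, dim, epochs=20):
        # Perceptron-style updates on each bag's "witness" (its highest-scoring
        # instance), loosely mimicking MI-SVM's alternating optimization.
        w = np.zeros(dim)
        for _ in range(epochs):
            for bag, y in zip(bags, labels):
                witness = max(bag, key=lambda x: float(x @ w))
                if y * float(witness @ w) <= 0:  # bag misclassified
                    w += y * witness
        return w
    ```

    On toy bags where each positive bag hides one positive instance among negatives, this recovers a separating direction; real MIL solvers replace the perceptron update with a max-margin (MI-SVM) or density-based (EM-DD) criterion.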

    Robust auto-weighted multi-view subspace clustering with common subspace representation matrix

    No full text
    In many computer vision and machine learning applications, data sets lie on certain low-dimensional subspaces. Subspace clustering is a powerful technique for finding the underlying subspaces and clustering data points correctly. However, traditional subspace clustering methods can only be applied to data from a single source, and extending them to combine information from multiple data sources has become an active area of research. Previous multi-view subspace methods learn multiple subspace representation matrices simultaneously, treating the learning tasks for different views equally; after obtaining the representation matrices, they stack the learned matrices to form the common underlying subspace structure. However, for many problems, both the importance of the sources and the importance of the features within one source can vary, which makes the previous approaches ineffective. In this paper, we propose a novel method called Robust Auto-weighted Multi-view Subspace Clustering (RAMSC). In our method, the weights for both sources and features can be learned automatically by utilizing a novel trick and introducing a sparse norm. More importantly, the objective of our method is a common representation matrix that directly reflects the common underlying subspace structure. A new efficient algorithm is derived to solve the formulated objective, with a rigorous theoretical proof of its convergence. Extensive experimental results on five benchmark multi-view datasets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
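    As a rough illustration of the self-expressive idea underlying multi-view subspace clustering: each point is reconstructed from the other points, and per-view representations are combined with automatically derived weights. The sketch below uses ridge-regularized self-expression and inverse-reconstruction-error view weights; this is a simplified stand-in, not the RAMSC objective (which learns a single common matrix with a sparse norm), and all names are hypothetical:

    ```python
    import numpy as np

    def self_expressive(X, lam=0.1):
        # Ridge-regularized self-expression: minimize ||X - XZ||_F^2 + lam*||Z||_F^2
        # over Z, which has the closed form Z = (X^T X + lam*I)^{-1} X^T X.
        n = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)

    def common_affinity(views, lam=0.1):
        # Weight each view inversely to its reconstruction error, then combine
        # the per-view representations into one symmetric affinity matrix.
        Zs, errs = [], []
        for X in views:
            Z = self_expressive(X, lam)
            Zs.append(Z)
            errs.append(np.linalg.norm(X - X @ Z) + 1e-12)
        w = np.array([1.0 / e for e in errs])
        w /= w.sum()
        C = sum(wi * Z for wi, Z in zip(w, Zs))
        return (np.abs(C) + np.abs(C.T)) / 2  # affinity for spectral clustering
    ```

    The resulting affinity matrix would typically be fed to spectral clustering; RAMSC instead learns the common representation and the weights jointly within one objective.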

    Clustering results of different methods on the Caltech101-7 data set (mean ± std).

    No full text

    Details of the multi-view datasets used in our experiments (view type (dimensionality)).

    No full text

    Clustering results of different methods on the MSRC-v1 data set (mean ± std).

    No full text
    (In the following five result tables, the two best results for each metric are shown in bold.)