
    Fine-grained Graph Learning for Multi-view Subspace Clustering

    Multi-view subspace clustering (MSC) is a popular unsupervised method that integrates heterogeneous information to reveal the intrinsic clustering structure hidden across views. MSC methods usually fuse graphs (or affinity matrices) to learn a common structure and then apply graph-based approaches for clustering. Despite this progress, most methods do not establish a connection between graph learning and clustering. Meanwhile, conventional graph fusion strategies assign coarse-grained weights to combine multiple graphs, ignoring the importance of local structure. In this paper, we propose a fine-grained graph learning framework for multi-view subspace clustering (FGL-MSC) to address these issues. To exploit the multi-view information sufficiently, we design a specific graph learning method that introduces graph regularization and a local structure fusion pattern. The main challenge is how to optimize the fine-grained fusion weights while generating a learned graph that fits the clustering task, thus making the clustering representation meaningful and competitive. Accordingly, an iterative algorithm is proposed to solve this joint optimization problem, obtaining the learned graph, the clustering representation, and the fusion weights simultaneously. Extensive experiments on eight real-world datasets show that the proposed framework performs comparably to state-of-the-art methods.
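    For intuition only, the sketch below contrasts coarse-grained fusion (one scalar weight per view) with a fine-grained, per-node weighting of view affinity matrices before spectral clustering. It is not the FGL-MSC objective or its iterative solver; the entropy-based per-node weights, the RBF affinities, and the synthetic two-view data are illustrative assumptions.

```python
# Sketch: coarse- vs fine-grained fusion of multi-view affinity graphs.
# NOT the FGL-MSC algorithm; the per-node entropy weighting is a stand-in
# for the learned fine-grained fusion weights.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

def view_affinity(X, gamma=1.0):
    """Dense RBF affinity matrix for one view (rows = samples)."""
    return rbf_kernel(X, gamma=gamma)

def coarse_fusion(affinities, view_weights):
    """One scalar weight per view: W = sum_v alpha_v * W_v."""
    return sum(a * W for a, W in zip(view_weights, affinities))

def fine_fusion(affinities):
    """One weight per (view, node): rows with sharper (lower-entropy)
    affinity profiles in a view get more say in that view."""
    n = affinities[0].shape[0]
    fused = np.zeros((n, n))
    weights = []
    for W in affinities:
        P = W / (W.sum(axis=1, keepdims=True) + 1e-12)
        ent = -(P * np.log(P + 1e-12)).sum(axis=1)   # per-node entropy
        weights.append(1.0 / (ent + 1e-12))          # sharper -> larger weight
    weights = np.stack(weights)                      # (n_views, n)
    weights /= weights.sum(axis=0, keepdims=True)    # normalise over views
    for w, W in zip(weights, affinities):
        fused += w[:, None] * W                      # per-row weighting
    return 0.5 * (fused + fused.T)                   # keep it symmetric

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 90, 3
    labels_true = np.repeat(np.arange(k), n // k)
    # two synthetic "views" of the same underlying clusters
    views = [rng.normal(labels_true[:, None] * 3.0, 1.0, size=(n, 5)),
             rng.normal(labels_true[:, None] * 2.0, 1.5, size=(n, 8))]
    affs = [view_affinity(X, gamma=0.1) for X in views]
    W_coarse = coarse_fusion(affs, view_weights=[0.5, 0.5])
    W_fine = fine_fusion(affs)
    sc = SpectralClustering(n_clusters=k, affinity="precomputed", random_state=0)
    print("coarse fusion cluster sizes:", np.bincount(sc.fit_predict(W_coarse)))
    print("fine fusion cluster sizes:  ", np.bincount(sc.fit_predict(W_fine)))
```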

    An oil painters recognition method based on cluster multiple kernel learning algorithm

    Much image processing research focuses on natural images, for tasks such as classification and clustering, while research on recognizing artworks (such as oil paintings), from feature extraction to classifier design, is relatively scarce. This paper focuses on oil painter recognition and aims toward a mobile application that recognizes the painter. We propose a cluster multiple kernel learning algorithm that extracts oil painting features from three aspects, namely color, texture, and spatial layout, and generates multiple candidate kernels with different kernel functions. Based on the result of clustering the numerous candidate kernels, the sub-kernels with better classification performance are selected, and the traditional multiple kernel learning algorithm is then used to carry out multi-feature fusion classification. The algorithm achieves a better result on the Painting91 dataset than applying traditional multiple kernel learning directly.
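    As a rough illustration of the cluster-then-select idea, the sketch below generates candidate kernels from several kernel functions, groups them by pairwise kernel alignment, keeps the best-scoring sub-kernel per group, and fuses the survivors. It is not the paper's algorithm: the random placeholder features, the alignment-based grouping, and the uniform kernel average (standing in for a proper multiple kernel learning solver) are all assumptions.

```python
# Sketch: cluster candidate kernels, select one per group, fuse, classify.
# NOT the paper's cluster MKL method; features and fusion rule are placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_classes = 120, 4
y = np.repeat(np.arange(n_classes), n // n_classes)
# placeholder "colour", "texture" and "layout" descriptors
feats = {name: rng.normal(y[:, None], s, size=(n, d))
         for name, s, d in [("colour", 1.0, 16), ("texture", 1.5, 32), ("layout", 2.0, 8)]}

# 1) generate candidate kernels with different kernel functions / parameters
candidates = []
for X in feats.values():
    candidates += [rbf_kernel(X, gamma=g) for g in (0.01, 0.1, 1.0)]
    candidates += [polynomial_kernel(X, degree=2), linear_kernel(X)]

# 2) cluster candidate kernels by pairwise alignment, keep the best per group
def alignment(K1, K2):
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

A = np.array([[alignment(Ki, Kj) for Kj in candidates] for Ki in candidates])
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(A)

selected = []
for g in np.unique(groups):
    members = [i for i in range(len(candidates)) if groups[i] == g]
    scores = [cross_val_score(SVC(kernel="precomputed"), candidates[i], y, cv=3).mean()
              for i in members]
    selected.append(candidates[members[int(np.argmax(scores))]])

# 3) fuse the selected sub-kernels (uniform average as a stand-in for MKL)
K_fused = sum(selected) / len(selected)
print("fused-kernel CV accuracy:",
      cross_val_score(SVC(kernel="precomputed"), K_fused, y, cv=3).mean())
```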

    Multi-sensor multi-target tracking using domain knowledge and clustering

    This paper proposes a novel joint multi-target tracking and track maintenance algorithm over a sensor network. Each sensor runs a local joint probabilistic data association (JPDA) filter using only its own measurements. Unlike the original JPDA approach, the proposed local filter utilises the detection amplitude as domain knowledge to improve estimation accuracy. In the fusion stage, DBSCAN clustering in conjunction with a statistical test is proposed to group all local tracks into several clusters. Each generated cluster represents the local tracks that originate from the same target, and the global estimate for each cluster is obtained by the generalized covariance intersection (GCI) algorithm. Extensive simulation results clearly confirm the effectiveness of the proposed multi-sensor multi-target tracking algorithm.
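    The sketch below illustrates only the fusion stage described above: local track estimates from several sensors are grouped with DBSCAN, and each group is fused with covariance intersection. It is not the paper's amplitude-aided JPDA / GCI pipeline; the uniform CI weights, Euclidean DBSCAN on positions, and synthetic estimates are simplifying assumptions.

```python
# Sketch: group local tracks across sensors with DBSCAN, then fuse each group
# with covariance intersection (CI). NOT the paper's JPDA / GCI pipeline.
import numpy as np
from sklearn.cluster import DBSCAN

def covariance_intersection(means, covs, weights=None):
    """Fuse Gaussian estimates (x_i, P_i):
    P^-1 = sum_i w_i P_i^-1,  x = P * sum_i w_i P_i^-1 x_i."""
    if weights is None:
        weights = np.full(len(means), 1.0 / len(means))
    info = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ sum(w * np.linalg.inv(P) @ x
                            for w, x, P in zip(weights, means, covs))
    return x_fused, P_fused

rng = np.random.default_rng(1)
true_targets = np.array([[0.0, 0.0], [30.0, 10.0], [-20.0, 25.0]])
# three sensors, each reporting a noisy local estimate per target
means, covs = [], []
for _ in range(3):                      # sensors
    for t in true_targets:              # targets
        means.append(t + rng.normal(0, 1.0, size=2))
        covs.append(np.diag(rng.uniform(0.5, 2.0, size=2)))
means = np.array(means)

# group local tracks that likely stem from the same target source
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(means)
for lbl in sorted(set(labels) - {-1}):
    idx = np.where(labels == lbl)[0]
    x_f, P_f = covariance_intersection(means[idx], [covs[i] for i in idx])
    print(f"target cluster {lbl}: fused position {np.round(x_f, 2)}")
```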

    Extremal optimization for sensor report pre-processing

    We describe the recently introduced extremal optimization algorithm and apply it to target detection and association problems arising in pre-processing for multi-target tracking. Here we consider pre-processing for multiple target tracking when the number of sensor reports received is very large and arrives in large bursts. In this case, it is sometimes necessary to pre-process reports before sending them to the tracking modules in the fusion system. The pre-processing step associates reports with known tracks (or initializes new tracks for reports on objects that have not been seen before). It could also be used as a pre-processing step before clustering, e.g., to test how many clusters to use. The pre-processing is done by solving an approximate version of the original problem in which not all pair-wise conflicts are calculated. The approximation relies on knowing how many such pair-wise conflicts are necessary to compute; to determine this, results on phase transitions occurring when coloring (or clustering) large random instances of a particular graph ensemble are used.
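    As a hedged sketch of the underlying idea, the code below runs tau-extremal optimization on a colouring-style assignment: each report is assigned to a track, conflicting reports should not share a track, and the reports with the most conflicts are preferentially re-assigned. The random conflict graph, the cost model, and the parameter choices are illustrative assumptions, not the paper's pre-processing formulation.

```python
# Sketch: tau-extremal optimization for a colouring-style report-to-track
# assignment. NOT the paper's pre-processing algorithm; conflicts are random.
import numpy as np

def tau_eo(conflicts, n_reports, n_tracks, tau=1.4, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    # adjacency list of the pair-wise conflict graph
    adj = [[] for _ in range(n_reports)]
    for i, j in conflicts:
        adj[i].append(j)
        adj[j].append(i)
    assign = rng.integers(n_tracks, size=n_reports)
    best, best_cost = assign.copy(), np.inf

    def local_cost(i, a):
        # number of conflicting reports currently sharing report i's track
        return sum(1 for j in adj[i] if a[j] == a[i])

    # rank-based (power-law) selection probabilities over sorted fitness
    ranks = np.arange(1, n_reports + 1, dtype=float)
    probs = ranks ** (-tau)
    probs /= probs.sum()

    for _ in range(steps):
        costs = np.array([local_cost(i, assign) for i in range(n_reports)])
        total = int(costs.sum()) // 2          # each conflict counted twice
        if total < best_cost:
            best_cost, best = total, assign.copy()
        if best_cost == 0:
            break
        order = np.argsort(-costs)             # worst (most conflicts) first
        pick = order[rng.choice(n_reports, p=probs)]
        assign[pick] = rng.integers(n_tracks)  # unconditionally re-assign it
    return best, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_reports, n_tracks = 60, 5
    # random pair-wise conflicts between reports
    conflicts = [(i, j) for i in range(n_reports) for j in range(i + 1, n_reports)
                 if rng.random() < 0.05]
    assign, cost = tau_eo(conflicts, n_reports, n_tracks)
    print("remaining conflicting pairs:", cost)
```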