144 research outputs found

    Multidimensional Scaling on Multiple Input Distance Matrices

    Full text link
    Multidimensional Scaling (MDS) is a classic technique that seeks vectorial representations for data points, given the pairwise distances between them. In recent years, however, data are often collected from diverse sources or have multiple heterogeneous representations. To the best of our knowledge, how to perform multidimensional scaling on multiple input distance matrices remains an open problem. In this paper, we first define this new task formally. We then propose a new algorithm, Multi-View Multidimensional Scaling (MVMDS), which treats each input distance matrix as one view. Our algorithm learns the weights of the views (i.e., distance matrices) automatically by exploiting the consensus information and complementary nature of the views. Experimental results on synthetic as well as real datasets demonstrate the effectiveness of MVMDS. We hope that our work encourages wider consideration in the many domains where MDS is needed.
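    The abstract does not give the MVMDS update rules, but the general idea of embedding several distance matrices with automatically learned view weights can be sketched as follows. This is a minimal illustration, not the authors' algorithm: classical MDS is applied to a weighted combination of the views, and each view is re-weighted by how well the current embedding reproduces it. The function names and the inverse-error weighting are assumptions made for the sketch.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points from a pairwise distance matrix
    by double-centering the squared distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]            # largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

def multi_view_mds(distance_matrices, dim=2, iters=20):
    """Toy multi-view MDS (illustrative only, not the MVMDS updates):
    alternate between embedding a weighted combination of the views and
    re-weighting each view by how well the embedding reproduces it."""
    views = [np.asarray(D, dtype=float) for D in distance_matrices]
    m = len(views)
    w = np.full(m, 1.0 / m)                       # uniform initial view weights
    for _ in range(iters):
        D_comb = sum(wi * Di for wi, Di in zip(w, views))
        X = classical_mds(D_comb, dim)
        # pairwise distances of the current embedding
        D_emb = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        errors = np.array([np.linalg.norm(Di - D_emb) for Di in views])
        w = 1.0 / (errors + 1e-12)                # low-error views get more weight
        w /= w.sum()
    return X, w
```

    In MVMDS itself, as described in the abstract, the view weights are learned automatically from the consensus and complementary information across views rather than by the ad hoc alternation used in this sketch.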

    3D Well-Composed Pictures

    No full text
    By a segmented image, we mean a digital image in which each point is assigned a unique label indicating the object to which it belongs. By the foreground (objects) of a segmented image, we mean the objects whose properties we want to analyze, and by the background all the other objects of the digital image. If one adjacency relation is used for the foreground of a 3D segmented image (e.g., 6-adjacency) and a different one for the background (e.g., 26-adjacency), then interchanging the foreground and the background can change the connected components of the digital picture. Hence, the choice of foreground and background is critical for the results of the subsequent analysis (such as object grouping), especially when it is not clear at the beginning of the analysis what constitutes the foreground and what the background, since this choice immediately determines the connected components of the digital picture. A specia
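    As a concrete illustration of why the adjacency choice matters (not taken from the paper), the sketch below labels the connected components of a tiny 3D binary picture under 6-adjacency and under 26-adjacency using scipy: two voxels that share only a corner form one object under 26-adjacency but two objects under 6-adjacency.

```python
import numpy as np
from scipy import ndimage

# Two foreground voxels that share only a corner: connected under
# 26-adjacency, but not under 6-adjacency.
picture = np.zeros((2, 2, 2), dtype=int)
picture[0, 0, 0] = 1
picture[1, 1, 1] = 1

six_adjacency = ndimage.generate_binary_structure(3, 1)         # faces only
twenty_six_adjacency = ndimage.generate_binary_structure(3, 3)   # faces, edges, corners

_, n_components_6 = ndimage.label(picture, structure=six_adjacency)
_, n_components_26 = ndimage.label(picture, structure=twenty_six_adjacency)

print(n_components_6)   # 2 -- the voxels are separate objects under 6-adjacency
print(n_components_26)  # 1 -- the voxels form a single object under 26-adjacency
```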

    New EM Derived from Kullback-Leibler Divergence

    No full text
    We introduce a new EM framework in which it is possible to optimize not only the model parameters but also the number of model components. A key feature of our approach is that we use nonparametric density estimation to improve parametric density estimation within the EM framework. While the classical EM algorithm estimates model parameters empirically from the data points themselves, we estimate them from nonparametric density estimates. There are many potential applications that require optimal adjustment of the number of model components. We present experimental results in two domains. One is polygonal approximation of laser range data, which is an active research topic in robot navigation. The other is grouping of edge pixels into contour boundaries, which remains an unsolved problem in computer vision.
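    For context, the sketch below implements the classical EM baseline that the abstract contrasts with: a 1-D Gaussian mixture whose parameters are re-estimated directly from the data points, with the number of components k fixed in advance. The proposed variant (estimating parameters from nonparametric density estimates and adjusting the number of components) is not reproduced here; all names in the sketch are assumptions.

```python
import numpy as np

def em_gmm_1d(x, k, iters=100, seed=0):
    """Classical EM for a 1-D Gaussian mixture: parameters are estimated
    empirically from the data points, and k is fixed in advance."""
    rng = np.random.default_rng(seed)
    n = x.size
    weights = np.full(k, 1.0 / k)
    means = rng.choice(x, size=k, replace=False)
    variances = np.full(k, x.var())

    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = (weights / np.sqrt(2 * np.pi * variances)
                * np.exp(-0.5 * (x[:, None] - means) ** 2 / variances))
        r = dens / dens.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixture parameters from the responsibilities
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r * x[:, None]).sum(axis=0) / nk
        variances = (r * (x[:, None] - means) ** 2).sum(axis=0) / nk

    return weights, means, variances

# Example: two well-separated clusters
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(4, 0.5, 200)])
print(em_gmm_1d(data, k=2))
```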
