2 research outputs found

    Feature and Region Selection for Visual Learning

    Visual learning problems such as object classification and action recognition are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning: Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and for inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and to jointly optimize these latent variables with the parameters of a classifier (e.g., a support vector machine). Our approach has four main benefits: (1) it accommodates non-linear additive kernels such as the popular χ² and intersection kernels; (2) it handles both regions in images and spatio-temporal regions in videos in a unified way; (3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; (4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results on the PASCAL VOC 2007, MSR Action Dataset II, and YouTube datasets illustrate the benefits of our approach.
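
    The joint optimization described in this abstract can be pictured as an MKL-style alternation: fix the latent weights, train the SVM on the resulting kernel, then take a reduced-gradient step on the weights over the simplex. The following minimal Python sketch illustrates that loop for feature (visual-word) selection with a per-word χ² kernel; the function names, the fixed step size, and the per-word kernel construction are illustrative assumptions, not the paper's actual implementation (which, among other things, uses a line search and also handles region selection).

    import numpy as np
    from sklearn.svm import SVC

    def project_simplex(v):
        """Euclidean projection onto the probability simplex."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1.0)
        return np.maximum(v + theta, 0.0)

    def chi2_per_word(X, Z):
        """One chi-squared similarity slab per visual word: returns (d, n, m)."""
        num = 2.0 * X[:, None, :] * Z[None, :, :]
        den = X[:, None, :] + Z[None, :, :] + 1e-12
        return np.transpose(num / den, (2, 0, 1))

    def fit_weighted_svm(X, y, n_iter=50, step=0.1, C=1.0):
        """Alternate SVM training with projected-gradient updates of the latent
        per-word weights (small-scale sketch: stores d kernels of size n x n)."""
        n, d = X.shape
        slabs = chi2_per_word(X, X)                   # one base kernel per word
        v = np.full(d, 1.0 / d)                       # latent weights on the simplex
        for _ in range(n_iter):
            K = np.tensordot(v, slabs, axes=1)        # weighted additive kernel
            svm = SVC(C=C, kernel="precomputed").fit(K, y)
            a = np.zeros(n)
            a[svm.support_] = svm.dual_coef_.ravel()  # signed dual coefs y_i * alpha_i
            # Gradient of the dual objective w.r.t. each weight: -1/2 a^T K_w a
            grad = np.array([-0.5 * a @ slabs[w] @ a for w in range(d)])
            v = project_simplex(v - step * grad)      # reduced/projected gradient step
        return v, svm

    Words (or regions) that end up with near-zero weight are effectively pruned, and visualizing the surviving weights is what yields the "which regions, which words" interpretation the abstract refers to.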

    Incorporation of radius-info can be simple with SimpleMKL

    Recent research has shown the benefit of incorporating the radius of the Minimal Enclosing Ball (MEB) of the training data into Multiple Kernel Learning (MKL). However, directly incorporating this radius leads to a complex learning structure and considerably increased computation. Moreover, the notorious sensitivity of this radius to outliers can adversely affect MKL. In this paper, instead of incorporating the radius of the MEB directly, we incorporate its close relative, the trace of the data scattering matrix, to avoid these problems. By analyzing the characteristics of the resulting optimization, we show that the benefit of incorporating the radius of the MEB is fully retained. More importantly, our algorithm can be realized effortlessly within existing MKL frameworks such as SimpleMKL; the only difference is the way the basic kernels are normalized. Although this kernel normalization is not our invention, our theoretical derivation uncovers why it achieves better classification performance, an explanation that has not previously appeared in the literature. As experimentally demonstrated, our method achieves the overall best learning performance in various settings. From another perspective, our work improves SimpleMKL to utilize the information of the radius of the MEB in an efficient and practical way. © 2012 Elsevier B.V.
    Xinwang Liu, Lei Wang, Jianping Yin, Lingqiao Li
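
    Because the method reduces to a different normalization of the basic kernels inside an otherwise unchanged SimpleMKL pipeline, the preprocessing step can be sketched in a few lines. The sketch below assumes the normalizer is the trace of each kernel's data scattering matrix, computed with the standard identity tr(S) = (1/n) Σᵢ K(xᵢ, xᵢ) − (1/n²) Σᵢⱼ K(xᵢ, xⱼ); the exact form derived in the paper may differ.

    import numpy as np

    def scatter_trace(K):
        """Trace of the data scattering matrix in the kernel's feature space."""
        n = K.shape[0]
        return np.trace(K) / n - K.sum() / n**2

    def normalize_kernels(kernels):
        """Divide each basic kernel by its scattering-matrix trace; the result
        is fed unchanged to an off-the-shelf MKL solver such as SimpleMKL."""
        return [K / max(scatter_trace(K), 1e-12) for K in kernels]

    # Hypothetical usage with RBF base kernels at several bandwidths:
    # base = [rbf_kernel(X, gamma=g) for g in (0.1, 1.0, 10.0)]
    # mkl_ready = normalize_kernels(base)

    No other change to the MKL solver is required, which is exactly the practicality the abstract emphasizes.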
