10 research outputs found

    No Spare Parts: Sharing Part Detectors for Image Categorization

    This work aims at image categorization using a representation of distinctive parts. Unlike existing part-based work, we argue that parts are naturally shared between image categories and should be modeled as such. We motivate our approach with a quantitative and qualitative analysis that backtracks where the selected parts come from. Our analysis shows that, in addition to the category parts defining the class, parts coming from the background context and parts from other image categories improve categorization performance. Part selection should therefore not be done separately for each category, but shared and optimized over all categories. To incorporate part sharing between categories, we present an algorithm based on AdaBoost that jointly optimizes part sharing and selection, as well as fusion with the global image representation. We achieve results competitive with the state of the art on object, scene, and action categories, further improving over deep convolutional neural networks.
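
    The joint optimization can be pictured as a multi-category boosting loop in which every round picks one part detector that is shared by all categories at once, rather than selecting parts per category. The Python sketch below illustrates this idea on precomputed part responses; the decision-stump weak learner, the median thresholds, and all names (select_shared_parts, part_responses) are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def select_shared_parts(part_responses, labels, n_rounds=10):
            """part_responses: (n_images, n_parts) max-pooled part detector scores.
            labels: (n_images, n_categories) with entries in {-1, +1}.
            Returns indices of the selected shared parts and their weights."""
            n_images, n_parts = part_responses.shape
            # One weight per (image, category) pair, updated jointly over all categories.
            w = np.full(labels.shape, 1.0 / labels.size)
            thresholds = np.median(part_responses, axis=0)
            selected, alphas = [], []
            for _ in range(n_rounds):
                best = None
                for p in range(n_parts):
                    # Decision stump on part p, shared by every category.
                    pred = np.where(part_responses[:, [p]] > thresholds[p], 1, -1)
                    err = np.sum(w * (pred != labels))  # joint weighted error over all categories
                    if best is None or err < best[1]:
                        best = (p, err, pred)
                p, err, pred = best
                err = np.clip(err, 1e-10, 1 - 1e-10)
                alpha = 0.5 * np.log((1 - err) / err)
                w = w * np.exp(-alpha * labels * pred)  # upweight jointly misclassified pairs
                w /= w.sum()
                selected.append(p)
                alphas.append(alpha)
            return selected, alphas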

    Deep feature fusion through adaptive discriminative metric learning for scene recognition

    With the development of deep learning techniques, the fusion of deep features has demonstrated a powerful capability to improve recognition performance. However, most researchers directly fuse different deep feature vectors without considering the complementary and consistent information among them. In this paper, from the viewpoint of metric learning, we propose a novel deep feature fusion method, called deep feature fusion through adaptive discriminative metric learning (DFF-ADML), to explore the complementary and consistent information for scene recognition. Concretely, we formulate an adaptive discriminative metric learning problem that not only fully exploits the discriminative information in each deep feature vector, but also adaptively fuses complementary information from different deep feature vectors. In addition, we map different deep feature vectors of the same image into a common space by different linear transformations, so that the consistent information is preserved as much as possible. Moreover, DFF-ADML is extended to a kernelized version. Extensive experiments on both natural scene and remote sensing scene datasets demonstrate the superiority and robustness of the proposed deep feature fusion method.
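
    A rough way to picture the common-space mapping is two linear projections trained so that the two views of the same image agree (consistency) while images from different classes stay apart (discriminability). The sketch below is a simplified stand-in for DFF-ADML, using a plain consistency term and a contrastive surrogate for the discriminative metric; the function name, hyperparameters, and optimization details are assumptions, not the paper's formulation.

        import numpy as np

        def fuse_views(X1, X2, y, dim=64, lr=1e-2, epochs=200, margin=1.0, seed=0):
            """X1, X2: (n, d1), (n, d2) deep features of the same images from two networks.
            y: (n,) integer class labels. Returns the two projection matrices W1, W2."""
            rng = np.random.default_rng(seed)
            n = X1.shape[0]
            W1 = rng.normal(scale=0.01, size=(X1.shape[1], dim))
            W2 = rng.normal(scale=0.01, size=(X2.shape[1], dim))
            for _ in range(epochs):
                Z1, Z2 = X1 @ W1, X2 @ W2
                # Consistency: projections of the same image should coincide.
                diff = Z1 - Z2
                gW1 = X1.T @ diff / n
                gW2 = -X2.T @ diff / n
                # Discriminability: pull same-class pairs together, push different-class
                # pairs apart (a simple contrastive surrogate on the fused embedding).
                Z = 0.5 * (Z1 + Z2)
                for i, j in zip(rng.integers(0, n, n), rng.integers(0, n, n)):
                    d = Z[i] - Z[j]
                    if y[i] == y[j]:
                        g = d                                   # gradient of 0.5*||d||^2
                    else:
                        g = -d if d @ d < margin else 0.0 * d   # hinge on squared distance
                    gW1 += 0.5 * np.outer(X1[i] - X1[j], g) / n
                    gW2 += 0.5 * np.outer(X2[i] - X2[j], g) / n
                W1 -= lr * gW1
                W2 -= lr * gW2
            return W1, W2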

    Learning part-based spatial models for laser-vision-based room categorization

    Room categorization, that is, recognizing the functionality of a never-before-seen room, is a crucial capability for a household mobile robot. We present a new approach to room categorization based on two-dimensional laser range data. The approach rests on a novel spatial model consisting of mid-level parts built on top of a low-level part-based representation. It is then fused with a vision-based method for room categorization, which likewise uses a spatial model of mid-level visual parts. In addition, we propose a new discriminative dictionary learning technique applied to part-dictionary selection in both the laser-based and vision-based modalities. Finally, we present a comparative analysis of laser-based, vision-based, and laser-vision-fusion-based approaches in a uniform part-based framework, evaluated on a large dataset with several categories of rooms from domestic environments.
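
    As a rough illustration of the laser-based part pipeline, the sketch below encodes a 2D range scan as a histogram over a dictionary of low-level shape parts and then fuses per-category scores from the laser and vision modalities with a simple weighted sum. The part dictionary is assumed to be given (the paper learns it with a discriminative dictionary learning technique), and all names and parameters are illustrative.

        import numpy as np

        def laser_part_histogram(scan, dictionary, window=16, step=8):
            """scan: (n_beams,) laser range readings. dictionary: (k, window) low-level parts.
            Encodes the scan as a normalized histogram over nearest dictionary parts."""
            hist = np.zeros(len(dictionary))
            for start in range(0, len(scan) - window + 1, step):
                segment = scan[start:start + window]
                segment = segment - segment.mean()           # local shape, offset-invariant
                dists = np.linalg.norm(dictionary - segment, axis=1)
                hist[np.argmin(dists)] += 1                  # assign segment to closest part
            return hist / max(hist.sum(), 1)

        def fuse_room_scores(laser_scores, vision_scores, alpha=0.5):
            """Late fusion of per-category scores from the laser and vision modalities."""
            return alpha * laser_scores + (1 - alpha) * vision_scores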