5 research outputs found

    Integration of visual and depth information for vehicle detection

    In this work an object class recognition method is presented. The method uses local image features and follows the part-based detection approach. It fuses intensity and depth information in a probabilistic framework. The depth of each local feature is used to weight the probability of finding the object at a given scale. To train the system for an object class, only a database of images annotated with bounding boxes is required, thus automating the extension of the system to different object classes. We apply our method to the problem of detecting vehicles from a moving platform. Experiments with a dataset of stereo images in an urban environment show a significant improvement in performance when using both information modalities.
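The core idea of weighting detections by depth is that a feature's stereo depth predicts the apparent scale at which the object should appear, so hypotheses whose scale disagrees with that prediction can be down-weighted. Below is a minimal sketch of such a weighting, assuming a simple pinhole relation between depth and scale; the constants and the Gaussian tolerance are illustrative choices, not the paper's actual model.

```python
import numpy as np

# Hypothetical constants, not taken from the paper.
FOCAL_TIMES_SIZE = 800.0   # focal length (px) * assumed metric object size (m)
SIGMA = 0.3                # tolerance on log-scale mismatch

def depth_scale_weight(feature_depth_m: float, candidate_scale_px: float) -> float:
    """Weight a detection hypothesis by how well its scale matches the
    scale predicted from the local feature's stereo depth."""
    expected_scale = FOCAL_TIMES_SIZE / feature_depth_m
    log_mismatch = np.log(candidate_scale_px / expected_scale)
    return float(np.exp(-0.5 * (log_mismatch / SIGMA) ** 2))

# A feature observed at 20 m supports a ~40 px hypothesis far more than a 120 px one.
print(depth_scale_weight(20.0, 40.0), depth_scale_weight(20.0, 120.0))
```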

    BOLD Features to Detect Texture-less Objects

    Object detection in images withstanding significant clutter and occlusion is still a challenging task whenever the object surface is characterized by poor informative content. We propose to tackle this problem by a compact and distinctive representation of groups of neighboring line segments aggregated over limited spatial supports and invariant to rotation, translation and scale changes. Peculiarly, our proposal allows for leveraging the inherent strengths of descriptor-based approaches, i.e. robustness to occlusion and clutter and scalability with respect to the size of the model library, also when dealing with scarcely textured objects.
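The general idea of aggregating neighboring line segments into a pose-invariant representation can be sketched with relative orientations, which are unchanged by rotation, translation and scaling. The sketch below is a simplified illustration under that assumption and is not the exact BOLD formulation; the neighbor count and bin count are arbitrary.

```python
import numpy as np

def segment_pair_descriptor(segments, k_neighbors=5, n_bins=12):
    """Minimal sketch: describe each line segment by a histogram of relative
    angles to its k nearest neighboring segments. `segments` is an (N, 4)
    array of endpoints (x1, y1, x2, y2); returns (N, n_bins) L1-normalized
    histograms."""
    segments = np.asarray(segments, dtype=float)
    mids = 0.5 * (segments[:, :2] + segments[:, 2:])
    angles = np.arctan2(segments[:, 3] - segments[:, 1],
                        segments[:, 2] - segments[:, 0])
    descriptors = np.zeros((len(segments), n_bins))
    for i, (mid, ang) in enumerate(zip(mids, angles)):
        dists = np.linalg.norm(mids - mid, axis=1)
        neighbors = np.argsort(dists)[1:k_neighbors + 1]
        rel = (angles[neighbors] - ang) % np.pi          # relative orientation
        bins = (rel / np.pi * n_bins).astype(int) % n_bins
        np.add.at(descriptors[i], bins, 1.0)
        descriptors[i] /= max(descriptors[i].sum(), 1e-9)
    return descriptors
```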

    Kernel and Classifier Level Fusion for Image Classification

    Automatic understanding of visual information is one of the main requirements for a complete artificial intelligence system and an essential component of autonomous robots. State-of-the-art image recognition approaches are based on different local descriptors, each capturing some properties of the image such as intensity, color and texture. Each set of local descriptors is represented by a codebook and gives rise to a separate feature channel. For classification, the feature channels are combined using multiple kernel learning (MKL), early fusion, or classifier-level fusion approaches. Due to the importance of complementary information in fusion techniques, there is an increasing demand for diverse feature channels. The first part of the thesis focuses on ways to encode information from images that is complementary to the state-of-the-art local features. To address this issue we present a novel image representation which can encode the structure of an object and propose three descriptors based on this representation. In state-of-the-art recognition systems the kernels are often computed independently of each other and thus may be highly informative yet redundant. Proper selection and fusion of the kernels is, therefore, crucial to maximize the performance and to address the efficiency issues in visual recognition applications. We address this issue in the second part of the thesis, where we propose novel techniques to fuse feature channels for object and pattern recognition. We present an extensive evaluation of the fusion methods on four object recognition datasets and achieve state-of-the-art results on all of them. We also present results on four bioinformatics datasets to demonstrate that the proposed fusion methods work for a variety of pattern recognition problems, provided that we have multiple feature channels.
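To make the distinction between kernel-level and classifier-level fusion concrete, the sketch below combines three synthetic feature channels with uniform weights: once by averaging the precomputed kernels before training a single SVM, and once by averaging the decision scores of per-channel SVMs. The synthetic data, the uniform weights, and the use of scikit-learn's SVC are illustrative assumptions, not the thesis's learned fusion methods.

```python
import numpy as np
from sklearn.svm import SVC

# Three hypothetical feature channels, each providing precomputed
# train/test linear kernel matrices over synthetic data.
rng = np.random.default_rng(0)
y_train = rng.integers(0, 2, 60)
channels = []
for _ in range(3):
    f_train, f_test = rng.normal(size=(60, 10)), rng.normal(size=(20, 10))
    channels.append((f_train @ f_train.T, f_test @ f_train.T))

# Kernel-level (early) fusion: average the kernels, train one SVM.
K_train = np.mean([k for k, _ in channels], axis=0)
K_test = np.mean([k for _, k in channels], axis=0)
early = SVC(kernel="precomputed").fit(K_train, y_train)
early_scores = early.decision_function(K_test)

# Classifier-level (late) fusion: one SVM per channel, average decision scores.
late_scores = np.mean(
    [SVC(kernel="precomputed").fit(k_tr, y_train).decision_function(k_te)
     for k_tr, k_te in channels], axis=0)
```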

    Feature Pairs Connected by Lines for Object Recognition

    In this paper we exploit image edges and segmentation maps to build features for object category recognition. We build a parametric line-based image approximation to identify the dominant edge structures. Line ends are used as features described by histograms of gradient orientations. We then form descriptors based on connected line ends to incorporate weak topological constraints, which improve their discriminative power. Using point pairs connected by an edge assures higher repeatability than a random pair of points or edges. The results are compared with the state of the art and show a significant improvement on the challenging Pascal VOC 2007 recognition benchmark. Kernel-based fusion is performed to emphasize the complementary nature of our descriptors with respect to state-of-the-art features.
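A minimal sketch of the line-end pairing idea: each endpoint of a detected line gets a gradient-orientation histogram, and the two histograms are concatenated so the descriptor encodes the weak topological constraint of being connected by an edge. Patch size, bin count and the plain weighted histogram are illustrative choices, not the paper's exact descriptor.

```python
import numpy as np

def orientation_histogram(image, point, patch=16, n_bins=8):
    """Histogram of gradient orientations in a patch around `point` (x, y);
    a stand-in for the local descriptor at a line end."""
    x, y = int(point[0]), int(point[1])
    half = patch // 2
    roi = image[max(y - half, 0):y + half, max(x - half, 0):x + half]
    gy, gx = np.gradient(roi.astype(float))
    ang = np.arctan2(gy, gx) % np.pi
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / max(hist.sum(), 1e-9)

def line_pair_descriptor(image, line):
    """Concatenate the orientation histograms of the two endpoints of a
    detected line (x1, y1, x2, y2), pairing them as connected features."""
    x1, y1, x2, y2 = line
    return np.concatenate([orientation_histogram(image, (x1, y1)),
                           orientation_histogram(image, (x2, y2))])
```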