
    2D and 3D video scene text classification

    Text detection and recognition is a challenging problem in document analysis due to the unpredictable nature of video texts, such as variations in orientation, font and size, illumination effects, and even different 2D/3D text shadows. In this paper, we propose a novel horizontal and vertical symmetry feature calculated from the gradient direction and gradient magnitude of each text candidate, which yields Potential Text Candidates (PTCs) after applying the k-means clustering algorithm to the gradient image of each input frame. To verify the PTCs, we explore the temporal information of the video through an iterative process that continuously verifies the PTCs of the first frame and the successive frames until a convergence criterion is met. This outputs Stable Potential Text Candidates (SPTCs). For each SPTC, the method obtains text representatives with the help of the edge image of the input frame. Each text representative is then divided into four quadrants, and a new Mutual Nearest Neighbor Symmetry (MNNS) is checked based on the dominant stroke width distances of the four quadrants. A voting method is finally proposed to classify each text block as either 2D or 3D by counting the text representatives that satisfy MNNS. Experimental results on classifying 2D and 3D text images are promising, and the results are further validated by text detection and recognition before and after classification with existing methods, respectively.
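
    The candidate-generation step above (per-pixel gradient magnitude and direction, followed by k-means clustering on the gradient image) can be sketched as follows. This is a minimal illustration, not the authors' code: the two-cluster setup, the per-pixel feature layout, and the name potential_text_candidates are assumptions.

```python
# Sketch of PTC generation: gradient magnitude/direction per pixel,
# k-means on the gradient image, keep the high-magnitude cluster as the
# potential text mask. Cluster count and all names are assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def potential_text_candidates(frame_bgr, n_clusters=2):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    direction = np.arctan2(gy, gx)
    # Cluster every pixel on its (magnitude, direction) pair.
    feats = np.stack([magnitude.ravel(), direction.ravel()], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    labels = labels.reshape(gray.shape)
    # Text strokes produce strong gradients, so take the cluster with the
    # highest mean magnitude as the potential text mask.
    text_cluster = max(range(n_clusters),
                       key=lambda c: magnitude[labels == c].mean())
    return (labels == text_cluster).astype(np.uint8)
```

    Verifying the mask across successive frames until convergence, as the abstract describes, would then iterate this per frame and refine the candidates over time.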

    Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

    Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive, and little labeled data is available, particularly at the instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels. Comment: 10 pages, in Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
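
    The geometric core of the transfer, projecting an annotated 3D bounding primitive into the image and rasterizing its label, can be sketched with a plain pinhole camera model. The paper's actual transfer model is richer than this; the names (project_points, transfer_box_label), the box-corner input, and the convex-hull rasterization are illustrative assumptions.

```python
# Sketch of 3D-to-2D label transfer for one bounding primitive: project
# its corner points through the camera and paint their 2D convex hull
# with the class id. Pinhole model only; occlusion handling is omitted.
import cv2
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    cam = R @ points_3d.T + t.reshape(3, 1)  # world -> camera coordinates
    uvw = K @ cam                            # camera -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T              # perspective divide

def transfer_box_label(label_img, box_corners_3d, class_id, K, R, t):
    uv = project_points(box_corners_3d, K, R, t).astype(np.int32)
    hull = cv2.convexHull(uv.reshape(-1, 1, 2))
    cv2.fillConvexPoly(label_img, hull, class_id)
    return label_img
```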

    Local wavelet features for statistical object classification and localisation

    This article presents a system for texture-based probabilistic classification and localisation of 3D objects in 2D digital images and discusses selected applications. The objects are described by local feature vectors computed using the wavelet transform. In the training phase, object features are statistically modelled as normal density functions. In the recognition phase, a maximisation algorithm compares the learned density functions with the feature vectors extracted from a real scene and yields the classes and poses of the objects found in it. Experiments carried out on a real dataset of over 40,000 images demonstrate the robustness of the system in terms of classification and localisation accuracy. Finally, two important application scenarios are discussed, namely the classification of museum artefacts and the classification of metallography images.
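
    The training and recognition phases follow a classic fit-then-maximise pattern: one normal density per class over wavelet features, then maximum-likelihood classification. The sketch below shows that statistical skeleton only; it uses a coarse per-image feature vector, omits the localisation (pose) part, and all names are assumptions rather than the article's feature design.

```python
# Sketch of the statistical scheme: wavelet features, one multivariate
# normal per class, maximum-likelihood classification. The per-image
# summary features are a simplification of the article's local features.
import numpy as np
import pywt
from scipy.stats import multivariate_normal

def wavelet_features(image):
    """Mean and std of each subband from a single-level 2D Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float32), "haar")
    bands = (cA, cH, cV, cD)
    return np.array([b.mean() for b in bands] + [b.std() for b in bands])

def train(class_images):
    """Fit a normal density to the feature vectors of each class."""
    models = {}
    for cls, images in class_images.items():
        X = np.stack([wavelet_features(img) for img in images])
        cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])  # regularised
        models[cls] = multivariate_normal(X.mean(axis=0), cov)
    return models

def classify(image, models):
    """Return the class whose learned density maximises the likelihood."""
    x = wavelet_features(image)
    return max(models, key=lambda cls: models[cls].logpdf(x))
```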

    Action Recognition in Videos: from Motion Capture Labs to the Web

    This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework which highlights the evolution of the area, with techniques moving from heavily constrained motion capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and, thus, the constraints imposed on the type of video that each technique is able to address. Making the hypotheses and constraints explicit renders the framework particularly useful for selecting a method given an application. Another advantage of the proposed organization is that it allows the newest approaches to be categorized seamlessly alongside traditional ones, while providing an insightful perspective on the evolution of the action recognition task up to now. That perspective is the basis for the discussion at the end of the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables.