
    Improving the utilization of training samples in visual recognition

    Recognition is a fundamental computer vision problem in which training samples are used to learn models that then assign labels to test samples. The utilization of training samples is of vital importance to visual recognition, and it can be improved by increasing the capability of the description methods and of the model learning methods. Two visual recognition tasks, namely object detection and action recognition, are considered in this thesis.

    Active learning uses selected subsets of the training dataset as training samples. Active learning methods select the most informative training samples in each iteration, and therefore require fewer training samples to attain performance comparable to passive learning methods. In this thesis, an active learning method for object detection that exploits the distribution of training samples is presented. Experiments show that the proposed method outperforms a passive learning method and a simple-margin active learning method.

    Weakly supervised learning enables learning from training samples with weak labels. In this thesis, a weakly supervised object detection method is proposed that utilizes training samples with probabilistic labels. Base detectors create object proposals from the weakly labeled training samples, and each proposal is then assigned an estimated probabilistic label. A Generalized Hough Transform based object detector is extended to use these probabilistically labeled proposals as training samples. The proposed method outperforms both a comparison method that assigns strong labels to object proposals and a weakly supervised deformable part models method, and it attains performance comparable to supervised learning methods.

    Increasing the capability of the description method can also improve the utilization of training samples. In this thesis, temporal pyramid histograms are proposed to address the lack of temporal information in the classical bag-of-features description used in action recognition. Experiments show that the proposed description method outperforms the classical bag-of-features method in action recognition.
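    As a rough illustration of the simple-margin selection strategy the thesis compares against, the sketch below runs uncertainty sampling with a linear SVM: in each round the classifier is retrained and the pool samples closest to the decision boundary are queried. The function name, batch size, and the use of scikit-learn are assumptions for illustration, not the thesis implementation; the thesis method additionally exploits the distribution of the training samples when selecting.

    ```python
    # Minimal sketch of simple-margin active learning (binary case assumed).
    import numpy as np
    from sklearn.svm import LinearSVC

    def margin_active_learning(X, y, n_init=20, n_rounds=10, batch=10, seed=0):
        rng = np.random.default_rng(seed)
        labeled = list(rng.choice(len(X), size=n_init, replace=False))
        pool = [i for i in range(len(X)) if i not in labeled]
        clf = LinearSVC()
        for _ in range(n_rounds):
            clf.fit(X[labeled], y[labeled])
            # Distance to the decision boundary: small |score| = most informative.
            margins = np.abs(clf.decision_function(X[pool]))
            picked = np.argsort(margins)[:batch]
            newly = [pool[i] for i in picked]
            labeled.extend(newly)          # query labels for these samples
            pool = [i for i in pool if i not in newly]
        clf.fit(X[labeled], y[labeled])    # final retrain on all queried samples
        return clf, labeled
    ```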
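    The temporal pyramid idea can be sketched in a similar spirit: split the frame sequence into progressively finer temporal segments, compute one bag-of-features histogram per segment, and concatenate. This is a minimal sketch assuming per-frame visual-word ids have already been quantized; the function name and the power-of-two level scheme are illustrative assumptions, not the thesis code.

    ```python
    # Minimal sketch of a temporal pyramid histogram over quantized frames.
    import numpy as np

    def temporal_pyramid_histogram(frame_words, vocab_size, levels=3):
        """frame_words: sequence of per-frame visual-word ids in [0, vocab_size).
        At level l the video is split into 2**l equal temporal segments, with one
        normalized histogram per segment; level 0 alone is the classical,
        orderless bag of features."""
        n = len(frame_words)
        pyramid = []
        for level in range(levels):
            segments = 2 ** level
            for s in range(segments):
                lo, hi = s * n // segments, (s + 1) * n // segments
                words = np.asarray(frame_words[lo:hi], dtype=np.int64)
                hist = np.bincount(words, minlength=vocab_size).astype(float)
                hist /= max(hist.sum(), 1.0)   # guard against empty segments
                pyramid.append(hist)
        return np.concatenate(pyramid)
    ```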

    Detecting complex events in user-generated video using concept classifiers

    Automatic detection of complex events in user-generated video (UGV) is a challenging task because the characteristics of UGV differ from those of broadcast video. In this work, we first summarize these characteristics and then explore how concept classifiers can be used to recognize complex events in UGV content. The method starts by manually selecting a variety of relevant concepts and constructing a classifier for each of them. Complex event detectors are then learned using the concatenated probabilistic scores of these concept classifiers as features. We also compare three fusion operations over the probabilistic scores, namely Maximum, Average and Minimum fusion. Experimental results suggest that the method is promising, and that Maximum fusion tends to give better performance for most complex events.
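    A minimal sketch of the fusion step, assuming each video is already split into segments and scored by the pre-trained concept classifiers; the array layout, function names, and the SVC example in the comment are assumptions for illustration, not the paper's implementation.

    ```python
    # Pool per-segment concept probabilities into one video-level feature vector.
    import numpy as np

    def event_features(concept_scores, fusion="max"):
        """concept_scores: (n_segments, n_concepts) array of probabilistic
        outputs from the concept classifiers. Returns an (n_concepts,) vector."""
        ops = {"max": np.max, "avg": np.mean, "min": np.min}
        return ops[fusion](concept_scores, axis=0)

    # The fused vectors then serve as features for one detector per complex
    # event, e.g. (illustrative choice of classifier):
    #   from sklearn.svm import SVC
    #   event_clf = SVC(probability=True).fit(train_features, train_labels)
    ```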

    Coherent Multi-Sentence Video Description with Variable Level of Detail

    Humans can easily describe what they see in a coherent way and at varying levels of detail. However, existing approaches to automatic video description mainly focus on single-sentence generation and produce descriptions at a fixed level of detail. In this paper, we address both limitations: we produce coherent multi-sentence descriptions of complex videos at a variable level of detail. We follow a two-step approach in which we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from the SR. To produce consistent multi-sentence descriptions, we model across-sentence consistency at the level of the SR by enforcing a consistent topic. We also contribute to the visual recognition of objects, by proposing a hand-centric approach, and to the robust generation of sentences, by using a word lattice. Human judges rate our multi-sentence descriptions as more readable, correct, and relevant than related work. To understand the difference between more detailed and shorter descriptions, we collect and analyze a video description corpus with three levels of detail.
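    The two-step pipeline with the consistent-topic constraint can be sketched as below. `predict_sr` and `realize` are hypothetical placeholders standing in for the paper's learned SR predictor and its lattice-based sentence generation; fixing the most frequent predicted topic across all segments is one simple way to realize across-sentence consistency, not necessarily the paper's exact mechanism.

    ```python
    # Simplified sketch of multi-sentence description via an intermediate SR.
    from collections import Counter

    def describe_video(segments, predict_sr, realize):
        """predict_sr(segment) -> dict with keys like 'topic', 'activity',
        'object' (hypothetical SR predictor); realize(sr) -> sentence string
        (hypothetical language generator). Returns one sentence per segment."""
        srs = [predict_sr(seg) for seg in segments]
        # Enforce across-sentence consistency at the SR level: pick one topic
        # for the whole video (here, simply the most frequent prediction).
        topic = Counter(sr["topic"] for sr in srs).most_common(1)[0][0]
        for sr in srs:
            sr["topic"] = topic
        return [realize(sr) for sr in srs]
    ```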