
    A framework for automatic semantic video annotation

    The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the ‘semantic gap’. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any prior information about the analyzed video on the one hand, and the huge amount of generic knowledge required on the other. This paper introduces a framework for the automatic semantic annotation of unconstrained videos. The proposed framework utilizes two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledgebases. A commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and evaluation of the results and statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
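The two-layer design described above (visual similarity matching feeding an annotation analysis) can be illustrated, in greatly simplified form, as nearest-neighbour tag transfer over low-level feature vectors. The clip names, feature vectors and tags below are hypothetical placeholders, not the paper's data:

```python
import math

# Toy annotated database: feature vector (e.g. a colour/motion histogram) -> tags.
# Real systems would extract these features from video frames.
database = {
    "clip_a": ([0.9, 0.1, 0.0], {"running", "outdoor"}),
    "clip_b": ([0.8, 0.2, 0.1], {"jogging", "park"}),
    "clip_c": ([0.0, 0.1, 0.9], {"swimming", "pool"}),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def annotate(query_features, k=2):
    """Transfer tags from the k visually most similar annotated clips."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine(query_features, item[1][0]),
                    reverse=True)
    tags = set()
    for _, (_, clip_tags) in ranked[:k]:
        tags |= clip_tags
    return tags

print(sorted(annotate([0.85, 0.15, 0.05])))
```

In the full framework, the transferred tags would then be filtered and enriched by the commonsense annotation-analysis layer rather than returned directly.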

    The Use of Digital Video Annotation in Teacher Training: The Teachers’ Perspectives

    The use of digital video offers interesting opportunities in teacher training, particularly the possibilities provided by video annotation, whereby people can add and share comments and opinions on the same videos, even from different places. This exploratory study aims to examine teachers’ perspectives on this technology, taking into account both their explicit and implicit evaluations. Different methods of using video annotation for training are compared: one based on individual use, another supported by various types of tutorship. The data were collected and analysed first through a quantitative phase, followed by an in-depth qualitative phase. The findings point out that to make this technology fully operational it is important to address the cultural and psychosocial aspects that control the emotional conditions which arise when one’s teaching behaviour is being observed and assessed.

    Interactive Video Annotation Tool

    Proceedings of: Fourth International Workshop on User-Centric Technologies and Applications (CONTEXTS 2010), Valencia, 7-10 September 2010. Abstract: The computer vision discipline increasingly needs annotated video databases for assessment tasks. Manually providing ground-truth data for multimedia resources is very expensive in terms of effort, time and economic resources. Automatic and semi-automatic video annotation and labelling is the faster and more economical way to obtain ground truth for fairly large video collections. In this paper, we describe a new automatic and supervised video annotation tool. The annotation tool is a modified version of the ViPER-GT tool. The standard version of ViPER-GT allows manual editing and reviewing of video metadata to generate assessment data. Automatic annotation is possible thanks to an incorporated tracking system which can handle the visual data association problem in real time. The research aim is to offer a system that enables users to spend less time producing valid assessment models.

    Hierarchical recognition of intentional human gestures for sports video annotation

    We present a novel technique for the recognition of complex human gestures for video annotation using accelerometers and the hidden Markov model. Our extension to the standard hidden Markov model allows us to consider gestures at different levels of abstraction through a hierarchy of hidden states. Accelerometers in the form of wrist bands are attached to humans performing intentional gestures, such as umpires in sports. Video annotation is then performed by populating the video with time stamps indicating significant events where a particular gesture occurs. The novelty of the technique lies in the development of a probabilistic hierarchical framework for complex gesture recognition and the use of accelerometers to extract gestures and significant events for video annotation.
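The decoding step rests on standard hidden Markov model machinery. As a sketch, plain Viterbi decoding over discretised accelerometer readings looks as follows; the states, probabilities and observation bins are invented for illustration, and the paper's hierarchical extension would stack such models so that upper-level states emit sub-gestures:

```python
# Minimal Viterbi decoding over discretised accelerometer readings.
# All states and probabilities below are hypothetical illustration values.
states = ["rest", "raise_arm"]
start_p = {"rest": 0.8, "raise_arm": 0.2}
trans_p = {"rest": {"rest": 0.7, "raise_arm": 0.3},
           "raise_arm": {"rest": 0.4, "raise_arm": 0.6}}
# Observations: "low" / "high" acceleration-magnitude bins.
emit_p = {"rest": {"low": 0.9, "high": 0.1},
          "raise_arm": {"low": 0.2, "high": 0.8}}

def viterbi(obs):
    """Return the most likely hidden-state sequence for the observations."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, p = max(((r, V[-1][r] * trans_p[r][s]) for r in states),
                          key=lambda x: x[1])
            col[s] = p * emit_p[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace back the most probable path.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["low", "high", "high", "low"]))
```

Each decoded run of a gesture state would then yield a time stamp for the annotation step.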

    Video databases annotation enhancing using commonsense knowledgebases for indexing and retrieval

    The rapidly increasing number of video collections, especially on the web, has motivated the need for intelligent automated annotation tools for searching, rating, indexing and retrieval purposes. These video collections contain all types of manually annotated videos. As this annotation is usually incomplete and uncertain and contains misspelled words, a keyword search typically retrieves only a portion of the videos that actually contain the desired meaning. Hence, this annotation needs filtering, expanding and validating for better indexing and retrieval. In this paper, we present a novel framework for video annotation enhancement, based on merging two widely known commonsense knowledgebases, namely WordNet and ConceptNet. In addition, a comparison between these knowledgebases in the video annotation domain is presented. Experiments were performed on random wide-domain video clips from the vimeo.com website. Results show that searching for a video over enhanced tags, based on our proposed framework, outperforms searching using the original tags. Moreover, the annotation enhanced by our framework outperforms those enhanced by WordNet and ConceptNet individually, in terms of tag enrichment ability, concept diversity and, most importantly, retrieval performance.
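The enhancement pipeline (correct, validate, then expand tags) can be sketched as follows. The toy synonym, related-concept and misspelling tables below merely stand in for WordNet, ConceptNet and real spelling correction; they are not the paper's resources:

```python
# Toy stand-ins for WordNet synonyms and ConceptNet related concepts;
# a real system would query the actual knowledgebases.
synonyms = {"car": {"automobile", "auto"}, "ocean": {"sea"}}
related = {"car": {"road", "driving"}, "ocean": {"beach", "wave"}}

# A tiny misspelling table stands in for real spelling correction.
corrections = {"ocaen": "ocean"}

def enhance_tags(tags):
    """Correct, then expand, a clip's tag set for indexing and retrieval."""
    enhanced = set()
    for tag in tags:
        tag = corrections.get(tag, tag)       # filter/validate step
        enhanced.add(tag)
        enhanced |= synonyms.get(tag, set())  # WordNet-style expansion
        enhanced |= related.get(tag, set())   # ConceptNet-style expansion
    return enhanced

print(sorted(enhance_tags({"car", "ocaen"})))
# -> ['auto', 'automobile', 'beach', 'car', 'driving', 'ocean', 'road', 'sea', 'wave']
```

A query for "sea" would now match a clip originally tagged only "ocaen", which is the retrieval gain the paper measures.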

    Linked Data based video annotation and browsing for distance learning

    We present a pair of prototype tools that enable users to mark up video with annotations and later explore related materials using Semantic Web and Linked Data approaches. The first tool helps academics preparing Open University course materials to mark up videos with information about the subject matter and audio-visual content. The second tool enables users, such as students or academics, to find video and other materials relevant to their study.

    Attitudes and Experiences of Preservice Teachers Utilizing Video Annotation Software: A Phenomenological Study

    The purpose of this hermeneutic phenomenological study was to explore preservice teachers’ experiences with video observations at Central University. The theory guiding this study was Bandura’s self-efficacy theory, as it provides insights into the internal and external factors that affect an individual’s perception of their capabilities. Self-efficacy is a critical component and goal of field experience observations. The central research question for this hermeneutic phenomenological study was: What are preservice teachers’ attitudes and experiences using video annotation software during field experience? Data were collected through individual interviews with preservice teachers, audio-visual elicitation interviews, a letter-writing activity, and qualitative data aggregation. Four themes were derived from the participants’ experiences: (a) streamlined reflection, (b) digital detachment, (c) the supervisor variable, and (d) program components’ effect on self-efficacy. Interpretation of the themes yielded four significant findings: (a) video annotation software improves reflection capabilities and personal agency, (b) video annotation software is a field supervision tool, not a replacement, (c) convenient but not complete: the asynchronous communication of video annotation software is not enough, and (d) expectations and structure matter.

    Character-angle based video annotation

    A video annotation system includes clip organization, feature description and pattern determination. This paper presents a system for basketball zone-defence detection. In particular, a character-angle based descriptor for feature description is proposed. Experimental results on basketball zone-defence detection demonstrate that the descriptor is robust in both simulated and real-life cases, with less sensitivity to the disturbance caused by local translation of subprime defenders. Such a framework can be easily applied to other team-work sports.
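While the paper's exact descriptor is not reproduced here, a character-angle style feature can be sketched as the interior angles measured at each defender along the formation chain. The 2-3 zone coordinates below are hypothetical court positions:

```python
import math

def angle_at(vertex, p, q):
    """Interior angle (degrees) at `vertex` formed by points p and q."""
    a1 = math.atan2(p[1] - vertex[1], p[0] - vertex[0])
    a2 = math.atan2(q[1] - vertex[1], q[0] - vertex[0])
    d = abs(math.degrees(a1 - a2)) % 360
    return min(d, 360 - d)

def zone_descriptor(positions):
    """Angles at each interior defender along the formation chain."""
    return [angle_at(positions[i], positions[i - 1], positions[i + 1])
            for i in range(1, len(positions) - 1)]

# Hypothetical 2-3 zone: five defender positions, listed along the chain.
formation = [(0, 0), (2, 3), (4, 0), (6, 3), (8, 0)]
print([round(a, 1) for a in zone_descriptor(formation)])
# -> [67.4, 67.4, 67.4]
```

Because each angle depends only on a defender and its two neighbours, a small translation of one defender perturbs at most three angles, which is consistent with the robustness to local translation claimed above.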