
    A framework for automatic semantic video annotation

    The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the 'semantic gap'. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any previous information about the analyzed video on the one hand, and the huge amount of generic knowledge required on the other. This paper introduces a framework for the Automatic Semantic Annotation of unconstrained videos. The proposed framework utilizes two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledge bases. A commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and the evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
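
    The two-layer design described in this abstract lends itself to a simple pipeline illustration. The sketch below is a hypothetical reconstruction, not the paper's implementation: the function names, the toy feature vectors, and the ConceptNet-style relation table are all assumptions made for the example.

```python
# Minimal sketch of a two-layer annotation pipeline: low-level visual
# similarity matching followed by commonsense-based annotation analysis.
# All names and data here are illustrative assumptions, not the paper's API.

from collections import Counter

def visual_similarity_matches(query_features, annotated_db, top_k=5):
    """Layer 1: rank annotated reference videos by low-level feature similarity."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    ranked = sorted(annotated_db,
                    key=lambda v: cosine(query_features, v["features"]),
                    reverse=True)
    return ranked[:top_k]

def expand_with_commonsense(tags, ontology):
    """Layer 2: boost candidate tags that are semantically related in a
    commonsense ontology (e.g. ConceptNet-style RelatedTo edges)."""
    scores = Counter(tags)
    for tag in tags:
        for related in ontology.get(tag, ()):
            if related in scores:
                scores[related] += 1  # mutually related tags corroborate each other
    return [t for t, _ in scores.most_common()]

# Toy usage: annotate a query video from its two most similar references.
db = [
    {"features": [0.9, 0.1], "tags": ["running", "park"]},
    {"features": [0.8, 0.2], "tags": ["jogging", "outdoor"]},
    {"features": [0.1, 0.9], "tags": ["cooking", "kitchen"]},
]
ontology = {"running": ["jogging"], "jogging": ["running"], "park": ["outdoor"]}
candidates = [t for v in visual_similarity_matches([0.85, 0.15], db, top_k=2)
              for t in v["tags"]]
print(expand_with_commonsense(candidates, ontology))
# -> semantically coherent tags ('running', 'jogging', 'outdoor') rank first
```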

    Commonsense Properties from Query Logs and Question Answering Forums

    Commonsense knowledge about object properties, human behavior and general concepts is crucial for robust AI applications. However, automatic acquisition of this knowledge is challenging because of sparseness and bias in online sources. This paper presents Quasimodo, a methodology and tool suite for distilling commonsense properties from non-standard web sources. We devise novel ways of tapping into search-engine query logs and QA forums, and combining the resulting candidate assertions with statistical cues from encyclopedias, books and image tags in a corroboration step. Unlike prior work on commonsense knowledge bases, Quasimodo focuses on salient properties that are typically associated with certain objects or concepts. Extensive evaluations, including extrinsic use-case studies, show that Quasimodo provides better coverage than state-of-the-art baselines with comparable quality.
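
    The distil-then-corroborate idea can be illustrated with a toy pipeline: mine candidate (subject, predicate, object) assertions from question-style queries, then keep only those supported by more than one independent source. The question pattern and the two-source threshold below are simplifying assumptions for the sketch, not Quasimodo's actual extraction rules or corroboration model.

```python
# Illustrative sketch: mine candidate assertions from query-log-style
# questions, then score them by how many independent sources agree.
# Patterns and scoring are simplified assumptions, not Quasimodo's rules.

import re
from collections import defaultdict

QUESTION_PATTERN = re.compile(r"why (?:do|are|is) (\w+) (?:so )?(\w+)")

def candidates_from_queries(queries):
    """Turn 'why are X Y' style queries into (X, hasProperty, Y) candidates."""
    out = []
    for q in queries:
        m = QUESTION_PATTERN.match(q.lower())
        if m:
            out.append((m.group(1), "hasProperty", m.group(2)))
    return out

def corroborate(candidates_by_source):
    """Keep assertions seen in at least two independent sources."""
    support = defaultdict(set)
    for source, triples in candidates_by_source.items():
        for t in triples:
            support[t].add(source)
    return {t: len(s) for t, s in support.items() if len(s) >= 2}

query_log = ["Why are elephants heavy?", "Why is snow cold?"]
qa_forum = ["why are elephants heavy", "why do cats purr"]
print(corroborate({
    "queries": candidates_from_queries(query_log),
    "forum": candidates_from_queries(qa_forum),
}))
# -> {('elephants', 'hasProperty', 'heavy'): 2}
```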

    Ontologies across disciplines


    Support for Internet-Based Commonsense Processing – Causal Knowledge Discovery Using Japanese "If" Forms

    This paper introduces our method for causal knowledge retrieval from Internet resources, its results, and an evaluation of its use in the utterance creation process. Our system automatically retrieves commonsensical knowledge from Web resources using simple web-mining and information extraction techniques. To retrieve causal knowledge, the system uses three specific Japanese "if" forms. From the results we conclude that the Japanese web pages indexed by a common search engine's spiders are sufficient for discovering common causal relationships, and that this knowledge can be used to make Human-Computer Interfaces sound more natural and interesting than with classic methods.
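
    The core extraction step lends itself to a small illustration: split a sentence at a Japanese conditional ("if") marker and treat the left part as the cause and the right part as the effect. The three markers and the regex-based splitting below are simplifying assumptions made for the sketch; real web-mined text would need proper morphological analysis rather than regular expressions.

```python
# Toy sketch of mining cause-effect pairs from Japanese text via
# conditional ("if") forms, in the spirit of the approach above.
# The marker set and splitting heuristic are simplifying assumptions.

import re

# Three common Japanese conditional markers: -tara, -(r)eba, nara.
IF_MARKERS = re.compile(r"(たら|れば|なら)")

def extract_causal_pair(sentence):
    """Split a sentence at the first conditional marker into
    (condition, consequence); return None if no marker is found."""
    parts = IF_MARKERS.split(sentence, maxsplit=1)
    if len(parts) == 3:
        condition, marker, consequence = parts
        return (condition + marker, consequence.strip("。 "))
    return None

sentences = [
    "雨が降ったら試合は中止です。",   # "If it rains, the match is cancelled."
    "これは条件節のない文です。",     # no conditional marker -> ignored
]
for s in sentences:
    pair = extract_causal_pair(s)
    if pair:
        print("cause:", pair[0], "| effect:", pair[1])
# -> cause: 雨が降ったら | effect: 試合は中止です
```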