    Advanced content-based semantic scene analysis and information retrieval: the SCHEMA project

    The aim of the SCHEMA Network of Excellence is to bring together a critical mass of universities, research centers, industrial partners and end users in order to design a reference system for content-based semantic scene analysis, interpretation and understanding. Relevant research areas include: content-based multimedia analysis and automatic annotation of semantic multimedia content, combined textual and multimedia information retrieval, the semantic web, the MPEG-7 and MPEG-21 standards, user interfaces and human factors. In this paper, recent advances in content-based analysis, indexing and retrieval of digital media within the SCHEMA Network are presented. These advances will be integrated into the SCHEMA module-based, expandable reference system.

    User-interface issues for browsing digital video

    In this paper we examine a suite of systems for content-based indexing and browsing of digital video and identify a superset of the features and functions these systems provide. Our classification shows that all of them are predominantly technology-driven, with little attention paid to actual user requirements. As part of our work we are developing an application for content-based browsing of digital video which will incorporate the most desirable yet achievable of the functions of other systems. This will be achieved via a series of continuously refined demonstrator systems from Spring 1999 onwards, which will be subjected to analysis of performance in terms of user

    A Database Approach for Modeling and Querying Video Data

    Indexing video data is essential for providing content-based access. In this paper, we consider how database technology can offer an integrated framework for modeling and querying video data. As many concerns in video (e.g., modeling and querying) are also found in databases, databases provide an interesting angle from which to attack many of these problems. From a video applications perspective, database systems provide a sound basis for future video systems. More generally, database research will provide solutions to many video issues, even if these are partial or fragmented. From a database perspective, video applications pose beautiful challenges: next-generation database systems will need to support multimedia data (e.g., image, video, audio), and these data types require new techniques for their management (i.e., storing, modeling, querying, etc.), so new solutions are significant. This paper develops a data model and a rule-based query language for content-based video indexing and retrieval. The data model is designed around the object and constraint paradigms. A video sequence is split into a set of fragments, and each fragment can be analyzed to extract the information (symbolic descriptions) of interest, which is put into a database that can then be searched. Two types of information are considered: (1) the entities (objects) of interest in the domain of a video sequence, and (2) the video frames which contain these entities. To represent this information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about the information represented in the model. The language has a clear declarative and operational semantics. This work is a major revision and consolidation of [12, 13]. It is an extended version of the article in: 15th International Conference on Data Engineering, Sydney, Australia, 1999.
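
    The fragment/entity split described in this abstract can be pictured with a small data-structure sketch. The Python below is only an illustration of that idea under assumed names (Fragment, Entity, frames_with); the paper's actual object- and constraint-based model and its rule-based query language are not reproduced here.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

# Hypothetical, simplified sketch of the fragment/entity idea from the abstract.
# Class names and fields are assumptions for illustration, not the paper's model.

@dataclass(frozen=True)
class Entity:
    name: str        # e.g. "anchor"
    category: str    # domain-specific kind of object, e.g. "person"

@dataclass
class Fragment:
    start_frame: int
    end_frame: int
    entities: Set[Entity] = field(default_factory=set)  # symbolic descriptions

@dataclass
class VideoDatabase:
    fragments: List[Fragment] = field(default_factory=list)

    def add(self, fragment: Fragment) -> None:
        self.fragments.append(fragment)

    def frames_with(self, entity_name: str) -> List[Tuple[int, int]]:
        """Frame ranges of fragments whose symbolic description contains the entity."""
        return [(f.start_frame, f.end_frame)
                for f in self.fragments
                if any(e.name == entity_name for e in f.entities)]

db = VideoDatabase()
db.add(Fragment(0, 120, {Entity("anchor", "person")}))
db.add(Fragment(121, 300, {Entity("car", "vehicle"), Entity("anchor", "person")}))
print(db.frames_with("anchor"))   # [(0, 120), (121, 300)]
```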

    Video Query Formulation

    For developing advanced query formulation methods for general multimedia data, we describe the issues specific to video data. We distinguish between the requirements for image retrieval and video retrieval by identifying queryable attributes unique to video data, namely audio, temporal structure, motion, and events. Our approach is based on visual query methods that describe predicates interactively while providing feedback that is as similar as possible to the video data. An initial prototype of our visual query system for video data is presented.
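
    As a rough illustration of the video-specific attributes listed in this abstract (audio, temporal structure, motion, events), the sketch below shows one possible way to combine such predicates into a single query object. All class and field names are assumptions; the prototype's visual query interface is not described here in enough detail to reproduce.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Illustrative sketch only: assumed representation of video-specific query
# attributes (audio, temporal structure, motion, events), not the prototype's API.

@dataclass
class Segment:
    start_s: float
    end_s: float
    has_speech: bool
    dominant_motion: str          # e.g. "pan_left", "zoom_in", "static"
    events: List[str]             # e.g. ["goal", "crowd_cheer"]

@dataclass
class VideoQuery:
    audio: Optional[Callable[[Segment], bool]] = None
    temporal: Optional[Callable[[Segment], bool]] = None
    motion: Optional[str] = None
    event: Optional[str] = None

    def matches(self, seg: Segment) -> bool:
        if self.audio and not self.audio(seg):
            return False
        if self.temporal and not self.temporal(seg):
            return False
        if self.motion and seg.dominant_motion != self.motion:
            return False
        if self.event and self.event not in seg.events:
            return False
        return True

# Example: segments longer than 5 s that contain speech, a left pan, and a "goal" event.
query = VideoQuery(
    audio=lambda s: s.has_speech,
    temporal=lambda s: (s.end_s - s.start_s) > 5.0,
    motion="pan_left",
    event="goal",
)
print(query.matches(Segment(10.0, 18.0, True, "pan_left", ["goal"])))  # True
```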

    Data modeling and querying in video databases

    Multimedia databases have been the subject of extensive research for the last ten years. In particular, indexing and knowledge-based representation of the semantics associated with video databases are challenging tasks. This research focuses on issues related to event modeling, query formulation, and query processing algorithms for video data. In order to develop viable solutions for content-based retrieval of video data, formal models are needed to capture and represent video events. In this thesis, we propose a Petri-net based formalism, known as the Hierarchical Petri net (HPN), to represent and index video data. HPNs allow multi-level semantic abstractions of events with an arbitrary degree of complexity. We elaborate on how HPNs can capture video data semantics succinctly, and propose algorithms to build a novel video browsing technique that seamlessly integrates low-level video semantics, such as object movements, with higher-level semantics representing complex scenarios. Another key contribution of this research is a Petri-net based formalism for content-based video query formulation and associated query processing algorithms. A major issue in this context is handling the inherent imprecision in query specification and in the data representation of video contents. We elaborate on different ways of specifying video queries and analyze their expressive power. Accordingly, we propose and analyze different techniques for processing video queries in terms of two key performance parameters, namely precision and recall.
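
    Precision and recall, the two performance parameters named at the end of this abstract, are standard measures; the set-based helper below is a generic sketch (retrieved vs. relevant segment identifiers), not code from the thesis.

```python
from typing import Set, Tuple

def precision_recall(retrieved: Set[int], relevant: Set[int]) -> Tuple[float, float]:
    """Precision = |retrieved ∩ relevant| / |retrieved|;
    recall = |retrieved ∩ relevant| / |relevant|."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# A query returns segments {1, 2, 5, 7}; ground truth marks {2, 5, 6} as relevant.
print(precision_recall({1, 2, 5, 7}, {2, 5, 6}))  # (0.5, 0.666...)
```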

    Video query: directions

    As digital video databases become more and more pervasive, finding video in large databases becomes a major problem. Because of the nature of video (streamed objects), accessing the content of such databases is inherently a time-consuming operation. Enabling intelligent means of video retrieval and rapid video viewing through the processing, analysis, and interpretation of visual content is, therefore, an important topic of research. In this paper, we survey the state of the art in video query and retrieval and propose a framework for video-query formulation and video retrieval based on an iterated sequence of navigating, searching, browsing, and viewing. We describe how the rich information media of video, in the forms of image, audio, and text, can be appropriately used in each stage of the search process to retrieve relevant segments. We also address the problem of automatic video annotation, that is, attaching meanings to video segments to aid the query steps. Subsequently, we present a novel framework of structural video analysis that focuses on the processing of high-level features as well as low-level visual cues. This processing augments the semantic interpretation of a wide variety of long video segments and assists in the search, navigation, and retrieval of video. We describe several such techniques.
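
    The iterated navigate/search/browse/view sequence proposed in this survey as a retrieval framework can be pictured as a simple loop. The skeleton below is only a sketch of that control flow; the stage names come from the abstract, while the callback hooks (run_stage, user_satisfied) are assumptions.

```python
from enum import Enum
from typing import Callable

class Stage(Enum):
    NAVIGATE = "navigate"   # move through the database / category structure
    SEARCH = "search"       # query over image, audio, and text features
    BROWSE = "browse"       # skim candidate segments (e.g. keyframes, skims)
    VIEW = "view"           # play back a selected segment

def retrieval_loop(run_stage: Callable, user_satisfied: Callable, max_rounds: int = 5):
    """Repeat the four-stage sequence, refining results, until the user accepts one."""
    results = None
    for _ in range(max_rounds):
        for stage in Stage:                      # iterates in definition order
            results = run_stage(stage, results)  # each stage refines the previous output
        if user_satisfied(results):
            return results
    return None
```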