
    Video browsing interfaces and applications: a review

    We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. Activity involving video data (e.g., storage, retrieval, and sharing) has increased significantly in the past decade, for both personal and professional use. The ever-growing amount of video content available for human consumption, and the inherent characteristics of video data (which, presented in raw form, is rather unwieldy and costly to handle), have become driving forces for the development of more effective solutions for presenting video content and enabling rich user interaction. As a result, there are many contemporary research efforts toward better video browsing solutions, which we summarize. We review more than 40 video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we summarize existing work, highlight the technical aspects of each solution, and compare the solutions against each other.

    A model-based approach to hypermedia design

    This paper introduces the MESH approach to hypermedia design, which combines established entity-relationship and object-oriented abstractions with proprietary concepts into a formal hypermedia data model. Uniform layout and link-typing specifications can be attributed and inherited in a static node-typing hierarchy, whereas both nodes and links can be submitted dynamically to multiple complementary classifications. In addition, the data model's support for a context-based navigation paradigm and a platform-independent implementation framework are briefly discussed.

    Video Data Visualization System: Semantic Classification And Personalization

    We present an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large-scale corpus of videos. Our work builds on the semantic classes obtained from semantic analysis of the videos; these classes are projected into the visualization space. The visualization is a graph of nodes and edges: the nodes are the keyframes of video documents, and the edges represent the relations between documents and their classes. Finally, we construct a user profile, based on the user's interaction with the system, to better adapt the system to the user's preferences.
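    The graph structure described in this abstract (keyframe nodes, with edges linking video documents to their semantic classes) can be sketched as follows. This is a minimal illustration, not the authors' implementation; all video and class names are hypothetical.

```python
# Minimal sketch of the visualization graph described above:
# nodes are keyframes of video documents plus semantic classes,
# and edges link each document to the classes it belongs to.

class VisualizationGraph:
    def __init__(self):
        self.nodes = set()   # keyframes and class labels
        self.edges = set()   # (keyframe, class) relations

    def add_video(self, keyframe, classes):
        """Register a video by its representative keyframe and its semantic classes."""
        self.nodes.add(keyframe)
        for cls in classes:
            self.nodes.add(cls)
            self.edges.add((keyframe, cls))

    def neighbours(self, node):
        """All nodes sharing an edge with `node`; related documents cluster around a class."""
        return ({b for a, b in self.edges if a == node}
                | {a for a, b in self.edges if b == node})

graph = VisualizationGraph()
graph.add_video("video_01.kf", ["sports"])
graph.add_video("video_02.kf", ["sports", "news"])
print(graph.neighbours("sports"))  # both videos are linked to this class
```

    A layout engine would then place each keyframe near the classes it is connected to, which is what allows users to explore the corpus class by class.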

    Generating Presentation Constraints from Rhetorical Structure

    Hypermedia structured in terms of the higher-level intent of its author can be adapted to a wider variety of final presentations. Many multimedia systems encode such high-level intent as constraints on time, spatial layout, or navigation. Once specified, these constraints are translated into specific presentations whose timelines, screen displays, and navigational structure satisfy them. This ensures that the desired spatial, temporal, and navigational properties are maintained no matter how the presentation is adapted to varying circumstances. Rhetorical structure defines author intent at a still higher level; authoring at this level requires that rhetorics be translated into final presentations that properly reflect them. This paper explores how rhetorical structure can be translated into constraints, which are in turn translated into final presentations. This enables authoring in terms of rhetorics and assures that the rhetorics remain properly conveyed under any presentation adaptation.
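    The pipeline this abstract describes (rhetorical structure, then constraints, then a concrete presentation) can be sketched as a simple translation table. The relation names and constraint forms below are illustrative assumptions, not the paper's actual formalism.

```python
# Hedged sketch: translating rhetorical relations into lower-level
# temporal/spatial presentation constraints. Relation names and the
# constraint representation are hypothetical.

RHETORIC_TO_CONSTRAINTS = {
    # An elaboration is presented after the segment it elaborates.
    "elaboration": lambda nucleus, satellite: [
        ("temporal", f"{nucleus} before {satellite}")],
    # Background material stays visible alongside the main segment.
    "background": lambda nucleus, satellite: [
        ("spatial", f"{satellite} beside {nucleus}"),
        ("temporal", f"{satellite} during {nucleus}")],
}

def constraints_for(relation, nucleus, satellite):
    """Translate one rhetorical relation into presentation constraints."""
    return RHETORIC_TO_CONSTRAINTS[relation](nucleus, satellite)

print(constraints_for("background", "intro_video", "location_map"))
```

    A constraint solver would then lay out timelines and screen regions so that every generated constraint holds, whatever the target device.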

    Space for Two to Think: Large, High-Resolution Displays for Co-located Collaborative Sensemaking

    Large, high-resolution displays have the potential to enhance single-display groupware collaborative sensemaking for intelligence analysis tasks by providing space in which common ground can develop, but it is up to the visual analytics tools to use this space effectively. In an exploratory study, we compared two tools (Jigsaw and a document viewer), each adapted to support multiple input devices, to observe how the large display space was used to establish and maintain common ground during an intelligence analysis scenario involving 50 textual documents. We discuss the spatial strategies employed by the pairs of participants, which were largely dependent on tool type (data-centric or function-centric), as well as how different visual analytics tools, used collaboratively on large, high-resolution displays, affect common ground in both process and solution. From these findings, we suggest design considerations that enable future co-located collaborative sensemaking tools to take advantage of the benefits of collaborating on large, high-resolution displays.

    Generating multimedia presentations: from plain text to screenplay

    In many Natural Language Generation (NLG) applications, the output is limited to plain text – i.e., a string of words with punctuation and paragraph breaks, but no indications of layout, pictures, or dialogue. In several projects, we have begun to explore NLG applications in which these extra media are brought into play. This paper gives an informal account of what we have learned. For coherence, we focus on the domain of patient information leaflets, following an example in which the same content is expressed first as plain text, then as formatted text, then as text with pictures, and finally as a dialogue script that can be performed by two animated agents. We show how the same meaning can be mapped to realisation patterns in different media, and how the expanded options for expressing meaning relate to the perceived style and tone of the presentation. Throughout, we stress that the extra media are not simply added to plain text but integrated with it: the use of formatting, pictures, or dialogue may require radical rewording of the text itself.

    Adaptation of scalable multimedia documents

    Several scalable media codecs have been standardized in recent years to cope with heterogeneous usage conditions and to provide audio, video, and image content in the best possible quality. Today, interactive multimedia presentations are becoming accessible on handheld terminals and face the same adaptation challenges as the media elements they present: widely varying screen, memory, and processing-power capabilities. In this paper, we address the adaptation of multimedia documents by applying the concept of scalability to their presentation. The Scalable MSTI document model introduced in this paper was designed with two main requirements in mind. First, the adaptation process must be simple to execute, because it may be performed on limited terminals in broadcast scenarios. Second, the adaptation process must be simple to describe, so that authored adaptation directives can be transported along with the document at a limited bandwidth overhead. The Scalable MSTI model achieves both objectives by specifying Spatial, Temporal, and Interactive scalability axes along which incremental authoring creates progressive presentation layers. Our experiments are conducted on scalable multimedia documents designed for Digital Radio services on DMB channels using MPEG-4 BIFS, and for web services using XHTML, SVG, SMIL, and Flash. A scalable image gallery is described throughout this article and illustrates the features offered by our document model in a rich multimedia example.
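    The idea of progressive presentation layers described in this abstract can be illustrated with a small sketch: a limited terminal keeps the base layer plus every incremental enhancement layer it can afford, and drops the rest. The layer names and cost values are hypothetical and are not part of the Scalable MSTI specification.

```python
# Illustrative sketch of layered adaptation: a document carries progressive
# presentation layers (here along a single hypothetical axis), ordered from
# the base layer upward; adaptation simply truncates the layer stack.

def adapt(layers, capability):
    """Keep the base layer plus every enhancement layer the terminal can afford.
    `layers` must be ordered from base (cost 0) upward."""
    kept = []
    for layer in layers:
        if layer["cost"] <= capability:
            kept.append(layer["name"])
        else:
            break  # layers are incremental: stop at the first unaffordable one
    return kept

document = [
    {"name": "base_text",    "cost": 0},
    {"name": "still_images", "cost": 2},
    {"name": "animated_svg", "cost": 5},
]
print(adapt(document, capability=3))  # a handheld terminal drops the animation
```

    Because adaptation is a simple truncation over an ordered layer stack, it is cheap enough to run on a constrained receiver in a broadcast scenario, which matches the model's first design requirement.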