137 research outputs found

    Processing Structured Hypermedia: A Matter of Style

    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in approaches towards the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations.

    An MPEG-7 scheme for semantic content modelling and filtering of digital video

    Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format multimedia content models should conform to in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, front-end systems used for content modelling and filtering, and experiences with a number of users.
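
    To make the filtering idea concrete, here is a minimal Python sketch of how an MPEG-7-style semantic content model might be filtered against a user's preferred objects and events. The Segment structure, the annotation vocabulary and the filter_segments helper are illustrative assumptions, not the COSMOS-7 format or API.

```python
# Illustrative sketch of content-based filtering over an MPEG-7-style
# semantic model. The segment/object/event structure and the preference
# matching below are hypothetical, not the COSMOS-7 specification.
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One video segment annotated with semantic entities (objects, events)."""
    start: float                      # segment start time in seconds
    end: float                        # segment end time in seconds
    objects: set[str] = field(default_factory=set)
    events: set[str] = field(default_factory=set)

def filter_segments(segments, preferred_objects, preferred_events):
    """Keep only segments whose annotations overlap the user's preferences,
    so downstream analysis touches just the content the user asked for."""
    return [
        s for s in segments
        if s.objects & preferred_objects or s.events & preferred_events
    ]

if __name__ == "__main__":
    model = [
        Segment(0.0, 12.5, objects={"goalkeeper", "ball"}, events={"save"}),
        Segment(12.5, 30.0, objects={"crowd"}, events={"celebration"}),
        Segment(30.0, 47.0, objects={"referee", "ball"}, events={"free kick"}),
    ]
    hits = filter_segments(model, preferred_objects={"ball"},
                           preferred_events=set())
    for s in hits:
        print(f"{s.start:.1f}-{s.end:.1f}s: objects={sorted(s.objects)}")
```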

    Everything You Wanted to Know About MPEG-7: Part 2

    Part 1 of this article provided an overview of the development, functionality, and applicability of MPEG-7. In Part 2 we discuss MPEG-7's concepts, terminology, and requirements. We then compare MPEG-7 with other approaches to multimedia content description.

    Supporting Adaptive and Adaptable Hypermedia Presentation Semantics

    Having the content of a presentation adapt to the needs, resources and prior activities of a user can be an important benefit of electronic documents. While part of this adaptation is related to the encodings of individual data streams, much of the adaptation can/should be guided by the semantics in and among the objects of the presentation. The semantics involved in having hypermedia presentations adapt can be divided between adaptive hypermedia, which adapts autonomously, and adaptable hypermedia, which requires presentation-external intervention to be adapted. Understanding adaptive and adaptable hypermedia and the differences between them helps in determining the best way to have a particular hypermedia implementation adapt to the varying circumstances of its presentation. The choice of which type of semantics to represent can affect the speed of the database management system processing them. This paper reflects on research and implementation approaches toward both adaptive and adaptable hypermedia and how they apply to specifying the semantics involved in hypermedia authoring and processing. We look at adaptive approaches by considering CMIF and SMIL; the adaptable approaches are represented by the SGML-related collection of formats and the Standard Reference Model (SRM) for IPMS. Based on our experience with both adaptive and adaptable hypermedia, we offer recommendations on how each approach can be supported at the data storage level.
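
    The adaptive/adaptable distinction can be illustrated with a small Python sketch: an adaptive player chooses among authored alternatives on its own at presentation time (in the spirit of a SMIL switch evaluated against system attributes), whereas an adaptable document is rewritten by a presentation-external party before playback. The alternative list and both functions are hypothetical illustrations, not CMIF, SMIL or SRM interfaces.

```python
# Hypothetical contrast between adaptive and adaptable behaviour; the data
# and functions below are illustrative, not CMIF or SMIL APIs.

# Alternatives for one media object, in author-preferred order,
# each guarded by a minimum required bandwidth (kbit/s).
ALTERNATIVES = [
    {"src": "lecture_video.mp4", "min_bandwidth": 1500},
    {"src": "lecture_slides_audio.mp3", "min_bandwidth": 128},
    {"src": "lecture_transcript.html", "min_bandwidth": 0},
]

def adaptive_select(alternatives, available_bandwidth):
    """Adaptive: the player decides autonomously at presentation time,
    much like a switch evaluated against system attributes."""
    for alt in alternatives:
        if available_bandwidth >= alt["min_bandwidth"]:
            return alt["src"]
    return alternatives[-1]["src"]

def adaptable_rewrite(alternatives, editor_choice):
    """Adaptable: a presentation-external intervention (an author or a
    server-side transformation) fixes the choice before playback."""
    return [alt for alt in alternatives if alt["src"] == editor_choice]

if __name__ == "__main__":
    print(adaptive_select(ALTERNATIVES, available_bandwidth=300))
    print(adaptable_rewrite(ALTERNATIVES, "lecture_transcript.html"))
```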

    Topic Map Generation Using Text Mining

    Starting from text corpus analysis with linguistic and statistical analysis algorithms, an infrastructure for text mining is described which uses collocation analysis as a central tool. This text mining method may be applied to different domains as well as languages. Some examples taken from large reference databases motivate the applicability to knowledge management using declarative standards of information structuring and description. The ISO/IEC Topic Map standard is introduced as a candidate for rich metadata description of information resources, and it is shown how text mining can be used for automatic topic map generation.
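
    As a rough illustration of the step from collocation analysis to topic maps, the following Python sketch scores within-sentence word pairs with pointwise mutual information and turns the scored pairs into topics and "related-to" associations. The tokenisation, the PMI measure and the output structure are simplifying assumptions, not the paper's text-mining infrastructure or the ISO/IEC Topic Maps syntax.

```python
# Illustrative sketch: derive topic-map-like topics and associations from
# collocation statistics. Tokenisation, PMI scoring and the output format
# are simplifications chosen for brevity.
import math
import re
from collections import Counter
from itertools import combinations

def collocations(sentences, min_count=2):
    """Score within-sentence word pairs by pointwise mutual information."""
    word_freq, pair_freq = Counter(), Counter()
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z]+", sentence.lower()))
        word_freq.update(tokens)
        pair_freq.update(frozenset(p) for p in combinations(sorted(tokens), 2))
    total = sum(word_freq.values())
    scores = {}
    for pair, n in pair_freq.items():
        if n < min_count:
            continue
        a, b = tuple(pair)
        scores[(a, b)] = math.log((n * total) / (word_freq[a] * word_freq[b]))
    return scores

def to_topic_map(scores, threshold=0.0):
    """Turn scored collocations into topics plus 'related-to' associations."""
    topics = sorted({w for pair in scores for w in pair})
    associations = [
        {"type": "related-to", "members": list(pair), "score": round(s, 2)}
        for pair, s in scores.items() if s > threshold
    ]
    return {"topics": topics, "associations": associations}

if __name__ == "__main__":
    corpus = [
        "Topic maps describe subjects and their associations.",
        "Text mining finds collocations in a corpus.",
        "Collocations in a corpus can seed topic maps.",
    ]
    print(to_topic_map(collocations(corpus)))
```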

    Processing Structured Hypermedia - A Matter of Style

    Vliet, J.C. van [Promotor]; Eliens, A. [Copromotor]
