
    Adaptive hypermedia for education and training

    Adaptive hypermedia (AH) is an alternative to the traditional, one-size-fits-all approach in the development of hypermedia systems. AH systems build a model of the goals, preferences, and knowledge of each individual user; this model is used throughout the interaction with the user to adapt to the needs of that particular user (Brusilovsky, 1996b). For example, a student in an adaptive educational hypermedia system will be given a presentation that is adapted specifically to his or her knowledge of the subject (De Bra & Calvi, 1998; Hothi, Hall, & Sly, 2000) as well as a suggested set of the most relevant links to proceed further (Brusilovsky, Eklund, & Schwarz, 1998; Kavcic, 2004). An adaptive electronic encyclopedia will personalize the content of an article to augment the user's existing knowledge and interests (Bontcheva & Wilks, 2005; Milosavljevic, 1997). A museum guide will adapt the presentation about every visited object to the user's individual path through the museum (Oberlander et al., 1998; Stock et al., 2007). Adaptive hypermedia belongs to the class of user-adaptive systems (Schneider-Hufschmidt, Kühme, & Malinowski, 1993). A distinctive feature of an adaptive system is an explicit user model that represents user knowledge, goals, and interests, as well as other features that enable the system to adapt to different users with their own specific set of goals. An adaptive system collects data for the user model from various sources that can include implicitly observing user interaction and explicitly requesting direct input from the user. The user model is applied to provide an adaptation effect, that is, to tailor interaction to different users in the same context. In different kinds of adaptive systems, adaptation effects can vary greatly. In AH systems, adaptation is limited to three major technologies: adaptive content selection, adaptive navigation support, and adaptive presentation. The first of these three technologies comes from the fields of adaptive information retrieval (IR) and intelligent tutoring systems (ITS). When the user searches for information, the system adaptively selects and prioritizes the most relevant items (Brajnik, Guida, & Tasso, 1987; Brusilovsky, 1992b).
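    To make the adaptation loop described above concrete, the following is a minimal Python sketch of an explicit user model driving adaptive navigation support. It is an illustration only, not code from the cited systems; the class names, the 0.6 mastery threshold, and the prerequisite structure are all assumptions.

from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Explicit model of one user's knowledge and goals (illustrative)."""
    knowledge: dict[str, float] = field(default_factory=dict)  # concept -> mastery in [0, 1]
    goals: set[str] = field(default_factory=set)                # concepts the user wants to reach

    def observe(self, concept: str, success: bool) -> None:
        """Implicit update from observed interaction, e.g. a quiz answer."""
        level = self.knowledge.get(concept, 0.0)
        self.knowledge[concept] = min(1.0, level + 0.2) if success else max(0.0, level - 0.1)


def annotate_links(user: UserModel, pages: dict[str, list[str]]) -> dict[str, str]:
    """Adaptive navigation support: annotate each page link as
    'recommended', 'ready', or 'not-ready' from the user's prerequisite mastery."""
    annotations = {}
    for page, prerequisites in pages.items():
        mastered = all(user.knowledge.get(c, 0.0) >= 0.6 for c in prerequisites)
        if not mastered:
            annotations[page] = "not-ready"
        elif page in user.goals or any(c in user.goals for c in prerequisites):
            annotations[page] = "recommended"
        else:
            annotations[page] = "ready"
    return annotations


if __name__ == "__main__":
    user = UserModel(goals={"adaptive-navigation"})
    for _ in range(3):
        user.observe("hypermedia-basics", success=True)
    pages = {
        "adaptive-navigation": ["hypermedia-basics"],
        "adaptive-presentation": ["adaptive-navigation"],
    }
    print(annotate_links(user, pages))

    The same user model could equally drive the other two technologies, e.g. by filtering page fragments (adaptive presentation) or re-ranking search results (adaptive content selection).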

    CHORUS Deliverable 4.5: Report of the 3rd CHORUS Conference

    The third and last CHORUS conference on Multimedia Search Engines took place from the 26th to the 27th of May 2009 in Brussels, Belgium. About 100 participants from 15 European countries, the US, Japan and Australia learned about the latest developments in the domain. An exhibition of 13 stands presented 16 research projects currently ongoing around the world.

    Bridging the Semantic Gap in Multimedia Information Retrieval: Top-down and Bottom-up approaches

    Semantic representation of multimedia information is vital for enabling the kind of multimedia search capabilities that professional searchers require. Manual annotation is often not possible because of the sheer scale of the multimedia information that needs indexing. This paper explores the ways in which we are using both top-down, ontologically driven approaches and bottom-up, automatic-annotation approaches to provide retrieval facilities to users. We also discuss many of the current techniques that we are investigating to combine these top-down and bottom-up approaches.
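    As a rough sketch of what combining the two evidence sources can look like (this is an assumption of mine, not the paper's method), a query can be expanded top-down through an ontology and then merged with bottom-up detector confidences per multimedia item; the toy ontology, weights, and names below are illustrative.

from collections import defaultdict

# Toy ontology: concept -> narrower concepts (illustrative assumption).
ONTOLOGY = {
    "vehicle": ["car", "bus"],
    "animal": ["dog", "cat"],
}


def expand_query(concepts: list[str]) -> set[str]:
    """Top-down step: add narrower ontology concepts to the query."""
    expanded = set(concepts)
    for c in concepts:
        expanded.update(ONTOLOGY.get(c, []))
    return expanded


def combine_scores(query_concepts: list[str],
                   auto_annotations: dict[str, dict[str, float]],
                   alpha: float = 0.5) -> list[tuple[str, float]]:
    """Merge ontology matches (counts) with bottom-up annotation confidences."""
    expanded = expand_query(query_concepts)
    scores: dict[str, float] = defaultdict(float)
    for item, annotations in auto_annotations.items():
        top_down = sum(1.0 for c in expanded if c in annotations)
        bottom_up = sum(conf for c, conf in annotations.items() if c in expanded)
        scores[item] = alpha * top_down + (1 - alpha) * bottom_up
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Automatic annotations: item -> {concept: detector confidence}.
    annotations = {
        "clip-1": {"car": 0.9, "road": 0.7},
        "clip-2": {"dog": 0.8},
    }
    print(combine_scores(["vehicle"], annotations))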

    Processing and Linking Audio Events in Large Multimedia Archives: The EU inEvent Project

    In the inEvent EU project [1], we aim at structuring, retrieving, and sharing large archives of networked, and dynamically changing, multimedia recordings, mainly consisting of meetings, videoconferences, and lectures. More specifically, we are developing an integrated system that performs audiovisual processing of multimedia recordings, and labels them in terms of interconnected “hyper-events” (a notion inspired by hyper-texts). Each hyper-event is composed of simpler facets, including audio-video recordings and metadata, which are then easier to search, retrieve and share. In the present paper, we mainly cover the audio processing aspects of the system, including speech recognition, speaker diarization and linking (across recordings), the use of these features for hyper-event indexing and recommendation, and the search portal. We present initial results for feature extraction from lecture recordings using the TED talks. Index Terms: Networked multimedia events; audio processing: speech recognition; speaker diarization and linking; multimedia indexing and searching; hyper-events.
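    The hyper-event idea lends itself to a simple container-plus-linking representation. The sketch below is only an illustration under assumptions of my own (it is not the inEvent schema): a hyper-event holds an ASR-transcript facet and a speaker-diarization facet, and recordings that share a linked speaker label are connected across the archive.

from dataclasses import dataclass, field
from itertools import combinations


@dataclass
class HyperEvent:
    event_id: str
    transcript: str = ""                               # speech-recognition facet
    speakers: set[str] = field(default_factory=set)    # diarization / speaker-linking facet
    metadata: dict = field(default_factory=dict)       # e.g. title, date, venue


def link_events(events: list[HyperEvent]) -> list[tuple[str, str]]:
    """Link any two events that share at least one cross-recording speaker label."""
    links = []
    for a, b in combinations(events, 2):
        if a.speakers & b.speakers:
            links.append((a.event_id, b.event_id))
    return links


if __name__ == "__main__":
    archive = [
        HyperEvent("lecture-01", transcript="welcome to the talk", speakers={"spk_A"}),
        HyperEvent("meeting-07", speakers={"spk_A", "spk_B"}),
        HyperEvent("lecture-02", speakers={"spk_C"}),
    ]
    print(link_events(archive))   # lecture-01 and meeting-07 share speaker spk_A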

    An architecture for life-long user modelling

    In this paper, we propose a unified architecture for the creation of life-long user profiles. Our architecture combines the different steps required to build a user profile, including feature extraction and representation, reasoning, recommendation and presentation. We discuss various issues that arise in the context of life-long profiling.
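    A staged pipeline of this kind can be outlined in a few lines of Python. This is a minimal sketch of the general pattern (feature extraction, representation, reasoning, recommendation, presentation); every function name and the bag-of-terms representation are illustrative assumptions, not the paper's design.

from collections import Counter


def extract_features(interactions: list[str]) -> list[str]:
    """Feature extraction: tokenize the user's interaction history."""
    return [token.lower() for text in interactions for token in text.split()]


def build_profile(features: list[str]) -> Counter:
    """Representation: a long-term bag-of-terms profile accumulated over time."""
    return Counter(features)


def reason(profile: Counter, top_k: int = 3) -> list[str]:
    """Reasoning: infer the user's currently dominant interests."""
    return [term for term, _ in profile.most_common(top_k)]


def recommend(interests: list[str], items: dict[str, str]) -> list[str]:
    """Recommendation: rank catalogue items by overlap with inferred interests."""
    def overlap(description: str) -> int:
        return sum(1 for t in interests if t in description.lower())
    return sorted(items, key=lambda i: overlap(items[i]), reverse=True)


if __name__ == "__main__":
    history = ["semantic video search", "video summarisation", "search engines"]
    interests = reason(build_profile(extract_features(history)))
    catalogue = {"item-1": "Video search tutorial", "item-2": "Cooking recipes"}
    print(recommend(interests, catalogue))   # the presentation step would render this ranking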

    On User Modelling for Personalised News Video Recommendation

    In this paper, we introduce a novel approach for modelling user interests. Our approach captures users' evolving information needs, identifies aspects of their need and recommends relevant news items to the users. We introduce our approach within the context of personalised news video retrieval. A news video data set is used for experimentation. We employ a simulated user evaluation.
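    One common way to capture evolving interests, shown below purely as a hedged sketch rather than the authors' model, is a term-weight profile that decays over time so that recent viewing behaviour outweighs older interests; the decay factor and scoring are assumptions for illustration.

from collections import defaultdict


class EvolvingInterestProfile:
    def __init__(self, decay: float = 0.8):
        self.decay = decay
        self.weights: dict[str, float] = defaultdict(float)

    def update(self, watched_terms: list[str]) -> None:
        """Decay all existing weights, then reinforce terms from the watched item."""
        for term in self.weights:
            self.weights[term] *= self.decay
        for term in watched_terms:
            self.weights[term] += 1.0

    def score(self, item_terms: list[str]) -> float:
        """Relevance of a candidate news item to the current profile."""
        return sum(self.weights.get(t, 0.0) for t in item_terms)


if __name__ == "__main__":
    profile = EvolvingInterestProfile()
    profile.update(["election", "economy"])
    profile.update(["football", "transfer"])   # interests drift towards sport
    candidates = {"n1": ["election", "debate"], "n2": ["football", "match"]}
    ranked = sorted(candidates, key=lambda n: profile.score(candidates[n]), reverse=True)
    print(ranked)   # the sport item now ranks first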