
    An MPEG-7 scheme for semantic content modelling and filtering of digital video

    Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format to which multimedia content models should conform in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme that can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich, multi-faceted semantic content models and supports a content-based filtering approach that analyses only the content relating directly to the user's preferred content requirements. We present details of the scheme, the front-end systems used for content modelling and filtering, and experiences with a number of users.
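
    To make the idea of an MPEG-7 semantic description concrete, the sketch below builds a small MDS-flavoured fragment in Python. The element names (Mpeg7, Description, Semantic, SemanticBase, Label, Name) follow the general MPEG-7 MDS vocabulary, but the specific event being modelled is an invented illustration, not an example taken from the COSMOS-7 paper.

```python
import xml.etree.ElementTree as ET

# Illustrative MPEG-7-style semantic description. The namespace URI and
# element vocabulary follow MPEG-7 MDS conventions; the modelled event
# itself is a hypothetical placeholder.
MPEG7_NS = "urn:mpeg:mpeg7:schema:2001"
XSI_NS = "http://www.w3.org/2001/XMLSchema-instance"
ET.register_namespace("", MPEG7_NS)
ET.register_namespace("xsi", XSI_NS)

def ns(tag):
    """Qualify a tag with the MPEG-7 namespace."""
    return f"{{{MPEG7_NS}}}{tag}"

root = ET.Element(ns("Mpeg7"))
desc = ET.SubElement(root, ns("Description"))
semantic = ET.SubElement(desc, ns("Semantic"))

# A semantic entity typed via xsi:type, as MDS descriptions do.
event = ET.SubElement(semantic, ns("SemanticBase"))
event.set(f"{{{XSI_NS}}}type", "EventType")
label = ET.SubElement(event, ns("Label"))
name = ET.SubElement(label, ns("Name"))
name.text = "Goal scored in the second half"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

    A filtering front-end of the kind the paper describes could then match user preferences against such typed semantic entities rather than scanning the full content model.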

    Review of Current Student-Monitoring Techniques Used in eLearning-Focused Recommender Systems and Learning Analytics: The Experience API & LIME Model Case Study

    Recommender systems require input information in order to operate properly and deliver content or behaviour suggestions to end users. eLearning scenarios are no exception. Users are current students, and recommendations can be built upon paths (both formal and informal), relationships, behaviours, friends, followers, actions, grades, tutor interaction, etc. A recommender system must somehow retrieve, categorize and work with all these details. There are several ways to do so: from raw and inelegant database access to more curated web APIs, or even HTML scraping. New server-centric user-action logging and monitoring standard technologies have been presented in recent years by several groups, organizations and standards bodies. The Experience API (xAPI), detailed in this article, is one of these. In the first part of this paper we analyse current learner-monitoring techniques as an initialization phase for eLearning recommender systems. We next review standardization efforts in this area; finally, we focus on xAPI and its potential interaction with the LIME model, which is also summarized below.
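
    The xAPI monitoring approach the abstract refers to records learner actions as "actor-verb-object" statements. The sketch below builds one such statement as JSON; the learner identity, course IRI and activity name are illustrative placeholders, while the overall statement shape and the "completed" verb IRI follow the Experience API specification.

```python
import json

# Minimal xAPI statement in the actor-verb-object shape defined by the
# Experience API specification. The learner and activity identifiers
# are invented placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "mbox": "mailto:learner@example.org",
        "name": "Example Learner",
    },
    "verb": {
        # A standard ADL verb IRI with a human-readable display form.
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/courses/recsys-101/unit-3",
        "definition": {"name": {"en-US": "Unit 3: Collaborative Filtering"}},
    },
}

# A Learning Record Store (LRS) would receive this as JSON over HTTP;
# here we only serialise it to show the wire format.
payload = json.dumps(statement, indent=2)
print(payload)
```

    A recommender system can then query the LRS for such statements instead of scraping HTML or reading the application database directly, which is exactly the cleaner input path the paper argues for.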

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches instead heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential of cross-fertilization, i.e., the possibility of reusing Web Data Extraction techniques originally designed for one domain in other domains.
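
    As a minimal illustration of the wrapper-style, ad-hoc extraction the survey classifies, the sketch below uses Python's standard-library HTML parser to pull (title, link) records out of one fixed page layout. The page markup and the `product` class rule are invented for the example; real systems induce or hand-craft such site-specific rules.

```python
from html.parser import HTMLParser

# A toy wrapper-style extractor: it emits (text, href) records for
# anchors matching a hand-crafted, site-specific rule (class="product").
class ProductLinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.current_href = None
        self.records = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("class") == "product":
            self.in_link = True
            self.current_href = attrs.get("href")

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.records.append((data.strip(), self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

# Hypothetical page fragment standing in for a fetched document.
html = """
<ul>
  <li><a class="product" href="/p/1">Laptop</a></li>
  <li><a class="other" href="/about">About</a></li>
  <li><a class="product" href="/p/2">Phone</a></li>
</ul>
"""
parser = ProductLinkExtractor()
parser.feed(html)
print(parser.records)  # -> [('Laptop', '/p/1'), ('Phone', '/p/2')]
```

    The fragility of such rules when a site changes its layout is precisely why the survey contrasts ad-hoc wrappers with more reusable Information Extraction techniques.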

    An Architecture for TV Content Distributed Search and Retrieval Using the MPEG Query Format (MPQF)

    Traditional broadcasting of TV content is beginning to coexist with new models of user-aware content delivery. The definition of interoperable interfaces for precise content search and retrieval between the different parties involved is a requirement for deploying the new audiovisual distribution services. This paper presents the design of an architecture based on the MPEG Query Format (MPQF) for providing the interoperability necessary to deploy distributed audiovisual content search and retrieval networks between content producers, distributors, aggregators and consumer devices. A service-oriented architecture based on Web Services technology is defined. This paper also shows how the architecture can be applied to a real scenario, the XAC (Xarxa IP Audiovisual de Catalunya, Audiovisual IP Network of Catalonia). As far as we know, this is the first paper to apply MPQF to distributed TV content search and retrieval.
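
    To give a feel for what an MPQF exchange carries, the sketch below parses a simplified free-text query. The MpegQuery/Query/Input nesting follows the general shape of the MPEG Query Format, but namespaces and exact schema details are deliberately omitted, so treat this as an illustration rather than a schema-valid MPQF document.

```python
import xml.etree.ElementTree as ET

# Simplified MPQF-style request: a consumer device asks the network
# for content matching free text. Element names approximate the MPQF
# structure; the query text is an invented example.
query_xml = """
<MpegQuery>
  <Query>
    <Input>
      <QueryCondition>
        <Condition type="QueryByFreeText">
          <FreeText>Barcelona football highlights</FreeText>
        </Condition>
      </QueryCondition>
    </Input>
  </Query>
</MpegQuery>
"""

root = ET.fromstring(query_xml)
free_text = root.find("./Query/Input/QueryCondition/Condition/FreeText")
print(free_text.text)  # -> Barcelona football highlights
```

    In the architecture the paper describes, a Web Service endpoint at each aggregator would accept such a document, evaluate it against local metadata, and return an MPQF response, which is what makes the parties interoperable.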

    WISM'07 : 4th international workshop on web information systems modeling



    A Generic Approach and Framework for Managing Complex Information

    Several application domains, such as healthcare, incorporate domain knowledge into their day-to-day activities to standardise and enhance their performance. Such incorporation produces complex information, which contains two main clusters (active and passive) of information with internal connections between them. The active cluster determines the recommended procedure that should be taken as a reaction to specific situations. The passive cluster determines the information that describes these situations, other descriptive information, and the execution history of the complex information. In the healthcare domain, a medical patient plan is an example of complex information produced during disease management from specific clinical guidelines. This thesis investigates complex information management at the application-domain level in order to support day-to-day organisational activities. A unified generic approach and framework, called SIM (Specification, Instantiation and Maintenance), has been developed for computerising complex information management. The SIM approach provides a conceptual model for the complex information at different abstraction levels (generic and entity-specific). In the SIM approach, the complex information at the generic level is referred to as a skeletal plan, from which several entity-specific plans are generated. In the SIM framework, the complex information goes through three phases: specifying the skeletal plans, instantiating entity-specific plans, and then maintaining these entity-specific plans during their lifespan. A language, called AIM (Advanced Information Management), has been developed to support the main functionalities of the SIM approach and framework. AIM consists of three components: AIMSL, the AIM ESPDoc model, and AIMQL.
    AIMSL is the AIM specification component, which supports the formalisation of the complex information at the generic level (skeletal plans). The AIM ESPDoc model is a computer-interpretable model for entity-specific plans. AIMQL is the AIM query component, which provides support for manipulating and querying the complex information, including special manipulation operations and query capabilities such as replay queries. The applicability of the SIM approach and framework is demonstrated through a proof-of-concept system, called AIMS, built with available technologies such as XML and a DBMS. The thesis evaluates the AIMS system using a clinical case study applied to a medical test request application.
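
    The three-phase lifecycle described above (specify a skeletal plan, instantiate entity-specific plans, maintain them over their lifespan) can be sketched as follows. All field names, the example clinical steps, and the dictionary representation are hypothetical illustrations; the thesis's AIMSL and ESPDoc formats are XML-based and considerably richer.

```python
from copy import deepcopy

# Hypothetical skeletal plan: a generic template with an active cluster
# (recommended procedures) and a passive cluster (descriptive info).
# The clinical content is invented for illustration.
skeletal_plan = {
    "plan_id": "diabetes-review",
    "active": [
        {"step": "order HbA1c test", "when": "every 3 months"},
        {"step": "review medication", "when": "each visit"},
    ],
    "passive": {"condition": "type 2 diabetes"},
}

def instantiate(skeleton, patient_id):
    """Instantiation phase: derive an entity-specific plan
    from the skeletal plan for one patient."""
    plan = deepcopy(skeleton)
    plan["patient_id"] = patient_id
    plan["history"] = []  # execution history, filled during maintenance
    return plan

def record_execution(plan, step, when):
    """Maintenance phase: append an executed step to the
    entity-specific plan's history."""
    plan["history"].append({"step": step, "date": when})

plan = instantiate(skeletal_plan, patient_id="P-001")
record_execution(plan, "order HbA1c test", "2024-01-15")
print(plan["patient_id"], len(plan["history"]))  # -> P-001 1
```

    A replay query of the kind AIMQL supports would then walk the recorded history to reconstruct how an entity-specific plan evolved over time.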