78 research outputs found

    Format-independent media resource adaptation and delivery


    Format-independent and metadata-driven media resource adaptation using semantic web technologies

    Adaptation of media resources is an emerging field, driven by the growing amount of multimedia content on the one hand and an increasing diversity of usage environments on the other. Furthermore, to deal with the plethora of coding and metadata formats, format-independent adaptation systems are important. In this paper, we present a new format-independent adaptation system. The proposed system relies on a model that takes into account the structural metadata, semantic metadata, and scalability information of media bitstreams. The model is implemented using the Web Ontology Language (OWL). Existing coding formats are mapped to the structural part of the model, while existing metadata standards can be linked to its semantic part. Our new adaptation technique, called RDF-driven content adaptation, is based on executing SPARQL (SPARQL Protocol and RDF Query Language) queries over instances of the model for media bitstreams. RDF-driven content adaptation is compared to other adaptation techniques using different criteria. In addition to achieving real-time execution times, RDF-driven content adaptation provides a high level of abstraction for defining adaptations and allows seamless integration with existing semantic metadata standards.
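The core idea of RDF-driven content adaptation (selecting bitstream data units by querying a metadata model) can be sketched in pure Python. This is an illustrative stand-in, not the paper's system: the triple store is an in-memory list, the hand-written pattern match stands in for a real SPARQL query, and all predicate and unit names are invented.

```python
# Hypothetical sketch: structural metadata (byte ranges) and semantic metadata
# (topics) for data units of a media bitstream, held as RDF-style triples.
TRIPLES = [
    ("unit1", "hasByteRange", (0, 100)),
    ("unit1", "hasTopic", "sports"),
    ("unit2", "hasByteRange", (100, 250)),
    ("unit2", "hasTopic", "politics"),
    ("unit3", "hasByteRange", (250, 400)),
    ("unit3", "hasTopic", "sports"),
]

def select_units(topic):
    """Emulate a SPARQL SELECT: byte ranges of units annotated with `topic`."""
    matching = {s for s, p, o in TRIPLES if p == "hasTopic" and o == topic}
    return sorted(o for s, p, o in TRIPLES
                  if p == "hasByteRange" and s in matching)

def adapt(bitstream, topic):
    """The adaptation step: keep only the byte ranges of the selected units."""
    return b"".join(bitstream[a:b] for a, b in select_units(topic))
```

In the real system the query runs over an OWL model instance of the bitstream; the point here is only that adaptation reduces to a metadata query followed by byte-range extraction.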

    NinSuna: a fully integrated platform for format-independent multimedia content adaptation and delivery using Semantic Web technologies

    The current multimedia landscape is characterized by a significant heterogeneity in terms of coding and delivery formats, usage environments, and user preferences. The main contribution of this paper is a discussion of the design and functioning of a fully integrated platform for multimedia adaptation and delivery, called NinSuna. This platform is able to efficiently deal with the aforementioned heterogeneity in the present-day multimedia ecosystem, thanks to the use of format-agnostic adaptation engines (i.e., engines independent of the underlying coding format) and format-agnostic packaging engines (i.e., engines independent of the underlying delivery format). Moreover, NinSuna also provides a seamless integration between metadata standards and adaptation processes. Both our format-independent adaptation and packaging techniques rely on a model for multimedia bitstreams, describing the structural, semantic, and scalability properties of these streams. News sequences were used as a test case for our platform, enabling users to select news fragments matching their specific interests and usage environment characteristics.

    Active Tags : mastering XML with XML

    Many XML languages define tags for processing purposes rather than for describing data: XSLT, Apache's Ant, or, more recently, XProc. Each focuses on a single problem, but all could rely on a common framework supplying a shared set of services and interfaces. This paper discusses Active Tags, a language-independent and general-purpose XML system for native XML programming that aims to be a generic runtime container for several runnable markup languages. We describe the architecture of the system and show a few of its features: browsing non-XML data with XPath, designing macro-tags, mixing declarative sentences with imperative constructs, and filtering SAX streams with XPath patterns.
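The SAX-stream filtering mentioned above can be sketched with Python's standard library. This is not the Active Tags engine: a plain tag-name test stands in for the XPath patterns it uses, and the class and function names are illustrative.

```python
# Sketch of SAX-event filtering: suppress every element named `tag`
# (including its subtree) while forwarding all other events downstream.
import io
import xml.sax
from xml.sax.saxutils import XMLFilterBase, XMLGenerator

class DropTagFilter(XMLFilterBase):
    """Forward SAX events except those inside elements named `tag`."""
    def __init__(self, parent, tag):
        super().__init__(parent)
        self.tag = tag
        self.depth = 0  # > 0 while inside a suppressed subtree

    def startElement(self, name, attrs):
        if name == self.tag or self.depth:
            self.depth += 1
        else:
            super().startElement(name, attrs)

    def endElement(self, name):
        if self.depth:
            self.depth -= 1
        else:
            super().endElement(name)

    def characters(self, content):
        if not self.depth:
            super().characters(content)

def filter_xml(xml_text, tag):
    """Run `xml_text` through the filter and serialize the surviving events."""
    out = io.StringIO()
    parser = xml.sax.make_parser()
    filt = DropTagFilter(parser, tag)
    filt.setContentHandler(XMLGenerator(out))
    filt.parse(io.BytesIO(xml_text.encode("utf-8")))
    return out.getvalue()
```

A full XPath-pattern filter would replace the `name == self.tag` test with a match against the current event path; the event-forwarding skeleton stays the same.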

    Description-driven Adaptation of Media Resources

    The current multimedia landscape is characterized by a significant diversity in terms of available media formats, network technologies, and device properties. This heterogeneity has resulted in a number of new challenges, such as providing universal access to multimedia content. A solution for this diversity is the use of scalable bit streams, as well as the deployment of a complementary system that is capable of adapting scalable bit streams to the constraints imposed by a particular usage environment (e.g., the limited screen resolution of a mobile device). This dissertation investigates the use of an XML-driven (Extensible Markup Language) framework for the format-independent adaptation of scalable bit streams. Using this approach, the structure of a bit stream is first translated into an XML description. In a next step, the resulting XML description is transformed to reflect a desired adaptation of the bit stream. Finally, the transformed XML description is used to create an adapted bit stream that is suited for playback in the targeted usage environment. The main contribution of this dissertation is BFlavor, a new tool for exposing the syntax of binary media resources as an XML description. Its development was inspired by two other technologies, i.e. MPEG-21 BSDL (Bitstream Syntax Description Language) and XFlavor (Formal Language for Audio-Visual Object Representation, extended with XML features). Although created from a different point of view, both languages offer solutions for translating the syntax of a media resource into an XML representation for further processing. BFlavor (BSDL+XFlavor) harmonizes the two technologies by combining their strengths and eliminating their weaknesses. The expressive power and performance of a BFlavor-based content adaptation chain, compared to tool chains entirely based on either BSDL or XFlavor, were investigated by several experiments. 
One series of experiments targeted the exploitation of multi-layered temporal scalability in H.264/AVC, paying particular attention to the use of sub-sequences and hierarchical coding patterns, as well as to the use of metadata messages to communicate the bit stream structure to the adaptation logic. BFlavor was the only tool to offer an elegant and practical solution for XML-driven adaptation of H.264/AVC bit streams in the temporal domain.
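The three-step chain the dissertation describes (expose the bitstream structure as XML, transform the description, regenerate an adapted bitstream) can be illustrated on a toy format. This is not BSDL or BFlavor syntax; the unit layout and element names are invented for the sketch.

```python
# Toy XML-driven adaptation chain: describe -> transform -> regenerate.
import xml.etree.ElementTree as ET

def describe(units):
    """Step 1: expose bitstream structure as an XML description.
    `units` is a list of (temporal_layer, payload_bytes) pairs."""
    root = ET.Element("bitstream")
    offset = 0
    for layer, payload in units:
        ET.SubElement(root, "unit", layer=str(layer),
                      start=str(offset), length=str(len(payload)))
        offset += len(payload)
    return root

def transform(desc, max_layer):
    """Step 2: transform the description, dropping higher temporal layers."""
    for unit in list(desc):
        if int(unit.get("layer")) > max_layer:
            desc.remove(unit)
    return desc

def regenerate(desc, bitstream):
    """Step 3: assemble the adapted bitstream from the kept byte ranges."""
    return b"".join(
        bitstream[int(u.get("start")):int(u.get("start")) + int(u.get("length"))]
        for u in desc
    )
```

Note that only the description is transformed; the original bitstream is never parsed by the adaptation logic itself, which is what makes the chain format-independent.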

    DIPBench: An Independent Benchmark for Data-Intensive Integration Processes

    The integration of heterogeneous data sources is one of the main challenges within the area of data engineering. Due to the absence of an independent and universal benchmark for data-intensive integration processes, we propose a scalable benchmark, called DIPBench (Data-intensive integration Process Benchmark), for evaluating the performance of integration systems. This benchmark could be used for subscription systems, like replication servers, distributed and federated DBMSs, or message-oriented middleware platforms like Enterprise Application Integration (EAI) servers and Extraction Transformation Loading (ETL) tools. In order to reach the mentioned universal view of integration processes, the benchmark is designed in a conceptual, process-driven way. The benchmark comprises 15 integration process types. We specify the source and target data schemas and provide a tool suite for initializing the external systems, executing the benchmark, and monitoring the integration system's performance. The core benchmark execution may be influenced by three scale factors. Finally, we discuss the metric used to evaluate the measured integration system's performance, and we illustrate our reference benchmark implementation for federated DBMSs.
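A benchmark driver of this kind can be sketched as follows. The three scale factors named here (data volume, process concurrency, execution rounds) are assumptions made for illustration only; DIPBench's actual scale factors and metric are defined in the paper, not reproduced here.

```python
# Hypothetical DIPBench-style driver: run an integration process over scaled
# workloads and report throughput. All parameter names are illustrative.
import time

def run_benchmark(process, scale_data, scale_concurrency, scale_time):
    """Execute `process(rows)` under three scale factors; return rows/second."""
    rows = 1000 * scale_data          # assumed factor 1: source data volume
    streams = scale_concurrency       # assumed factor 2: concurrent instances
    start = time.perf_counter()
    total = 0
    for _ in range(scale_time):       # assumed factor 3: execution rounds
        for _ in range(streams):
            total += process(rows)    # process returns rows it integrated
    elapsed = time.perf_counter() - start
    return total / elapsed if elapsed > 0 else float("inf")
```

The point of a conceptual, process-driven design is that `process` can be bound to any integration system (replication server, federated DBMS, ETL tool) without changing the driver or the metric.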

    Bridging the gap between algorithmic and learned index structures

    Index structures such as B-trees and Bloom filters are the well-established petrol engines of database systems. However, these structures do not fully exploit patterns in data distribution. To address this, researchers have suggested using machine learning models as electric engines that can entirely replace index structures. Such a paradigm shift in data system design, however, opens many unsolved design challenges. More research is needed to understand the theoretical guarantees and to design efficient support for insertion and deletion. In this thesis, we adopt a different position: index algorithms are good enough, and instead of going back to the drawing board to fit data systems with learned models, we should develop lightweight hybrid engines that build on the benefits of both algorithmic and learned index structures. The indexes that we suggest provide the theoretical performance guarantees and updatability of algorithmic indexes while using position prediction models to leverage the data distribution and thereby improve the performance of the index structure. We investigate the potential for minimal modifications to algorithmic indexes such that they can leverage the data distribution similarly to how learned indexes work. In this regard, we propose and explore the use of helping models that boost classical index performance using techniques from machine learning. Our suggested approach inherits performance guarantees from its algorithmic baseline index, but at the same time it considers the data distribution to improve performance considerably. We study single-dimensional range indexes, spatial indexes, and stream indexing, and show that the suggested approach results in range indexes that outperform the algorithmic indexes and have performance comparable to read-only, fully learned indexes, and hence can be reliably used as a default index structure in a database engine.
Besides, we consider the updatability of the indexes and suggest solutions for updating the index, notably when the data distribution drastically changes over time (e.g., for indexing data streams). In particular, we propose a specific learning-augmented index for indexing a sliding window with timestamps in a data stream. Additionally, we highlight the limitations of learned indexes for low-latency lookup on real-world data distributions. To tackle this issue, we suggest adding an algorithmic enhancement layer to a learned model to correct the prediction error with a small memory latency. This approach enables efficient modelling of the data distribution and resolves the local biases of a learned model at the cost of roughly one memory lookup.
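The hybrid idea (a position prediction model plus an algorithmic correction step bounded by the model's worst-case error) can be sketched in a few lines. This is a minimal illustration of the general technique, not the thesis's index; the linear model and class name are assumptions.

```python
# Minimal learning-augmented index: a linear model predicts the position of a
# key in a sorted array; a bounded binary search corrects the prediction.
import bisect

class HybridIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit position ~= slope * key + intercept from the endpoints.
        lo, hi = self.keys[0], self.keys[-1]
        self.slope = (n - 1) / (hi - lo) if hi > lo else 0.0
        self.intercept = -self.slope * lo
        # Worst-case prediction error over all keys = correction radius,
        # which gives the algorithmic performance guarantee on lookups.
        self.err = max(abs(self._predict(k) - i)
                       for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        """Return the position of `key` in the sorted array, or -1 if absent."""
        pos = self._predict(key)
        lo = max(0, pos - self.err)
        hi = min(len(self.keys), pos + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return -1
```

The search window `[pos - err, pos + err]` always contains the true position, so lookups stay correct regardless of model quality; the better the model fits the data distribution, the smaller the window and the cheaper the correction.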

    High Energy Astrophysics Program

    This report reviews activities performed by members of the USRA (Universities Space Research Association) contract team during the six-month reporting period (10/95 - 3/96) and projects activities for the coming six months. Activities take place at the Goddard Space Flight Center, within the Laboratory for High Energy Astrophysics. Developments concern instrumentation, observation, data analysis, and theoretical work in astrophysics. Missions supported include the Advanced Satellite for Cosmology and Astrophysics (ASCA), the X-ray Timing Experiment (XTE), the X-ray Spectrometer (XRS), Astro-E, the High Energy Astrophysics Science Archive Research Center (HEASARC), and others.