
    HTTP adaptive streaming with media fragment URIs

    HTTP adaptive streaming was introduced with the general idea that user agents interpret a manifest file (describing different representations and segments of the media), after which they retrieve the media content using sequential HTTP progressive download operations. MPEG started with the standardization of an HTTP streaming protocol, defining the structure and semantics of a manifest file and additional restrictions and extensions for container formats. At the same time, W3C is working on a specification for addressing media fragments on the Web using Uniform Resource Identifiers. The latter not only defines the URI syntax for media fragment identifiers but also the protocol for retrieving media fragments over HTTP. In this paper, we elaborate on the role of Media Fragment URIs within HTTP adaptive streaming scenarios. First, we describe how different media representations can be addressed by means of Media Fragment URIs, using track fragments. Additionally, we illustrate how HTTP adaptive streaming can be realized using the Media Fragments URI retrieval protocol. To validate the presented ideas, we implemented Apple's HTTP Live Streaming technique using Media Fragment URIs.
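
    To make the retrieval idea concrete, the sketch below shows how a user agent might translate a time- and track-based Media Fragment URI into sequential HTTP requests. It is a minimal illustration under assumed names (the BASE resource and the track label are hypothetical) and an assumed Range translation; consult the W3C Media Fragments specification for the normative protocol.

    # Minimal sketch of HTTP adaptive streaming driven by Media Fragment URIs.
    # The resource name, track label, and Range translation are illustrative
    # assumptions, not the normative W3C / MPEG behaviour.
    import urllib.request

    BASE = "http://example.org/video.mp4"  # hypothetical media resource

    def fragment_uri(track: str, start: float, end: float) -> str:
        # Media Fragment URI syntax: #track=<name>&t=<start>,<end>
        return f"{BASE}#track={track}&t={start},{end}"

    def fetch_fragment(track: str, start: float, end: float) -> bytes:
        uri = fragment_uri(track, start, end)
        # The fragment part is never sent to the server; the user agent
        # strips it and expresses it as request headers instead.
        req = urllib.request.Request(uri.split("#", 1)[0])
        req.add_header("Range", f"t:npt={start}-{end}")  # time-based Range
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    # Sequential progressive download of 10-second segments of one track.
    segments = [fetch_fragment("video_720p", t, t + 10) for t in range(0, 30, 10)]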

    Towards ontology based event processing


    Video summarisation: A conceptual framework and survey of the state of the art

    This is the post-print (final draft post-refereeing) version of the article. Copyright © 2007 Elsevier Inc.
    Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and providing video summaries that have greater relevance to individual users.
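
    The framework's two axes (technique category; summary content and consumption functionality) can be encoded as a small data model. The following Python sketch is purely illustrative; all class and field names are invented here rather than taken from the paper.

    # Illustrative encoding of the summarisation taxonomy described above;
    # the class and field names are invented for clarity, not defined by
    # the paper's framework.
    from dataclasses import dataclass
    from enum import Enum

    class Technique(Enum):
        INTERNAL = "internal"  # analyses information from the stream itself
        EXTERNAL = "external"  # analyses information not from the stream
        HYBRID = "hybrid"      # combines internal and external information

    class Content(Enum):
        OBJECT = "object"
        EVENT = "event"
        PERCEPTION = "perception"
        FEATURE = "feature"

    @dataclass
    class VideoSummary:
        technique: Technique
        content: Content
        interactive: bool      # interactive vs. static consumption
        personalised: bool     # personalised vs. generic

    summary = VideoSummary(Technique.HYBRID, Content.EVENT,
                           interactive=True, personalised=True)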

    Streaming the Web: Reasoning over dynamic data.

    In the last few years a new research area, called stream reasoning, emerged to bridge the gap between reasoning and stream processing. While current reasoning approaches are designed to work on mainly static data, the Web is, on the other hand, extremely dynamic: information is frequently changed and updated, and new data is continuously generated from a huge number of sources, often at high rate. In other words, fresh information is constantly made available in the form of streams of new data and updates. Despite some promising investigations in the area, stream reasoning is still in its infancy, both in the development of models and theories and in the design and implementation of systems and tools. The aim of this paper is threefold: (i) we identify the requirements coming from different application scenarios, and we isolate the problems they pose; (ii) we survey existing approaches and proposals in the area of stream reasoning, highlighting their strengths and limitations; (iii) we draw a research agenda to guide the future research and development of stream reasoning. In doing so, we also analyze related research fields to extract algorithms, models, techniques, and solutions that could be useful in the area of stream reasoning. © 2014 Elsevier B.V. All rights reserved.
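
    As a concrete illustration of the gap the paper addresses, the sketch below applies a single derivation rule over a sliding time window of incoming facts. The fact format, rule, and window policy are assumptions made for this example, not features of any surveyed system.

    # Minimal sketch of windowed stream reasoning: a simple derivation rule
    # is applied over a sliding time window of incoming facts.
    from collections import deque

    WINDOW = 10.0  # seconds of stream history kept for reasoning

    window: deque = deque()  # (timestamp, subject, predicate, object) facts

    def on_fact(ts: float, s: str, p: str, o: str) -> list:
        window.append((ts, s, p, o))
        # Expire facts that fall outside the sliding window.
        while window and window[0][0] < ts - WINDOW:
            window.popleft()
        # Example rule: worksWith(x, z) :- worksFor(x, y), worksFor(z, y)
        derived = []
        if p == "worksFor":
            for (_, s2, p2, o2) in window:
                if p2 == "worksFor" and o2 == o and s2 != s:
                    derived.append((s, "worksWith", s2))
        return derived

    on_fact(1.0, "alice", "worksFor", "acme")
    print(on_fact(2.0, "bob", "worksFor", "acme"))  # [('bob', 'worksWith', 'alice')]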

    AsterixDB: A Scalable, Open Source BDMS

    AsterixDB is a new, full-function BDMS (Big Data Management System) with a feature set that distinguishes it from other platforms in today's open source Big Data ecosystem. Its features make it well-suited to applications like web data warehousing, social data storage and analysis, and other use cases related to Big Data. AsterixDB has a flexible NoSQL style data model; a query language that supports a wide range of queries; a scalable runtime; partitioned, LSM-based data storage and indexing (including B+-tree, R-tree, and text indexes); support for external as well as natively stored data; a rich set of built-in types; support for fuzzy, spatial, and temporal types and queries; a built-in notion of data feeds for ingestion of data; and transaction support akin to that of a NoSQL store. Development of AsterixDB began in 2009 and led to a mid-2013 initial open source release. This paper is the first complete description of the resulting open source AsterixDB system. Covered herein are the system's data model, its query language, and its software architecture. Also included are a summary of the current status of the project and a first glimpse into how AsterixDB performs when compared to alternative technologies, including a parallel relational DBMS, a popular NoSQL store, and a popular Hadoop-based SQL data analytics platform, for tasks that all of these technologies can perform. Also included is a brief description of some initial trials that the system has undergone and the lessons learned (and plans laid) based on those early "customer" engagements.
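
    As a rough illustration of the LSM-based storage idea mentioned above (and not of AsterixDB's actual implementation), the following Python sketch buffers writes in a mutable in-memory component and periodically freezes it into immutable, sorted components that reads consult newest-first.

    # Toy log-structured merge (LSM) store illustrating the storage concept;
    # this is an assumption-laden sketch, not AsterixDB's implementation.
    class LSMStore:
        def __init__(self, memtable_limit: int = 4):
            self.memtable_limit = memtable_limit
            self.memtable = {}    # mutable in-memory component, absorbs writes
            self.components = []  # immutable "disk" components, newest first

        def put(self, key, value):
            self.memtable[key] = value
            if len(self.memtable) >= self.memtable_limit:
                self._flush()

        def _flush(self):
            # Freeze the memtable as a sorted, immutable component.
            frozen = dict(sorted(self.memtable.items()))
            self.components.insert(0, frozen)
            self.memtable = {}

        def get(self, key):
            if key in self.memtable:
                return self.memtable[key]
            for comp in self.components:  # newest component wins
                if key in comp:
                    return comp[key]
            return None

    store = LSMStore()
    for i in range(10):
        store.put(f"k{i}", i)
    print(store.get("k3"))  # 3, served from a flushed component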

    Detecting Cyber Security Vulnerabilities through Reactive Programming

    We propose a software architectural model, which uses reactive programming for collecting and filtering live tweets and interpreting their potential correlation to software vulnerabilities and exploits. We aim to investigate whether we can discover the existence of exploits for disclosed vulnerabilities in Twitter data streams. Reactive programming is used to filter and query tweets to find potential exploits. The results of processing Twitter data streams with reactive programming can be broadcast, pointing towards potential exploits that might enable a cyber-attack. They can also be entered as new entries into existing overt or open-source intelligence repositories.
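
    The filter-and-correlate pipeline described above might look like the following sketch, written as a plain Python generator chain standing in for a reactive library; the tweet fields, keyword list, and CVE pattern are illustrative assumptions.

    # Minimal sketch of the filter-and-match idea: a generator pipeline
    # standing in for a reactive library. Tweet fields, keywords, and the
    # CVE pattern are illustrative assumptions, not the paper's design.
    import re

    CVE = re.compile(r"CVE-\d{4}-\d{4,7}")
    KEYWORDS = ("exploit", "poc", "0day", "vulnerability")

    def security_related(tweets):
        # Filter stage: keep only tweets mentioning exploit-related keywords.
        for tweet in tweets:
            if any(k in tweet["text"].lower() for k in KEYWORDS):
                yield tweet

    def match_vulnerabilities(tweets):
        # Query stage: correlate tweets with disclosed vulnerability IDs.
        for tweet in tweets:
            for cve_id in CVE.findall(tweet["text"]):
                yield {"cve": cve_id, "tweet": tweet["text"]}

    stream = [
        {"text": "New PoC exploit released for CVE-2021-44228"},
        {"text": "Lunch was great today"},
    ]
    for alert in match_vulnerabilities(security_related(stream)):
        print(alert)  # could be broadcast or stored in an OSINT repository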