2,112 research outputs found

    Synote: weaving media fragments and linked data

    No full text
    While end users can easily share and tag multimedia resources online, searching for and reusing the inside content of multimedia, such as a certain area within an image or a ten-minute segment within a one-hour video, is still difficult. Linked data is a promising way to interlink media fragments with other resources. Many Web 2.0 applications have generated a large amount of external annotations linked to media fragments. In this paper, we use Synote as the target application to discuss how media fragments can be published together with external annotations following linked data principles. Our design addresses the dereferencing, describing and interlinking problems in interlinking multimedia. We also implement a model to let Google index media fragments, which improves media fragments' online presence. The evaluation shows that our design can successfully publish media fragments and annotations for both semantic Web agents and traditional search engines. Publishing media fragments using the design described in this paper will lead to better indexing of multimedia resources and their consequent findability.
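
    As a rough illustration of the kind of publishing the abstract describes (the Web Annotation vocabulary, the example URIs, and the use of rdflib below are assumptions for the sketch, not Synote's actual data model), a temporal media fragment can be identified with a Media Fragments URI and linked to an external annotation in RDF:

```python
# Illustrative sketch only: describe a temporal media fragment and one external
# annotation about it as RDF, so that both can be dereferenced as linked data.
# Vocabulary (W3C Web Annotation) and URIs are assumptions, not Synote's schema.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

OA = Namespace("http://www.w3.org/ns/oa#")

# A ten-minute segment (minutes 10-20) of a one-hour video, identified by a
# Media Fragments URI.
fragment = URIRef("http://example.org/video/42#t=600,1200")
annotation = URIRef("http://example.org/annotation/1")

g = Graph()
g.bind("oa", OA)
g.add((annotation, RDF.type, OA.Annotation))
g.add((annotation, OA.hasTarget, fragment))   # the fragment being described
g.add((annotation, OA.hasBody, Literal("Keynote: introduction to linked data")))

print(g.serialize(format="turtle"))
```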

    Let Google index your media fragments

    No full text
    Current multimedia applications in Web 2.0 have generated a massive amount of multimedia resources, but most search results for multimedia still operate at the level of the whole resource. Media fragments expose the inside content of multimedia resources for annotation, but they are not yet fully explored and indexed by major search engines. The W3C has published Media Fragments 1.0 as a standard way to describe media fragments on the Web. In this proposal, we make use of Google's AJAX application crawler to index media fragments represented by Media Fragment URIs. Each media fragment with related annotations has an individual snapshot page, which can be indexed by the crawler. Initial evaluation has shown that the snapshot pages are successfully fetched by Googlebot, and we expect more media fragments to be indexed using this method, so that searching for multimedia resources becomes more efficient.
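
    A sketch of the snapshot-page idea, under stated assumptions: the markup, URIs, and helper function below are invented for illustration and are not the paper's implementation. Each media fragment and its annotations are rendered as a plain HTML page that a crawler such as Googlebot can fetch and index:

```python
# Illustrative sketch: render a static, crawlable HTML "snapshot" page for a
# media fragment and its annotations. Markup and URIs are assumed for the demo.
from html import escape

def render_snapshot(fragment_uri, title, annotations):
    items = "\n".join(f"    <li>{escape(a)}</li>" for a in annotations)
    return f"""<!DOCTYPE html>
<html>
  <head><title>{escape(title)}</title></head>
  <body>
    <h1>{escape(title)}</h1>
    <p>Media fragment: <a href="{escape(fragment_uri)}">{escape(fragment_uri)}</a></p>
    <ul>
{items}
    </ul>
  </body>
</html>"""

print(render_snapshot(
    "http://example.org/video/42#t=600,1200",
    "Lecture 3, minutes 10-20",
    ["Speaker introduces linked data", "Slide: Media Fragments URI 1.0"],
))
```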

    Implementation strategies for efficient media fragment retrieval

    Get PDF
    Current Web specifications such as HTML still treat video and audio resources as 'foreign' objects on the Web, and in particular they lack transparent integration with other Web content. The Media Fragments URI specification is part of various W3C efforts to make media a "first-class citizen" on the Web. More specifically, with a Media Fragment URI one can point to a media fragment by means of a URI, enabling people to identify, share, link, and consume media fragments in a standardized way. In this paper, we propose and evaluate a number of implementation strategies for Media Fragments. Additionally, we present two optimized implementation strategies: a Media Fragment Translation Service that allows existing Web infrastructure such as Web servers and proxies to be kept, and a fully integrated Media Fragments URI server that is independent of the underlying media formats. Finally, we show how multiple-bit-rate media delivery can be deployed in a Media Fragments-aware environment using our Media Fragments URI server.
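
    For orientation, a minimal sketch of how the temporal dimension of a Media Fragments URI could be parsed into start and end times that a translation service or fragment-aware server might then map to byte ranges. It assumes only the plain NPT seconds form (e.g. t=10,20) and ignores the spec's other dimensions and clock formats:

```python
# Illustrative sketch: parse the temporal dimension of a Media Fragments URI
# ("#t=10,20" or "#t=npt:600,") into (start, end) seconds. Only plain NPT
# seconds are handled; the spec's full clock formats are omitted.
from urllib.parse import parse_qs, urlparse

def parse_temporal_fragment(uri):
    fragment = urlparse(uri).fragment          # text after '#'
    t_values = parse_qs(fragment).get("t")
    if not t_values:
        return None                            # no temporal dimension present
    value = t_values[0]
    if value.startswith("npt:"):
        value = value[len("npt:"):]
    start, _, end = value.partition(",")
    return (float(start) if start else 0.0,    # "t=,20" means start at 0
            float(end) if end else None)       # "t=600," means open-ended

print(parse_temporal_fragment("http://example.org/video.mp4#t=10,20"))     # (10.0, 20.0)
print(parse_temporal_fragment("http://example.org/video.mp4#t=npt:600,"))  # (600.0, None)
```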

    Weaving the Web(VTT) of Data

    Get PDF
    Video has become a first-class citizen on the Web, with broad support in all common Web browsers. Just as structured markup on webpages has made the vision of the Web of Data a reality, in this paper we propose a new vision that we name the Web(VTT) of Data, along with concrete steps to realize it. It is based on the evolving standards WebVTT, for adding timed text tracks to videos, and JSON-LD, a JSON-based format for serializing Linked Data. Just like the Web of Data, which is based on relationships among structured data, the Web(VTT) of Data is based on relationships among videos expressed through WebVTT files, which we use as Web-native spatiotemporal Linked Data containers with JSON-LD payloads. In a first step, we provide the necessary background on the technologies we use. In a second step, we perform a large-scale analysis of the 148-terabyte Common Crawl corpus in order to better understand the status quo of Web video deployment and to address the challenge of integrating the detected videos into the Web(VTT) of Data. In a third step, we open-source an online video annotation creation and consumption tool, targeted at videos not contained in the Common Crawl corpus and at integrating future video creations, allowing the Web(VTT) of Data to be woven tighter, video by video.
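
    A toy example of the container idea described above; the schema.org vocabulary, cue values, and file name are assumptions made for the sketch rather than the paper's exact format. A WebVTT metadata cue carries a JSON-LD object describing that time range of the video:

```python
# Illustrative sketch: write a WebVTT file whose cue payload is JSON-LD, so the
# cue's time range carries a small block of linked data. Vocabulary and values
# are assumptions for the demo.
import json

cue_data = {
    "@context": "http://schema.org",
    "@type": "Clip",
    "name": "Opening keynote",
    "startOffset": 0,
    "endOffset": 95,
}

webvtt = "\n".join([
    "WEBVTT",
    "",
    "cue-1",
    "00:00:00.000 --> 00:01:35.000",
    json.dumps(cue_data, indent=2),   # JSON-LD payload of the cue
    "",
])

with open("annotations.vtt", "w", encoding="utf-8") as f:
    f.write(webvtt)
print(webvtt)
```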

    A review of the state of the art in Machine Learning on the Semantic Web: Technical Report CSTR-05-003

    Get PDF

    Report of the Stanford Linked Data Workshop

    No full text
    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of, and navigation among, the rapidly and chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was published originally for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying the opportunities and challenges to be confronted in the preparation and operation of the intended prototype on the other. In pursuit of those objectives, the workshop participants produced: 1. a value statement addressing the question of why a Linked Data approach is worth prototyping; 2. a manifesto for Linked Libraries (and Museums and Archives and …); 3. an outline of the phases in a life cycle of Linked Data approaches; 4. a prioritized list of known issues in generating, harvesting and using Linked Data; 5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs; 6. examples of potential "killer apps" using Linked Data; and 7. a list of next steps and potential projects. This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.