
    IPEA: the digital archive use case

    Now is the time for television broadcasters to migrate tape-based media archives to digital file-based archives. Such archives not only address the problem of tape deterioration, they also create new possibilities for opening up the archive. However, the switch from tape-based to file-based archiving is something only the very largest television broadcasters can manage individually; other broadcasters should work together to accomplish this task. In the Flemish part of Belgium, the two largest broadcasters, namely the commercial broadcaster VMMa and the public broadcaster VRT, the television facilities supporting company Videohouse, and several university research groups associated with the Interdisciplinary Institute for Broadband Technology joined forces and started the "Innovative Platform on Electronic Archiving" (IPEA) project. The goal of this project is to develop common standards for the exchange and archiving of audio-visual data. In this paper, we give a detailed overview of the project and its different research topics.

    Copyright Fair Use—Case Law and Legislation

    Fair use is a judicially formulated concept which allows persons other than the copyright owner to use copyrighted material without permission. The present comment sets forth the rather unsettled case law definition of fair use, and recommends an analysis for delineating the relationship between fair use and an equally amorphous copyright concept, substantial similarity. This delineation is then assessed in light of the codification of fair use proposed in the copyright legislation now pending before Congress.

    Benchmarking news recommendations: the CLEF NewsREEL use case

    The CLEF NewsREEL challenge is a campaign-style evaluation lab allowing participants to evaluate and optimize news recommender algorithms. The goal is to create an algorithm that recommends news items users are likely to click, while respecting a strict time constraint. The lab challenges participants to compete either in a "living lab" (Task 1) or in an evaluation that replays recorded streams (Task 2). In this report, we discuss the objectives and challenges of the NewsREEL lab, summarize last year's campaign, and outline the main research challenges that can be addressed by participating in NewsREEL 2016.
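    The replay-based evaluation of Task 2 can be illustrated with a minimal sketch: recorded impressions are replayed against a recommender and the click-through rate at rank 1 is measured. The data, the `evaluate_replay` helper, and the popularity baseline below are all hypothetical illustrations, not the official NewsREEL API.

```python
from collections import Counter

def evaluate_replay(stream, recommend, k=1):
    """Replay a recorded impression stream and measure CTR@k.

    stream: list of (candidate_items, clicked_item) tuples.
    recommend: function mapping a candidate list to a ranked list.
    """
    hits = 0
    for candidates, clicked in stream:
        if clicked in recommend(candidates)[:k]:
            hits += 1
    return hits / len(stream)

def make_popularity_recommender(stream):
    # Trivial baseline: rank candidates by global click counts.
    counts = Counter(clicked for _, clicked in stream)
    return lambda candidates: sorted(candidates, key=lambda i: -counts[i])

# Hypothetical recorded stream of (shown items, clicked item).
stream = [
    (["a", "b", "c"], "b"),
    (["a", "b"], "b"),
    (["a", "c"], "c"),
    (["b", "c"], "b"),
    (["a", "b", "c"], "a"),
]
rec = make_popularity_recommender(stream)
print(evaluate_replay(stream, rec))  # CTR@1 of the popularity baseline
```

    A real Task 2 run replays time-stamped streams, so a production recommender must also answer within the strict latency budget mentioned above.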

    Deriving use case diagrams from business process models

    In this paper we introduce a technique to simplify requirements capture. The technique can be used to derive functional requirements, specified in the form of UML use case diagrams, from existing business process models. Because use case diagrams normally have to be constructed by performing interviews, whereas business process models are usually already available in a company, use case diagrams can be produced more quickly when derived from business process models. The use case diagrams that result from applying the technique specify a software system that provides automated support for the original business processes. We also show how the technique was successfully evaluated in practice.
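    The core idea of such a derivation can be sketched very roughly: each human-performed activity in the process model becomes a candidate use case whose actor is the performing role. This toy rule and the `order_process` model below are illustrative assumptions only, not the paper's actual derivation rules.

```python
def derive_use_cases(process_model):
    """Derive candidate use cases from a business process model.

    process_model: list of dicts with 'activity' and 'performer' keys.
    Each activity becomes a use case; its performer becomes the actor.
    """
    return [
        {"use_case": step["activity"], "actor": step["performer"]}
        for step in process_model
    ]

# Hypothetical order-handling process model.
order_process = [
    {"activity": "Register order", "performer": "Sales clerk"},
    {"activity": "Check stock", "performer": "Warehouse employee"},
    {"activity": "Ship order", "performer": "Warehouse employee"},
]
for uc in derive_use_cases(order_process):
    print(f'{uc["actor"]} -- ({uc["use_case"]})')
```

    The printed `actor -- (use case)` pairs correspond to the actor/ellipse associations that would appear in the resulting UML use case diagram.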

    Campus Bridging Use Case - Initial Prioritization

    XSEDE is supported by National Science Foundation Grant 1053575 (XSEDE: eXtreme Science and Engineering Discovery Environment)

    Big Data in HEP: A comprehensive use case study

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems, collectively called Big Data technologies, have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches, promise a fresh look at the analysis of very large datasets, and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication-quality physics plots. We discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
    Comment: Proceedings for the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016)
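    The filter-and-transform pattern common to both analysis styles can be caricatured in a few lines: an explicit NTuple-style event loop next to a declarative filter/map chain of the kind Spark popularized. The event records and the MET cut value are invented for illustration; the actual analysis runs on official CMS formats with Apache Spark.

```python
# Hypothetical event records: missing transverse energy (GeV) and jet count.
events = [
    {"met": 250.0, "njets": 3},
    {"met": 80.0,  "njets": 2},
    {"met": 310.0, "njets": 4},
]

# "NTuple-style" analysis: an explicit loop over events with a cut.
selected_loop = []
for ev in events:
    if ev["met"] > 200.0:  # illustrative dark-matter-style MET cut
        selected_loop.append(ev["met"])

# Declarative filter/map chain over the same data, in the spirit of
# Spark's DataFrame filter/select operations.
selected_chain = list(map(lambda ev: ev["met"],
                          filter(lambda ev: ev["met"] > 200.0, events)))

# Both styles select the same events to histogram.
assert selected_loop == selected_chain
print(selected_chain)  # the MET values that would fill the final plot
```

    The point of the comparison in the talk is that the declarative form lets a distributed engine parallelize and cache the work, which is where the potential reduction in time-to-physics comes from.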