23 research outputs found

    Optical network technologies for future digital cinema

    Digital technology has transformed the information flow and support infrastructure of numerous application domains, such as cellular communications. Cinematography, traditionally a film-based medium, has embraced digital technology, leading to innovative transformations in its workflow. Digital cinema supports the transmission of high-resolution content, enabled by the latest advancements in optical communications and video compression. In this paper we provide a survey of the optical network technologies that support this bandwidth-intensive traffic class. We also highlight the significance and benefits of the state of the art in optical technologies that support the digital cinema workflow.
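To make the scale of this traffic class concrete, the back-of-the-envelope sketch below (Python, purely illustrative) compares the uncompressed data rate of a DCI 4K picture stream, using the standard DCI frame geometry, bit depth and frame rate, against the 250 Mbit/s maximum image bitrate a Digital Cinema Package may carry.

```python
# Rough estimate of the bandwidth demands of digital cinema traffic.
# DCI 4K container: 4096 x 2160 pixels, 3 colour components (X'Y'Z'),
# 12 bits per sample, 24 frames per second.
WIDTH, HEIGHT = 4096, 2160
COMPONENTS = 3
BITS_PER_SAMPLE = 12
FPS = 24

uncompressed_bps = WIDTH * HEIGHT * COMPONENTS * BITS_PER_SAMPLE * FPS
dcp_max_bps = 250e6  # maximum DCP image bitrate (250 Mbit/s)

print(f"Uncompressed:       {uncompressed_bps / 1e9:.2f} Gbit/s")     # ~7.64
print(f"DCP ceiling:        {dcp_max_bps / 1e6:.0f} Mbit/s")
print(f"Compression needed: {uncompressed_bps / dcp_max_bps:.1f}:1")  # ~30.6:1
```

Even at this compressed ceiling, a two-hour feature amounts to roughly 200 GB, which is what motivates the dedicated optical distribution networks surveyed in the paper.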

    Comparison of Code-Pass-Skipping Strategies for Accelerating a JPEG 2000 Decoder

    Code-Pass-Skipping allows a JPEG 2000 decoder to be accelerated by sacrificing output precision. This paper presents an evaluation of how the speed gain can be maximized and the quality loss minimized. In particular, the scenario of rendering a 24-bit preview of a Digital Cinema Package (DCP) encoded at the maximum permitted bitrate is examined. A comparison shows that a newly proposed strategy outperforms the reference implementation from Kakadu Software v6 by up to 1 dB. Furthermore, it is shown what speed gain can be achieved for a given acceptable quality loss.
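The paper's concrete strategies are not reproduced in this abstract, but the underlying mechanism can be sketched: the JPEG 2000 block coder codes each magnitude bit plane in up to three passes (significance propagation, magnitude refinement, cleanup), and a decoder can simply stop once a pass budget is exhausted, leaving the least-significant passes undecoded. The Python sketch below is purely illustrative; `decode_pass` is a hypothetical stand-in for the real MQ arithmetic decoder.

```python
# Illustrative sketch of code-pass skipping in a JPEG 2000 block decoder:
# truncating the sequence of coding passes trades output precision for speed.

def decode_pass(bit_plane, pass_type, data):
    """Stand-in for the real MQ arithmetic-decoding routine."""
    pass

def decode_code_block(passes, pass_budget):
    """Decode at most `pass_budget` coding passes of one code block.

    `passes` holds (bit_plane, pass_type, data) tuples in codestream
    order, i.e. most significant bit planes first.
    """
    decoded = 0
    for bit_plane, pass_type, data in passes:
        if decoded >= pass_budget:
            break                 # skip the least-significant passes
        decode_pass(bit_plane, pass_type, data)
        decoded += 1
    return decoded
```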

    'Digital Decay'

    The fate of 35mm as an acquisition and exhibition medium is intimately connected with questions of future-proofing, archiving, preservation, and access, which are currently at the forefront of debates around screen heritage in the UK. In this article, I explore the threat that digital projection poses to the viability of the 35mm release print, the impact of this on film stock production, and how this will affect film preservation. Whilst these issues are universal, this article is oriented toward a UK perspective.

    High throughput image compression and decompression on GPUs

    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and might be overly complex for a high-bit-rate scenario, various algorithmic alterations are proposed. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the gains in terms of an increased throughput are shown to be limited. Instead, two independent alterations not compliant with the standard are proposed, which (1) give up the concept of intra-bit-plane truncation points and (2) introduce a true raw-coding mode that is fully parallelizable and does not require any context modeling. Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown here how a stationary probability model averaged from a set of test sequences yields competitive compression efficiency.
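Before turning to the remaining contributions, the parallelism argument for such a raw-coding mode is easy to illustrate: if sign and magnitude bits are emitted verbatim, every coefficient can be processed independently, i.e. with one GPU thread per sample. The Python sketch below captures only this idea; the fixed bit-plane count and the one-word-per-sample packing are simplifying assumptions, not the thesis's actual coder.

```python
# Illustrative raw-coding mode: every wavelet coefficient is coded
# independently, so the loop body could run as one GPU thread per sample.

def raw_encode(coeffs, num_planes):
    """Pack the sign and `num_planes` magnitude bits of each coefficient
    into one integer word per sample (a simplifying assumption)."""
    words = []
    for c in coeffs:  # embarrassingly parallel: no inter-sample state
        sign = 1 if c < 0 else 0
        mag = min(abs(c), (1 << num_planes) - 1)  # clamp to plane count
        words.append((sign << num_planes) | mag)
    return words

def raw_decode(words, num_planes):
    mask = (1 << num_planes) - 1
    return [-(w & mask) if (w >> num_planes) & 1 else (w & mask)
            for w in words]

assert raw_decode(raw_encode([3, -5, 0], 4), 4) == [3, -5, 0]
```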
A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup with respect to the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (proposed BPC-PaCo with single-pass mode), at the marginal cost of increasing the PSNR penalty by 0.3 dB to at most 1 dB. Furthermore, a parallel algorithm is presented that determines the optimal code block bit stream truncation points (given an available bit rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back into host memory to a minimum. A theoretical runtime model is formulated that allows, based on benchmarking results on one GPU, the runtime of a kernel on another GPU to be predicted. Lastly, the first JPEG XS GPU decoder realization is presented. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness in its call for proposals. At bit rates above 1 bpp, the decoder is around 2x faster than the original JPEG 2000 and 1.5x faster than JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
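The thesis's parallel formulation is not reproduced here, but the sequential logic of JPEG 2000 post-compression rate-distortion optimization, which the truncation-point algorithm above parallelizes, can be sketched as follows: each code block offers candidate truncation points with associated (rate, distortion) values, and a Lagrange multiplier is bisected until the per-block choices minimizing D + λR just fit the byte budget. The data layout below is an assumption for illustration; real implementations additionally restrict each block to the convex hull of its R-D points.

```python
# Sketch of Lagrangian truncation-point selection (PCRD-style rate control).
# Each code block is a list of (rate_bytes, distortion) candidates,
# where index 0 conventionally means "emit nothing".

def pick_truncation(points, lam):
    """Index of the truncation point minimizing distortion + lam * rate."""
    return min(range(len(points)),
               key=lambda i: points[i][1] + lam * points[i][0])

def pcrd_opt(blocks, budget_bytes, iters=60):
    """Bisect the Lagrange multiplier until the selected truncation
    points of all code blocks just fit into `budget_bytes`."""
    lo, hi = 0.0, 1e12
    best = [pick_truncation(pts, hi) for pts in blocks]  # minimal rate
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        choice = [pick_truncation(pts, lam) for pts in blocks]
        rate = sum(pts[i][0] for pts, i in zip(blocks, choice))
        if rate <= budget_bytes:
            best, hi = choice, lam  # feasible: try spending more bits
        else:
            lo = lam                # over budget: penalize rate harder
    return best

blocks = [[(0, 100.0), (10, 40.0), (25, 15.0)],
          [(0, 80.0), (8, 30.0), (30, 5.0)]]
assert pcrd_opt(blocks, budget_bytes=20) == [1, 1]  # 18 of 20 bytes used
```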

    Las filmotecas ante el paradigma digital. Retos y perspectivas de futuro de la actividad filmotecaria en la sociedad de la información

    This doctoral thesis deals with the present-day situation of film archives in the context of the change to the digital paradigm. First, it offers a retrospective review of the origin of film archives, from their inception in the 1930s to the present day, with special attention to the Spanish case. Second, it questions the functions and the role that film archives have historically played as keepers and preservers of film heritage, in conflict with their vocation for dissemination as global cultural centres for cinema. This conservation/dissemination debate takes on a new dimension with the transformation that the emergence of digital technology has brought to the world of cinema. Finally, the thesis critically examines the effects of the DCI agreement on the homogenization of the film industry's value chain, showing how that agreement, born of the monopolistic will of the most powerful companies in the film industry, had an industrial precedent in the second decade of the 20th century, and how its effects have been projected unevenly across the contemporary map of the world's film archives.

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management, and to address the latest developments and future trends of these technologies and their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings that can contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations, fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.

    Digital rights management techniques for H.264 video

    This work presents a number of low-complexity digital rights management (DRM) methodologies for the H.264 standard. Initially, the requirements for enforcing DRM are analyzed and understood. Based on these requirements, a framework is constructed which puts forth different possibilities that can be explored to satisfy the objective. To implement computationally efficient DRM methods, watermarking and content-based copy detection are chosen as the preferred methodologies. The first approach is based on robust watermarking, which modifies the DC residuals of 4×4 macroblocks within I-frames. Robust watermarks are appropriate for content protection and proving ownership. Experimental results show that the technique exhibits encouraging rate-distortion (R-D) characteristics while remaining computationally efficient. The problem of content authentication is addressed with the help of two methodologies: irreversible and reversible watermarks. The first utilizes the highest-frequency coefficient within 4×4 blocks of the I-frames after CAVLC entropy encoding to embed a watermark, and was found to be very effective in detecting tampering. The second applies the difference expansion (DE) method to IPCM macroblocks within P-frames to embed a high-capacity reversible watermark. Experiments prove the technique to be not only fragile and reversible but also to exhibit minimal variation in its R-D characteristics. The final methodology adopted to enforce DRM for H.264 video is based on the concept of signature generation and matching. Specific types of macroblocks within each predefined region of an I-, B- and P-frame are counted at regular intervals in a video clip, and an ordinal matrix is constructed from their counts. This matrix is considered the signature of that video clip and is matched against longer video sequences to detect copies within them. Simulation results show that the matching methodology is capable of not only detecting copies but also locating them within a longer video sequence. Performance analysis shows acceptable false-positive and false-negative rates and encouraging receiver operating characteristics. Finally, the time taken to match and locate copies is significantly low, which makes the approach ideal for use in broadcast and streaming applications.
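For reference, the difference expansion technique mentioned above has a compact classic formulation (due to Tian) that operates on pixel pairs; the sketch below shows that pair-wise variant rather than the exact IPCM-macroblock embedding used in this work, and it omits the overflow checks that keep expanded samples within [0, 255].

```python
# Classic difference-expansion (DE) reversible watermarking on one pixel
# pair: the pair average is preserved, the difference is expanded to make
# room for one payload bit, and both originals are recoverable exactly.

def de_embed(x, y, bit):
    """Embed one bit into the pair (x, y) by expanding their difference."""
    l = (x + y) // 2           # average, preserved by the transform
    h = x - y                  # difference
    h2 = 2 * h + bit           # expand and append the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the payload bit and losslessly restore the original pair."""
    h2 = x2 - y2
    bit = h2 & 1               # the appended bit is the LSB
    h = h2 // 2                # undo the expansion
    l = (x2 + y2) // 2
    return (l + (h + 1) // 2, l - h // 2), bit

pair, b = de_extract(*de_embed(10, 7, 1))
assert pair == (10, 7) and b == 1
```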