    Optical network technologies for future digital cinema

    Digital technology has transformed the information flow and support infrastructure of numerous application domains, such as cellular communications. Cinematography, traditionally a film-based medium, has embraced digital technology, leading to innovative transformations in its workflow. Digital cinema supports the transmission of high-resolution content, enabled by the latest advancements in optical communications and video compression. In this paper, we provide a survey of the optical network technologies that support this bandwidth-intensive traffic class. We also highlight the significance and benefits of the state of the art in optical technologies that support the digital cinema workflow.

    Fast compressed domain watermarking of MPEG multiplexed streams

    In this paper, a new technique for watermarking MPEG compressed video streams is proposed. The watermarking scheme operates directly in the domain of MPEG multiplexed streams. Perceptual models are used during the embedding process in order to preserve the quality of the video. The watermark is embedded in the compressed domain and is detected without the use of the original video sequence. Experimental evaluation demonstrates that the proposed scheme is able to withstand a variety of attacks. The resulting watermarking system is very fast and reliable, and is suitable for copyright protection and real-time content authentication applications.
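
    The paper's own scheme operates on MPEG multiplexed streams with perceptual models, which is not reproduced here. As a minimal sketch of the underlying idea, the Python toy below shows blind spread-spectrum embedding in stand-in DCT coefficients, with detection by correlating against a key-derived sequence rather than against the original video; the embedding strength and threshold are assumptions for illustration.

        import numpy as np

        ALPHA = 2.0  # assumed embedding strength; a perceptual model
                     # would modulate this per coefficient

        def watermark_sequence(key: int, n: int) -> np.ndarray:
            """Pseudo-random +/-1 sequence derived from a secret key."""
            rng = np.random.default_rng(key)
            return rng.choice([-1.0, 1.0], size=n)

        def embed(coeffs: np.ndarray, key: int) -> np.ndarray:
            """Additively embed the watermark into the coefficients."""
            return coeffs + ALPHA * watermark_sequence(key, coeffs.size)

        def detect(coeffs: np.ndarray, key: int) -> bool:
            """Blind detection: correlate with the key's sequence;
            the original video is never needed."""
            corr = coeffs @ watermark_sequence(key, coeffs.size) / coeffs.size
            return corr > ALPHA / 2  # simple threshold test

        # Toy usage on random data standing in for real stream coefficients.
        coeffs = np.random.default_rng(0).normal(0.0, 10.0, 1024)
        print(detect(embed(coeffs, key=42), key=42))  # True
        print(detect(coeffs, key=42))                 # False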

    Single-shot compressed ultrafast photography: a review

    Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusion with other modalities, and its unique applications in multiple research fields.
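
    At the core of CUP reconstruction is a compressed-sensing inverse problem: recovering a sparse scene from far fewer measurements than unknowns. The sketch below, under the simplifying assumption of a generic random sensing matrix standing in for CUP's combined encoding/shearing/integration operator, recovers a sparse vector with plain ISTA (iterative soft-thresholding); the sizes and regularization weight are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 400, 120, 10                    # unknowns, measurements, sparsity
        A = rng.normal(size=(m, n)) / np.sqrt(m)  # stand-in sensing operator
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        y = A @ x_true                            # single-shot compressive measurement

        def ista(A, y, lam=0.01, iters=500):
            """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by soft-thresholding."""
            L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - A.T @ (A @ x - y) / L     # gradient step on the data term
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return x

        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))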

    Optimised design of nested oblong tube energy absorbers under lateral impact loading

    Dynamic lateral crushing of mild steel (DIN 2393) nested tube systems was conducted using a ZWICK ROELL impact tester. The tests were performed with impact velocities ranging between 3 and 5 m/s, achieved using a fixed mass impinging on the specimens under the influence of gravity. The nested tube systems comprised one standard and one optimised design, whose crushing behaviour and energy absorption capabilities were obtained and analysed. In addition to the experimental work, numerical simulations using the explicit code LS-DYNA were conducted; boundary conditions matching those observed in the experiments were applied to the models. Results from the numerical method were compared against those obtained from experiments. The numerical code over-predicted the force-deflection responses; an attempt was made to explain this inconsistency on the basis of the formation of plastic hinges and the validity of the strain rate parameters used in the Cowper-Symonds relation. It was found that the optimised energy absorbers exhibited a more desirable force-deflection response than their standard counterparts, owing to a simple design modification incorporated in the optimised design.
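
    For context, the Cowper-Symonds relation scales the static yield stress to a dynamic value as a function of strain rate:

        \sigma_d = \sigma_s \left[ 1 + \left( \frac{\dot{\varepsilon}}{D} \right)^{1/q} \right]

    where \sigma_d is the dynamic yield stress, \sigma_s the static yield stress, and \dot{\varepsilon} the strain rate. The constants commonly quoted for mild steel are D ≈ 40.4 s⁻¹ and q = 5; the abstract does not state which values the study actually used.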

    Statistical framework for video decoding complexity modeling and prediction

    Video decoding complexity modeling and prediction is an increasingly important issue for efficient resource utilization in a variety of applications, including task scheduling, receiver-driven complexity shaping, and adaptive dynamic voltage scaling. In this paper, we present a novel view of this problem from a statistical framework perspective. We explore the statistical structure (clustering) of the execution time required by each video decoder module (entropy decoding, motion compensation, etc.) in conjunction with complexity features that are easily extractable at encoding time (representing the properties of each module's input source data). For this purpose, we employ Gaussian mixture models (GMMs) and an expectation-maximization algorithm to estimate the joint execution-time/feature probability density function (PDF). A training set of typical video sequences is used for this purpose in an offline estimation process. The obtained GMM representation is used in conjunction with the complexity features of new video sequences to predict the execution time required for the decoding of these sequences. Several prediction approaches are discussed and compared. The potential mismatch between the training set and new video content is addressed by adaptive online joint-PDF re-estimation. An experimental comparison is performed to evaluate the different approaches and to compare the proposed prediction scheme with related resource prediction schemes from the literature. The usefulness of the proposed complexity-prediction approaches is demonstrated in an application of rate-distortion-complexity optimized decoding.
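
    A minimal sketch of the prediction step, assuming a single scalar complexity feature and synthetic training data (the paper's actual feature set and decoder modules are not reproduced here): fit a joint GMM over (feature, execution time) pairs offline, then predict the time for a new sequence as the conditional mean E[t | f] under the fitted mixture.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        f_train = rng.uniform(0.0, 1.0, 500)                       # complexity feature
        t_train = 2.0 + 5.0 * f_train + rng.normal(0.0, 0.3, 500)  # decode time (ms)
        X = np.column_stack([f_train, t_train])

        # Offline EM estimation of the joint execution-time/feature PDF.
        gmm = GaussianMixture(n_components=3, covariance_type="full").fit(X)

        def predict_time(f: float) -> float:
            """E[t | f] under the fitted joint GMM."""
            mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
            var_f = cov[:, 0, 0]
            # Responsibility of each component given the feature alone.
            lik = w * np.exp(-0.5 * (f - mu[:, 0])**2 / var_f) / np.sqrt(2*np.pi*var_f)
            resp = lik / lik.sum()
            # Per-component conditional mean of t given f.
            cond = mu[:, 1] + cov[:, 1, 0] / var_f * (f - mu[:, 0])
            return float(resp @ cond)

        print(predict_time(0.5))  # should be near 2.0 + 5.0*0.5 = 4.5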

    Quality of service in distributed multimedia systems

    The Unix operating system made a vital contribution to information technology by introducing the notion of composing complicated applications out of simple ones by means of pipes and shell scripts. One day, this will also be possible with multimedia applications. Before this can happen, however, operating systems must support multimedia in as general a way as Unix now supports ordinary applications. In particular, attention must be paid to allowing the operating-system service to degrade gracefully under heavy loads.
    This paper presents the Quality-of-Service architecture of the Huygens project. This architecture provides the mechanisms that allow applications to adapt the level of their service to the resources the operating system can make available.
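
    As a toy illustration of such graceful degradation (the service levels and costs below are invented for illustration, not the Huygens mechanisms), an application can publish several service levels with estimated resource costs and settle for the best level the operating system's reported budget can sustain:

        LEVELS = [                 # (service level, CPU share required), best first
            ("full-rate video", 0.60),
            ("half-rate video", 0.35),
            ("audio only",      0.10),
        ]

        def adapt(cpu_budget: float) -> str:
            """Pick the highest service level that fits the available budget."""
            for name, cost in LEVELS:
                if cost <= cpu_budget:
                    return name
            return "suspended"     # degrade fully under extreme load

        for budget in (0.80, 0.40, 0.05):
            print(f"budget {budget:.2f} -> {adapt(budget)}")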