92 research outputs found

    A parallel implementation of an MPEG-2 encoder using message-passing

    The days of film are waning as digital cameras and digital video cameras become commonplace. Uncompressed digital video can consume large amounts of space, making it cumbersome to store efficiently. A method of video compression was developed by the Moving Picture Experts Group (MPEG) and is now an international standard with the International Organization for Standardization (ISO). This thesis deals with the MPEG-2 Video standard, ISO/IEC 13818-2 [2]. The goal of this thesis is to explore the applications of MPEG-2 encoding in a parallel processing paradigm. To achieve this, a sequential MPEG-2 software encoder was obtained from the MPEG Software Simulation Group (MSSG) [18] and modified to run in parallel on a cluster of single-processor Linux workstations using the Message Passing Interface (MPI) [11, 10, 3]. A multi-threaded pipeline of the encoding process was created using Pthreads [6]. The resulting pipelined parallel encoder has been shown to produce compliant elementary MPEG-2 bitstreams for progressive video sequences. Simulation results showed that the parallel encoder always performed better than the sequential version as the number of processors scaled; however, it did not exhibit the ideal linear speedup that all parallel programs aim to achieve, because the program executed on a set of resources not ideal for the multi-threaded pipeline. The ensuing chapters provide the motivation for this work and an overview of MPEG, parallel processing, and parallel programming, describe how the parallel encoder was implemented, present the results produced, and discuss supplementary applications of this work.
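
    The thesis abstract above includes no code; the following is only a rough sketch of the kind of coarse-grained master/worker decomposition it describes, with groups of pictures (GOPs) handed out to MPI ranks. The encode_gop() routine, NUM_GOPS, and the message tags are hypothetical placeholders, not material from the thesis. C with MPI; run with at least two ranks:

        /* Hypothetical sketch: master rank 0 hands GOP indices to worker ranks,
         * which "encode" them; encode_gop() is a placeholder, not thesis code. */
        #include <mpi.h>
        #include <stdio.h>

        #define NUM_GOPS 120          /* assumed length of the input sequence in GOPs */
        #define TAG_WORK 1
        #define TAG_DONE 2

        static void encode_gop(int gop) {
            /* placeholder for motion estimation, DCT, quantization, and VLC of one GOP */
            printf("encoding GOP %d\n", gop);
        }

        int main(int argc, char **argv) {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {                          /* master: distribute GOP indices */
                int next = 0, done = 0, stop = -1;
                for (int w = 1; w < size && next < NUM_GOPS; ++w) {
                    MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                    ++next;
                }
                while (done < NUM_GOPS) {
                    int finished;
                    MPI_Status st;
                    MPI_Recv(&finished, 1, MPI_INT, MPI_ANY_SOURCE, TAG_DONE,
                             MPI_COMM_WORLD, &st);
                    ++done;
                    if (next < NUM_GOPS) {
                        MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                                 MPI_COMM_WORLD);
                        ++next;
                    }
                }
                for (int w = 1; w < size; ++w)        /* tell workers to exit */
                    MPI_Send(&stop, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
            } else {                                  /* worker: encode assigned GOPs */
                for (;;) {
                    int gop;
                    MPI_Recv(&gop, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    if (gop < 0) break;
                    encode_gop(gop);
                    MPI_Send(&gop, 1, MPI_INT, 0, TAG_DONE, MPI_COMM_WORLD);
                }
            }
            MPI_Finalize();
            return 0;
        }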

    Generalized parallelization methodology for video coding

    This paper describes a generalized parallelization methodology for mapping video coding algorithms onto a multiprocessing architecture through systematic task decomposition, scheduling, and performance analysis. It exploits the data parallelism inherent in the coding process and performs task scheduling based on task data size and access locality, with the aim of hiding as much communication overhead as possible. Utilizing Petri nets and task graphs for representation and analysis, the method enables parallel video frame capturing, buffering, and encoding without extra communication overhead. The theoretical speedup analysis indicates that this method offers excellent communication hiding, resulting in system efficiency well above 90%. An H.261 video encoder has been implemented on a TMS320C80 system using this method, and its performance was measured. The theoretical and measured performances are similar: the measured speedup of the H.261 encoder is 3.67 and 3.76 on four parallel processors (PPs) for QCIF and 352×240 video, respectively, corresponding to frame rates of 30.7 and 9.25 frames per second (fps) and system efficiencies of 91.8% and 94%, respectively. This method is particularly efficient for platforms with a small number of parallel processors.
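
    The paper's scheduling method itself is defined over Petri nets and task graphs; the snippet below is only an illustrative greedy sketch of the underlying idea of scheduling by task data size and access locality, preferring a processor that already holds a task's data so that transfers can be avoided or overlapped. All task sizes, times, and the bandwidth value are invented for the example:

        /* Illustrative sketch only (not from the paper): greedy assignment of coding
         * tasks to processors, preferring the processor that already holds a task's
         * data so the transfer cost disappears. */
        #include <stdio.h>

        #define NUM_TASKS 8
        #define NUM_PROCS 4

        typedef struct {
            double compute_time;   /* estimated slice/macroblock processing time */
            double data_size;      /* bytes that must be present on the processor */
            int    data_home;      /* processor that currently holds the data */
        } Task;

        int main(void) {
            /* assumed per-task estimates; in the paper these come from the task graph */
            Task tasks[NUM_TASKS] = {
                {2.0, 1e5, 0}, {2.5, 1e5, 1}, {1.5, 2e5, 2}, {2.2, 1e5, 3},
                {2.0, 3e5, 0}, {1.8, 1e5, 1}, {2.4, 2e5, 2}, {2.1, 1e5, 3},
            };
            double ready[NUM_PROCS] = {0};      /* time at which each processor is free */
            const double bandwidth = 1e6;       /* assumed bytes per time unit */

            for (int t = 0; t < NUM_TASKS; ++t) {
                int best = 0;
                double best_finish = 1e30;
                for (int p = 0; p < NUM_PROCS; ++p) {
                    /* communication cost only if the data is not already local */
                    double comm = (p == tasks[t].data_home)
                                      ? 0.0
                                      : tasks[t].data_size / bandwidth;
                    double finish = ready[p] + comm + tasks[t].compute_time;
                    if (finish < best_finish) { best_finish = finish; best = p; }
                }
                ready[best] = best_finish;
                printf("task %d -> processor %d (finish %.2f)\n", t, best, best_finish);
            }
            return 0;
        }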

    Parallelization methodology for video coding - an implementation on the TMS320C80

    This paper presents a parallelization methodology for video coding based on the philosophy of hiding as much communication behind computation as possible. It models the task/data size, processor cache capacity, and communication contention through a systematic decomposition and scheduling approach. With the aid of Petri nets and task graphs for representation and analysis, it employs a triple-buffering scheme to enable the functions of frame capture, management, and coding to be performed in parallel. The theoretical speedup analysis indicates that this method offers excellent communication hiding, resulting in system efficiency well above 90%. To prove its practicality, an H.261 video encoder has been implemented on a TMS320C80 system using the method. Its performance was measured, from which the speedup and efficiency figures were calculated. The only difference detected between the theoretical and measured data is the program control overhead, which is not accounted for in the theoretical model. Even so, the measured speedup of the H.261 encoder is 3.67 and 3.76 on four parallel processors (PPs) for QCIF and 352 × 240 video, respectively, which correspond to frame rates of 30.7 and 9.25 frames per second and system efficiencies of 91.8% and 94%, respectively. This method is particularly efficient for platforms with a small number of parallel processors.
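
    As an illustration of the triple-buffering idea (not the paper's TMS320C80 implementation), the sketch below rotates three frame buffers among capture, management, and coding threads, with a barrier marking each frame period. The stage bodies are placeholders and the two prologue periods of pipeline fill are ignored:

        /* Sketch of a triple-buffering pipeline (capture / manage / encode); each
         * stage works on a different buffer in the same period, and a barrier
         * rotates the buffers at every frame boundary. */
        #include <pthread.h>
        #include <stdio.h>

        #define NUM_FRAMES 6
        #define NUM_BUFS   3

        static unsigned char buffers[NUM_BUFS][352 * 240 * 3 / 2];  /* assumed 4:2:0 frames */
        static pthread_barrier_t frame_barrier;

        static void capture(int buf) { buffers[buf][0] = (unsigned char)buf;
                                       printf("capture -> buf %d\n", buf); }
        static void manage(int buf)  { printf("manage  -> buf %d\n", buf); }
        static void encode(int buf)  { printf("encode  <- buf %d (first byte %d)\n",
                                              buf, buffers[buf][0]); }

        typedef void (*stage_fn)(int);
        typedef struct { stage_fn fn; int offset; } StageArg;

        static void *stage_thread(void *argp) {
            StageArg *arg = (StageArg *)argp;
            for (int frame = 0; frame < NUM_FRAMES; ++frame) {
                int buf = (frame + arg->offset) % NUM_BUFS;  /* distinct buffer per stage */
                arg->fn(buf);
                pthread_barrier_wait(&frame_barrier);        /* rotate buffers together */
            }
            return NULL;
        }

        int main(void) {
            pthread_t tid[3];
            StageArg args[3] = { {capture, 0}, {manage, 1}, {encode, 2} };
            pthread_barrier_init(&frame_barrier, NULL, 3);
            for (int i = 0; i < 3; ++i)
                pthread_create(&tid[i], NULL, stage_thread, &args[i]);
            for (int i = 0; i < 3; ++i)
                pthread_join(&tid[i], NULL);
            pthread_barrier_destroy(&frame_barrier);
            return 0;
        }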

    Design and implementation of parallel video encoding strategies using divisible load analysis

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of a video encoder. To improve video encoding speed, reducing the execution time of the motion estimation process is essential. Parallel implementation of video encoding systems, using either a software or a hardware approach, has attracted much attention in the area of real-time video coding. In this paper, we implement a video encoder on a bus network. For such a parallel system, the key concern is the partitioning and balancing of the computational load among the processors such that the overall processing time of the video encoder is minimized. Using the divisible load theory (DLT) paradigm, a strip-wise load partitioning/balancing scheme, a load distribution strategy, and two implementation strategies are developed to exploit the data parallelism inherent in the video encoding process. The striking feature of our design is that both the granularity of the load partitions and all the associated overheads incurred during the parallel video encoding process can be explicitly considered. This contributes significantly to the minimization of the overall processing time of the video encoder. Extensive experimental studies are carried out to test the effectiveness of the proposed strategies, and the performance of the parallel video encoder is quantified using the metrics of speedup and performance gain. The experimental results show that our strategies are effective for exploiting the parallelism inherent in the video encoding process and provide theoretical insight on how to analytically quantify and minimize the overall processing time of a parallel system. The proposed strategies can be easily extended and applied to improve other existing parallel systems.
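
    The paper derives its load fractions from divisible load theory with communication and other overheads modelled explicitly; the toy sketch below only shows the simpler core of the strip-wise idea, assigning each processor a number of macroblock rows proportional to an assumed relative speed so that pure computation time is roughly balanced. The speeds and frame size are invented:

        /* Minimal illustration (not the paper's exact DLT equations): split the
         * macroblock rows of a frame into horizontal strips sized in proportion
         * to each processor's assumed speed. */
        #include <stdio.h>

        #define NUM_PROCS 4

        int main(void) {
            double speed[NUM_PROCS] = {1.0, 0.8, 1.2, 1.0};   /* assumed relative speeds */
            int mb_rows = 18;                                  /* e.g. 288-line frame / 16 */
            double total_speed = 0.0;
            for (int p = 0; p < NUM_PROCS; ++p) total_speed += speed[p];

            int assigned = 0;
            for (int p = 0; p < NUM_PROCS; ++p) {
                /* last processor takes the remainder so every row is covered */
                int rows = (p == NUM_PROCS - 1)
                               ? mb_rows - assigned
                               : (int)(mb_rows * speed[p] / total_speed + 0.5);
                printf("processor %d: macroblock rows %d..%d\n",
                       p, assigned, assigned + rows - 1);
                assigned += rows;
            }
            return 0;
        }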

    Adaptive parallel video-coding algorithm

    Parallel encoding of video inevitably yields varying frame-rate performance due to dynamically changing video content and motion fields, since the encoding process of each macroblock, especially motion estimation, is data dependent. A multiprocessor schedule optimized for a particular frame with certain macroblock encoding times may not be optimal for another frame with different encoding times, which causes performance degradation of the parallelization. To tackle this problem, we propose a method based on a batch of near-optimal schedules generated at compile time and a run-time mechanism that selects the schedule giving the shortest predicted critical path length. This method has the advantage of being near-optimal, using compile-time schedules, while involving only run-time selection rather than re-scheduling. Implementation on the IBM SP2 multiprocessor system using 24 processors gives an average speedup of about 13.5 (a frame rate of 38.5 frames per second) for a CIF sequence consisting of segments of 6 different scenes. This is equivalent to an average improvement of about 16.9% over the single-schedule scheme with the schedule adapted to each of the scenes. Using an open test sequence consisting of 8 video segments, the average improvement achieved is 13.2%, i.e., an average speedup of 13.3 (35.6 frames per second).
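
    The following sketch (not code from the paper) illustrates only the run-time selection step: given a few precomputed task-to-processor schedules and predicted per-task encoding times for the coming frame, it picks the schedule with the shortest predicted completion time. The "critical path" is simplified here to the busiest processor's total load, and all numbers are invented:

        /* Sketch of run-time schedule selection among compile-time schedules. */
        #include <stdio.h>

        #define NUM_SCHEDULES 3
        #define NUM_TASKS     12
        #define NUM_PROCS     4

        /* precomputed (compile-time) task-to-processor assignments */
        static const int schedule[NUM_SCHEDULES][NUM_TASKS] = {
            {0,0,0,1,1,1,2,2,2,3,3,3},
            {0,1,2,3,0,1,2,3,0,1,2,3},
            {0,0,1,1,2,2,3,3,0,1,2,3},
        };

        /* predicted completion time of one schedule: the busiest processor's load */
        static double predicted_makespan(const int *assign, const double *task_time) {
            double load[NUM_PROCS] = {0};
            double worst = 0.0;
            for (int t = 0; t < NUM_TASKS; ++t) load[assign[t]] += task_time[t];
            for (int p = 0; p < NUM_PROCS; ++p) if (load[p] > worst) worst = load[p];
            return worst;
        }

        int main(void) {
            /* assumed per-task time predictions for the next frame */
            double task_time[NUM_TASKS] =
                {1.0, 2.5, 0.8, 1.2, 3.0, 0.9, 1.1, 2.2, 0.7, 1.4, 1.6, 0.8};

            int best = 0;
            double best_len = predicted_makespan(schedule[0], task_time);
            for (int s = 1; s < NUM_SCHEDULES; ++s) {
                double len = predicted_makespan(schedule[s], task_time);
                if (len < best_len) { best_len = len; best = s; }
            }
            printf("selected schedule %d, predicted length %.2f\n", best, best_len);
            return 0;
        }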

    Optimization of 3-D Wavelet Decomposition on Multiprocessors

    In this work we discuss various ideas for the optimization of 3-D wavelet/subband decomposition on shared-memory MIMD computers. We theoretically evaluate the characteristics of these approaches and verify the results on parallel computers. Experiments are conducted on a shared-memory as well as a virtual shared-memory architecture.
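
    As a minimal sketch of the kind of parallelism involved (not the authors' code), the snippet below performs one Haar analysis step along the temporal axis of a small video volume, splitting the independent spatial positions across threads with OpenMP as one would on a shared-memory machine. The volume dimensions and the use of the Haar filter are assumptions:

        /* One temporal Haar analysis step over a T x H x W volume, parallelized
         * over the independent (y, x) positions. */
        #include <stdio.h>

        #define T 8      /* frames  */
        #define H 16     /* rows    */
        #define W 16     /* columns */

        static float volume[T][H][W];
        static float low[T / 2][H][W], high[T / 2][H][W];

        int main(void) {
            /* fill the volume with something deterministic */
            for (int t = 0; t < T; ++t)
                for (int y = 0; y < H; ++y)
                    for (int x = 0; x < W; ++x)
                        volume[t][y][x] = (float)(t + y + x);

            /* each (y, x) column along the time axis is independent, so the
               spatial plane can be split across threads without synchronization */
            #pragma omp parallel for collapse(2)
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x)
                    for (int t = 0; t < T; t += 2) {
                        float a = volume[t][y][x], b = volume[t + 1][y][x];
                        low[t / 2][y][x]  = (a + b) * 0.5f;   /* temporal average */
                        high[t / 2][y][x] = (a - b) * 0.5f;   /* temporal detail  */
                    }

            printf("low[0][0][0] = %.2f, high[0][0][0] = %.2f\n",
                   low[0][0][0], high[0][0][0]);
            return 0;
        }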

    Video Multicast in (large) local area networks

    We consider the problem of distributing high-quality video signals over IP multicast in large Local Area Networks (LANs), under real-time delay constraints and with software-only processing. In a large LAN (such as the network of a university campus or of a large company), the source of channel heterogeneity that the video communication system must cope with is not differing bandwidth constraints at each receiver, but essentially variations in the CPU power each receiver has available to decode the incoming signal. In this paper we propose a new architecture for a video multicast system, present the design of the different components of this system, and show results obtained in a real implementation. Our feeds consist of video encoded at about 3 Mbits/sec and 16 frames/sec, capable of tolerating the loss of about 300 Kbits/sec worth of data. Decoding is performed on Linux PCs, and the quality of our reconstructed signals degrades gracefully with the speed of the CPU on which the receiver runs.
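
    The abstract does not spell out the decoder's adaptation mechanism, so the sketch below is purely hypothetical: a toy receiver loop that always decodes an assumed base portion of each frame and spends time on an assumed refinement portion only when it can still meet the 16 fps presentation deadline, which is one way quality can degrade gracefully with CPU speed:

        /* Hypothetical receiver-side sketch: decode at the level of detail the
         * local CPU can sustain under a 16 fps presentation clock. */
        #include <stdio.h>

        #define NUM_FRAMES 32
        #define FRAME_PERIOD_MS 62.5          /* 16 frames/sec */

        /* assumed per-frame costs on this receiver, in milliseconds */
        static double decode_base_ms(int frame)       { return 20.0 + (frame % 5); }
        static double decode_refinement_ms(int frame) { return 30.0 + (frame % 7) * 2; }

        int main(void) {
            double clock_ms = 0.0;            /* simulated decode time consumed */
            for (int f = 0; f < NUM_FRAMES; ++f) {
                double deadline = (f + 1) * FRAME_PERIOD_MS;
                clock_ms += decode_base_ms(f);           /* always decode the base */
                /* spend time on refinement only if the deadline is still met */
                if (clock_ms + decode_refinement_ms(f) <= deadline) {
                    clock_ms += decode_refinement_ms(f);
                    printf("frame %2d: full quality\n", f);
                } else {
                    printf("frame %2d: base quality only (CPU-limited)\n", f);
                }
                if (clock_ms < deadline)                 /* idle until presentation time */
                    clock_ms = deadline;
            }
            return 0;
        }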

    Language and compiler support for stream programs

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. By William Thies. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 153-166).
    Stream programs represent an important class of high-performance computations. Defined by their regular processing of sequences of data, stream programs appear most commonly in the context of audio, video, and digital signal processing, though also in networking, encryption, and other areas. Stream programs can be naturally represented as a graph of independent actors that communicate explicitly over data channels. In this work we focus on programs where the input and output rates of actors are known at compile time, enabling aggressive transformations by the compiler; this model is known as synchronous dataflow. We develop a new programming language, StreamIt, that empowers both programmers and compiler writers to leverage the unique properties of the streaming domain. StreamIt offers several new abstractions, including hierarchical single-input single-output streams, composable primitives for data reordering, and a mechanism called teleport messaging that enables precise event handling in a distributed environment. We demonstrate the feasibility of developing applications in StreamIt via a detailed characterization of our 34,000-line benchmark suite, which spans from MPEG-2 encoding/decoding to GMTI radar processing. We also present a novel dynamic analysis for migrating legacy C programs into a streaming representation. The central premise of stream programming is that it enables the compiler to perform powerful optimizations. We support this premise by presenting a suite of new transformations. We describe the first translation of stream programs into the compressed domain, enabling programs written for uncompressed data formats to automatically operate directly on compressed data formats (based on LZ77). This technique offers a median speedup of 15x on common video editing operations. We also review other optimizations developed in the StreamIt group, including automatic parallelization (offering an 11x mean speedup on the 16-core Raw machine), optimization of linear computations (offering a 5.5x average speedup on a Pentium 4), and cache-aware scheduling (offering a 3.5x mean speedup on a StrongARM 1100). While these transformations are beyond the reach of compilers for traditional languages such as C, they become tractable given the abundant parallelism and regular communication patterns exposed by the stream programming model.
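
    StreamIt programs are written in StreamIt's own syntax (filters, pipelines, split-joins); the plain-C sketch below only illustrates the synchronous-dataflow model the thesis builds on: actors with fixed input/output rates, steady-state repetition counts solved from the balance equations, and one steady-state iteration executed over simple FIFOs. The actors and rates are invented for the example:

        /* Synchronous-dataflow illustration: a three-actor chain with fixed rates. */
        #include <stdio.h>

        static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

        int main(void) {
            /* pipeline: source (pushes 1) -> average (pops 3, pushes 1) -> sink (pops 2) */
            int push_src = 1, pop_avg = 3, push_avg = 1, pop_snk = 2;

            /* balance equations: reps_src*push_src = reps_avg*pop_avg,
             *                    reps_avg*push_avg = reps_snk*pop_snk
             * (minimal solution for this particular chain; a general stream graph
             *  needs a full balance-equation solve over every channel) */
            int reps_avg = pop_snk / gcd(push_avg, pop_snk);          /* = 2 */
            int reps_snk = reps_avg * push_avg / pop_snk;             /* = 1 */
            int reps_src = reps_avg * pop_avg / push_src;             /* = 6 */
            printf("steady state: source x%d, average x%d, sink x%d\n",
                   reps_src, reps_avg, reps_snk);

            /* one steady-state iteration over two FIFOs sized by the rates above */
            float fifo1[16], fifo2[16];
            int n1 = 0, n2 = 0, value = 0;

            for (int i = 0; i < reps_src; ++i)              /* source actor */
                fifo1[n1++] = (float)value++;
            for (int i = 0; i < reps_avg; ++i) {            /* averaging actor */
                float s = fifo1[i * pop_avg] + fifo1[i * pop_avg + 1]
                        + fifo1[i * pop_avg + 2];
                fifo2[n2++] = s / 3.0f;
            }
            for (int i = 0; i < reps_snk; ++i)              /* sink actor */
                printf("sink got %.2f and %.2f\n",
                       fifo2[i * pop_snk], fifo2[i * pop_snk + 1]);
            return 0;
        }

    Because the rates are known at compile time, the repetition vector and the buffer sizes are fixed constants, which is what lets an SDF compiler schedule, fuse, and parallelize the graph aggressively.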

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.