    Video coding for Bit rates 64Kbps and below

    With digital motion video for either transmission or storage, there is always a trade-off between the data transmission rate and the picture quality. The lower the transmission bit-rate, the more the image quality tends to degrade. With conventional transform coding schemes this degradation usually occurs at low bit-rates, that is, below about 64 kbps, and the resulting image tends to suffer visually from a 'blocking' effect. This thesis is therefore based on the development of a different implementation scheme for motion video compression, or encoding, that supports low bit-rates, around 64 kbps or below, while eliminating the blocking effect. The scheme is designed around the compression of CIF, QCIF and NTSC motion video frames, which are defined with three components (one luminance and two chrominance) per frame. These frame sizes are the same as those used in the well-known ITU-T standard Recommendation H.261. The implementation scheme therefore closely follows that of the H.261 standard, except where functions such as the DCT transform and other modifications are needed. The subband DCT transform used here, as a replacement for the H.261 transform sub-section, is based on the work of Yuk-Hee Chan and Wan-Chi Siu. The application of this scheme should provide bit-rates similar to those of Recommendation H.261, while also providing images free of the 'blocking' effect inherent in all encoders that spatially split the image into blocks before transform coding.
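    The 'blocking' effect referred to above arises because each 8x8 block is transformed and quantized independently of its neighbours. The Python sketch below is purely illustrative and not code from the thesis: the NumPy implementation, the synthetic luminance ramp and the quantizer step of 64 are assumptions chosen to make the artefact visible. It pushes a smooth ramp through a block DCT with coarse quantization and measures the discontinuities that appear at block boundaries.

```python
# Illustrative sketch: coarse quantization of independent 8x8 DCT blocks
# flattens each block and leaves visible steps at block boundaries.
import numpy as np

N = 8  # block size used by H.261-style coders

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(N)

def code_block(block, step):
    """Forward 2D DCT, uniform quantization, dequantization, inverse 2D DCT."""
    coeffs = C @ block @ C.T
    quantized = np.round(coeffs / step) * step
    return C.T @ quantized @ C

# Smooth synthetic luminance ramp, QCIF-like width (176 samples, 8 rows).
frame = np.tile(np.linspace(0, 255, 176), (8, 1))

step = 64  # coarse quantizer step, as forced by very low bit-rates (assumed value)
recon = np.zeros_like(frame)
for x in range(0, frame.shape[1], N):
    recon[:, x:x + N] = code_block(frame[:, x:x + N], step)

# Discontinuities concentrate at block boundaries: the 'blocking' artefact.
jumps = np.abs(np.diff(recon[0]))
at_boundaries = jumps[N - 1::N]
inside_blocks = np.delete(jumps, np.arange(N - 1, jumps.size, N))
print("mean jump at block boundaries:", at_boundaries.mean())
print("mean jump inside blocks:      ", inside_blocks.mean())
```

    A subband-based transform, such as the subband DCT adopted in the thesis, avoids this by not partitioning the image into independently coded spatial blocks in the first place.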

    VLSI implementation of a massively parallel wavelet based zerotree coder for the intelligent pixel array

    In the span of a few years, mobile multimedia communication has rapidly become a significant area of research and development, constantly challenging boundaries on a variety of technological fronts. Mobile video communication in particular encompasses a number of technical hurdles that generally steer technological advancement towards devices that are low in complexity and low in power usage, yet perform the given task efficiently. Devices of this nature have been made possible through the use of massively parallel processing arrays such as the Intelligent Pixel Processing Array. The Intelligent Pixel Processing Array is a novel concept that integrates a parallel image capture mechanism, a parallel processing component and a parallel display component into a single-chip solution geared toward mobile communications environments, be it a PDA-based system or the video communicator wristwatch portrayed in Dick Tracy episodes. This thesis details the work performed to provide an efficient, low power, low complexity solution centred on the massively parallel implementation of a zerotree entropy codec for the Intelligent Pixel Array.
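    As a rough illustration of the zerotree idea the codec exploits, the sketch below is a simplified, sequential EZW-style dominant pass in Python, not the massively parallel hardware implementation described in the thesis; the Haar transform, the 8x8 test image, the parent-child rule and the threshold are all assumptions. The point it demonstrates is that when a wavelet coefficient and every one of its descendants are insignificant at the current threshold, the whole subtree is coded with a single 'zerotree root' symbol.

```python
# Simplified EZW-style significance pass over a toy Haar decomposition.
from collections import Counter
import numpy as np

def haar2d(img, levels):
    """Unnormalised 2D Haar (average/difference) transform, `levels` levels."""
    out = img.astype(float).copy()
    n = img.shape[0]
    for _ in range(levels):
        a = out[:n, :n]
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row averages
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row differences
        a[:, :n // 2], a[:, n // 2:n] = lo, hi
        lo = (a[0::2, :] + a[1::2, :]) / 2.0   # column averages
        hi = (a[0::2, :] - a[1::2, :]) / 2.0   # column differences
        a[:n // 2, :], a[n // 2:n, :] = lo, hi
        n //= 2
    return out

def descendants(i, j, size):
    """All descendants of (i, j) under a simple zerotree parent-child rule."""
    for ci, cj in ((2 * i, 2 * j), (2 * i, 2 * j + 1),
                   (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)):
        if ci < size and cj < size and (ci, cj) != (i, j):
            yield (ci, cj)
            yield from descendants(ci, cj, size)

def dominant_pass(coeffs, threshold):
    """Emit one symbol per coded coefficient for a single dominant pass."""
    size = coeffs.shape[0]
    symbols = {}
    skip = set()   # descendants already covered by a zerotree root
    for i in range(size):
        for j in range(size):
            if (i, j) in skip:
                continue
            if abs(coeffs[i, j]) >= threshold:
                symbols[(i, j)] = 'POS' if coeffs[i, j] > 0 else 'NEG'
            elif all(abs(coeffs[ci, cj]) < threshold
                     for ci, cj in descendants(i, j, size)):
                symbols[(i, j)] = 'ZTR'            # one symbol codes the whole subtree
                skip.update(descendants(i, j, size))
            else:
                symbols[(i, j)] = 'IZ'             # isolated zero
    return symbols

# Toy image: smooth gradient plus one bright feature.
img = np.add.outer(np.arange(8.0), np.arange(8.0))
img[5, 5] += 40
coeffs = haar2d(img, levels=2)

syms = dominant_pass(coeffs, threshold=4.0)
print(len(syms), "symbols for", coeffs.size, "coefficients:", Counter(syms.values()))
```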

    Image capture using integrated 3D SoftChip technology

    Mobile multimedia communication has rapidly become a significant area of research and development. The processing requirements for the capture, conversion, compression, decompression, enhancement, display, etc. of high-quality multimedia content place heavy demands even on current ULSI (ultra large scale integration) systems, particularly for mobile applications where area and power are primary considerations. The system presented is designed as a vertically integrated (3D) system comprising two distinct layers bonded together using indium bump technology. The top layer is a CMOS imaging array containing analog-to-digital converters and buffer memory. The bottom layer takes the form of a configurable array processor (CAP), a highly parallel array of soft-programmable processors capable of carrying out complex processing tasks directly on data stored in the top plane. Until recently, the dominant data format in imaging devices has been analog: the analog photocurrent or sampled voltage is transferred to the ADC via a column or column/row bus. In the proposed system, the array of analog-to-digital converters is distributed so that a one-bit cell is associated with each sensor. The analog-to-digital converters are algorithmic current-mode converters, and eight such cells are cascaded to form an 8-bit converter. Additionally, each photosensor is equipped with a current memory cell, and multiple conversions are performed with scaled values of the photocurrent for colour processing.
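    The conversion scheme described above can be modelled behaviourally: each of the eight cascaded 1-bit cells doubles the residue current and compares it against a reference, producing one bit per cell, MSB first. The Python sketch below is only such a behavioural model, not the circuit design; the reference current value and the colour scale factors are hypothetical, not figures from the paper.

```python
# Behavioural model of an 8-bit algorithmic (cyclic) current-mode ADC
# built from eight cascaded 1-bit cells.

I_REF = 1.0        # full-scale reference current, normalised (assumed)
N_BITS = 8         # eight cascaded 1-bit cells

def algorithmic_adc(i_photo, i_ref=I_REF, n_bits=N_BITS):
    """Convert a photocurrent to an n-bit code, MSB first."""
    residue = i_photo
    code = 0
    for _ in range(n_bits):
        residue *= 2.0                 # current doubling in the 1-bit cell
        bit = 1 if residue >= i_ref else 0
        if bit:
            residue -= i_ref           # subtract the reference when the bit is set
        code = (code << 1) | bit
    return code

# One conversion per scaled copy of the stored photocurrent, as described for
# colour processing; the scale factors here are hypothetical.
i_photo = 0.37 * I_REF
for scale in (1.0, 0.5, 0.25):
    print(f"scale {scale:4.2f} -> code {algorithmic_adc(scale * i_photo):3d}")
```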