
    Design of a digital compression technique for shuttle television

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
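
    The two-dimensional DPCM behind the return-link scheme predicts each pixel from previously reconstructed neighbours and transmits only the quantized prediction error. A minimal Python sketch of the idea follows; the fixed predictor weights and uniform step size are illustrative assumptions, not the shuttle design:

        import numpy as np

        def dpcm_2d_encode(img, step=8):
            """Nonadaptive 2D DPCM: predict each pixel from its left and
            upper neighbours, quantize the prediction error uniformly."""
            h, w = img.shape
            recon = np.zeros((h, w), dtype=np.float64)
            codes = np.zeros((h, w), dtype=np.int32)
            for y in range(h):
                for x in range(w):
                    # Fixed predictor: average of the reconstructed left and
                    # upper neighbours (treated as 0 at the image borders).
                    left = recon[y, x - 1] if x > 0 else 0.0
                    up = recon[y - 1, x] if y > 0 else 0.0
                    pred = 0.5 * (left + up)
                    err = float(img[y, x]) - pred
                    q = int(round(err / step))     # quantized error -> entropy coder
                    codes[y, x] = q
                    recon[y, x] = pred + q * step  # decoder-side reconstruction
            return codes, recon

        img = (np.random.rand(16, 16) * 255).astype(np.uint8)
        codes, recon = dpcm_2d_encode(img)
        print("max reconstruction error:", np.abs(recon - img).max())  # <= step/2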

    Novel VLSI Architecture for Quantization and Variable Length Coding for H.264/AVC Video Compression Standard

    Integrated multimedia systems process text, graphics, and other discrete media such as digital audio and video streams. In an uncompressed state, graphics, audio, and video data, especially moving pictures, require large transmission and storage capacities, which can be very expensive. Video compression has therefore become a key component of any multimedia system or application. The ITU (International Telecommunication Union) and MPEG (Moving Picture Experts Group) combined efforts to produce the next-generation video compression standard, H.264/MPEG-4 Part 10/AVC, which was finalized in 2003. H.264/AVC uses significantly improved and computationally intensive compression techniques to maximize performance. H.264/AVC-compliant encoders achieve the same reproduction quality as encoders compliant with the previous standards while requiring 60% or less of the bit rate [2]. This thesis designs two basic blocks of an ASIC capable of performing H.264 video compression. These two blocks, the quantizer and the entropy encoder, implement the Baseline Profile of the H.264/AVC standard. The architecture is implemented in register-transfer-level HDL and synthesized with Synopsys Design Compiler using TSMC 0.25 µm technology, giving an estimate of the hardware requirements of a real-time implementation. The quantizer block runs at 309 MHz with a total area of 785K gates and a power requirement of 88.59 mW. The entropy encoder unit runs at 250 MHz with a total area of 49K gates and a power requirement of 2.68 mW. The speeds achieved indicate that the quantizer and entropy encoder blocks can be used as IP cores embedded in HDTV systems.
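
    The H.264 forward quantizer avoids division by combining an integer multiply with a shift driven by the quantization parameter QP. A simplified Python sketch for the DC coefficient position follows; the MF/V tables are the standard values for position (0,0) only, and a full Baseline quantizer varies them per frequency position within the 4x4 block:

        # Per-(QP % 6) multipliers for coefficient position (0,0).
        MF0 = [13107, 11916, 10082, 9362, 8192, 7282]   # forward scale
        V0  = [10, 11, 13, 14, 16, 18]                  # decoder rescale

        def quantize(W, QP, intra=True):
            """H.264-style forward quantization of one transform coefficient."""
            qbits = 15 + QP // 6
            f = (1 << qbits) // 3 if intra else (1 << qbits) // 6  # dead-zone offset
            Z = (abs(int(W)) * MF0[QP % 6] + f) >> qbits
            return -Z if W < 0 else Z

        def dequantize(Z, QP):
            """Decoder-side rescaling. The result lives in a scaled transform
            domain; the integer inverse transform's final >>6 shift removes
            the extra gain."""
            return Z * V0[QP % 6] << (QP // 6)

        for QP in (20, 26, 32):
            Z = quantize(400, QP)
            print(QP, Z, dequantize(Z, QP))  # coarser levels as QP grows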

    Current video compression algorithms: Comparisons, optimizations, and improvements

    Compression algorithms have evolved significantly in recent years. Audio, still images, and video can be compressed substantially by taking advantage of the natural redundancies that occur within them. Video compression in particular has made significant advances. MPEG-1 and MPEG-2, two of the major video compression standards, allowed video to be compressed at very low bit rates compared to the original video. The compression ratio for video that is perceptually lossless (losses can't be visually perceived) can be as high as 40 or 50 to 1 for certain videos, and videos with a small degradation in quality can be compressed at 100 to 1 or more. Although the MPEG standards provided low bit rate compression, even higher quality compression is required for efficient transmission over limited-bandwidth networks, wireless networks, and broadcast mediums. Significant gains have been made over the current MPEG-2 standard in a newly developed standard called the Advanced Video Coder, also known as H.264 and MPEG-4 Part 10. (Abstract shortened by UMI.)
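
    As a back-of-the-envelope check on those ratios, the arithmetic below (assuming 8-bit 4:2:0 standard-definition video at 30 frames per second, an illustrative choice) shows the bit rates that 40:1, 50:1, and 100:1 correspond to:

        # 4:2:0 chroma subsampling averages 12 bits per pixel at 8-bit depth.
        width, height, fps, bits_per_pixel = 720, 480, 30, 12
        raw_bps = width * height * fps * bits_per_pixel
        for ratio in (40, 50, 100):
            print(f"{ratio}:1 -> {raw_bps / ratio / 1e6:.2f} Mbit/s "
                  f"(raw: {raw_bps / 1e6:.1f} Mbit/s)")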

    Survey of Video Encryption Algorithms

    Research on the security of digital video transmission and storage has been gaining attention because of its use in various applications and the transmission of sensitive information over the internet. This is a result of rapid developments in efficient video compression techniques and internet technologies. Encryption, the most widely used technique for securing video communication and storage, secures video data in its compressed formats. This paper presents a survey of existing video encryption techniques, together with an explanation of the concept of video compression. The review also covers the performance metrics used to evaluate and compare video encryption algorithms, and is intended to give readers a quick summary of the available encryption techniques.
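
    The survey's premise, that encryption operates on video in its compressed form, can be sketched as below. The whole-stream AES-CTR approach and the `cryptography` package are illustrative choices; many surveyed schemes instead encrypt only selected syntax elements to preserve format compliance:

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def encrypt_bitstream(compressed: bytes, key: bytes):
            """Naive whole-stream encryption of an already-compressed video
            payload with AES-CTR; returns (nonce, ciphertext)."""
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            return nonce, enc.update(compressed) + enc.finalize()

        key = os.urandom(16)
        payload = os.urandom(1024)   # stand-in for a compressed video chunk
        nonce, ct = encrypt_bitstream(payload, key)
        print(len(ct))               # CTR is length-preserving: 1024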

    An FPGA Based Hardware Accelerator for Remote Surveillance Cameras

    The Blackeye II camera, produced by Kinopta, is used for remote security, conservation and traffic flow surveillance. The camera uses an image sensor to acquire photographs which undergo image processing and JPEG encoding on a microprocessor. Although the microprocessor performs other tasks, it is the processing and encoding of images that limit the frame rate of the camera to 2 frames per second (fps). Clients have requested an increase to 12.5 fps while adding more image processing to each photograph. The current microprocessor-based system is unable to achieve this. Custom digital logic systems perform well on processes that naturally form a pipeline, such as the Blackeye II image processing system. This project develops a digital logic system based on an FPGA to receive images from the image sensor, perform the required image processing operations, encode the images in JPEG format and send them on to the microprocessor. The objective is to implement a proof of concept device based upon the Blackeye II’s existing hardware and an FPGA development board. It implements the proposed pipeline, including one example of an image processing operation. A JPEG encoder is designed to process the 752 × 480 greyscale photographs from the image processor in real time. The JPEG encoder consists of four stages: discrete cosine transform (DCT), quantisation, zig-zag buffer and Huffman encoder. The DCT design is based upon the work of Woods et al. [1], which it improves upon. An analysis of the relationship between precision and accuracy in the DCT and quantisation stages is used to minimise the system’s resource requirements. The JPEG encoder is successfully tested in simulation. Input and output stages are added to the design. The input stage receives data from the image sensor and removes breaks in the data stream. The output stage concatenates the data from the JPEG encoder and transmits it to the microprocessor via the microprocessor’s ISI (image sensor interface) peripheral. An image sharpening filter is developed and inserted into the pipeline between the input and JPEG encoder. Because remote surveillance cameras are battery powered, the minimisation of power consumption is a key concern. To minimise power consumption, a mechanism is introduced to track which modules in the pipeline are in use at any time. Any not in use are paused by gating the module’s clock source. Once the system is complete and tested in simulation, it is loaded into hardware. The FPGA development board is attached to the image sensor board and microprocessor board of the Blackeye II camera by a purpose-built breakout board. Plugging the microprocessor board into a PC provides a live stream of images, proving the successful operation of the FPGA system. The project objectives were exceeded by increasing the frame rate of the Blackeye II to 20 fps, which will not decrease with additional image processing operations. The project was viewed as a success by Kinopta, who have committed to its further development.
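
    The first three data-path stages of such an encoder (DCT, quantisation, zig-zag scan) can be sketched in software as below. The standard JPEG luminance table and the 128 level shift are generic JPEG choices, not Kinopta's RTL, and the zig-zag output is what would feed the Huffman stage:

        import numpy as np
        from scipy.fft import dctn

        # Standard JPEG luminance quantisation table (JPEG spec, Annex K).
        Q_LUMA = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99]])

        # Zig-zag scan order: walk the anti-diagonals, alternating direction.
        ZIGZAG = sorted(((y, x) for y in range(8) for x in range(8)),
                        key=lambda p: (p[0] + p[1],
                                       p[0] if (p[0] + p[1]) % 2 else -p[0]))

        def encode_block(block):
            """DCT -> quantise -> zig-zag for one 8x8 greyscale block."""
            coeffs = dctn(block.astype(np.float64) - 128, norm='ortho')
            quantised = np.round(coeffs / Q_LUMA).astype(np.int32)
            return [quantised[y, x] for y, x in ZIGZAG]

        block = (np.random.rand(8, 8) * 255).astype(np.uint8)
        print(encode_block(block)[:10])  # DC first, then low-frequency AC terms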

    On the design of fast and efficient wavelet image coders with reduced memory usage

    Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite improvements in compression efficiency, wavelet image coders significantly increase memory usage and complexity when compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm to compute the wavelet transform requires the entire image to be in memory. Although some proposals reduce memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding the complexity of the coders, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is more complex still because it improves coding efficiency through time-consuming methods, such as an iterative optimization algorithm based on the Lagrange multiplier method and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption using line-by-line processing, and it employs recursion to automatically determine the order in which the wavelet transform is computed, solving synchronization problems that were not tackled by previous proposals. The proposed encode…
    Oliver Gil, J. S. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
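
    A common route to such line-by-line processing is the lifting scheme, which computes the wavelet transform in place with short filters. Below is a sketch of one lifting level of the LeGall 5/3 wavelet on a single image line; the simple edge extension is an illustrative choice, and this is not the thesis's recursive scheduling algorithm:

        import numpy as np

        def lifting_53_row(row):
            """One level of the LeGall 5/3 wavelet on a single (even-length)
            line via lifting, so the transform can run line by line without
            buffering the whole image."""
            x = np.asarray(row, dtype=np.float64)
            even, odd = x[0::2].copy(), x[1::2].copy()
            even_next = np.append(even[1:], even[-1])   # extend at right edge
            odd -= 0.5 * (even + even_next)             # predict: high-pass details
            odd_prev = np.insert(odd[:-1], 0, odd[0])   # extend at left edge
            even += 0.25 * (odd_prev + odd)             # update: low-pass averages
            return even, odd

        lo, hi = lifting_53_row(np.arange(16.0))        # smooth linear ramp
        print(hi)   # interior detail coefficients are exactly 0 on smooth data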

    ON THE COMPRESSION OF DIGITAL HOLOGRAMS

    This thesis investigates the compression of computer-generated transmission holograms through lossless schemes such as the Burrows-Wheeler compression scheme (BWCS). Ever since Gabor’s discovery of holography, much research has been done to improve the recording and viewing of holograms for more convenient uses such as video viewing. However, the compression of holograms recorded from virtual scenes has not received much attention. Phase-shift digital holograms, on the other hand, have received more attention due to their practical application in object recognition, imaging, and video sequencing of physical objects. This study is performed on virtually recorded computer-generated holograms in order to understand the factors involved in their compression. We also investigate the application of lossless compression schemes to holograms with reduced precision for the intensity and phase values. The overall objective is to explore the factors that affect effective compression of virtual holograms. As a result, this work can be used to assist in the design of better compression algorithms for applications such as virtual object simulations, video gaming, and holographic video viewing.
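
    As an illustration of the approach, bzip2's compressor is itself built on the Burrows-Wheeler transform, so reduced-precision hologram data can be losslessly packed as sketched below; the 8-bit quantization of intensity and phase and the synthetic fringe pattern are illustrative assumptions, and bz2 stands in for the thesis's BWCS:

        import bz2
        import numpy as np

        def compress_hologram(field, bits=8):
            """Quantize a complex hologram to reduced-precision intensity and
            phase (bits <= 8 assumed), then apply a Burrows-Wheeler-based
            lossless coder; returns the compression ratio."""
            intensity = np.abs(field) ** 2
            phase = np.angle(field)
            levels = 2**bits - 1
            qi = np.round(intensity / intensity.max() * levels).astype(np.uint8)
            qp = np.round((phase + np.pi) / (2 * np.pi) * levels).astype(np.uint8)
            raw = qi.tobytes() + qp.tobytes()
            return len(raw) / len(bz2.compress(raw))

        y, x = np.mgrid[0:256, 0:256]
        field = np.exp(1j * 2 * np.pi * (x + y) / 64)   # plane-wave fringe pattern
        print(f"lossless ratio: {compress_hologram(field):.1f}")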