8 research outputs found

    Novel Video Coder Using Multiwavelets


    Multiple-symbol parallel decoding for variable length codes


    Pipelined implementation of Jpeg image compression using Hdl

    This thesis presents the architecture and design of a JPEG compressor for color images using VHDL. The system consists of the major stages of a baseline JPEG pipeline: a color space converter, a downsampler, a 2-D DCT module, quantization, zigzag scanning, and entropy coding. The color space converter transforms RGB pixels to the YCbCr color space. The downsampling operation reduces the sampling rate of the chrominance information (Cb and Cr). The 2-D DCT transforms the pixel data from the spatial domain to the frequency domain. The quantization operation eliminates the high-frequency components and the small-amplitude coefficients of the cosine expansion. Finally, entropy coding uses run-length encoding (RLE), Huffman coding, variable-length coding (VLC), and differential coding to decrease the number of bits used to represent the image. JPEG compression is lossy, since the downsampling and quantization operations are irreversible, but the losses can be controlled to keep the necessary image quality. Architectures for these stages were designed and described in VHDL. The results were observed using the Active-HDL simulator, and the code was synthesized using Xilinx ISE for a Virtex-4 FPGA. This pipelined architecture has a minimum latency of 187 clock cycles.
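Two of the pipeline stages described above have compact software models that are useful as golden references when verifying a hardware design: the RGB-to-YCbCr conversion (shown here with the standard JPEG/BT.601 coefficients, which the thesis does not spell out) and the generation of the 8x8 zigzag scan order. This is a minimal sketch, not the thesis's VHDL implementation.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to YCbCr using the JPEG (BT.601) coefficients.
    Chroma components are offset by 128 so they fit in an unsigned byte."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def zigzag_indices(n=8):
    """Return the (row, col) visit order of the JPEG zigzag scan.
    Cells are grouped by anti-diagonal (r + c); even diagonals are walked
    upward (c ascending), odd diagonals downward (r ascending)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))
```

A white pixel maps to (Y, Cb, Cr) = (255, 128, 128), and the scan starts (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), matching the familiar JPEG ordering.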

    Error resilient image transmission using T-codes and edge-embedding

    Current image communication applications involve image transmission over noisy channels, where the image gets damaged. The loss of synchronization at the decoder due to these errors increases the damage in the reconstructed image. The main goal of this research is to develop an algorithm that can detect errors, regain synchronization, and conceal errors. This thesis studies the performance of T-codes in comparison with Huffman codes and develops an algorithm for selecting the best T-code set; T-codes are shown to exhibit better self-synchronization properties than Huffman codes. The work also presents an algorithm that extracts edge patterns from each 8x8 block and classifies them into different classes, along with a novel scrambling algorithm that hides the edge pattern of a block in neighboring 8x8 blocks of the image. This scrambled hidden data is used to detect and conceal errors. Finally, an algorithm is developed to protect the hidden data from damage during transmission.
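For readers unfamiliar with T-codes: they are built by repeated "T-augmentation" of a prefix-free set, which is what gives them their self-synchronizing behavior. The sketch below shows one generic augmentation step (starting from the binary alphabet); it is an illustration of the construction, not the thesis's best-set selection algorithm.

```python
def t_augment(code_set, p, k=1):
    """One Titchener T-augmentation step with prefix p and expansion degree k:
    every codeword except p gets 0..k copies of p prepended, and p^(k+1)
    is added as a new codeword. The result is again prefix-free."""
    assert p in code_set
    new = {p * i + s for s in code_set if s != p for i in range(k + 1)}
    new.add(p * (k + 1))
    return new

def is_prefix_free(code):
    """Check the prefix property: no codeword is a prefix of another."""
    return not any(b.startswith(a) for a in code for b in code if a != b)
```

For example, augmenting {'0', '1'} with p = '1' yields {'0', '10', '11'}, and each further augmentation stays prefix-free while lengthening the synchronization-friendly structure.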

    Computing SpMV on FPGAs

    There are hundreds of papers on accelerating sparse matrix-vector multiplication (SpMV), but only a handful target FPGAs. Some claim that FPGAs inherently perform worse than CPUs and GPUs, and FPGAs do perform worse on some kernels, such as dense matrix-matrix and matrix-vector multiplication, where CPUs and GPUs have too much memory bandwidth and floating-point throughput for FPGAs to compete. However, the low ratio of computation to memory operations and the irregular memory access of SpMV trip up both CPUs and GPUs, leveling the playing field for FPGAs. Our implementation rests on three pillars: matrix traversal, multiply-accumulator design, and matrix compression. First, while most SpMV implementations traverse the matrix in row-major order, we mix column and row traversal. Second, to accommodate the new traversal, the multiply-accumulator stores many intermediate y values. Third, we compress the matrix to increase the transfer rate of the matrix from RAM to the FPGA. Together these pillars enable our SpMV implementation to perform competitively with CPUs and GPUs.
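As a baseline for the traversal discussion above, the conventional row-major SpMV over a CSR (compressed sparse row) matrix looks like this. This is the reference formulation the thesis departs from, not its mixed row/column FPGA design.

```python
def spmv_csr(row_ptr, col_idx, vals, x):
    """y = A @ x for A stored in CSR form.
    row_ptr[r]..row_ptr[r+1] delimit row r's nonzeros in vals/col_idx.
    Note the irregular, data-dependent reads of x[col_idx[i]] -- the
    access pattern that makes SpMV memory-bound on CPUs and GPUs."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for r in range(n_rows):
        for i in range(row_ptr[r], row_ptr[r + 1]):
            y[r] += vals[i] * x[col_idx[i]]
    return y
```

Each nonzero costs one multiply-add but three memory reads (value, column index, x entry), which is the low compute-to-memory ratio the abstract refers to.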

    Parallel architectures for entropy coding in a dual-standard ultra-HD video encoder

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 97-98).
    The mismatch between the rapid increase in resolution requirements and the slower increase in energy capacity demands more aggressive low-power circuit design techniques to maintain the battery life of hand-held multimedia devices. As the operating voltage is lowered to reduce power consumption, the maximum operating frequency of the system must also decrease while the performance requirements remain constant. To meet the performance constraints imposed by the high resolution and complex functionality of video processing systems, novel techniques for increasing throughput are explored. In particular, the entropy coding functional block faces the most stringent throughput requirements due to its highly serial nature, especially for sustaining real-time encoding. This thesis proposes parallel architectures for high-performance entropy coding for high-resolution, dual-standard video encoding. To demonstrate the most aggressive techniques for achieving standard reconfigurability, two markedly different video compression standards (H.264/AVC and VC-1) are supported. Specifically, the entropy coder must process data generated from quad full-HD video (4096x2160 pixels per frame, the equivalent of four full-HD frames) at a frame rate of 30 frames per second and perform lossless compression to generate an output bitstream. This block will be integrated into a dual-standard video encoder chip targeted for operation at 0.6 V, to be fabricated following the completion of this thesis. Parallelism, along with other techniques applied at the syntax-element or bit level, is used to achieve the overall throughput requirements.
    Three frames of video data are processed in parallel at the system level, and varying degrees of parallelism are employed within the entropy coding block for each standard. The VC-1 entropy encoder block encodes 735M symbols per second with a gate count of 136.6K and power consumption of 304.5 pW, and the H.264 block encodes 4.97G binary symbols per second through three-frame parallelism and a 6-bin cascaded pipelining architecture with a critical path delay of 20.05 ns.
    by Bonnie K. Y. Lam. S.M.
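To see why entropy coding is "highly serial", it helps to look at one of the simplest H.264 entropy-coding primitives: the unsigned Exp-Golomb code, ue(v), whose variable-length output means each codeword's bit position depends on every codeword before it. This sketch illustrates that serial dependency; it is a standard textbook primitive, not the parallel architecture the thesis proposes.

```python
def exp_golomb_ue(v):
    """Unsigned Exp-Golomb codeword for v >= 0, as used for H.264 ue(v)
    syntax elements: write v+1 in binary, prefixed by one zero per bit
    beyond the leading bit (so the decoder can count the length first)."""
    bits = bin(v + 1)[2:]          # binary of v+1, no '0b' prefix
    return "0" * (len(bits) - 1) + bits

def encode_stream(values):
    """Concatenate codewords into one bitstring. Because codeword lengths
    vary, bit offsets are only known after encoding all prior symbols --
    the serial bottleneck that motivates parallel entropy architectures."""
    return "".join(exp_golomb_ue(v) for v in values)
```

For example, values 0, 1, 2, 3 encode to '1', '010', '011', '00100' respectively.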

    Architectural Concepts for Processor-Based MPEG Video Decoders with a Focus on Mobile Applications

    Mobile devices are currently based on systems-on-a-chip (SoC) whose main functionality is realized in software running on embedded processors. Given ever-shorter innovation cycles, the software-based implementation of applications for embedded systems is therefore of growing importance. In the domain of mobile-network applications, multimedia applications are increasingly common that until recently were reserved for powerful systems such as desktop PCs, or for dedicated implementations in the form of special ASICs. These include applications that, based on the corresponding standards, support the real-time transmission of moving-picture data, such as video telephony or broadcast services. The high computational demands of real-time-critical multimedia applications, however, stand in contrast to the relatively weak processors of embedded systems. In the area of video coding for mobile applications, the MPEG-4 standard has largely established itself and is described and analyzed here in detail as representative of the other MPEG video standards. The core of this work is therefore the design of a special instruction-set and coprocessor extension for MPEG-4-based video decoder algorithms, achieving a performance gain of about 50% over a pure software implementation. The extensions are generic in their definition and therefore not tailored to a specific processor type. In the present case, a RISC architecture typical of mobile terminals is used to demonstrate the performance and to show the applicability to embedded systems.
    The individual concepts are derived from an algorithm analysis; the generic extension is described first, followed by its integration into the processor core used, employing the hardware description language VHDL. To measure real-time behavior, a special profiler is designed as part of this work, which among other things also permits the investigation and optimization of memory access behavior. Using the profiler, measurements are performed that show the computation-time gain of the respective algorithm parts when the implemented optimizations are applied. The performance of the presented architecture is also compared with common processor architectures, such as superscalar and VLIW processors; it is found that the developed concept yields results similar to those of the comparatively more complex processors. In addition to performance, the area requirement of a CMOS gate-array-based implementation is also discussed and is likewise presented for each individual extension.
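Instruction-set extensions for video decoding typically fuse packed byte-lane operations, such as those in motion-compensation inner loops, into a single instruction. The software models below show two representative primitives of this kind (rounded half-pel averaging and saturating addition); they are generic illustrations of the technique, not the specific extensions defined in this work.

```python
def pavg(a, b):
    """Packed byte-wise average with rounding, (x + y + 1) >> 1 per lane --
    the core of half-pel interpolation in MPEG motion compensation. In a
    media ISA extension all lanes are computed in one instruction."""
    return [(x + y + 1) >> 1 for x, y in zip(a, b)]

def add_sat8(a, b):
    """Packed byte-wise saturating add: results clamp at 255 instead of
    wrapping, avoiding the explicit clipping code a plain RISC ALU needs."""
    return [min(x + y, 255) for x, y in zip(a, b)]
```

On a plain RISC core each lane costs several instructions (add, compare, select); fusing eight lanes into one instruction is where gains of the reported magnitude come from.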

    Using Radio Frequency and Motion Sensing to Improve Camera Sensor Systems

    Camera-based sensor systems have advanced significantly in recent years. This advancement combines improvements in camera CMOS (complementary metal-oxide-semiconductor) hardware with new computer vision (CV) algorithms that can better process the rich information captured. As the world becomes more connected and digitized through the increased deployment of various sensors, cameras have become a cost-effective solution, offering small sensor size, intuitive sensing results, rich visual information, and neural-network-friendly output. The increased deployment and advantages of camera-based sensor systems have fueled applications such as surveillance, object detection, person re-identification, scene reconstruction, visual tracking, pose estimation, and localization. However, camera-based sensor systems have fundamental limitations, including high power consumption, privacy intrusiveness, and the inability to see through obstacles or to operate in non-ideal visual conditions such as darkness, smoke, and fog. In this dissertation, we aim to improve the capability and performance of camera-based sensor systems by utilizing additional sensing modalities, such as commodity WiFi and mmWave (millimeter wave) radios, and ultra-low-power, low-cost sensors such as inertial measurement units (IMUs). In particular, we set out to study three problems: (1) the power and storage consumption of continuous-vision wearable cameras, (2) human presence detection, localization, and re-identification in both indoor and outdoor spaces, and (3) augmenting the sensing capability of camera-based systems in non-ideal situations. We propose to use an ultra-low-power, low-cost IMU sensor, along with readily available camera information, to solve the first problem. WiFi devices are utilized in the second problem, where our goal is to reduce the hardware deployment cost and leverage existing WiFi infrastructure as much as possible.
    Finally, we use a low-cost, off-the-shelf mmWave radar to extend the sensing capability of a camera in non-ideal visual sensing situations. Doctor of Philosophy.
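One way an ultra-low-power IMU can cut a wearable camera's power draw is by gating capture on motion: frames are grabbed only when the IMU suggests the viewpoint has changed. The sketch below is a hypothetical illustration of that gating idea (the threshold and magnitude test are invented for the example), not the dissertation's actual system.

```python
def motion_gated_capture(imu_samples, thresh=0.15):
    """Return indices of IMU samples at which a frame would be captured:
    the first sample always, and thereafter only when the acceleration
    magnitude changes by more than `thresh` (a hypothetical threshold)
    since the last captured frame, keeping the camera off otherwise."""
    capture, prev_mag = [], None
    for i, (ax, ay, az) in enumerate(imu_samples):
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        if prev_mag is None or abs(mag - prev_mag) > thresh:
            capture.append(i)
            prev_mag = mag
    return capture
```

The IMU runs continuously at microwatts while the camera, the dominant power consumer, wakes only for the selected indices.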