
    REGION-BASED ADAPTIVE DISTRIBUTED VIDEO CODING CODEC

    The recently developed Distributed Video Coding (DVC) is typically suitable for applications where conventional video coding is not feasible because of its inherently high-complexity encoding. Examples include video surveillance using wireless/wired video sensor networks and applications using mobile cameras. With DVC, the complexity is shifted from the encoder to the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where an estimate of the original frame, called "side information", is generated using motion compensation at the decoder. Compression is achieved by sending only the extra information needed to correct this estimate. An error-correcting code is used with the assumption that the estimate is a noisy version of the original frame, and the rate needed is a certain amount of parity bits. The side information is assumed to have become available at the decoder through a virtual channel. Due to the limitations of the compensation method, the predicted frame, or side information, is expected to have varying degrees of success. These limitations stem from location-specific, non-stationary estimation noise. To mitigate this, conventional video coders, like MPEG, make use of frame partitioning to allocate an optimal coder to each partition and hence achieve better rate-distortion performance. The same, however, has not been used in DVC as it increases the encoder complexity. This work proposes partitioning the considered frame into many coding units (regions), where each unit is encoded differently. This partitioning is, however, done at the decoder while generating the side information, and the region map is sent to the encoder at a very small rate penalty. The partitioning allows allocation of appropriate DVC coding parameters (virtual channel, rate, and quantizer) to each region. The resulting region map is compressed using a quadtree algorithm and communicated to the encoder via the feedback channel. Rate control in DVC is performed by channel coding techniques (turbo codes, LDPC, etc.). The performance of the channel code depends heavily on the accuracy of the virtual channel model that models the estimation error for each region. In this work, a turbo code has been used and an adaptive WZ DVC is designed both in the transform domain and in the pixel domain. Transform-domain WZ video coding (TDWZ) has distinctly superior performance compared to normal pixel-domain Wyner-Ziv coding (PDWZ), since it exploits the spatial redundancy during encoding. The performance evaluations show that the proposed system is superior to existing distributed video coding solutions. Although the proposed system requires extra bits representing the region map to be transmitted, the rate gain is still noticeable and it outperforms the state-of-the-art frame-based DVC by 0.6-1.9 dB. The feedback channel (FC) has the role of adapting the bit rate to the changing statistics between the side information and the frame to be encoded. In the unidirectional scenario, the encoder must perform the rate control. To correctly estimate the rate, the encoder must calculate typical side information. However, the rate cannot be exactly calculated at the encoder; it can only be estimated. This work also proposes a feedback-free region-based adaptive DVC solution in the pixel domain, based on a machine learning approach to estimate the side information. Although the performance evaluations show a rate penalty, it is acceptable considering the simplicity of the proposed algorithm.
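
    The abstract's region map is compressed with a quadtree before being fed back to the encoder. The following is a minimal sketch of that idea, assuming a small block-level label map: a uniform block collapses into a single leaf that carries its region label, while a mixed block splits into four quadrants. The map size, label width, and bit-emission routine are illustrative assumptions, not the thesis implementation.

    /* Minimal quadtree-coding sketch for a block-level region map.
     * MAP_W, the 2-bit label width, and emit() are assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAP_W 8                              /* map width in blocks */

    static void emit(int bit) { putchar(bit ? '1' : '0'); }

    static int uniform(const uint8_t *map, int x, int y, int size)
    {
        uint8_t v = map[y * MAP_W + x];
        for (int j = y; j < y + size; j++)
            for (int i = x; i < x + size; i++)
                if (map[j * MAP_W + i] != v)
                    return 0;
        return 1;
    }

    /* "1" + label bits for a uniform block, "0" + four recursive
     * quadrants otherwise; label_bits is the number of bits per region. */
    static void quadtree_encode(const uint8_t *map, int x, int y, int size,
                                int label_bits)
    {
        if (size == 1 || uniform(map, x, y, size)) {
            emit(1);
            uint8_t v = map[y * MAP_W + x];
            for (int b = label_bits - 1; b >= 0; b--)
                emit((v >> b) & 1);
            return;
        }
        emit(0);
        int h = size / 2;
        quadtree_encode(map, x,     y,     h, label_bits);
        quadtree_encode(map, x + h, y,     h, label_bits);
        quadtree_encode(map, x,     y + h, h, label_bits);
        quadtree_encode(map, x + h, y + h, h, label_bits);
    }

    int main(void)
    {
        uint8_t map[MAP_W * MAP_W] = {0};
        for (int j = 4; j < 8; j++)              /* mark one quadrant as region 1 */
            for (int i = 4; i < 8; i++)
                map[j * MAP_W + i] = 1;
        quadtree_encode(map, 0, 0, MAP_W, 2);    /* prints the quadtree bitstream */
        putchar('\n');
        return 0;
    }

    The same traversal order, replayed at the encoder, recovers which coding parameters (virtual channel, rate, quantizer) apply to each block, which is the point of sending the map over the feedback channel.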

    Configurable and Scalable Turbo Decoder for 4G Wireless Receivers

    The increasing requirements of high data rates and quality of service (QoS) in fourth-generation (4G) wireless communication require the implementation of practical capacity-approaching codes. In this chapter, the Turbo coding schemes that have recently been adopted in the IEEE 802.16e WiMAX standard and the 3GPP Long Term Evolution (LTE) standard are reviewed. In order to process several 4G wireless standards with a common hardware module, a reconfigurable and scalable Turbo decoder architecture is presented. A parallel Turbo decoding scheme with scalable parallelism tailored to the target throughput is applied to support high data rates in 4G applications. High-level decoding parallelism is achieved by employing contention-free interleavers. A multi-banked memory structure and a routing network among memories and MAP decoders are designed to operate at full speed with parallel interleavers. A new on-line address generation technique is introduced to support multiple Turbo interleaving patterns, which avoids the interleaver address memory typically needed in traditional designs. Design trade-offs in terms of area and power efficiency are analyzed for different parallelism and clock frequency goals.
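
    The on-line address generation referred to above can be illustrated with the LTE quadratic permutation polynomial (QPP) interleaver, pi(i) = (f1*i + f2*i^2) mod K, whose addresses can be produced recursively with two modular additions per step, so no address table or multiplier is needed. The sketch below uses the smallest LTE block size (K = 40, f1 = 3, f2 = 10) as an assumed example; the chapter's actual hardware generator and the WiMAX (duo-binary) case are not reproduced here.

    /* Recursive QPP address generation (LTE-style example, not the
     * chapter's RTL): pi(i+1) = (pi(i) + g(i)) mod K with
     * g(i+1) = (g(i) + 2*f2) mod K, so no multiplier is needed. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint32_t K = 40, f1 = 3, f2 = 10;  /* smallest LTE block size */

        uint32_t pi = 0;                         /* pi(0) = 0               */
        uint32_t g  = (f1 + f2) % K;             /* g(0) = (f1 + f2) mod K  */
        const uint32_t g_step = (2 * f2) % K;    /* increment of g per step */

        for (uint32_t i = 0; i < K; i++) {
            printf("pi(%2u) = %2u\n", i, pi);
            pi = (pi + g) % K;                   /* two adds per address    */
            g  = (g + g_step) % K;
        }
        return 0;
    }

    In hardware the modulo reductions become conditional subtractions, which keeps the generator cheap and removes the need for a stored interleaver address memory.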

    Implementation Effort and Parallelism - Metrics for Guiding Hardware/Software Partitioning in Embedded System Design


    Configurable and Scalable High Throughput Turbo Decoder Architecture for Multiple 4G Wireless Standards

    In this paper, we propose a novel multi-code turbo decoder architecture for 4G wireless systems. To support various 4G standards, a configurable multi-mode MAP (maximum a posteriori) decoder is designed for both binary and duo-binary turbo codes with small resource overhead (less than 10%) compared to the single-mode architecture. To achieve high data rates in 4G, we present a parallel turbo decoder architecture with scalable parallelism tailored to the given throughput requirements. High-level parallelism is achieved by employing contention-free interleavers. A multi-banked memory structure and a routing network among memories and MAP decoders are designed to operate at full speed with parallel interleavers. We designed a very low-complexity recursive on-line address generator supporting multiple interleaving patterns, which avoids the interleaver address memory. Design trade-offs in terms of area and power efficiency are explored to find the optimal architectures. A 711 Mbps data rate is feasible with 32 Radix-4 MAP decoders running at a 200 MHz clock rate.
    Texas Instruments Incorporated
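
    As a rough way to relate the reported numbers, the sketch below evaluates a first-order throughput model for a parallel Radix-4 decoder: parallel MAP decoders, times trellis steps per cycle, times clock rate, divided by the number of half-iterations and a warm-up/stall overhead. The iteration count and overhead factor are assumptions chosen for illustration, not values taken from the paper.

    /* Back-of-the-envelope throughput model for a parallel Radix-4 turbo
     * decoder; the iteration count and overhead below are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const double P = 32;               /* parallel MAP decoders            */
        const double bits_per_cycle = 2;   /* Radix-4: two trellis steps/cycle */
        const double f_clk = 200e6;        /* clock frequency [Hz]             */
        const double iterations = 8;       /* assumed full turbo iterations    */
        const double overhead = 0.12;      /* assumed warm-up/routing stalls   */

        const double half_iters = 2.0 * iterations;   /* one MAP pass each    */
        const double tput = P * bits_per_cycle * f_clk
                            / (half_iters * (1.0 + overhead));
        printf("estimated throughput: %.0f Mbps\n", tput / 1e6);
        return 0;
    }

    With these assumed values the estimate lands in the same range as the 711 Mbps figure, which is the kind of sanity check such a model is good for; the paper's actual schedule and overheads may differ.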

    Exploring High Level Synthesis to Improve the Design of Turbo Code Error Correction in a Software Defined Radio Context

    With the ever-improving progress of technology, Software Defined Radio (SDR) has become a more widely available technique for implementing radio communication. SDRs are sought after for their advantages over traditional radio communication, mostly flexibility and hardware simplification. The greatest challenges SDRs face are often their real-time performance requirements. Forward error correction is an example of an SDR block that exemplifies these challenges, as the error correction can be very computationally intensive. Due to these constraints, SDR implementations are commonly found in or alongside Field Programmable Gate Arrays (FPGAs) to enable performance that general-purpose processors alone cannot achieve. The main challenge with FPGAs, however, is Register Transfer Level (RTL) development. High Level Synthesis (HLS) tools are a method of creating hardware descriptions from high-level code, in an effort to ease this development process. In this work, a turbo code decoder, a computationally intensive form of error correction, was accelerated on an FPGA using HLS tools. This accelerator was implemented on a Xilinx Zynq platform, which integrates a hard-core ARM processor alongside programmable logic on a single chip. Important aspects of the design process using HLS were identified and explained. The design process emphasizes the idea that, for the best results, the high-level code should be created with a hardware mindset and written so as to describe a hardware design. The flexibility of the HLS tools was demonstrated by providing a method of tailoring the hardware parameters simply by changing values in a macro file, and by exploring the design space through different data types and three different designs, each one improving on what was learned in the previous implementation. Ultimately, the best hardware implementation was over 56 times faster than the optimized software implementation. Comparing the HLS implementation to a manually optimized design shows that it was able to achieve over 19% of that design's throughput, with many areas for further improvement identified, demonstrating the competitiveness of the HLS tools.
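
    A small Vivado-HLS-style fragment can make the "write C with a hardware mindset" point concrete. The kernel below applies the max-log-MAP max* approximation over a metric array, with a fully partitioned array and an II=1 pipeline so one metric is consumed per clock cycle; the function name, array size, data width, and pragma choices are illustrative assumptions, not the design described in the abstract.

    /* Illustrative HLS-style kernel (not the thesis code): max-log-MAP
     * max* reduction written so the tool can pipeline it at II = 1. */
    #define N_METRICS 64

    static inline short max_star(short a, short b)
    {
        /* max-log approximation: the log(1 + exp(-|a-b|)) term is dropped */
        return (a > b) ? a : b;
    }

    short accumulate_metrics(const short gamma[N_METRICS])
    {
    #pragma HLS ARRAY_PARTITION variable=gamma complete
        short acc = -32768;                  /* acts as -infinity in 16 bits */
        for (int i = 0; i < N_METRICS; i++) {
    #pragma HLS PIPELINE II=1
            acc = max_star(acc, gamma[i]);
        }
        return acc;
    }

    The macro-file tailoring mentioned in the abstract would correspond to making constants such as N_METRICS and the metric width compile-time parameters, so the same source re-synthesizes into differently sized hardware.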

    The Chameleon Architecture for Streaming DSP Applications

    We focus on architectures for streaming DSP applications such as wireless baseband processing and image processing. We aim at a single generic architecture that is capable of dealing with different DSP applications. This architecture has to be energy efficient and fault tolerant. We introduce a heterogeneous tiled architecture and present the details of a domain-specific reconfigurable tile processor called Montium. This reconfigurable processor has a small footprint (1.8 mm² in a 130 nm process), is power efficient and exploits the locality-of-reference principle. Reconfiguring the device is very fast; for example, loading the coefficients for a 200-tap FIR filter is done within 80 clock cycles. The tiles on the tiled architecture are connected to a Network-on-Chip (NoC) via a network interface (NI). Two NoCs have been developed: a packet-switched and a circuit-switched version. Both provide two types of services: guaranteed throughput (GT) and best effort (BE). For both NoCs, estimates of power consumption are presented. The NI synchronizes data transfers, configures and starts/stops the tile processor. For dynamically mapping applications onto the tiled architecture, we introduce a run-time mapping tool.
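
    To make the two service classes concrete, the following is a hypothetical C-level model (not the Chameleon or Montium API): a guaranteed-throughput connection reserves bandwidth on the link when it is opened and is rejected if the budget cannot cover it, while a best-effort connection is always admitted without a reservation. All names, fields, and numbers are illustrative assumptions.

    /* Hypothetical model of GT vs. BE connection setup on a NoC link;
     * none of these names come from the Chameleon/Montium design. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { SERVICE_GT, SERVICE_BE } noc_service_t;

    typedef struct {
        uint8_t       src_tile, dst_tile;    /* tile IDs on the NoC          */
        noc_service_t service;               /* guaranteed or best effort    */
        uint32_t      reserved_mbps;         /* bandwidth reserved (GT only) */
    } noc_connection_t;

    /* Open a connection; a GT request fails if the link budget cannot
     * back the reservation, a BE request never reserves anything.     */
    static bool noc_open(noc_connection_t *c, uint8_t src, uint8_t dst,
                         noc_service_t service, uint32_t mbps,
                         uint32_t *link_budget_mbps)
    {
        c->src_tile = src;
        c->dst_tile = dst;
        c->service  = service;
        c->reserved_mbps = 0;
        if (service == SERVICE_GT) {
            if (mbps > *link_budget_mbps)
                return false;                /* guarantee cannot be honoured */
            *link_budget_mbps -= mbps;
            c->reserved_mbps = mbps;
        }
        return true;                         /* BE traffic is always admitted */
    }

    int main(void)
    {
        uint32_t budget = 400;               /* illustrative link budget, Mbps */
        noc_connection_t fir_input;
        if (noc_open(&fir_input, 0, 3, SERVICE_GT, 250, &budget))
            printf("GT connection opened, %u Mbps left on link\n", budget);
        return 0;
    }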