1,377 research outputs found

    Video streaming

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
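
    For context, the two theorems cited above are standard information-theoretic results (stated here in general form, not as details of any reviewed architecture): the Slepian–Wolf theorem gives the achievable rate region for lossless coding of correlated sources X and Y with separate encoders and a joint decoder, and the Wyner–Ziv theorem gives the rate–distortion function for lossy coding of X when the side information Y is available only at the decoder.

        R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y)

        R^{\mathrm{WZ}}_{X \mid Y}(D) \;=\; \min_{p(u \mid x),\, \hat{x}(u,y)} I(X; U \mid Y)
        \quad \text{subject to} \quad \mathbb{E}\big[ d\big(X, \hat{x}(U, Y)\big) \big] \le D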

    Bitplane image coding with parallel coefficient processing

    Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, and most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in lockstep, synchronously. Unfortunately, current bitplane coding strategies cannot fully profit from such processors because their coding tasks are inherently sequential. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to traditional strategies is almost negligible.
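
    To illustrate the kind of lockstep processing the paper targets, the following sketch extracts one bitplane from every coefficient of a codeblock in a single vectorized step (Python/NumPy standing in for SIMD hardware; the codeblock size, bit depth, and significance test are assumptions for illustration, and this is not the BPC-PaCo algorithm itself, which also reformulates context formation and arithmetic coding):

        import numpy as np

        # Hypothetical 64x64 codeblock of quantized transform coefficients.
        rng = np.random.default_rng(0)
        codeblock = rng.integers(-512, 512, size=(64, 64), dtype=np.int32)

        sign = codeblock < 0              # sign plane, coded separately in practice
        magnitude = np.abs(codeblock)

        num_planes = int(magnitude.max()).bit_length()
        for p in range(num_planes - 1, -1, -1):          # most significant plane first
            bitplane = (magnitude >> p) & 1              # all coefficients in lockstep
            newly_significant = bitplane.astype(bool) & ((magnitude >> (p + 1)) == 0)
            # A real coder would now form contexts and entropy-code the bitplane;
            # here we only report how many coefficients become significant per plane.
            print(f"bitplane {p}: {int(newly_significant.sum())} newly significant")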

    Performance evaluation of H.264/AVC decoding and visualization using the GPU

    The coding efficiency of the H.264/AVC standard makes the decoding process computationally demanding. This has limited the availability of cost-effective, high-performance solutions. Modern computers are typically equipped with powerful yet cost-effective Graphics Processing Units (GPUs) to accelerate graphics operations. These GPUs can be addressed by means of a 3-D graphics API such as Microsoft Direct3D or OpenGL, using programmable shaders as generic processing units for vector data. The CUDA (Compute Unified Device Architecture) platform from NVIDIA provides a straightforward way to address the GPU directly, without the need for a 3-D graphics API in the middle. In CUDA, a compiler generates executable code from C code with specific modifiers that determine the execution model. This paper first presents a custom-developed H.264/AVC renderer that is capable of executing motion compensation (MC), reconstruction, and Color Space Conversion (CSC) entirely on the GPU. To steer the GPU, Direct3D is used in combination with programmable pixel and vertex shaders. Next, we present a GPU-enabled decoder that uses the CUDA architecture from NVIDIA; this decoder likewise performs MC, reconstruction, and CSC on the GPU. Our results compare both GPU-enabled decoders, as well as a CPU-only decoder, in terms of speed, complexity, and CPU requirements. Our measurements show that a significant speedup is possible relative to a CPU-only solution. As an example, real-time playback of high-definition video (1080p) was achieved with both our Direct3D- and CUDA-based H.264/AVC renderers.
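
    As a concrete reference for the Color Space Conversion stage mentioned above, the sketch below performs a full-range BT.601 YCbCr-to-RGB conversion on whole planes at once (a CPU/NumPy stand-in; the paper's shaders and CUDA kernels perform equivalent per-pixel arithmetic, and the exact coefficients they use are not stated in the abstract):

        import numpy as np

        def ycbcr_to_rgb(y, cb, cr):
            """Full-range BT.601 YCbCr -> RGB, vectorized over whole planes."""
            y = y.astype(np.float32)
            cb = cb.astype(np.float32) - 128.0
            cr = cr.astype(np.float32) - 128.0
            r = y + 1.402 * cr
            g = y - 0.344136 * cb - 0.714136 * cr
            b = y + 1.772 * cb
            return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

        # Hypothetical 1080p frame filled with mid-grey, just to exercise the code.
        h, w = 1080, 1920
        rgb = ycbcr_to_rgb(np.full((h, w), 128, np.uint8),
                           np.full((h, w), 128, np.uint8),
                           np.full((h, w), 128, np.uint8))
        print(rgb.shape)  # (1080, 1920, 3)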

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.

    Optimized Scalable Image and Video Transmission for MIMO Wireless Channels

    In this chapter, we propose new strategies to efficiently transfer compressed image/video content over wireless links using multiple-antenna technology. The proposed solutions can be considered application-layer/physical-layer (APP-PHY) cross-layer design methods, as they involve optimizing both the application and physical layers. After a broad review of the state of the art, we present two main solutions. The first uses a new precoding algorithm that takes the structure of the image/video content into account when assigning transmission powers; we show that it outperforms existing conventional precoders. Second, a link adaptation process is integrated to efficiently assign coding parameters as a function of the channel state. Simulations over a realistic channel environment show that link adaptation provides a dynamic process that yields good image/video reconstruction quality even when the channel varies. Finally, we incorporate soft decoding algorithms at the receiver side and show that they can induce further improvements: almost 5 dB of peak signal-to-noise ratio (PSNR) improvement is demonstrated for transmission over a Rayleigh channel.
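
    For reference, the PSNR figure quoted above is the usual logarithmic measure of reconstruction fidelity, with MAX = 255 for 8-bit samples and an M x N image:

        \mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \qquad
        \mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl( I(i,j) - \hat{I}(i,j) \bigr)^2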

    A parallel implementation of the H.265 video coding standard

    The objective of this study was to investigate the scalability of the parallel features in the new H.265 video compression standard, also known as High Efficiency Video Coding (HEVC). Compared to its predecessor, the H.264 standard, H.265 typically achieves around a 50% bitrate reduction for the same subjective video quality. Videos with higher resolutions (Full HD and beyond) in particular achieve better compression ratios, and better utilization of parallel computing resources is also provided. H.265 introduces two novel parallelization features: Tiles and Wavefront Parallel Processing (WPP). With Tiles, each video frame is divided into areas that can be decoded without referencing other areas in the same frame. With WPP, the relations between code blocks in a frame are encoded so that the decoding process can progress through the frame as a front using multiple threads. In this study, the reference implementation of the H.265 decoder was augmented to support both of these parallelization features. The performance of the parallel implementations was measured using three different setups. The measurement results show that adding more CPU cores reduced the total decode time of the video frames up to a certain point. When using the Tiles feature, it was observed that the encoding geometry, i.e. how each frame was divided into individually decodable areas, had a noticeable effect on decode times at certain thread counts. When using WPP, it was observed that synchronization overhead sometimes had a negative effect on decode times when using larger numbers of threads (4-12).
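
    To make the wavefront idea concrete: in WPP, a coding tree unit (CTU) at position (x, y) can start once its left neighbour (x-1, y) and the neighbour (x+1, y-1) in the row above are done, so each row lags the previous one by two CTUs, and the earliest step at which a CTU can be decoded is x + 2y. The sketch below groups CTUs by that step to show how many can be decoded in parallel (a minimal model, not the thesis' decoder; the CTU grid size is an assumption, and a real decoder also propagates CABAC context state between rows):

        from collections import defaultdict

        # Hypothetical CTU grid for a 1920x1080 frame with 64x64 CTUs.
        ctus_x, ctus_y = 30, 17

        wavefronts = defaultdict(list)
        for y in range(ctus_y):
            for x in range(ctus_x):
                # Dependencies on (x-1, y) and (x+1, y-1) give earliest step x + 2*y.
                wavefronts[x + 2 * y].append((x, y))

        for step in sorted(wavefronts)[:5]:
            print(f"step {step:2d}: {len(wavefronts[step])} CTUs decodable in parallel")
        print(f"total steps: {max(wavefronts) + 1}, "
              f"peak parallelism: {max(len(v) for v in wavefronts.values())} CTUs")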

    Evaluation of cross-layer reliability mechanisms for satellite digital multimedia broadcast

    This paper presents a study of reliability mechanisms that may be put to work in the context of Satellite Digital Multimedia Broadcasting (SDMB) to mobile devices such as handheld phones. These mechanisms include error-correcting codes and interleaving at the physical layer, erasure codes at intermediate layers, and error concealment in the video decoder. The evaluation is performed on a realistic satellite channel and takes into account practical constraints such as the maximum zapping time and user mobility at several speeds, by simulating different scenarios with complete protocol stacks. The simulations indicate that, under the assumptions taken here, the scenario using highly compressed video protected by erasure codes at intermediate layers appears to be the best solution for this kind of channel.
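
    To illustrate the packet-level erasure protection evaluated above, the sketch below adds a single XOR parity packet over a block of source packets, which lets the receiver repair exactly one loss per block (a minimal stand-in: the codes used at intermediate layers in the paper are stronger, and the packet size and block length here are assumptions):

        from functools import reduce

        def xor_packets(packets):
            """Byte-wise XOR of equally sized packets."""
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

        # Hypothetical block of k = 4 source packets of 8 bytes each.
        source = [bytes([i] * 8) for i in range(1, 5)]
        parity = xor_packets(source)

        # Simulate the erasure of packet 2 on the channel.
        received = [source[0], source[1], source[3], parity]

        # XOR-ing the parity with every packet that did arrive rebuilds the loss.
        recovered = xor_packets(received)
        assert recovered == source[2]
        print("recovered packet 2:", recovered.hex())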