486 research outputs found

    An efficient error resilience scheme based on Wyner-Ziv coding for region-of-interest protection of wavelet-based video transmission

    Get PDF
    In this paper, we propose a bandwidth-efficient error resilience scheme for wavelet-based video transmission over wireless channels that introduces an additional Wyner-Ziv (WZ) stream to protect the region of interest (ROI) in a frame. In the proposed architecture, the main video stream is compressed by a generic wavelet-domain coding structure and passed through the error-prone channel without any protection. Meanwhile, the wavelet coefficients related to the predefined ROI, obtained after an integer wavelet transform, are protected by a WZ codec in an additional channel during transmission. At the decoder side, the error-prone ROI-related wavelet coefficients are used as side information to help decode the WZ stream. WZ bit streams of different sizes can be applied to meet different bandwidth conditions and end-user requirements. The simulation results show that, compared with applying an FEC algorithm to the whole video stream, the proposed scheme has distinct advantages in saving bandwidth while offering robust transmission over error-prone channels for certain video applications.
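    The coset flavour of Wyner-Ziv decoding used here can be illustrated with a minimal, self-contained Python sketch. It is a toy stand-in rather than the authors' codec: the WZ stream is reduced to per-coefficient coset indices (values modulo a hypothetical spacing q), and the error-prone main-stream ROI coefficients act as side information.

    # Toy illustration of Wyner-Ziv-style ROI protection (not the authors' codec).
    # The extra WZ channel carries only coset indices of the ROI wavelet
    # coefficients; the decoder recovers them from error-prone side information.
    import numpy as np

    rng = np.random.default_rng(0)
    q = 32  # coset spacing; must exceed twice the worst-case channel error

    # Stand-in for ROI wavelet coefficients after an integer wavelet transform.
    roi_coeffs = rng.integers(-500, 500, size=1000)

    # Encoder: the additional WZ stream sends only each coefficient modulo q.
    wz_stream = np.mod(roi_coeffs, q)

    # Channel: the unprotected main stream delivers the ROI with bounded errors.
    side_info = roi_coeffs + rng.integers(-q // 2 + 1, q // 2, size=roi_coeffs.size)

    # Decoder: within each coset, pick the value closest to the side information.
    delta = np.mod(wz_stream - side_info, q)
    delta[delta > q // 2] -= q
    recovered = side_info + delta

    print("ROI coefficients recovered exactly:", np.array_equal(recovered, roi_coeffs))

    Shrinking or growing q changes the size of the WZ stream (log2(q) bits per protected coefficient), mirroring how the scheme lets the WZ bit rate track the available bandwidth and the channel conditions.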

    Hybrid Region-based Image Compression Scheme for Mammograms and Ultrasound Images

    Get PDF
    The need for transmission and archiving of mammograms and ultrasound images has dramatically increased in tele-healthcare applications. Such images require a large amount of storage space, which affects transmission speed. Therefore an effective compression scheme is essential. Compression of these images, in general, faces a great challenge in compromising between a higher compression ratio and the relevant diagnostic information. Out of the many studied compression schemes, lossless JPEG-LS and lossy SPIHT are found to be the most efficient ones. JPEG-LS and SPIHT were chosen based on a comprehensive experimental study carried out on a large number of mammograms and ultrasound images of different sizes and textures. The lossless schemes are evaluated based on compression ratio and compression speed. The distortion in image quality introduced by the lossy methods is evaluated based on objective criteria using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). It is found that lossless compression can achieve a modest compression ratio of 2:1 to 4:1. Lossy compression schemes can achieve higher compression ratios than lossless ones, but at the price of image quality, which may impede diagnostic conclusions. In this work, a new compression approach called the Hybrid Region-based Image Compression Scheme (HYRICS) is proposed for mammograms and ultrasound images to achieve higher compression ratios without compromising diagnostic quality. In HYRICS, a modification of JPEG-LS is introduced to encode the arbitrarily shaped disease-affected regions. Then shape-adaptive SPIHT is applied to the remaining non-region-of-interest areas. The results clearly show that this hybrid strategy can yield high compression ratios with perfect reconstruction of the diagnostically relevant regions, achieving high-speed transmission and lower storage requirements. For the sample images considered in our experiment, the compression ratio increases approximately ten times; however, this increase depends upon the size of the region of interest chosen. It is also found that pre-processing (contrast stretching) of the region of interest improves compression ratios on mammograms but not on ultrasound images.
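    The objective criteria mentioned above (MSE and PSNR) and the region split can be sketched in a few lines of Python. This is a generic illustration under assumed parameters, not the HYRICS implementation: the ROI is kept exactly, while the background is coarsely quantised as a stand-in for the JPEG-LS / shape-adaptive SPIHT division.

    # Generic MSE/PSNR definitions plus a toy ROI-prioritised reconstruction
    # (a sketch with hypothetical data, not code from the cited work).
    import numpy as np

    def mse(original, reconstructed):
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return np.mean(diff ** 2)

    def psnr(original, reconstructed, peak=255.0):
        m = mse(original, reconstructed)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    rng = np.random.default_rng(1)
    image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    roi = np.zeros_like(image, dtype=bool)
    roi[16:48, 16:48] = True                     # hypothetical disease-affected region

    recon = (image // 16 * 16).astype(np.uint8)  # coarse background approximation
    recon[roi] = image[roi]                      # ROI reconstructed losslessly

    print(f"PSNR over the whole image: {psnr(image, recon):.2f} dB")
    print(f"ROI identical to original: {np.array_equal(image[roi], recon[roi])}")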

    RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding

    Full text link
    We present a new hierarchical compression scheme for encoding light field images (LFI) that is suitable for interactive rendering. Our method (RLFC) exploits redundancies in the light field images by constructing a tree structure. The top level (root) of the tree captures the common high-level details across the LFI, and other levels (children) of the tree capture specific low-level details of the LFI. Our decompression algorithm corresponds to tree traversal operations and gathers the values stored at different levels of the tree. Furthermore, we use bounded integer sequence encoding, which provides random access and fast hardware decoding, for compressing the blocks of children of the tree. We have evaluated our method for 4D two-plane parameterized light fields. The compression rates vary from 0.08 to 2.5 bits per pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI are 1 to 3 microseconds per channel on an NVIDIA GTX-960, and we can render new views with a resolution of 512 x 512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations. (Comment: Accepted for publication at the Symposium on Interactive 3D Graphics and Games, I3D '19.)
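    A toy Python sketch of the key-view-plus-residuals hierarchy (an illustration under assumed data, not the RLFC code): the root view carries structure shared across the light field, each child block stores small bounded-range residuals, and any single view is decoded by touching only its own block.

    # Hierarchical light field toy example: key (root) view plus per-view
    # residual blocks of bounded-range integers, decodable independently.
    import numpy as np

    rng = np.random.default_rng(2)
    key_view = rng.integers(0, 256, size=(32, 32)).astype(np.int16)

    # Neighbouring views differ from the key view by small offsets.
    views = [np.clip(key_view + rng.integers(-7, 8, size=key_view.shape), 0, 255)
             for _ in range(16)]

    # "Children" of the tree: residuals against the root, stored per view.
    residuals = [v - key_view for v in views]
    bits_per_value = int(np.ceil(np.log2(max(int(r.max() - r.min()) + 1
                                             for r in residuals))))
    print("bits per residual value:", bits_per_value)  # bounded-range integers

    def decode_view(index):
        # Tree traversal for one leaf: gather the root, then add the leaf's block.
        return key_view + residuals[index]

    print("view 5 decoded exactly:", np.array_equal(decode_view(5), views[5]))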

    Prioritizing Content of Interest in Multimedia Data Compression

    Get PDF
    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given the system's limited storage and bandwidth. Many generic image and video compression techniques such as JPEG and H.264/AVC have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well-defined, we should re-think the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data; the definition of the content of interest depends on the application. First, I show that for microscopy videos, the content of interest is defined as the spatial regions of the video frame whose pixels contain more than just noise. Keeping data in those regions at high quality and discarding other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon-based system, practical multimedia data storage and transmission is possible by prioritizing content of interest. I designed custom image compression techniques that preserve edges in a binary image, or foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon-based augmented reality system that integrates a 3D moving object compression method that prioritizes the content of interest. Doctor of Philosophy.
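    For the microscopy case, the prioritisation idea can be sketched as a simple block classifier in Python; the threshold rule below is a hypothetical stand-in, not the dissertation's detector.

    # Toy content-of-interest prioritisation: blocks whose statistics look like
    # pure sensor noise are dropped before coding; the rest would be kept at
    # high quality. Thresholds and data are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    frame = rng.normal(0.0, 2.0, size=(128, 128))   # background sensor noise
    frame[40:80, 40:80] += 50.0                     # bright structure of interest

    block, noise_sigma, kept = 16, 2.0, 0
    for r in range(0, frame.shape[0], block):
        for c in range(0, frame.shape[1], block):
            tile = frame[r:r + block, c:c + block]
            if tile.std() > 3.0 * noise_sigma or abs(tile.mean()) > 3.0 * noise_sigma:
                kept += 1                              # content of interest: keep
            else:
                frame[r:r + block, c:c + block] = 0.0  # noise-only block dropped

    print(f"blocks kept: {kept} of {(frame.shape[0] // block) ** 2}")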

    ROI coding of volumetric medical images with application to visualisation

    Get PDF

    Depth-based Multi-View 3D Video Coding

    Get PDF

    Consistent Image Decoding from Multiple Lossy Versions

    Get PDF
    With the recent development of tools for data sharing in social networks and peer-to-peer networks, the same information is often stored in different nodes. Peer-to-peer protocols usually allow one user to collect portions of the same file from different nodes in the network, substantially improving the rate at which data are received by the end user. In some cases, however, the same multimedia document is available in different lossy versions on the network nodes. In such situations, one may be interested in collecting all available versions of the same document and jointly decoding them to obtain a better reconstruction of the original. In this paper we study some methods to jointly decode different versions of the same image. We compare different uses of the method of Projections Onto Convex Sets (POCS) with some convex optimization techniques in order to reconstruct an image for which JPEG and JPEG2000 lossy versions are available.
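    The POCS idea can be demonstrated end to end with scalar quantisers standing in for the JPEG and JPEG2000 codecs. The sketch below is a self-contained Python illustration of alternating projections under those assumptions, not the paper's reconstruction pipeline.

    # Each lossy version constrains the unknown signal to a convex set (all
    # signals that quantise to the received values); alternating projections
    # (POCS) finds a point consistent with both versions.
    import numpy as np

    rng = np.random.default_rng(4)
    original = rng.uniform(0.0, 255.0, size=10_000)

    step1, step2 = 24.0, 17.0                      # two different coarse quantisers
    version1 = np.round(original / step1) * step1  # "JPEG-like" lossy copy
    version2 = np.round(original / step2) * step2  # "JPEG2000-like" lossy copy

    def project(x, dequantised, step):
        # Projection onto the set of signals consistent with one lossy version:
        # clip each sample into its quantisation interval.
        return np.clip(x, dequantised - step / 2.0, dequantised + step / 2.0)

    x = (version1 + version2) / 2.0                # any starting point works
    for _ in range(10):                            # alternating projections
        x = project(x, version1, step1)
        x = project(x, version2, step2)

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    print(f"RMSE of version 1 alone: {rmse(version1, original):.2f}")
    print(f"RMSE of version 2 alone: {rmse(version2, original):.2f}")
    print(f"RMSE of joint estimate : {rmse(x, original):.2f}")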

    Image compression and quality assessment based on deep learning

    Get PDF
    Waseda University degree certificate number: Shin 8427. Waseda University.

    Efficient multiview depth representation based on image segmentation

    Get PDF
    The persistent improvements witnessed in multimedia production have considerably augmented users' demand for immersive 3D systems. Expedient implementation of this technology, however, entails the need for a significant reduction in the amount of information required for representation. Depth image-based rendering algorithms have considerably reduced the number of images necessary for 3D scene reconstruction; nevertheless, the compression of depth maps still poses several challenges due to the peculiar nature of the data. To this end, this paper proposes a novel depth representation methodology that exploits the intrinsic correlation present between the colour intensity and depth images of a natural scene. A segmentation-based approach is implemented which decreases the amount of information necessary for transmission by a factor of 24 with respect to conventional JPEG algorithms whilst maintaining a quasi-identical reconstruction quality of the 3D views. Peer-reviewed.
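    A toy Python sketch of a segmentation-based depth representation (an illustration of the general idea, not the paper's algorithm): depth is transmitted as one value per colour segment, assuming the segmentation can be re-derived at the decoder from the already-transmitted colour image.

    # Depth stored per segment instead of per pixel; segment labels and depths
    # here are synthetic assumptions for illustration only.
    import numpy as np

    rng = np.random.default_rng(5)
    labels = np.zeros((120, 160), dtype=np.int32)     # segmentation of the colour image
    labels[30:90, 40:110] = 1                         # foreground object
    depth = np.where(labels == 1, 2.0, 8.0)           # scene depth in metres
    depth += rng.normal(0.0, 0.02, size=depth.shape)  # sensor noise

    # Encoder: one representative depth value per segment (here, the median).
    segment_depths = {int(seg): float(np.median(depth[labels == seg]))
                      for seg in np.unique(labels)}

    # Decoder: rebuild a dense depth map from the segmentation plus the table.
    reconstructed = np.zeros_like(depth)
    for seg, d in segment_depths.items():
        reconstructed[labels == seg] = d

    raw_bytes = depth.size * 4                        # 32-bit depth per pixel
    coded_bytes = len(segment_depths) * 4             # one value per segment
    print(f"depth payload reduced by a factor of {raw_bytes / coded_bytes:.0f}")
    print(f"max reconstruction error: {np.abs(reconstructed - depth).max():.3f} m")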