
    Layer Selection in Progressive Transmission of Motion-Compensated JPEG2000 Video

    MCJ2K (Motion-Compensated JPEG2000) is a video codec based on MCTF (Motion-Compensated Temporal Filtering) and J2K (JPEG2000). MCTF analyzes a sequence of images, generating a collection of temporal sub-bands, which are compressed with J2K. The R/D (Rate-Distortion) performance of MCJ2K is better than that of the MJ2K (Motion JPEG2000) extension, especially when there is a high level of temporal redundancy. MCJ2K codestreams can be served by standard JPIP (J2K Interactive Protocol) servers, thanks to the use of only J2K standard file formats. In bandwidth-constrained scenarios, an important issue in MCJ2K is determining the amount of data of each temporal sub-band that must be transmitted to maximize the quality of the reconstructions at the client side. To solve this problem, we have proposed two rate-allocation algorithms which provide reconstructions that are progressive in quality. The first, OSLA (Optimized Sub-band Layers Allocation), determines the best progression of quality layers, but is computationally expensive. The second, ESLA (Estimated-Slope sub-band Layers Allocation), is sub-optimal in most cases, but much faster and more convenient for real-time streaming scenarios. An experimental comparison shows that, even when a straightforward motion compensation scheme is used, the R/D performance of MCJ2K is competitive not only with MJ2K, but also with other standard scalable video codecs.
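
    A minimal sketch of the estimated-slope idea behind such layer allocation, assuming each temporal sub-band comes with a list of quality layers annotated with their size in bytes and the distortion reduction they bring: at every step the layer whose next increment offers the steepest distortion-per-byte slope is scheduled, until the byte budget is exhausted. The per-layer figures and the function and variable names below are hypothetical placeholders, not values or code from the paper.

```python
# Greedy slope-based quality-layer allocation across temporal sub-bands
# (illustrative sketch, not the authors' ESLA implementation).

def allocate_layers(subbands, byte_budget):
    """subbands: dict mapping sub-band name -> list of (layer_bytes, distortion_drop),
    ordered from the first quality layer onward.
    Returns the ordered list of (sub-band, layer index) to transmit."""
    sent = {name: 0 for name in subbands}      # next unsent layer per sub-band
    schedule, used = [], 0
    while True:
        best = None
        # Among the next unsent layer of every sub-band, pick the steepest
        # distortion-per-byte slope that still fits the remaining budget.
        for name, layers in subbands.items():
            idx = sent[name]
            if idx >= len(layers):
                continue
            nbytes, d_drop = layers[idx]
            if used + nbytes > byte_budget:
                continue
            slope = d_drop / nbytes
            if best is None or slope > best[0]:
                best = (slope, name, idx, nbytes)
        if best is None:
            break
        _, name, idx, nbytes = best
        schedule.append((name, idx))
        sent[name] += 1
        used += nbytes
    return schedule

# Hypothetical example: one low-pass and one high-pass temporal sub-band,
# each with three quality layers.
subbands = {
    "L2": [(4000, 120.0), (3000, 40.0), (2000, 10.0)],
    "H1": [(2500, 60.0), (2000, 25.0), (1500, 8.0)],
}
print(allocate_layers(subbands, byte_budget=9000))
```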

    Macroblock-level mode based adaptive in-band motion compensated temporal filtering

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis


    Multi-view image coding with wavelet lifting and in-band disparity compensation


    Wavelet-based denoising for 3D OCT images

    Optical coherence tomography produces high-resolution medical images based on the spatial and temporal coherence of the optical waves backscattered from the scanned tissue. However, the same coherence introduces speckle noise as well, which degrades the quality of the acquired images. In this paper we propose a technique for noise reduction in 3D OCT images, where the 3D volume is considered as a sequence of 2D images, i.e., 2D slices in the depth-lateral projection plane. In the proposed method we first perform recursive temporal filtering along the estimated motion trajectory between the 2D slices, using a noise-robust motion estimation/compensation scheme previously proposed for video denoising. The temporal filtering scheme reduces the noise level and adapts the motion compensation to it. Subsequently, we apply a spatial filter for speckle reduction in order to remove the remaining noise in the 2D slices. In this scheme the spatial (2D) speckle nature of the noise in OCT is modeled and used for spatially adaptive denoising. Both the temporal and the spatial filters are wavelet-based techniques; the temporal filter uses two resolution scales and the spatial filter four. The proposed denoising approach is evaluated on demodulated 3D OCT images from different sources and with different resolutions. Phantom OCT images were used to optimize the parameters for the best denoising performance. The denoising performance of the proposed method was measured in terms of SNR, edge sharpness preservation and contrast-to-noise ratio. A comparison was made with state-of-the-art methods for noise reduction in 2D OCT images, where the proposed approach proved advantageous in terms of both objective and subjective quality measures.
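
    The two-stage structure described above can be sketched as follows, assuming a first-order recursive temporal average over the 2D slices followed by soft-thresholding of the spatial wavelet detail coefficients over four scales. The PyWavelets-based helpers, the threshold value, the wavelet choice and the synthetic volume are illustrative assumptions; the paper's noise-robust motion estimation and speckle model are not reproduced here.

```python
# Sketch of temporal-then-spatial wavelet denoising of a stack of 2D OCT slices.
import numpy as np
import pywt

def temporal_recursive_filter(slices, alpha=0.5):
    """First-order recursive average of a sequence of 2D slices (no warping here;
    a full implementation would motion-compensate between slices first)."""
    acc = slices[0].astype(float)
    out = [acc.copy()]
    for s in slices[1:]:
        acc = alpha * s + (1.0 - alpha) * acc
        out.append(acc.copy())
    return out

def spatial_wavelet_denoise(img, wavelet="db2", levels=4, thr=10.0):
    """Soft-threshold the detail coefficients over `levels` resolution scales."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Hypothetical volume: 8 noisy 2D slices.
rng = np.random.default_rng(0)
volume = [rng.normal(100, 20, size=(128, 128)) for _ in range(8)]
cleaned = [spatial_wavelet_denoise(s) for s in temporal_recursive_filter(volume)]
```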

    Shift Estimation Algorithm for Dynamic Sensors With Frame-to-Frame Variation in Their Spectral Response

    This study is motivated by the emergence of a new class of tunable infrared spectral-imaging sensors that offer the ability to dynamically vary the sensor's intrinsic spectral response from frame to frame in an electronically controlled fashion. A manifestation of this is when a sequence of dissimilar spectral responses is periodically realized, whereby in every period of acquired imagery, each frame is associated with a distinct spectral band. Traditional scene-based global shift estimation algorithms are not applicable to such spectrally heterogeneous video sequences, as a pixel value may change from frame to frame as a result of both global motion and the varying spectral response. In this paper, a novel algorithm is proposed and examined to fuse a series of coarse global shift estimates between periodically sampled pairs of nonadjacent frames to estimate motion between consecutive frames; each pair corresponds to two nonadjacent frames of the same spectral band. The proposed algorithm outperforms three alternative methods, with the average error being one half of that obtained by using an equal-weights version of the proposed algorithm, one-fourth of that obtained by using a simple linear interpolation method, and one-twentieth of that obtained by using a naïve correlation-based direct method.
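
    One way to read the fusion step: with a spectral period of N bands, each measurable same-band shift (between frames k and k+N) is the sum of the N unknown consecutive-frame shifts it spans, so the per-frame shifts can be estimated from the resulting linear system. The sketch below solves that system as a regularized least-squares problem with a small smoothness prior, which is an assumption of this sketch rather than the paper's weighting scheme; the period, shift values and regularization weight are hypothetical.

```python
# Fuse coarse same-band (period-apart) shift measurements into per-frame shifts
# via regularized least squares (illustrative, not the paper's algorithm).
import numpy as np

def fuse_shifts(sameband_shifts, period, smooth=0.1):
    b = np.asarray(sameband_shifts, dtype=float)
    m = b.size                              # number of same-band measurements
    n = m + period - 1                      # number of consecutive-frame shifts
    A = np.zeros((m, n))
    for k in range(m):
        A[k, k:k + period] = 1.0            # measurement k spans gaps k .. k+period-1
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first-difference operator
    A_aug = np.vstack([A, smooth * D])              # append smoothness prior
    b_aug = np.concatenate([b, np.zeros(n - 1)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Hypothetical example: 3 spectral bands, smoothly drifting 1-D shift (pixels).
true = np.linspace(0.2, 0.5, 7)                       # per-frame shifts
meas = np.array([true[k:k + 3].sum() for k in range(5)])
print(np.round(fuse_shifts(meas, period=3), 3))       # close to `true` when motion is smooth
```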

    A Novel H.264/AVC Based Multi-View Video Coding Scheme


    Wavelet-based video coding: optimal use of motion information for the decoding of spatially scaled video sequences

    In this paper we discuss how to best handle motion vectors in spatially scalable wavelet-based video decoders. Full-resolution motion vectors are normally included in the bit-streams relative to spatially scaled versions of a video sequence. When a low-resolution version of the original sequence is received, the decoder must scale the motion vectors accordingly. We show that motion vector scaling (truncation) is not the best solution and that better results can be obtained by interpolating the subsampled sequence to full resolution using the wavelet synthesis low-pass filter. We illustrate the results of experiments carried out with an in-band wavelet-based fully scalable coder that performs spatial analysis followed by temporal filtering. Emphasis is given to the computation of the Overcomplete DWT in the spatially scalable scenario.
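
    The contrast between the two decoder-side options can be illustrated with a simplified 1-D sketch: either truncate the full-resolution motion vector to the low-resolution grid, or interpolate the low-resolution signal back to full resolution with the wavelet synthesis low-pass filter, apply the full-resolution vector there, and subsample again. The signal, the integer displacement and the LeGall 5/3 synthesis low-pass kernel below are illustrative choices; the Overcomplete-DWT machinery of the paper is not reproduced.

```python
# Truncated vs. interpolation-based motion compensation at low resolution
# (1-D sketch; circular shifts are used for simplicity).
import numpy as np

SYNTH_LOWPASS = np.array([0.5, 1.0, 0.5])      # LeGall 5/3 synthesis low-pass kernel

def upsample_lowpass(x):
    """Zero-insert then filter with the synthesis low-pass filter."""
    up = np.zeros(2 * len(x))
    up[::2] = x
    return np.convolve(up, SYNTH_LOWPASS, mode="same")

def compensate_truncated(ref_low, mv_full):
    """Apply a truncated (half-resolution) motion vector to the low-res signal,
    losing the half-sample part of the displacement."""
    return np.roll(ref_low, mv_full // 2)

def compensate_interpolated(ref_low, mv_full):
    """Interpolate to full resolution, apply the full-resolution vector,
    then take every second sample to return to low resolution."""
    full = upsample_lowpass(ref_low)
    return np.roll(full, mv_full)[::2]

ref_low = np.array([10., 12., 15., 20., 18., 14., 11., 9.])
mv = 3                                          # full-resolution displacement
print(compensate_truncated(ref_low, mv))
print(compensate_interpolated(ref_low, mv))
```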