Joint Reconstruction of Multi-view Compressed Images
The distributed representation of correlated multi-view images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where distributively compressed correlated images are jointly decoded in order to improve the reconstruction quality of all the compressed images. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among the different cameras. A central decoder first estimates the underlying correlation model from the independently compressed images; this model is then used for the joint signal recovery. The joint reconstruction is cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images complying with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be consistent with their compressed versions. Experiments show that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality for a given target bit rate. In addition, the decoding performance of our proposed algorithm compares advantageously to state-of-the-art distributed coding schemes based on disparity learning and on the DISCOVER codec.
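The TV-plus-consistency idea above can be sketched in a few lines: minimise total variation by gradient descent while projecting the image back into the quantisation cell of its decoded version. This is a simplified, single-image, pixel-domain illustration; the function names, the step sizes, and the pixel-domain bound standing in for the transform-domain consistency constraint are all assumptions, not the authors' algorithm.

```python
import numpy as np

def tv_gradient(x):
    """Gradient of a smoothed anisotropic total-variation functional."""
    eps = 1e-8
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    # negative divergence of the normalised gradient field
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    return -div

def reconstruct(decoded, q_step, n_iters=200, lr=0.05):
    """Hypothetical sketch: minimise TV subject to the reconstruction
    staying within the quantisation cell of the decoded image
    (a pixel-domain stand-in for the consistency constraint)."""
    lo, hi = decoded - q_step / 2, decoded + q_step / 2
    x = decoded.copy()
    for _ in range(n_iters):
        x -= lr * tv_gradient(x)      # descend on total variation
        x = np.clip(x, lo, hi)        # project onto the consistency set
    return x
```

In the paper the constraint set also encodes the estimated inter-view correlation model; here only the per-image consistency projection is shown.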
Real-time complexity constrained encoding
Complex software appliances can be deployed on hardware with limited computational resources. This computational boundary places an additional constraint on software applications, which can be an issue for real-time applications with a fixed time constraint, such as low-delay video encoding. In the context of High Efficiency Video Coding (HEVC), only a limited number of publications have focused on controlling the complexity of an HEVC video encoder. In this paper, a technique is proposed to control complexity by deciding between the 2Nx2N merge mode and full encoding at different Coding Unit (CU) depths. The technique is demonstrated in two encoders. The results demonstrate fast convergence to a given complexity threshold and a limited loss in rate-distortion performance (on average 2.84% Bjontegaard delta rate for a 40% complexity reduction).
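The complexity-control loop described above can be illustrated with a toy feedback controller that steers the fraction of coding units receiving a full RD search toward a given complexity budget. Everything here (the costs, the gain, the probabilistic decision rule) is hypothetical and only meant to show the convergence behaviour, not the paper's actual decision logic.

```python
import random

def complexity_controller(n_ctus, target, cost_full=4.0, cost_fast=1.0,
                          gain=0.1, seed=0):
    """Sketch (all names hypothetical): per coding unit, choose between an
    expensive full RD search and a cheap merge-mode shortcut, adjusting the
    probability of the full search so the average cost converges to `target`."""
    rng = random.Random(seed)
    p = 0.5                            # probability of choosing full encoding
    spent = 0.0
    for i in range(1, n_ctus + 1):
        full = rng.random() < p
        spent += cost_full if full else cost_fast
        avg = spent / i
        # integral-style update: steer the decision probability toward budget
        p = min(1.0, max(0.0, p + gain * (target - avg)))
    return spent / n_ctus
```

With a target anywhere between the fast and full costs, the running average converges to the budget, mirroring the "fast convergence to a given complexity threshold" reported in the abstract.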
A content-aware quantisation mechanism for transform domain distributed video coding
The discrete cosine transform (DCT) is widely applied in modern codecs to remove spatial redundancies, with the resulting DCT coefficients being quantised to achieve compression as well as bit-rate control. In distributed video coding (DVC) architectures like DISCOVER, DCT coefficient quantisation is traditionally performed using predetermined quantisation matrices (QM), which means the compression is heavily dependent on the sequence being coded. This makes bit-rate control challenging, and the situation is exacerbated in the coding of high-resolution sequences due to QM scarcity and the non-uniform bit-rate gaps between them. This paper introduces a novel content-aware quantisation (CAQ) mechanism to overcome the limitations of existing quantisation methods in transform domain DVC. CAQ creates a frame-specific QM to reduce quantisation errors by analysing the distribution of DCT coefficients. In contrast to the predetermined QM, which is applicable only to 4x4 block sizes, CAQ produces QM for larger block sizes to enhance compression at higher resolutions. This provides superior bit-rate control and better output quality by seeking to fully exploit the available bandwidth, which is especially beneficial in bandwidth-constrained scenarios. In addition, CAQ generates superior perceptual results by innovatively applying different weightings to the DCT coefficients to reflect the human visual system. Experimental results corroborate that CAQ both quantitatively and qualitatively provides enhanced output quality in bandwidth-limited scenarios, by consistently utilising over 90% of the available bandwidth.
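A frame-specific quantisation matrix in the spirit of CAQ can be sketched by measuring the spread of each DCT band over all blocks of a frame and sizing the quantisation step per band accordingly. The construction below (the orthonormal DCT, the max-based spread measure, and the `levels` parameter) is a hedged illustration, not the paper's mechanism, which additionally applies perceptual weightings to the bands.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def content_aware_qm(frame, block=8, levels=16):
    """Sketch of a frame-specific QM (details hypothetical): the step size of
    each DCT band is derived from that band's dynamic range across the frame,
    so high-energy bands are quantised into roughly `levels` bins."""
    C = dct_matrix(block)
    h, w = (d - d % block for d in frame.shape)
    bl = frame[:h, :w].reshape(h // block, block, w // block, block)
    bl = bl.transpose(0, 2, 1, 3)             # (blocks_y, blocks_x, b, b)
    coeffs = C @ bl @ C.T                     # per-block 2D DCT
    spread = np.abs(coeffs).max(axis=(0, 1))  # dynamic range of each band
    return np.maximum(1.0, 2 * spread / levels)
```

Because the matrix is built from the frame's own coefficient distribution rather than a fixed table, it adapts to any block size, which is the property the abstract exploits for larger blocks at higher resolutions.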
Transform domain distributed video coding using larger transform blocks
Distributed Video Coding (DVC) displays promising performance at low spatial resolutions but begins to struggle as the resolution increases. One of the limiting aspects is its 4x4 Discrete Cosine Transform (DCT) block size, which is often impractical at higher resolutions. This paper investigates the impact of exploiting larger DCT block sizes on the performance of transform domain DVC at higher spatial resolutions. In order to utilise a larger block size in DVC, appropriate quantisers have to be selected; this has been solved by incorporating a content-aware quantisation mechanism that generates an image-specific quantisation matrix for any DCT block size. Experimental results confirm that the larger 8x8 block size consistently exhibits superior RD performance for CIF resolution sequences compared to the smaller 4x4 block size. Significant PSNR improvement has been observed for the 16x16 block size at 4CIF resolution, with up to 1.78 dB average PSNR gain compared to its smaller block alternatives.
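The benefit of larger transform blocks can be made concrete with a small energy-compaction measurement: the fraction of signal energy captured by the strongest few DCT coefficients, which is high for smooth content and low for noise-like content, and which can be evaluated at any block size. The code is an illustrative sketch with assumed names and parameters; it is not the paper's evaluation methodology.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

def compaction(frame, block, keep=0.1):
    """Fraction of total energy captured by the strongest `keep` fraction of
    DCT coefficients, accumulated over all blocks (illustrative measure)."""
    h, w = (d - d % block for d in frame.shape)
    total = kept = 0.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            energy = dct2(frame[y:y + block, x:x + block]) ** 2
            flat = np.sort(energy.ravel())[::-1]   # strongest bands first
            k = max(1, int(keep * flat.size))
            total += flat.sum()
            kept += flat[:k].sum()
    return kept / total
```

Running the measure at block sizes 4, 8, and 16 on high-resolution smooth content is one simple way to see why a small fixed transform becomes a bottleneck as resolution grows.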