Joint Reconstruction of Multi-view Compressed Images
The distributed representation of correlated multi-view images is an
important problem that arises in vision sensor networks. This paper concentrates
on the joint reconstruction problem where the distributively compressed
correlated images are jointly decoded in order to improve the reconstruction
quality of all the compressed images. We consider a scenario where the images
captured at different viewpoints are encoded independently using common coding
solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among
different cameras. A central decoder first estimates the underlying correlation
model from the independently compressed images; this model is then used for the
joint signal recovery. The joint reconstruction is then cast as a constrained convex
optimization problem that reconstructs total-variation (TV) smooth images that
comply with the estimated correlation model. At the same time, we add
constraints that force the reconstructed images to be consistent with their
compressed versions. We show by experiments that the proposed joint
reconstruction scheme outperforms independent reconstruction in terms of image
quality, for a given target bit rate. In addition, the decoding performance of
our proposed algorithm compares advantageously to state-of-the-art distributed
coding schemes based on disparity learning and on the DISCOVER scheme.
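The TV-regularized recovery described above can be illustrated with a minimal single-image sketch. The snippet below is not the paper's constrained formulation (which additionally enforces the estimated correlation model and consistency with the compressed versions); it is a hypothetical penalized variant, minimizing lam*TV(x) + 0.5*||x - decoded||^2 by gradient descent on a smoothed TV term. All names (`tv_smooth`, `tv_denoise`) are illustrative.

```python
import numpy as np

def tv_smooth(img, eps=1e-3):
    # Smoothed isotropic total variation of a 2-D image.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sum(np.sqrt(gx**2 + gy**2 + eps**2))

def tv_denoise(decoded, lam=0.1, step=0.05, iters=200, eps=1e-3):
    """Gradient descent on lam*TV(x) + 0.5*||x - decoded||^2."""
    x = decoded.copy()
    for _ in range(iters):
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # The TV gradient is (minus) the divergence of the normalized
        # gradient field; backward differences via roll approximate it.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * (lam * (-div) + (x - decoded))
    return x
```

In the paper's setting this fidelity term would be replaced by hard consistency constraints with the quantized/compressed versions of each view, coupled across views through the estimated correlation model.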
Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements
This paper addresses the problem of distributed coding of images whose
correlation is driven by the motion of objects or positioning of the vision
sensors. It concentrates on the problem where images are encoded with
compressed linear measurements. We propose a geometry-based correlation model
in order to describe the common information in pairs of images. We assume that
the constitutive components of natural images can be captured by visual
features that undergo local transformations (e.g., translation) in different
images. We first identify prominent visual features by computing a sparse
approximation of a reference image with a dictionary of geometric basis
functions. We then pose a regularized optimization problem to estimate the
corresponding features in correlated images given by quantized linear
measurements. The estimated features have to comply with the compressed
information and to represent a consistent transformation between images. The
correlation model is given by the relative geometric transformations between
corresponding features. We then propose an efficient joint decoding algorithm
that estimates the compressed images such that they stay consistent with both
the quantized measurements and the correlation model. Experimental results show
that the proposed algorithm effectively estimates the correlation between
images in multi-view datasets. In addition, the proposed algorithm provides
effective decoding performance that compares advantageously to independent
coding solutions as well as state-of-the-art distributed coding schemes based
on disparity learning.
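The feature-identification step above, a sparse approximation of a reference image over a dictionary, can be sketched with generic matching pursuit. The paper's dictionary of geometric basis functions is replaced here by an arbitrary unit-norm dictionary, and `matching_pursuit` is an illustrative name, not the authors' code.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy sparse approximation: at each step, pick the atom most
    correlated with the current residual. Columns of `dictionary` are
    assumed to be unit-norm."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual          # correlation with every atom
        k = int(np.argmax(np.abs(corr)))        # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # peel it off the residual
    return coeffs, residual
```

In the paper, the selected atoms are geometric (e.g., localized, oriented) so that the corresponding features in the other views can be searched as local transformations of these atoms.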
Efficient Motion Field Interpolation Method for Wyner-Ziv Video Coding
Wyner-Ziv video coding can reduce video encoding complexity by shifting the motion estimation procedure from the encoder to the decoder. Among the many motion estimation methods, the expectation-maximization algorithm is the most effective. Unfortunately, the block-based motion estimation used in this algorithm leaves the motion field profile bounded by the granularity of the block size. Nearest-neighbor and bilinear interpolation methods have already been applied in multiview image coding to handle a similar problem. This paper evaluates the performance of both interpolation methods in a transform-domain Wyner-Ziv video codec. Results show that bilinear interpolation is effective only for high-motion video sequences: in this scenario, it achieves bitrate savings of up to 3.29%, 0.2 dB higher PSNR, and 12.30% higher decoding complexity compared to nearest-neighbor. For low-motion video content, the bitrate saving is only up to 0.82%, with almost the same PSNR, while decoding complexity increases by up to 10.32%.
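A minimal sketch of the two upsampling strategies the abstract compares, applied to a per-block motion field, might look as follows. This is an illustrative implementation under assumed conventions (motion vectors defined at block centers, clamped borders), not the codec's actual interpolation stage.

```python
import numpy as np

def upsample_motion_field(block_mv, block_size, mode="bilinear"):
    """Upsample a per-block motion field (H_b x W_b x 2) to per-pixel
    resolution. Nearest-neighbor repeats each block's vector; bilinear
    interpolates between block centers, smoothing the block-granular
    profile the abstract describes."""
    hb, wb, _ = block_mv.shape
    H, W = hb * block_size, wb * block_size
    # Pixel-center positions expressed in block-grid coordinates.
    ys = (np.arange(H) + 0.5) / block_size - 0.5
    xs = (np.arange(W) + 0.5) / block_size - 0.5
    if mode == "nearest":
        yi = np.clip(np.round(ys).astype(int), 0, hb - 1)
        xi = np.clip(np.round(xs).astype(int), 0, wb - 1)
        return block_mv[yi][:, xi]
    y0 = np.clip(np.floor(ys).astype(int), 0, hb - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, wb - 1)
    y1 = np.clip(y0 + 1, 0, hb - 1)
    x1 = np.clip(x0 + 1, 0, wb - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None, None]
    wx = np.clip(xs - x0, 0, 1)[None, :, None]
    top = (1 - wx) * block_mv[y0][:, x0] + wx * block_mv[y0][:, x1]
    bot = (1 - wx) * block_mv[y1][:, x0] + wx * block_mv[y1][:, x1]
    return (1 - wy) * top + wy * bot
```

The bilinear path costs roughly four reads and three lerps per pixel instead of one read, which is consistent with the abstract's observation that its gains come at higher decoding complexity.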
Advances in Stereo Vision
Stereopsis is a vision process whose geometrical foundation has been known for a long time, ever since Wheatstone's experiments in the 19th century. Nevertheless, its inner workings in biological organisms, as well as its emulation by computer systems, have proven elusive, and stereo vision remains a very active and challenging area of research nowadays. In this volume we have attempted to present a limited but relevant sample of the work being carried out in stereo vision, covering significant aspects both from the applied and from the theoretical standpoints.
LDMIC: Learning-based Distributed Multi-view Image Coding
Multi-view image compression plays a critical role in 3D-related
applications. Existing methods adopt a predictive coding architecture, which
requires joint encoding to compress the corresponding disparity as well as
residual information. This demands collaboration among cameras and enforces the
epipolar geometric constraint between different views, which makes it
challenging to deploy these methods in distributed camera systems with randomly
overlapping fields of view. Meanwhile, distributed source coding theory
indicates that efficient data compression of correlated sources can be achieved
by independent encoding and joint decoding, which motivates us to design a
learning-based distributed multi-view image coding (LDMIC) framework. With
independent encoders, LDMIC introduces a simple yet effective joint context
transfer module based on the cross-attention mechanism at the decoder to
effectively capture the global inter-view correlations, which is insensitive to
the geometric relationships between images. Experimental results show that
LDMIC significantly outperforms both traditional and learning-based MIC methods
while enjoying fast encoding speed. Code will be released at
https://github.com/Xinjie-Q/LDMIC.
Comment: Accepted by ICLR 202
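The joint context transfer module is described as cross-attention at the decoder; the bare scaled dot-product form of that mechanism can be sketched in NumPy as follows. Learned query/key/value projections, multi-head structure, and everything LDMIC-specific are omitted, and the names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    """Each position of one view's feature map (N_q x d, flattened)
    attends over all positions of the other view (N_c x d). Because
    every query sees every context position, no epipolar alignment
    between the views is assumed -- the property the abstract calls
    insensitivity to geometric relationships."""
    d = query_feats.shape[1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (N_q, N_c)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ context_feats, weights
```

In a full model the returned aggregate would be fused back into the query view's decoder features; here it simply shows how global inter-view correlation can be gathered without any disparity estimate.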
Summative Stereoscopic Image Compression using Arithmetic Coding
Image compression aims at reducing the number of bits required for image representation, in order to save storage space and speed up transmission over a network. The reduction in size makes it possible to store more images on disk and lowers transfer time on the data network. A stereoscopic image is a three-dimensional (3D) image perceived by the human brain as the fusion of two images, presented to the left and right eyes with distinct phases. However, storing these images takes twice the space of a single image, which motivates this novel approach called Summative Stereoscopic Image Compression using Arithmetic Coding (S2ICAC): the difference and average of the stereo pair are calculated, quantized in the lossy approach and left unquantized in the lossless approach, and arithmetic coding is applied. The experimental analysis indicates that the proposed method achieves a high compression ratio and a high PSNR value. The proposed method is also compared with the JPEG 2000 Position Based Coding Scheme (JPEG 2000 PBCS) and Stereoscopic Image Compression using Huffman Coding (SICHC); it is observed that S2ICAC outperforms both JPEG 2000 PBCS and SICHC.
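The average/difference decorrelation at the heart of the approach can be sketched in its lossless (unquantized) form. The arithmetic-coding stage is omitted, and the integer rounding convention below is an assumption chosen so that the transform inverts exactly; it is not necessarily the paper's convention.

```python
import numpy as np

def stereo_to_avg_diff(left, right):
    """Decorrelate a stereo pair into an average plane and a difference
    plane (integer arithmetic; the entropy-coding stage is omitted)."""
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    avg = (left + right) // 2   # floor of the mean
    diff = left - right         # small for well-correlated views
    return avg, diff

def avg_diff_to_stereo(avg, diff):
    """Exact inverse of the transform above. The (diff + 1) // 2 term
    restores the bit lost by the floored average, using the fact that
    left + right and left - right share the same parity."""
    left = avg + ((diff + 1) // 2)
    right = left - diff
    return left, right
```

Because the two views are highly correlated, the difference plane is sparse and low-entropy, which is what the subsequent arithmetic coder exploits; in the lossy mode described in the abstract, both planes would be quantized before coding.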