Overview of Image Processing and Various Compression Schemes
Image processing is a key research area. Compression of images is required whenever images must be transmitted or stored. The growth of multimedia demand strains network bandwidth and memory storage devices, and advanced imaging requires handling extensive amounts of digitized information. Data compression is therefore needed to reduce data redundancy, saving hardware space and transmission bandwidth. Various techniques exist for image compression; some of them are discussed in this paper.
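The redundancy reduction mentioned above can be illustrated with one of the simplest compression schemes, run-length encoding, which exploits runs of identical pixel values. This is a minimal sketch for illustration only; the function names are our own, not from any paper surveyed here.

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values into (value, count) pairs."""
    runs = []
    for value in pixels:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Reconstruct the original pixel sequence from (value, count) runs."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
encoded = rle_encode(row)
print(encoded)  # [(0, 4), (255, 2), (0, 3)]
assert rle_decode(encoded) == row
```

Nine pixel values compress to three runs here; real image codecs combine such entropy-reducing steps with transforms and quantization.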
Enhancement layer inter frame coding for 3D dynamic point clouds
In recent years, Virtual Reality (VR) and Augmented Reality (AR) applications have seen a drastic increase in commercial popularity. Different representations have been used to create 3D reconstructions for AR and VR. Point clouds are one such representation, characterized by their simplicity and versatility.
Volumetric 3D Point Cloud Attribute Compression: Learned polynomial bilateral filter for prediction
We extend a previous study on a 3D point cloud attribute compression scheme that uses a volumetric approach: given a target volumetric attribute function f, we quantize and encode parameters θ that characterize f at the encoder, for reconstruction of f at known 3D points at the decoder. Specifically, the parameters θ are quantized coefficients of B-spline basis vectors (for order p) that span the function space at a particular resolution l, coded from coarse to fine resolutions for scalability. In this work, we focus on the prediction of finer-grained coefficients given coarser-grained ones by learning the parameters of a polynomial bilateral filter (PBF) from data. The PBF is a pseudo-linear, signal-dependent filter with a graph spectral interpretation common in the graph signal processing (GSP) field. We demonstrate PBF's predictive performance over a linear predictor inspired by MPEG standardization across a wide range of point cloud datasets.
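The core idea of a bilateral filter on point cloud attributes can be sketched as follows: each filtered value is a weighted average of neighbors, where weights fall off with both spatial distance and attribute (range) difference, making the filter signal-dependent. A polynomial of the filter then combines successive filter applications with learned coefficients. The function names, Gaussian kernels, and sigma parameters below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def bilateral_filter(points, attrs, sigma_d=1.0, sigma_r=10.0):
    """Signal-dependent bilateral filter: weight each neighbor by
    spatial closeness AND attribute similarity, then average."""
    attrs = np.asarray(attrs, dtype=float)
    out = np.empty(len(points))
    for i in range(len(points)):
        d2 = np.sum((points - points[i]) ** 2, axis=1)   # spatial distances
        r2 = (attrs - attrs[i]) ** 2                     # attribute (range) distances
        w = np.exp(-d2 / (2 * sigma_d**2)) * np.exp(-r2 / (2 * sigma_r**2))
        out[i] = np.dot(w, attrs) / np.sum(w)            # normalized weighted average
    return out

def polynomial_bilateral_predict(points, attrs, coeffs):
    """Polynomial of the bilateral filter operator A:
    prediction = sum_k coeffs[k] * A^k(attrs), with coeffs learned from data."""
    x = np.asarray(attrs, dtype=float)
    pred = np.zeros_like(x)
    for a_k in coeffs:
        pred += a_k * x
        x = bilateral_filter(points, x)   # next power of the operator
    return pred
```

Because the weights are normalized, a constant attribute signal passes through unchanged, which is a basic sanity check for any such predictor.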
Learned Nonlinear Predictor for Critically Sampled 3D Point Cloud Attribute Compression
We study 3D point cloud attribute compression via a volumetric approach: assuming point cloud geometry is known at both encoder and decoder, parameters θ of a continuous attribute function f are quantized to θ̂ and encoded, so that discrete samples of f can be recovered at known 3D points at the decoder. Specifically, we consider a nested sequence of function subspaces, where each subspace at level l is a family of functions spanned by B-spline basis functions of order p; the projection of f on the level-l subspace is encoded as low-pass coefficients, and the residual function in the orthogonal complement (whose direct sum with the level-l subspace gives the level-(l+1) subspace) is encoded as high-pass coefficients. In this paper, to improve coding performance over [1], we study predicting the projection at level l+1 given the projection at level l and encoding of the residual for the p = 1 case (RAHT(1)). For the prediction, we formalize RAHT(1) linear prediction in MPEG-PCC in a theoretical framework, and propose a new nonlinear predictor using a polynomial of the bilateral filter. We derive equations to efficiently compute the critically sampled high-pass coefficients amenable to encoding. We optimize parameters of the resulting feed-forward network on a large training set of point clouds by minimizing a rate-distortion Lagrangian. Experimental results show that our improved framework outperformed the MPEG G-PCC predictor in bit rate reduction.
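The low-pass/high-pass decomposition referenced above follows the Region-Adaptive Hierarchical Transform (RAHT) used in MPEG G-PCC. Its basic step is a weighted butterfly that merges two coefficients into one low-pass (weighted average) and one critically sampled high-pass (weighted difference) coefficient. This is a minimal sketch of that standard butterfly for scalar attributes; the function name is ours, and the learned predictor in the abstract augments this plain transform rather than being shown here.

```python
import math

def raht_merge(c1, w1, c2, w2):
    """One RAHT butterfly: merge coefficients c1, c2 with occupancy
    weights w1, w2 into low-pass and high-pass coefficients via an
    orthonormal (energy-preserving) rotation."""
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    low = a * c1 + b * c2          # weighted average (kept and merged upward)
    high = -b * c1 + a * c2        # weighted difference (entropy coded)
    return low, high, w1 + w2      # merged node carries the summed weight

low, high, w = raht_merge(3.0, 1.0, 5.0, 1.0)
# low**2 + high**2 ≈ 3**2 + 5**2 (energy preserved by the rotation)
```

Since a² + b² = 1, each butterfly is a rotation, so signal energy is preserved and the high-pass coefficients are exactly the critically sampled residuals the abstract refers to.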