Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing
Free-viewpoint video conferencing allows a participant to observe the remote
3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint
image is commonly synthesized using two pairs of transmitted texture and depth
maps from two neighboring captured viewpoints via depth-image-based rendering
(DIBR). To maintain high quality of synthesized images, it is imperative to
contain the adverse effects of network packet losses that may arise during
texture and depth video transmission. Towards this end, we develop an
integrated approach that exploits the representation redundancy inherent in the
multiple streamed videos: a voxel in the 3D scene visible to both captured views
is sampled and coded twice, once in each view. In particular, at the receiver we
first develop an error concealment strategy that adaptively blends
corresponding pixels in the two captured views during DIBR, so that pixels from
the more reliable transmitted view are weighted more heavily. We then couple it
with a sender-side optimization of reference picture selection (RPS) during
real-time video coding, so that blocks containing samples of voxels that are
visible in both views are coded with greater error resilience in one view only,
given that adaptive blending will erase errors in the other view. Further, the
sensitivity of synthesized view distortion to texture versus depth errors is
analyzed, so that the relative importance of texture and depth code blocks can be computed for
system-wide RPS optimization. Experimental results show that the proposed
scheme can outperform the use of a traditional feedback channel by up to 0.82
dB on average at 8% packet loss rate, and by as much as 3 dB for particular
frames.
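The receiver-side concealment described above can be pictured as a per-pixel weighted average during DIBR, where each view's weight reflects its estimated transmission reliability. The Python sketch below is illustrative only; the reliability maps and the simple linear weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def blend_views(warped_a, warped_b, rel_a, rel_b, hole_a, hole_b):
    """Adaptively blend two DIBR-warped texture views at the virtual viewpoint.

    warped_a, warped_b : HxWx3 textures warped from the two captured views
    rel_a, rel_b       : HxW per-pixel reliability in [0, 1] (e.g. derived from
                         packet-loss and error-propagation estimates -- assumed here)
    hole_a, hole_b     : HxW booleans marking disocclusion holes after warping
    """
    # Pixels from the more reliable view (and outside disocclusion holes) get more weight.
    w_a = rel_a * ~hole_a
    w_b = rel_b * ~hole_b
    total = w_a + w_b
    safe_total = np.where(total > 0, total, 1.0)
    blended = (w_a[..., None] * warped_a + w_b[..., None] * warped_b) / safe_total[..., None]
    # Where neither view contributes, fall back to zero (inpainting would be used in practice).
    return np.where(total[..., None] > 0, blended, 0.0)
```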
Bitstream-Corrupted Video Recovery: A Novel Benchmark Dataset and Method
The past decade has witnessed great strides in video recovery by specialist
technologies, like video inpainting, completion, and error concealment.
However, they typically simulate the missing content with manually designed error
masks, and thus fail to capture the realistic video loss that arises in video communication
(e.g., telepresence, live streaming, and internet video) and multimedia
forensics. To address this, we introduce the bitstream-corrupted video (BSCV)
benchmark, the first benchmark dataset with more than 28,000 video clips, which
can be used for bitstream-corrupted video recovery in the real world. The BSCV
is a collection of 1) a proposed three-parameter corruption model for video
bitstream, 2) a large-scale dataset containing rich error patterns, multiple
corruption levels, and flexible dataset branches, and 3) a plug-and-play module
in a video recovery framework that serves as a benchmark. We evaluate
state-of-the-art video inpainting methods on the BSCV dataset, demonstrating
existing approaches' limitations and our framework's advantages in solving the
bitstream-corrupted video recovery problem. The benchmark and dataset are
released at https://github.com/LIUTIGHE/BSCV-Dataset.
Comment: Accepted by NeurIPS Dataset and Benchmark Track 202
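As a rough illustration of what a parameterized bitstream corruption model could look like, the sketch below drops or flips byte ranges of an encoded stream under three hypothetical parameters (burst-start probability, mean burst length, and flip ratio). These names and the sampling scheme are assumptions for illustration; the actual BSCV corruption model is defined in the paper.

```python
import random

def corrupt_bitstream(bitstream: bytes, loss_prob: float,
                      mean_burst_len: int, flip_ratio: float, seed: int = 0) -> bytes:
    """Toy three-parameter corruption model (illustrative; not the BSCV definition).

    loss_prob      : probability that a corruption burst starts at a given byte
    mean_burst_len : average length (in bytes) of each corruption burst
    flip_ratio     : fraction of bytes inside a burst that are flipped rather than dropped
    """
    rng = random.Random(seed)
    out = bytearray()
    i, n = 0, len(bitstream)
    while i < n:
        if rng.random() < loss_prob:
            burst = max(1, int(rng.expovariate(1.0 / mean_burst_len)))
            for b in bitstream[i:i + burst]:
                if rng.random() < flip_ratio:
                    out.append(b ^ rng.randrange(1, 256))  # corrupt the byte
                # otherwise the byte is dropped entirely
            i += burst
        else:
            out.append(bitstream[i])
            i += 1
    return bytes(out)
```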
GRACE: Loss-Resilient Real-Time Video through Neural Codecs
In real-time video communication, retransmitting lost packets over
high-latency networks is not viable due to strict latency requirements. To
counter packet losses without retransmission, two primary strategies are
employed -- encoder-based forward error correction (FEC) and decoder-based
error concealment. The former encodes data with redundancy before transmission,
yet determining the optimal redundancy level in advance proves challenging. The
latter reconstructs video from partially received frames, but dividing a frame
into independently coded partitions inherently compromises compression
efficiency, and the lost information cannot be effectively recovered by the
decoder without adapting the encoder.
We present a loss-resilient real-time video system called GRACE, which
preserves the user's quality of experience (QoE) across a wide range of packet
losses through a new neural video codec. Central to GRACE's enhanced loss
resilience is its joint training of the neural encoder and decoder under a
spectrum of simulated packet losses. In lossless scenarios, GRACE achieves
video quality on par with conventional codecs (e.g., H.265). As the loss rate
escalates, GRACE exhibits a more graceful, less pronounced decline in quality,
consistently outperforming other loss-resilient schemes. Through extensive
evaluation on various videos and real network traces, we demonstrate that GRACE
reduces undecodable frames by 95% and stall duration by 90% compared with FEC,
while markedly boosting video quality over error concealment methods. In a user
study with 240 crowdsourced participants and 960 subjective ratings, GRACE
registers a 38% higher mean opinion score (MOS) than other baselines.
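The central idea of jointly training the neural encoder and decoder under simulated packet losses can be sketched as below; the latent layout, packetization stand-in, and loss distribution are placeholders, not GRACE's actual architecture or training recipe.

```python
import torch

def train_step(encoder, decoder, frame, optimizer, max_loss_rate=0.8):
    """One illustrative training step: encode, simulate packet loss on the latent, decode."""
    latent = encoder(frame)                      # assumed shape (B, C, H, W)
    b, c, h, w = latent.shape
    # Pretend each spatial location of the latent travels in its own packet,
    # and drop a random fraction of them (a stand-in for real packetization).
    loss_rate = torch.rand(1).item() * max_loss_rate
    keep_mask = (torch.rand(b, 1, h, w, device=latent.device) > loss_rate).float()
    recon = decoder(latent * keep_mask)
    loss = torch.nn.functional.mse_loss(recon, frame)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training over a spectrum of loss rates in this way is what lets the decoder produce a usable frame from whatever subset of packets actually arrives.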
Video modeling via implicit motion representations
Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on the analytical representation, we can develop algorithms for accomplishing particular video-related tasks. Video modeling therefore provides a foundation that bridges video data and related tasks. Although many video models have been proposed in past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such video modeling approaches, especially when handling complex motion or non-ideal observation data. In this thesis, we investigate video modeling without explicit motion representation: motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

Firstly, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window under the LMMSE criterion. By incorporating spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the STALL framework, video processing algorithms for a variety of applications can be developed by adjusting model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation and that the resampling and fusion steps help to enhance the modeling capability of STALL.

Secondly, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We then enforce a sparsity constraint on a higher-dimensional data array formed by packing the patches of each similar-patch set, and solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results on video error concealment, denoising, and deartifacting demonstrate its modeling capability.

Finally, we summarize the two proposed video modeling approaches and point out the prospects of implicit motion representations in applications ranging from low-level to high-level problems.
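A bare-bones version of the STALL idea, learning local linear-prediction coefficients from a space-time training window via least squares (the finite-sample analogue of the LMMSE criterion), might look like the sketch below. The window sizes, neighborhood layout, and use of a single previous frame are assumptions made for illustration, not the thesis's exact model.

```python
import numpy as np

def stall_predict(frames, t, y, x, support=1, train_radius=3):
    """Predict pixel (t, y, x) from its spatio-temporal neighbors in the previous frame.

    frames       : (T, H, W) grayscale video as a float array
    support      : half-width of the prediction support in frame t-1
    train_radius : half-width of the local training window around (y, x)
    """
    prev, cur = frames[t - 1], frames[t]
    A, b = [], []
    # Collect training pairs: a neighborhood in frame t-1 predicts the co-located pixel in frame t.
    for yy in range(y - train_radius, y + train_radius + 1):
        for xx in range(x - train_radius, x + train_radius + 1):
            patch = prev[yy - support:yy + support + 1, xx - support:xx + support + 1]
            if patch.shape != (2 * support + 1, 2 * support + 1):
                continue  # skip positions whose support falls outside the frame
            A.append(patch.ravel())
            b.append(cur[yy, xx])
    # Least-squares fit of the local regression coefficients.
    coeffs, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    target_patch = prev[y - support:y + support + 1, x - support:x + support + 1]
    return float(target_patch.ravel() @ coeffs)
```

In an application such as error concealment, the training window would be restricted to correctly received pixels; here the window is used as-is to keep the sketch short.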
Multi-View Frame Reconstruction with Conditional GAN
Multi-view frame reconstruction is an important problem particularly when
multiple frames are missing and past and future frames within the camera are
far apart from the missing ones. Realistic coherent frames can still be
reconstructed using corresponding frames from other overlapping cameras. We
propose an adversarial approach to learn the spatio-temporal representation of
the missing frame using conditional Generative Adversarial Network (cGAN). The
conditional input to each cGAN is the preceding or following frames within the
camera or the corresponding frames in other overlapping cameras, all of which
are merged together using a weighted average. Representations learned from
frames within the camera are given more weight compared to the ones learned
from other cameras when they are close to the missing frames and vice versa.
Experiments on two challenging datasets demonstrate that our framework produces
comparable results with the state-of-the-art reconstruction method in a single
camera and achieves promising performance in a multi-camera scenario.
Comment: 5 pages, 4 figures, 3 tables, Accepted at IEEE Global Conference on Signal and Information Processing, 201
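The weighted merging of conditional inputs described above can be illustrated roughly as follows; the exponential distance-based weighting and the feature-averaging step are assumptions made for this sketch, not necessarily the scheme used in the paper.

```python
import numpy as np

def merge_conditional_inputs(intra_feats, cross_feats, frame_gap, alpha=0.5):
    """Merge representations from the same camera and from overlapping cameras.

    intra_feats : list of feature arrays from preceding/following frames in the same camera
    cross_feats : list of feature arrays from corresponding frames in other cameras
    frame_gap   : temporal distance (in frames) to the nearest available same-camera frame
    alpha       : decay rate for the same-camera weight (assumed for illustration)
    """
    # Same-camera evidence is trusted more when the temporal gap is small,
    # and cross-camera evidence takes over as the gap grows.
    w_intra = np.exp(-alpha * frame_gap)
    w_cross = 1.0 - w_intra
    intra = np.mean(intra_feats, axis=0)
    cross = np.mean(cross_feats, axis=0)
    return w_intra * intra + w_cross * cross
```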