Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing
Free-viewpoint video conferencing allows a participant to observe the remote
3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint
image is commonly synthesized using two pairs of transmitted texture and depth
maps from two neighboring captured viewpoints via depth-image-based rendering
(DIBR). To maintain high quality of synthesized images, it is imperative to
contain the adverse effects of network packet losses that may arise during
texture and depth video transmission. Towards this end, we develop an
integrated approach that exploits the representation redundancy inherent in the
multiple streamed videos: a voxel in the 3D scene that is visible to both captured
views is sampled and coded twice, once in each view. In particular, at the receiver we
first develop an error concealment strategy that adaptively blends
corresponding pixels in the two captured views during DIBR, so that pixels from
the more reliable transmitted view are weighted more heavily. We then couple it
with a sender-side optimization of reference picture selection (RPS) during
real-time video coding, so that blocks containing samples of voxels that are
visible in both views are more error-resiliently coded in one view only, given
adaptive blending will erase errors in the other view. Further, synthesized
view distortion sensitivities to texture versus depth errors are analyzed, so
that relative importance of texture and depth code blocks can be computed for
system-wide RPS optimization. Experimental results show that the proposed
scheme can outperform the use of a traditional feedback channel by up to 0.82
dB on average at 8% packet loss rate, and by as much as 3 dB for particular
frames.
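The adaptive blending described above can be sketched as a per-pixel weighted average of the two DIBR-warped candidates, with weights driven by each view's transmission reliability. The function name, the reliability inputs, and the normalized-weight rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def blend_views(pix_a, pix_b, rel_a, rel_b):
    """Blend corresponding pixels projected from two captured views.

    pix_a, pix_b : candidate pixel values warped from views A and B via DIBR
    rel_a, rel_b : reliability estimates in [0, 1] (e.g. lower for a view
                   whose block was lost in transit and concealed)
    The weighting rule is a hypothetical sketch: the more reliably
    transmitted view simply receives proportionally more weight.
    """
    w_a = rel_a / (rel_a + rel_b + 1e-12)  # normalized weight for view A
    return w_a * pix_a + (1.0 - w_a) * pix_b

# toy usage: view A arrived intact (rel=1.0), view B was concealed (rel=0.25)
a = np.array([100.0, 120.0])
b = np.array([140.0, 160.0])
out = blend_views(a, b, rel_a=1.0, rel_b=0.25)
```

With these inputs the intact view contributes 80% of each blended pixel, which is the mechanism that lets the sender code a doubly-visible voxel robustly in only one view.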
Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems
Multi-object astronomical adaptive optics (MOAO) is now a mature wide-field
observation mode that enlarges the adaptive-optics-corrected field at a few
specific locations spread over tens of arc-minutes.
The work-scope provided by open-loop tomography and pupil conjugation is
amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation
aiming to provide enhanced correction across the field with improved
performance over static reconstruction methods and less stringent computational
complexity scaling laws.
Starting from our previous work [1], we use stochastic time-progression
models coupled to approximate sparse measurement operators to outline a
suitable SA-LQG formulation capable of delivering near optimal correction.
Under the spatio-angular framework the wave-fronts are never explicitly
estimated in the volume, providing considerable computational savings on
10m-class telescopes and beyond.
We find that for Raven, a 10m-class MOAO system with two science channels,
the SA-LQG improves the limiting magnitude by two stellar magnitudes when both
Strehl-ratio and Ensquared-energy are used as figures of merit. The
sky-coverage is therefore improved by a factor of 5.
Comment: 30 pages, 7 figures, submitted to Applied Optics
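The core of an LQG controller of this kind is a predict/update recursion: a stochastic time-progression model propagates the wavefront-mode state forward, and a measurement operator folds in new wavefront-sensor slopes. The sketch below is a generic discrete-time Kalman recursion under an assumed AR(1) temporal model, not the paper's spatio-angular formulation; all matrix names and sizes are illustrative assumptions:

```python
import numpy as np

def kalman_predict_update(x, P, y, A, C, Q, R):
    """One predict/update cycle of a linear-quadratic-Gaussian estimator.

    x, P : prior state (wavefront modes) and its covariance
    A    : stochastic time-progression model (here a damped AR(1))
    C    : measurement operator mapping modes to sensor slopes
    Q, R : process and measurement noise covariances
    Generic Kalman filter sketch; the paper's spatio-angular version
    additionally exploits sparse approximate operators for speed.
    """
    # predict with the temporal model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update with the new wavefront-sensor measurement y
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# toy usage with assumed dimensions: 4 modes, 6 slope measurements
rng = np.random.default_rng(0)
n, m = 4, 6
A = 0.99 * np.eye(n)                 # assumed AR(1) temporal model
C = rng.standard_normal((m, n))      # assumed measurement operator
Q = 0.01 * np.eye(n)
R = 0.1 * np.eye(m)
x, P = np.zeros(n), np.eye(n)
y = C @ np.ones(n)                   # synthetic slope measurement
x, P = kalman_predict_update(x, P, y, A, C, Q, R)
```

Because the recursion never reconstructs the turbulent volume itself, state dimension stays tied to the corrected directions, which is the source of the favorable complexity scaling claimed above.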
- …