4 research outputs found
HITNet: Hierarchical Iterative Tile Refinement Network for Real-time Stereo Matching
This paper presents HITNet, a novel neural network architecture for real-time
stereo matching. Contrary to many recent neural network approaches that operate
on a full cost volume and rely on 3D convolutions, our approach does not
explicitly build a volume and instead relies on a fast multi-resolution
initialization step, differentiable 2D geometric propagation and warping
mechanisms to infer disparity hypotheses. To achieve a high level of accuracy,
our network not only geometrically reasons about disparities but also infers
slanted plane hypotheses, allowing it to perform geometric warping and
upsampling operations more accurately. Our architecture is inherently
multi-resolution, allowing information to propagate across levels. Multiple
experiments demonstrate the effectiveness of the proposed approach at a fraction of
the computation required by state-of-the-art methods. At the time of writing,
HITNet ranks 1st-3rd on all the metrics published on the ETH3D website for
two-view stereo, ranks 1st on most of the metrics among all end-to-end learning
approaches on Middlebury-v3, and ranks 1st on the popular KITTI 2012 and 2015
benchmarks among published methods faster than 100 ms.
Comment: The pretrained models used for submission to benchmarks and sample
evaluation scripts can be found at
https://github.com/google-research/google-research/tree/master/hitne
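The abstract's key idea of replacing a fronto-parallel disparity per tile with a slanted-plane hypothesis can be illustrated with a minimal sketch. The function names and the 4x4 tile size are illustrative assumptions, not HITNet's actual implementation: a tile's hypothesis (d, dx, dy) gives every pixel its own disparity, which is then used to warp the right image toward the left.

```python
import numpy as np

def tile_disparities(plane, tile_size=4):
    """Per-pixel disparities for one tile from a slanted-plane hypothesis
    (d, dx, dy): d(x, y) = d + dx*x + dy*y, with (x, y) offsets measured
    from the tile centre. A fronto-parallel tile is the dx = dy = 0 case."""
    d, dx, dy = plane
    off = np.arange(tile_size) - (tile_size - 1) / 2.0
    xs, ys = np.meshgrid(off, off)
    return d + dx * xs + dy * ys

def warp_tile(right_row, x0, disparities):
    """Geometric warping for one tile: sample the right image row at
    x - d(x, y) for each tile pixel, with linear interpolation."""
    tile_size = disparities.shape[0]
    xs = x0 + np.tile(np.arange(tile_size), (tile_size, 1))
    src = xs - disparities
    lo = np.clip(np.floor(src).astype(int), 0, right_row.shape[-1] - 2)
    frac = src - lo
    return (1.0 - frac) * right_row[lo] + frac * right_row[lo + 1]
```

A slanted hypothesis such as `(5.0, 1.0, 0.0)` yields disparities that vary linearly across the tile, which is what lets the warp (and subsequent upsampling) follow non-fronto-parallel surfaces.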
Fusion4D: Real-time Performance Capture of Challenging Scenes
We contribute a new pipeline for live multi-view performance capture, generating temporally coherent, high-quality reconstructions in real-time. Our algorithm supports both incremental reconstruction, improving the surface estimation over time, and parameterization of the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online-generated template or continually fuse depth data nonrigidly into a single reference model. Finally, we show geometric reconstruction results on par with offline methods that require orders of magnitude more processing time and many more RGBD cameras.
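The "continually fuse depth data into a reference model" baseline the abstract contrasts against builds on truncated signed distance function (TSDF) fusion. The sketch below is a minimal, assumed illustration of that basic building block for one camera ray (not Fusion4D's nonrigid pipeline, which additionally warps the voxels before fusing): each new depth sample updates a running weighted average of truncated signed distances.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, depth, voxel_z, trunc=0.05, max_weight=64.0):
    """One fusion step for voxels along a single camera ray.

    tsdf, weight : current per-voxel TSDF values and fusion weights
    depth        : observed depth along this ray (metres)
    voxel_z      : depth of each voxel centre along the ray (metres)
    """
    # Signed distance from voxel to observed surface, truncated to [-1, 1].
    sdf = np.clip(depth - voxel_z, -trunc, trunc) / trunc
    # Voxels far behind the observed surface are occluded: leave them alone.
    valid = depth - voxel_z > -trunc
    w_new = np.where(valid, 1.0, 0.0)
    fused = (tsdf * weight + sdf * w_new) / np.maximum(weight + w_new, 1e-9)
    weight_out = np.minimum(weight + w_new, max_weight)
    return np.where(valid, fused, tsdf), weight_out
```

Capping the weight at `max_weight` keeps the model responsive to change; a single evolving reference model of this kind is exactly what struggles with large motion and topology changes, motivating the paper's approach.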