REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
Reconstructing dynamic 3D garment surfaces with open boundaries from
monocular videos is an important problem as it provides a practical and
low-cost solution for clothes digitization. Recent neural rendering methods
achieve high-quality dynamic clothed human reconstruction results from
monocular video, but these methods cannot separate the garment surface from the
body. Moreover, although existing garment reconstruction methods based on
feature curve representations demonstrate impressive results on single
images, they struggle to generate temporally consistent surfaces from video
input. To address these limitations, in
this paper, we formulate this task as an optimization problem of 3D garment
feature curves and surface reconstruction from monocular video. We introduce a
novel approach, called REC-MV, to jointly optimize the explicit feature curves
and the implicit signed distance field (SDF) of the garments. Then the open
garment meshes can be extracted via garment template registration in the
canonical space. Experiments on multiple casually captured datasets show that
our approach outperforms existing methods and can produce high-quality dynamic
garment surfaces. The source code is available at
https://github.com/GAP-LAB-CUHK-SZ/REC-MV.
Comment: CVPR 2023; Project Page: https://lingtengqiu.github.io/2023/REC-MV
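To make the joint optimization concrete, the following is a minimal PyTorch-style sketch, assuming a small SDF MLP and a learnable set of 3D curve control points; all names, loss terms, and weights here are illustrative assumptions, not the released REC-MV code.

import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Small MLP mapping a 3D point to a signed distance value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.mlp(x)

sdf = SDFNet()
# Explicit feature curve: 64 learnable control points on a garment boundary
# (e.g. a hemline or neckline) in the canonical space.
curve = nn.Parameter(torch.randn(64, 3) * 0.1)

opt = torch.optim.Adam(list(sdf.parameters()) + [curve], lr=1e-4)

for step in range(1000):
    opt.zero_grad()
    # Curves should lie on the garment surface, i.e. on the SDF zero level set.
    on_surface = sdf(curve).abs().mean()
    # Eikonal regularizer keeps the SDF a valid distance field (|grad| ~= 1).
    pts = (torch.rand(512, 3) * 2 - 1).requires_grad_(True)
    grad = torch.autograd.grad(sdf(pts).sum(), pts, create_graph=True)[0]
    eikonal = ((grad.norm(dim=-1) - 1) ** 2).mean()
    # Rendering/silhouette terms from the video frames are omitted here.
    loss = on_surface + 0.1 * eikonal
    loss.backward()
    opt.step()

The point of the sketch is that gradients from shared losses flow into both the explicit curve points and the implicit SDF, keeping the two representations consistent; the open meshes are then extracted, as the abstract describes, via template registration in canonical space.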
MegaParallax: Casual 360° Panoramas with Motion Parallax
The ubiquity of smart mobile devices, such as phones and tablets, enables users to casually capture 360° panoramas with a single camera sweep to share and relive experiences. However, panoramas lack motion parallax as they do not provide different views for different viewpoints. The motion parallax induced by translational head motion is a crucial depth cue in daily life. Alternatives, such as omnidirectional stereo panoramas, provide different views for each eye (binocular disparity), but they also lack motion parallax as the left and right eye panoramas are stitched statically. Methods based on explicit scene geometry reconstruct textured 3D geometry, which provides motion parallax but suffers from visible reconstruction artefacts. The core of our method is a novel multi-perspective panorama representation, which can be casually captured and rendered with motion parallax for each eye on the fly. This provides a more realistic perception of panoramic environments, which is particularly useful for virtual reality applications. Our approach uses a single consumer video camera to acquire 200–400 views of a real 360° environment with a single sweep. By using novel-view synthesis with flow-based blending, we show how to turn these input views into an enriched 360° panoramic experience that can be explored in real time, without relying on potentially unreliable reconstruction of scene geometry. We compare our results with existing omnidirectional stereo and image-based rendering methods to demonstrate the benefit of our approach, which is the first to enable casual consumers to capture and view high-quality 360° panoramas with motion parallax.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 66599
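As a rough illustration of flow-based blending, the sketch below warps the two captured views nearest to the desired viewpoint part-way along the optical flow between them before cross-fading; the nearest-neighbor warp, flow convention, and function names are simplifying assumptions, not the paper's actual renderer.

import numpy as np

def blend_views(left_img, right_img, flow_lr, alpha):
    """Synthesize an in-between view from two neighboring captured views.

    left_img, right_img : (H, W, 3) float arrays, neighboring views on the sweep.
    flow_lr             : (H, W, 2) optical flow from left to right, in pixels.
    alpha               : blend factor in [0, 1]; 0 = left view, 1 = right view.

    Warping each view part-way along the flow keeps scene content aligned,
    which suppresses the ghosting a naive cross-fade would produce.
    """
    h, w = left_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    def warp(img, dx, dy):
        # Nearest-neighbor backward warp (a real renderer would use bilinear).
        xi = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
        yi = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
        return img[yi, xi]

    # I_t(x) ~= I_left(x - alpha*F(x)) and I_t(x) ~= I_right(x + (1-alpha)*F(x)).
    left_w = warp(left_img, -alpha * flow_lr[..., 0], -alpha * flow_lr[..., 1])
    right_w = warp(right_img, (1 - alpha) * flow_lr[..., 0],
                   (1 - alpha) * flow_lr[..., 1])
    return (1 - alpha) * left_w + alpha * right_w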
OmniPhotos: Casual 360° VR Photography
Virtual reality headsets are becoming increasingly popular, yet it remains difficult for casual users to capture immersive 360° VR panoramas. State-of-the-art approaches require capture times of usually far more than a minute and are often limited in their supported range of head motion. We introduce OmniPhotos, a novel approach for quickly and casually capturing high-quality 360° panoramas with motion parallax. Our approach requires a single sweep with a consumer 360° video camera as input, which takes less than 3 seconds to capture with a rotating selfie stick or 10 seconds handheld. This is an order of magnitude faster than any previous VR photography approach supporting motion parallax. We improve the visual rendering quality of our OmniPhotos by alleviating vertical distortion using a novel deformable proxy geometry, which we fit to a sparse 3D reconstruction of captured scenes. In addition, the 360° input views significantly expand the available viewing area, and thus the range of motion, compared to previous approaches. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 66599
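The deformable proxy idea can be sketched as a simple fitting problem: choose the proxy's per-direction radii so that it passes close to the sparse reconstructed points, with a smoothness penalty keeping the surface well-behaved. The azimuth-binned parameterization below is a deliberately simplified stand-in for the paper's deformable proxy geometry; every name, weight, and detail is hypothetical.

import numpy as np

def fit_radial_proxy(points, n_bins=72, smooth=10.0, iters=500, lr=0.05):
    """Fit per-azimuth radii of a simple proxy to sparse 3D points.

    points : (N, 3) sparse reconstruction, with the camera sweep at the origin.
    Returns (n_bins,) radii, one per azimuth bin around the viewer.
    """
    az = np.arctan2(points[:, 1], points[:, 0])      # azimuth of each point
    r_obs = np.linalg.norm(points[:, :2], axis=1)    # observed radial distance
    bins = ((az + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    radii = np.full(n_bins, r_obs.mean())            # init: mean scene radius

    for _ in range(iters):
        # Data term: pull each bin's radius toward the points falling in it.
        grad = np.zeros(n_bins)
        np.add.at(grad, bins, radii[bins] - r_obs)
        # Smoothness term: discrete Laplacian penalizes jagged neighbors.
        lap = 2 * radii - np.roll(radii, 1) - np.roll(radii, -1)
        radii -= lr * (grad / max(len(points), 1) + smooth * lap)
    return radii

Rendering against such a fitted proxy, rather than a fixed-radius sphere or cylinder, is what alleviates the vertical distortion the abstract mentions, since image rays hit the proxy at depths close to the true scene depth.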