Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field
In this paper, we address the problem of simultaneous relighting and novel
view synthesis of a complex scene from multi-view images with a limited number
of light sources. We propose an analysis-synthesis approach called Relit-NeuLF.
Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first
leverages a two-plane light field representation to parameterize each ray in a
4D coordinate system, enabling efficient learning and inference. Then, we
recover the spatially-varying bidirectional reflectance distribution function
(SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to
map each ray to its SVBRDF components: albedo, normal, and roughness. Based on
the decomposed BRDF components and conditioning light directions, a RenderNet
learns to synthesize the color of the ray. To self-supervise the SVBRDF
decomposition, we encourage the predicted ray color to be close to the
physically-based rendering result using the microfacet model. Comprehensive
experiments demonstrate that the proposed method is efficient and effective on
both synthetic data and real-world human face data, and outperforms the
state of the art. Our code is publicly released at https://github.com/oppo-us-research/RelitNeuLF.
Comment: 10 pages
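For intuition, the two-plane ray parameterization that NeuLF builds on can be sketched in a few lines. The plane positions (z = 0 and z = 1) and the NumPy implementation below are illustrative assumptions, not the released code:

```python
import numpy as np

def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by its intersections (u, v) and (s, t)
    with two parallel planes z = z_uv and z = z_st.

    Assumes the ray is not parallel to the planes (direction[2] != 0).
    """
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_uv = (z_uv - o[2]) / d[2]          # ray parameter at the first plane
    t_st = (z_st - o[2]) / d[2]          # ray parameter at the second plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return np.array([u, v, s, t])        # 4D coordinate fed to the network

# A ray through the origin along +z pierces both planes at (0, 0).
print(two_plane_coords([0.0, 0.0, -1.0], [0.0, 0.0, 1.0]))  # -> [0. 0. 0. 0.]
```

Every ray thus maps to a fixed-size 4D coordinate, which is what makes per-ray learning and inference efficient.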
Progressive Multi-view Human Mesh Recovery with Self-Supervision
To date, little attention has been given to multi-view 3D human mesh
estimation, despite its real-life applicability (e.g., motion capture, sports
analysis) and robustness to single-view ambiguities. Existing solutions
typically suffer from poor generalization performance to new settings, largely
due to the limited diversity of image-mesh pairs in multi-view training data.
To address this shortcoming, prior work has explored the use of synthetic
images. However, beyond the usual visual gap between rendered and target data,
synthetic-data-driven multi-view estimators also tend to overfit to the camera
viewpoint distribution sampled during training, which usually differs from
real-world distributions. Tackling both challenges, we propose a novel
simulation-based training pipeline for multi-view human mesh recovery, which
(a) relies on intermediate 2D representations which are more robust to
synthetic-to-real domain gap; (b) leverages learnable calibration and
triangulation to adapt to more diversified camera setups; and (c) progressively
aggregates multi-view information in a canonical 3D space to remove ambiguities
in 2D representations. Through extensive benchmarking, we demonstrate the
superiority of the proposed solution especially for unseen in-the-wild
scenarios.
Comment: Accepted by AAAI202
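While the pipeline above learns calibration and triangulation, the underlying geometric operation has a classical counterpart: direct linear transform (DLT) triangulation of a 3D point from several calibrated views. The toy cameras and the NumPy solver below are assumptions for illustration only, not the paper's learnable module:

```python
import numpy as np

def triangulate_dlt(points_2d, cams):
    """Recover a 3D point from its 2D projections in several views.

    points_2d: list of (x, y) image coordinates, one per view.
    cams:      list of 3x4 camera projection matrices.
    """
    rows = []
    for (x, y), P in zip(points_2d, cams):
        rows.append(x * P[2] - P[0])   # each view contributes two
        rows.append(y * P[2] - P[1])   # linear constraints on X
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                         # null vector of A (homogeneous)
    return X[:3] / X[3]

# Two toy cameras: identity pose and a unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_dlt(obs, [P1, P2]))  # -> approx [0. 0. 5.]
```

Making the calibration and this aggregation learnable, as the paper proposes, is what lets the estimator adapt to camera setups not seen during training.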
PREF: Predictability Regularized Neural Motion Fields
Knowing the 3D motions in a dynamic scene is essential to many vision
applications. Recent progress has mainly focused on specific elements, such as
humans. In this paper, we leverage a neural motion
field for estimating the motion of all points in a multiview setting. Modeling
the motion from a dynamic scene with multiview data is challenging due to the
ambiguities in points of similar color and points with time-varying color. We
propose to regularize the estimated motion to be predictable. If the motion
from previous frames is known, then the motion in the near future should be
predictable. Therefore, we introduce a predictability regularization by first
conditioning the estimated motion on latent embeddings, then by adopting a
predictor network to enforce predictability on the embeddings. The proposed
framework PREF (Predictability REgularized Fields) achieves on par or better
results than state-of-the-art neural motion field-based dynamic scene
representation methods, while requiring no prior knowledge of the scene.
Comment: Accepted at ECCV 2022 (oral). Paper + supplementary material
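The predictability regularization idea, namely that a predictor trained on past latent embeddings should be able to forecast the next one, can be illustrated on a linear toy problem. The rotation dynamics, the linear predictor, and the plain gradient descent below are illustrative assumptions, not PREF's actual networks:

```python
import numpy as np

# Toy "motion" embeddings that evolve under dynamics A (a small rotation)
# unknown to the predictor.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
e = [np.array([1.0, 0.0])]
for _ in range(200):
    e.append(A @ e[-1])
E = np.stack(e)                          # (T, 2) embedding sequence

# Train a linear predictor W so that W e_t approximates e_{t+1};
# the residual plays the role of the predictability loss.
W = np.zeros((2, 2))
lr = 0.05
for _ in range(500):
    err = E[:-1] @ W.T - E[1:]
    W -= lr * err.T @ E[:-1] / len(err)  # mean-squared-error gradient

loss = np.mean((E[:-1] @ W.T - E[1:]) ** 2)
print(loss < 1e-6)                       # the embeddings are "predictable"
```

In PREF the same residual is used as a training signal on the motion field's embeddings, which suppresses motion explanations that are erratic from frame to frame.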
Study the Influence of Surface Morphology and Lubrication Pressure on Tribological Behavior of 316L–PTFE Friction Interface in High-Water-Based Fluid
Because of the low viscosity of high-water-based fluids, the intense wear and leakage of key friction pairs represent a bottleneck to the wide application of the high-water-based hydraulic motor in engineering machinery. In this work, based on the common characteristics of plane friction pairs, friction experiments on a 316L stainless steel (316L)–polytetrafluoroethylene (PTFE) friction pair under various working conditions were carried out using a self-designed friction experimental system with fluid lubrication. The influence of lubrication pressure and surface morphology on the 316L–PTFE friction pair was investigated both experimentally and theoretically. The experimental and numerical results indicated that increasing the lubrication pressure reduced the surface wear of the PTFE sample, but also increased the leakage of the 316L–PTFE friction pair. Under low lubrication pressure, an effective fluid lubrication film could not form in the 316L–PTFE friction pair, which caused severe wear at the friction interface. A smooth 316L surface was conducive to the formation of a high-water-based fluid lubrication film at the 316L–PTFE friction interface. The pressure distribution of the high-water-based fluid lubrication film in the 316L–PTFE friction pair was also obtained in Fluent. The PTFE surface was easily worn when the lubrication film in the friction pair was too thin or uneven. Friction and wear were markedly improved when the normal load was balanced by the load-carrying capacity of the high-water-based fluid lubrication film.
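The film-pressure computation above was done in Fluent; as a rough one-dimensional companion, the incompressible Reynolds equation for a converging wedge film can be solved with a short finite-difference script. All geometry and fluid parameters below are made-up illustrative values, not the paper's test conditions:

```python
import numpy as np

# 1D Reynolds equation for a converging wedge film:
#   d/dx ( h^3 dp/dx ) = 6 * mu * U * dh/dx,   p(0) = p(L) = 0.
L, n = 0.01, 201                      # film length [m], grid nodes
mu, U = 1e-3, 1.0                     # water-like viscosity [Pa s], sliding speed [m/s]
x = np.linspace(0.0, L, n)
h = 2e-5 - 1e-5 * x / L               # film thickness tapers from 20 um to 10 um
dx = x[1] - x[0]

hm = (h[:-1] + h[1:]) / 2.0           # film thickness at midpoints i+1/2
h3 = hm ** 3                          # h^3 at the midpoints

Amat = np.zeros((n, n))
b = np.zeros(n)
Amat[0, 0] = Amat[-1, -1] = 1.0       # ambient pressure at both ends
for i in range(1, n - 1):
    Amat[i, i - 1] = h3[i - 1]
    Amat[i, i] = -(h3[i - 1] + h3[i])
    Amat[i, i + 1] = h3[i]
    b[i] = 6.0 * mu * U * (hm[i] - hm[i - 1]) * dx

p = np.linalg.solve(Amat, b)
print(p.max())                        # positive peak pressure: the wedge carries load
```

The qualitative point matches the experiments: only when the film builds enough pressure to balance the normal load does the interface avoid direct asperity contact and severe wear.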
Robust Knowledge Transfer via Hybrid Forward on the Teacher-Student Model
When adopting deep neural networks for a new vision task, a common practice is to start by fine-tuning off-the-shelf, well-trained network models from the community. Since a new task may require training a different network architecture on new domain data, taking advantage of off-the-shelf models is not trivial and generally requires considerable trial and error and parameter tuning. In this paper, we denote a well-trained model as a teacher network and a model for the new task as a student network. We aim to ease the effort of transferring knowledge from the teacher to the student network, robust to the gaps between their network architectures, domain data, and task definitions. Specifically, we propose a hybrid forward scheme for training the teacher-student models, alternately updating the layer weights of the student model. The key merit of our hybrid forward scheme lies in the dynamic balance between the knowledge transfer loss and the task-specific loss during training. We demonstrate the effectiveness of our method on a variety of tasks, e.g., model compression, segmentation, and detection, under a variety of knowledge transfer settings.
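The abstract does not spell out the hybrid forward scheme, but its core idea, alternating between a knowledge-transfer target and a task target while updating the student's weights, can be sketched on a linear toy model. The teacher, the data, and the update rule below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: the "teacher" is a fixed linear map; the task labels come
# from a slightly shifted map (standing in for the new-domain task).
teacher_W = np.array([[1.0, 0.5], [0.0, 1.0]])
task_W = teacher_W + 0.1               # ground-truth map for the new task

X = rng.normal(size=(256, 2))
teacher_out = X @ teacher_W.T          # teacher predictions (distillation target)
task_y = X @ task_W.T                  # new-task labels

student_W = np.zeros((2, 2))
lr = 0.05
for step in range(400):
    # Alternate the forward target: even steps distill from the teacher,
    # odd steps fit the task labels, balancing the two losses over time.
    target = teacher_out if step % 2 == 0 else task_y
    err = X @ student_W.T - target
    student_W -= lr * err.T @ X / len(X)

# The student settles between the teacher and the task solutions.
print(np.abs(student_W - task_W).max())
```

The alternation acts as an implicit weighting of the two losses; in the paper this balance is managed dynamically during training rather than fixed per step as in this sketch.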