LiveCap: Real-time Human Performance Capture from Monocular Video
We present the first real-time human performance capture approach that
reconstructs dense, space-time coherent deforming geometry of entire humans in
general everyday clothing from just a single RGB video. We propose a novel
two-stage analysis-by-synthesis optimization whose formulation and
implementation are designed for high performance. In the first stage, a skinned
template model is jointly fitted to background-subtracted input video, 2D and
3D skeleton joint positions found using a deep neural network, and a set of
sparse facial landmark detections. In the second stage, dense non-rigid 3D
deformations of skin and even loose apparel are captured based on a novel
real-time capable algorithm for non-rigid tracking using dense photometric and
silhouette constraints. Our novel energy formulation leverages automatically
identified material regions on the template to model the differing non-rigid
deformation behavior of skin and apparel. The two resulting per-frame
non-linear optimization problems are solved with specially tailored
data-parallel Gauss-Newton solvers. In order to achieve real-time performance
of over 25 Hz, we design a pipelined parallel architecture using the CPU and two
commodity GPUs. Our method is the first real-time monocular approach for
full-body performance capture. It yields accuracy comparable to off-line
performance capture techniques while being orders of magnitude faster.
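For context on the solver the LiveCap abstract mentions: each Gauss-Newton step linearizes the residuals and solves the normal equations (J^T J) delta = -J^T r. Below is a pure-Python sketch on a toy 1-D exponential fit; it is illustrative only, not the paper's data-parallel GPU solver, and all names are made up.

```python
import math

def gauss_newton(ts, ys, a, b, iters=30):
    """Fit y = a * exp(b * t) by Gauss-Newton on the normal equations."""
    for _ in range(iters):
        JTJ = [[0.0, 0.0], [0.0, 0.0]]   # J^T J (2x2)
        JTr = [0.0, 0.0]                 # J^T r
        for t, y in zip(ts, ys):
            e = math.exp(b * t)
            r = a * e - y                # residual
            J = [e, a * t * e]           # [dr/da, dr/db]
            for i in range(2):
                JTr[i] += J[i] * r
                for j in range(2):
                    JTJ[i][j] += J[i] * J[j]
        # solve (J^T J) delta = -J^T r via Cramer's rule (2x2 system)
        det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
        da = (-JTr[0] * JTJ[1][1] + JTr[1] * JTJ[0][1]) / det
        db = (-JTr[1] * JTJ[0][0] + JTr[0] * JTJ[1][0]) / det
        a, b = a + da, b + db
    return a, b

# noiseless samples from a = 2.0, b = -0.5; the solver recovers both
ts = [0.1 * k for k in range(20)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
a, b = gauss_newton(ts, ys, 1.0, 0.0)
```

The paper solves two much larger problems of this shape per frame on GPUs; the 2x2 toy only shows the iteration structure.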
Towards the high-fidelity multidisciplinary design optimization of a 3D composite material hydrofoil
The development of a multidisciplinary design optimization (MDO) architecture for high-fidelity fluid-structure interaction (FSI) problems is presented with preliminary application to a NACA 0009 3D hydrofoil in metal and carbon-fiber-reinforced plastic materials. The MDO methodology and FSI benchmark solution are presented and discussed. The computational cost of the MDO is reduced by performing a design space dimensionality reduction beforehand and by integrating into the architecture a variable level of coupling between disciplines, a surrogate model, and an adaptive sampling technique. The optimization is performed using a heuristic global derivative-free algorithm. The MDO method is demonstrated by application to an analytical test problem. The current stage of research includes preliminary test problem optimization, validation of the hydrofoil FSI against experimental data, and design space assessment and dimensionality reduction for the hydrofoil model.
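The abstract does not specify which heuristic global derivative-free algorithm is used. As a hedged, generic stand-in, the pattern can be sketched as random restarts combined with a (1+1)-style Gaussian mutation; all names, constants, and the test objective below are illustrative, not the authors' setup.

```python
import math, random

def derivative_free_minimize(f, lo, hi, restarts=5, steps=200, seed=0):
    """Random restarts + keep-if-better Gaussian perturbation; no gradients."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(restarts):
        x = rng.uniform(lo, hi)                # global part: random restart
        y = f(x)
        sigma = 0.1 * (hi - lo)
        for _ in range(steps):                 # local part: mutate, keep if better
            cand = min(hi, max(lo, x + rng.gauss(0.0, sigma)))
            fc = f(cand)
            if fc < y:
                x, y = cand, fc
            else:
                sigma *= 0.98                  # shrink step size on failure
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# multi-modal toy objective; its global minimum lies near x ~ 2.15
x, y = derivative_free_minimize(lambda t: (t - 2) ** 2 + 0.3 * math.sin(5 * t), -5, 5)
```

In the MDO setting, `f` would be the expensive coupled FSI evaluation (or its surrogate), which is exactly why the paper invests in surrogate models and adaptive sampling to keep the call count down.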
Exploring the fitness landscape of a realistic turbofan rotor blade optimization
Aerodynamic shape optimization has established itself as a valuable tool in
the engineering design process to achieve highly efficient results. A central
aspect for such approaches is the mapping from the design parameters which
encode the geometry of the shape to be improved to the quality criteria which
describe its performance. The choices to be made in the setup of the
optimization process strongly influence this mapping and thus are expected to
have a profound influence on the achievable result. In this work we explore the
influence of such choices on the shape optimization of a
turbofan rotor blade as it can be realized within an aircraft engine design
process. The blade quality is assessed by realistic three dimensional
computational fluid dynamics (CFD) simulations.
We investigate the outcomes of several optimization runs which differ in
various configuration options, such as the optimization algorithm, the
initialization, and the number of degrees of freedom of the parametrization.
For all such variations,
we generally find that the achievable improvement of the blade quality is
comparable for most settings and thus rather insensitive to the details of the
setup.
On the other hand, even supposedly minor changes in the settings, such as
using a different random seed for the initialization of the optimizer
algorithm, lead to very different shapes. Optimized shapes which show
comparable performance usually differ quite strongly in their geometries over
the complete blade.
Our analyses indicate that the fitness landscape for such a realistic
turbofan rotor blade optimization is highly multi-modal with many local optima,
where very different shapes show similar performance.
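The multi-modality finding can be reproduced on a toy landscape: independent optimization runs that differ only in their random seed reach near-identical fitness at different optima. The following pure-Python illustration is hypothetical; the function below merely stands in for the CFD-evaluated blade quality and nothing in it comes from the paper.

```python
import math, random

def hill_climb(f, seed, lo=-10.0, hi=10.0, steps=500):
    """Simple keep-if-better hill climber with fixed Gaussian step size."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.2)
        if lo <= cand <= hi:
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
    return x, fx

# many equally good minima: f vanishes wherever 5x is a multiple of pi
f = lambda t: math.sin(5.0 * t) ** 2
results = [hill_climb(f, seed) for seed in range(4)]
fits = [fx for _, fx in results]
xs = [x for x, _ in results]
# each seed reaches near-zero fitness, typically in a different basin
```

The `xs` typically land in different basins while the `fits` are all near zero, mirroring the paper's observation that comparable blade performance is reached by strongly differing geometries.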
Evolvability-guided Optimization of Linear Deformation Setups for Evolutionary Design Optimization
Richter A. Evolvability-guided Optimization of Linear Deformation Setups for Evolutionary Design Optimization. Bielefeld: Universität Bielefeld; 2019. Andreas Richter gratefully acknowledges the financial support from Honda Research Institute Europe (HRI-EU). This thesis targets efficient solutions for optimal representation setups for evolutionary design optimization problems. The representation maps the abstract parameters of an optimizer to a meaningful variation of the design model, e.g., the shape of a car. Thereby, it determines the convergence speed to and the quality of the final result. Thus, engineers are eager to employ well-tuned representations to achieve high-quality design solutions. But setting up optimal representations is a cumbersome process, because the setup procedure requires detailed knowledge about the objective functions, e.g., a fluid dynamics simulation, and about the parameters of the employed representation itself. Thus, we target efficient routines that set up representations automatically, relieving engineers of this tedious, partly manual work.
Inspired by the concept of evolvability, we present novel quality criteria for the evaluation of linear deformations commonly applied as representations. We define and analyze the criteria variability, regularity, and improvement potential which measure the expected quality and convergence speed of an evolutionary design optimization process based on the linear deformation setup. Moreover, we target the efficient optimization of deformation setups with respect to these three criteria. In dynamic design optimization scenarios a suitable compromise between exploration and exploitation is crucial for efficient solutions. We discuss the construction of optimal compromises for these dynamic scenarios with our criteria because they characterize exploration and exploitation.
As a result, an engineer can initialize and adjust the deformation setup with our methods for improved convergence speed of the design process and enhanced quality of the design solutions.
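The linear deformations the thesis builds on map optimizer parameters linearly to vertex displacements, x' = x + D p, where the columns of D are basis displacement fields. A minimal sketch under that assumption follows; the matrix, shapes, and names are illustrative, not the thesis' actual setups or criteria.

```python
def deform(vertices, D, p):
    """Linear deformation: flattened vertex coordinates move by D @ p."""
    flat = [c for v in vertices for c in v]            # (x0, y0, x1, y1, ...)
    moved = [c + sum(D[i][j] * p[j] for j in range(len(p)))
             for i, c in enumerate(flat)]
    return [tuple(moved[k:k + 2]) for k in range(0, len(moved), 2)]

# toy setup: 3 vertices in 2D, 2 design parameters
# column 0 shifts every vertex in x; column 1 lifts only the top vertex in y
D = [[1, 0], [0, 0],    # vertex 0: x row, y row
     [1, 0], [0, 0],    # vertex 1
     [1, 0], [0, 1]]    # vertex 2 ("top")
shape = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
new_shape = deform(shape, D, [0.5, 0.2])
# -> roughly [(0.5, 0.0), (1.5, 0.0), (1.0, 1.2)]
```

Setting up such a D well is exactly the problem the thesis addresses: its columns determine which design variations the optimizer can reach at all, which is what quality criteria like variability and improvement potential quantify.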
Shape Animation with Combined Captured and Simulated Dynamics
We present a novel volumetric animation generation framework to create new
types of animations from raw 3D surface or point-cloud sequences of captured
real performances. The framework takes as input temporally incoherent 3D
observations of a moving shape, and is thus particularly suitable for the
output of performance capture platforms. In our system, a suitable virtual
representation of the actor is built from real captures that allows seamless
combination and simulation with virtual external forces and objects, in which
the original captured actor can be reshaped, disassembled, or reassembled by
user-specified virtual physics. Instead of using the dominant surface-based
geometric representation of the capture, which is less suitable for volumetric
effects, our pipeline exploits centroidal Voronoi tessellation decompositions
as a unified volumetric representation of the real captured actor, which we
show can be used seamlessly as a building block for all processing stages,
from capture and tracking to virtual physics simulation. The representation
makes no human-specific assumptions and can be used to capture and re-simulate the actor
with props or other moving scenery elements. We demonstrate the potential of
this pipeline for virtual reanimation of a real captured event with various
unprecedented volumetric visual effects, such as volumetric distortion,
erosion, morphing, gravity pull, or collisions.
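A centroidal Voronoi tessellation of the kind this pipeline relies on can be computed with Lloyd relaxation, which alternates assigning samples to their nearest site and moving each site to the centroid of its cell. The 2-D pure-Python sketch below only shows the mechanism; the paper works on 3-D captured volumes with far more cells.

```python
import random

def lloyd(sites, samples, iters=10):
    """Lloyd relaxation: move each site to the centroid of its Voronoi cell."""
    for _ in range(iters):
        sums = [[0.0, 0.0, 0] for _ in sites]   # x-sum, y-sum, count per cell
        for px, py in samples:
            nearest = min(range(len(sites)),
                          key=lambda k: (sites[k][0] - px) ** 2
                                      + (sites[k][1] - py) ** 2)
            sums[nearest][0] += px
            sums[nearest][1] += py
            sums[nearest][2] += 1
        sites = [(sx / n, sy / n) if n else sites[k]
                 for k, (sx, sy, n) in enumerate(sums)]
    return sites

rng = random.Random(0)
samples = [(rng.random(), rng.random()) for _ in range(2000)]
sites = lloyd([(rng.random(), rng.random()) for _ in range(8)], samples)
# after relaxation the 8 sites spread out evenly over the unit square
```

The resulting evenly sized cells are what makes the decomposition a convenient unified building block for both tracking and volumetric physics effects.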
Fast Non-Rigid Radiance Fields from Monocularized Data
3D reconstruction and novel view synthesis of dynamic scenes from collections
of single views recently gained increased attention. Existing work shows
impressive results for synthetic setups and forward-facing real-world data,
but is severely limited in the training speed and angular range for
generating novel views. This paper addresses these limitations and proposes a
new method for full 360° novel view synthesis of non-rigidly deforming
scenes. At the core of our method are: 1) An efficient deformation module
that decouples the processing of spatial and temporal information for
acceleration at training and inference time; and 2) A static module
representing the canonical scene as a fast hash-encoded neural radiance
field. We evaluate the proposed approach on the established synthetic D-NeRF
benchmark, which enables efficient reconstruction from a single monocular
view per time-frame randomly sampled from a full hemisphere. We refer to this
form of inputs as monocularized data. To prove its practicality for
real-world scenarios, we recorded twelve challenging sequences with human
actors by sampling single frames from a synchronized multi-view rig. In both
cases, our method is trained significantly faster than previous methods
(minutes instead of days) while achieving higher visual accuracy for
generated novel views. Our source code and data are available at our project
page: https://graphics.tu-bs.de/publications/kappel2022fast.
Fast Non-Rigid Radiance Fields from Monocularized Data
The reconstruction and novel view synthesis of dynamic scenes recently gained
increased attention. As reconstruction from large-scale multi-view data
involves immense memory and computational requirements, recent benchmark
datasets provide collections of single monocular views per timestamp sampled
from multiple (virtual) cameras. We refer to this form of inputs as
"monocularized" data. Existing work shows impressive results for synthetic
setups and forward-facing real-world data, but is often limited in the training
speed and angular range for generating novel views. This paper addresses these
limitations and proposes a new method for full 360° inward-facing novel
view synthesis of non-rigidly deforming scenes. At the core of our method are:
1) An efficient deformation module that decouples the processing of spatial and
temporal information for accelerated training and inference; and 2) A static
module representing the canonical scene as a fast hash-encoded neural radiance
field. In addition to existing synthetic monocularized data, we systematically
analyze the performance on real-world inward-facing scenes using a newly
recorded challenging dataset sampled from a synchronized large-scale multi-view
rig. In both cases, our method is significantly faster than previous methods,
converging in less than 7 minutes and achieving real-time framerates at 1K
resolution, while obtaining a higher visual accuracy for generated novel views.
Our source code and data are available at our project page:
https://graphics.tu-bs.de/publications/kappel2022fast. Comment: 18 pages, 14 figures.
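The fast hash encoding both versions of this abstract refer to follows the multiresolution spatial-hash scheme popularized by Instant NGP: integer grid-corner coordinates are hashed into a small trainable feature table. The sketch below shows only the lookup; the primes and table size are the commonly used choices and may differ from the paper's configuration, and no training or MLP is shown.

```python
# common per-dimension hash primes and a small feature table (illustrative)
PRIMES = (1, 2654435761, 805459861)
TABLE_SIZE = 2 ** 14

def hash_corner(ix, iy, iz):
    """Hash one 3D integer grid corner to a feature-table index."""
    h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])
    return h % TABLE_SIZE

def corner_indices(x, y, z, resolution):
    """Table indices of the 8 grid corners around a point in [0, 1)^3."""
    ix, iy, iz = int(x * resolution), int(y * resolution), int(z * resolution)
    return [hash_corner(ix + dx, iy + dy, iz + dz)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

# the 8 looked-up feature vectors would be trilinearly interpolated and,
# stacked over a coarse-to-fine set of resolutions, fed to a small MLP
idxs = corner_indices(0.3, 0.6, 0.9, resolution=64)
```

Replacing a large MLP with these cheap table lookups is what makes the static canonical module fast enough for the minutes-scale training times reported above.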