
    HiFECap: Monocular High-Fidelity and Expressive Capture of Human Performances

    Monocular 3D human performance capture is indispensable for many applications in computer graphics and vision for enabling immersive experiences. However, detailed capture of humans requires tracking of multiple aspects, including the skeletal pose, the dynamic surface (which includes clothing), hand gestures, as well as facial expressions. No existing monocular method allows joint tracking of all these components. To this end, we propose HiFECap, a new neural human performance capture approach, which simultaneously captures human pose, clothing, facial expression, and hands from just a single RGB video. We demonstrate that our proposed network architecture, the carefully designed training strategy, and the tight integration of parametric face and hand models into a template mesh enable the capture of all these individual aspects. Importantly, our method also captures high-frequency details, such as deforming wrinkles on the clothes, better than previous works. Furthermore, we show that HiFECap outperforms state-of-the-art human performance capture approaches both qualitatively and quantitatively, while for the first time capturing all aspects of the human.

    Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model

    Capturing general deforming scenes is crucial for many computer graphics and vision applications, and it is especially challenging when only a monocular RGB video of the scene is available. Competing methods assume dense point tracks, 3D templates, large-scale training datasets, or only capture small-scale deformations. In contrast, our method, Ub4D, makes none of these assumptions while outperforming the previous state of the art in challenging scenarios. Our technique includes two components that are new in the context of non-rigid 3D reconstruction: 1) a coordinate-based, implicit neural representation for non-rigid scenes, which enables an unbiased reconstruction of dynamic scenes, and 2) a novel dynamic scene flow loss, which enables the reconstruction of larger deformations. Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations. Visit the project page https://4dqv.mpi-inf.mpg.de/Ub4D/.
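    A coordinate-based implicit representation of the kind this abstract mentions can be pictured as a small fully connected network that maps a space-time query (x, y, z, t) to a scalar field value such as a signed distance. The layer sizes, activation, and output meaning below are illustrative assumptions for this sketch, not Ub4D's actual architecture:

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """One fully connected layer with small random weights and zero biases."""
    return ([[random.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_out)]
             for _ in range(n_in)],
            [0.0] * n_out)

def forward(layers, x):
    """Map a space-time coordinate (x, y, z, t) to one scalar (e.g. an SDF value)."""
    h = x
    for i, (W, b) in enumerate(layers):
        out = [sum(hj * W[j][k] for j, hj in enumerate(h)) + b[k]
               for k in range(len(b))]
        # ReLU on hidden layers, identity on the output layer
        h = [max(v, 0.0) for v in out] if i < len(layers) - 1 else out
    return h[0]

# Query the (untrained, so arbitrary-valued) field for the same point at time t = 0.5.
layers = [make_layer(4, 32), make_layer(32, 32), make_layer(32, 1)]
d = forward(layers, [0.1, 0.2, 0.3, 0.5])
print(d)
```

    Because the network takes time as just another input coordinate, a single set of weights represents the whole deforming scene, which is what makes losses over larger deformations (such as a scene flow loss) expressible on top of it.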

    EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera

    The high frame rate is a critical requirement for capturing fast human motions. In this setting, existing markerless image-based methods are constrained by lighting requirements, high data bandwidth, and the consequent high computation overhead. In this paper, we propose EventCap --- the first approach for 3D capture of high-speed human motions using a single event camera. Our method combines model-based optimization and CNN-based human pose detection to capture high-frequency motion details and to reduce drift in the tracking. As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than using high frame rate videos. Experiments on our new event-based fast human motion dataset demonstrate the effectiveness and accuracy of our method, as well as its robustness to challenging lighting conditions.

    Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control

    We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering works that learn representations of geometry and appearance from only 2D images. While existing works have demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, remains difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses. Furthermore, our method also supports body shape control of the synthesized results.
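    The "unwarp into a canonical pose" step relies on a coarse body model whose bones carry the deformation between posed and canonical space. The abstract does not spell out the warp, so the following is only a generic linear blend skinning sketch of the underlying idea, not Neural Actor's actual implementation:

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, t, p):
    """Apply one rigid transform (rotation R, translation t) to point p."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def lbs(point, bones, weights):
    """Linear blend skinning: weighted sum of per-bone rigid transforms."""
    out = [0.0, 0.0, 0.0]
    for (R, t), w in zip(bones, weights):
        q = apply(R, t, point)
        out = [o + w * qi for o, qi in zip(out, q)]
    return out

# Two bones: identity and a 90-degree bend; a point influenced equally by both.
bones = [(rot_z(0.0), [0.0, 0.0, 0.0]),
         (rot_z(math.pi / 2), [0.0, 0.0, 0.0])]
posed = lbs([1.0, 0.0, 0.0], bones, [0.5, 0.5])
print([round(v, 3) for v in posed])   # [0.5, 0.5, 0.0]
```

    Methods in this family conceptually invert such a skinning warp so that every posed-space query lands in one shared canonical space, where the radiance field only has to model residual, pose-dependent effects.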

    Virtual pathway explorer (viPEr) and pathway enrichment analysis tool (PEANuT): creating and analyzing focus networks to identify cross-talk between molecules and pathways

    Background: Interpreting large-scale studies from microarrays or next-generation sequencing for further experimental testing remains one of the major challenges in quantitative biology. Combining expression with physical or genetic interaction data has already been successfully applied to enhance knowledge from all types of high-throughput studies. Yet, toolboxes for navigating and understanding even small gene or protein networks are poorly developed. Results: We introduce two Cytoscape plug-ins, which support the generation and interpretation of experiment-based interaction networks. The virtual pathway explorer viPEr creates so-called focus networks by joining a list of experimentally determined genes with the interactome of a specific organism. viPEr calculates all paths between two or more user-selected nodes, or explores the neighborhood of a single selected node. Numerical values from expression studies assigned to the nodes serve to score identified paths. The pathway enrichment analysis tool PEANuT annotates networks with pathway information from various sources and calculates enriched pathways between a focus and a background network. Using time series expression data of atorvastatin-treated primary hepatocytes from six patients, we demonstrate the handling and applicability of viPEr and PEANuT. Based on our investigations using viPEr and PEANuT, we suggest a role of the FoxA1/A2/A3 transcriptional network in the cellular response to atorvastatin treatment. Moreover, we find an enrichment of metabolic and cancer pathways in the Fox transcriptional network and demonstrate a patient-specific reaction to the drug. Conclusions: The Cytoscape plug-in viPEr integrates -omics data with interactome data. It supports the interpretation and navigation of large-scale datasets by creating focus networks, facilitating mechanistic predictions from -omics studies. PEANuT provides an up-front method to identify underlying biological principles by calculating enriched pathways in focus networks.
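    The abstract does not state which enrichment statistic PEANuT uses; a common choice for testing pathway over-representation of a focus set against a background network is the hypergeometric test. A minimal sketch with illustrative numbers (not data from the study):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) when drawing n genes from N, of which K belong to the pathway.

    N: genes in the background network
    K: background genes annotated to the pathway
    n: genes in the focus network
    k: focus genes annotated to the pathway
    """
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# 1000 background genes, 40 in the pathway; 50 focus genes, 8 hit the pathway.
# The expected hit count by chance is 50 * 40 / 1000 = 2, so 8 is a large excess.
p = hypergeom_enrichment_p(1000, 40, 50, 8)
print(p < 0.05)   # True -> the pathway looks over-represented in the focus network
```

    In practice such per-pathway p-values are corrected for multiple testing (e.g. Benjamini-Hochberg) before a pathway is reported as enriched.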


    1D numerical and experimental investigations of an ultralean pre-chamber engine

    In recent years, lean-burn gasoline Spark-Ignition (SI) engines have been a major subject of investigation. With this solution, it is in fact possible to simultaneously reduce raw NOx emissions and fuel consumption thanks to decreased heat losses, higher thermodynamic efficiency, and enhanced knock resistance. However, the real applicability of this technique is strongly limited by the increase in cyclic variation and the occurrence of misfire, which are typical of the combustion of homogeneous lean air/fuel mixtures. The employment of a Pre-Chamber (PC), in which combustion begins before proceeding into the main combustion chamber, has already shown the capability of significantly extending the lean-burn limit. In this work, the potential of an ultralean PC SI engine for a decisive improvement of thermal efficiency is presented by means of numerical and experimental analyses. The SI engine is experimentally investigated with and without the PC in order to analyze the real gain of this innovative combustion system. For both configurations, the engine is tested at various speeds, loads, and air-fuel ratios. A commercial gasoline fuel is directly injected into the Main Chamber (MC), while the PC is fed in either a passive or an active mode; Compressed Natural Gas (CNG) or Hydrogen (H2) is used in the active case. A 1D model of the engine under study is implemented in a commercial modeling framework and integrated with in-house developed sub-models for the simulation of the combustion and turbulence phenomena occurring in this unconventional engine. The numerical approach proves able to reproduce the experimental data with good accuracy, without requiring any case-dependent tuning of the model constants. Both the numerical and experimental results show an improvement in indicated thermal efficiency with the active PC, compared to the conventional ignition device, especially at high loads and low speeds. The injection of H2 into the PC leads to a significant benefit only with very lean mixtures. With passive fueling of the PC, the lean-burn limit is less extended, with a consequently lower improvement potential for thermal efficiency.
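    As a rough quantitative anchor for "lean" and "ultralean": mixture leanness is usually expressed as the excess-air ratio lambda, the measured air-fuel ratio divided by the stoichiometric one for the fuel. The stoichiometric values below are commonly cited approximations, not data from this study:

```python
# Stoichiometric air-fuel ratios by mass (commonly cited approximate values).
AFR_STOICH = {"gasoline": 14.7, "cng": 17.2, "hydrogen": 34.3}

def excess_air_ratio(afr, fuel):
    """Relative air-fuel ratio lambda: lambda > 1 means a lean mixture."""
    return afr / AFR_STOICH[fuel]

# A gasoline mixture at AFR 29.4 burns at lambda = 2 -- far leaner than a
# conventional spark plug can ignite reliably, which is where a pre-chamber
# (itself fed a richer, easily ignitable charge) extends the lean-burn limit.
lam = excess_air_ratio(29.4, "gasoline")
print(round(lam, 2))   # 2.0
```

    This also hints at why H2 pre-chamber fueling pays off mainly at very high lambda: hydrogen's wide flammability range keeps the pre-chamber charge ignitable when the main-chamber mixture is far beyond the gasoline lean limit.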