3 research outputs found

    SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric Generator

    Recent years have seen growing interest in 3D human face modeling due to its wide applications in digital humans, character generation, and animation. Existing approaches overwhelmingly emphasize modeling the exterior shape, texture, and skin properties of faces, ignoring the inherent correlation between inner skeletal structure and appearance. In this paper, we present SCULPTOR, 3D face creation with Skeleton Consistency Using a Learned Parametric facial generaTOR, aiming to facilitate easy creation of both anatomically correct and visually convincing face models via a hybrid parametric-physical representation. At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset, built in collaboration with plastic surgeons. Named after the fossil of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgery, critical for evaluating surgical results. LUCY consists of 144 scans of 72 subjects (31 male and 41 female), where each subject has two CT scans taken pre- and post-orthognathic operation. Based on our LUCY dataset, we learn a novel skeleton-consistent parametric facial generator, SCULPTOR, which can create the unique and nuanced facial features that help define a character while maintaining physiological soundness. SCULPTOR jointly models the skull, face geometry, and face appearance under a unified data-driven framework, separating the depiction of a 3D face into shape, pose, and facial-expression blend shapes. SCULPTOR preserves both anatomical correctness and visual realism in facial generation tasks compared with existing methods. Finally, we showcase the robustness and effectiveness of SCULPTOR in a variety of previously unseen applications.
    Comment: 16 pages, 13 figures
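    The shape/pose/expression decomposition described in the abstract follows the familiar additive blend-shape formulation. Below is a minimal NumPy sketch of that general formulation; the array names, vertex count, and basis sizes are illustrative assumptions, not SCULPTOR's actual parameterization.

```python
import numpy as np

# Minimal sketch of an additive blend-shape decomposition of the kind the
# abstract describes: shape, pose, and expression offsets added to a mean
# template. All sizes and names here are illustrative assumptions.

N_VERTS = 5023                                 # hypothetical template vertex count
template   = np.zeros((N_VERTS, 3))            # mean face geometry
shape_dirs = np.random.randn(N_VERTS, 3, 10)   # identity blend-shape basis
pose_dirs  = np.random.randn(N_VERTS, 3, 36)   # pose-corrective basis
expr_dirs  = np.random.randn(N_VERTS, 3, 20)   # expression blend-shape basis

def blend_shapes(basis, coeffs):
    """Linear combination of blend-shape directions: sum_k coeffs[k] * basis[:, :, k]."""
    return np.einsum('vck,k->vc', basis, coeffs)

def face_geometry(beta, theta_feat, psi):
    """Template plus shape, pose, and expression offsets."""
    return (template
            + blend_shapes(shape_dirs, beta)
            + blend_shapes(pose_dirs, theta_feat)
            + blend_shapes(expr_dirs, psi))

# Zero coefficients recover the template; nonzero coefficients deform it.
verts = face_geometry(np.zeros(10), np.zeros(36), np.zeros(20))
assert verts.shape == (N_VERTS, 3)
```

    Keeping the three bases separate is what lets identity, pose, and expression be controlled independently, which makes such generators convenient for character creation.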

    Advancements and applications of single-cell multi-omics techniques in cancer research: Unveiling heterogeneity and paving the way for precision therapeutics

    Single-cell multi-omics technologies have revolutionized cancer research by allowing us to examine individual cells at the molecular level. Unlike traditional bulk omics approaches, which analyze populations of cells together, single-cell multi-omics uncovers the heterogeneity within tumors and reveals the unique molecular characteristics of different cell populations. By doing so, we can identify rare subpopulations of cells that drive tumor growth, metastasis, and resistance to therapy. Moreover, single-cell multi-omics analysis provides valuable insights into the immune response triggered by therapeutic interventions such as immune checkpoint blockade, chemotherapy, and cell therapy. It also deepens our understanding of the intricate tumor microenvironment and its impact on patient prognosis and response to treatment. This comprehensive review focuses on recent advancements in single-cell multi-omics methodologies, highlighting the important role of these techniques in uncovering the complexity of tumorigenesis, their multiple applications in cancer research, and their equally significant contributions to other areas such as immunology. Through single-cell multi-omics, we gain a deeper understanding of cancer biology and pave the way for more precise and effective therapeutic strategies. Finally, this paper also introduces advancements in live-cell imaging technology and the latest developments in protein detection techniques, and explores their seamless integration with single-cell multi-omics technology.

    NIMBLE: A Non-rigid Hand Model with Bones and Muscles

    Emerging Metaverse applications demand reliable, accurate, and photorealistic reproductions of human hands to perform sophisticated operations as if in the physical world. While the real human hand involves one of the most intricate coordinations of bones, muscles, tendons, and skin, state-of-the-art techniques unanimously focus on modeling only the skeleton of the hand. In this paper, we present NIMBLE, a novel parametric hand model that includes these missing key components, bringing 3D hand models to a new level of realism. We first annotate muscles, bones, and skin on the recent Magnetic Resonance Imaging hand (MRI-Hand) dataset and then register a volumetric template hand onto individual poses and subjects within the dataset. NIMBLE consists of 20 bones as triangular meshes, 7 muscle groups as tetrahedral meshes, and a skin mesh. Via iterative shape registration and parameter learning, it further produces shape blend shapes, pose blend shapes, and a joint regressor. We demonstrate applying NIMBLE to modeling, rendering, and visual inference tasks. By enforcing the inner bones and muscles to match anatomical and kinematic rules, NIMBLE can animate 3D hands to new poses with unprecedented realism. To model the appearance of the skin, we further construct a photometric HandStage to acquire high-quality textures and normal maps that capture wrinkles and palm prints. Finally, NIMBLE also benefits learning-based hand pose and shape estimation, either by synthesizing rich training data or by acting directly as a differentiable layer in the inference network.
    Comment: 17 pages, 18 figures
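    The joint regressor mentioned in the abstract is, in parametric models of this family, typically a sparse linear map from mesh vertices to joint locations. The sketch below illustrates that general idea only; the mesh size, bone count, and variable names are assumptions for illustration, not NIMBLE's actual values.

```python
import numpy as np

# Hedged sketch of a vertex-to-joint regressor of the kind the abstract
# mentions: each joint is predicted as a weighted (convex) combination of
# mesh vertices. Counts and names are illustrative assumptions.

N_VERTS, N_BONES = 3000, 20                    # hypothetical mesh and bone counts
vertices = np.random.rand(N_VERTS, 3)          # shaped skin-mesh vertices

J_regressor = np.random.rand(N_BONES, N_VERTS) # learned per-joint vertex weights
J_regressor /= J_regressor.sum(axis=1, keepdims=True)  # normalize rows to sum to 1

# Joint locations as convex combinations of surface vertices.
joints = J_regressor @ vertices                # shape: (N_BONES, 3)
assert joints.shape == (N_BONES, 3)
```

    Because the regression is linear in the vertices, joint locations update consistently as blend shapes deform the mesh, and the map is trivially differentiable, which is what allows such a model to act as a layer inside an inference network.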