
    High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos

    Much progress has been made in reconstructing garments from an image or a video. However, none of the existing works meets the expectation of digitizing high-quality animatable dynamic garments that can be adjusted to various unseen poses. In this paper, we propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data. To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network that formulates the garment reconstruction task as a pose-driven deformation problem. To alleviate the ambiguity of estimating 3D garments from monocular videos, we design a multi-hypothesis deformation module that learns spatial representations of multiple plausible deformations. Experimental results on several public datasets demonstrate that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses. The code will be provided for research purposes.
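    The pose-driven deformation idea in this abstract can be sketched as a tiny toy model: a shared pose encoding feeds several hypothesis heads, each predicting per-vertex offsets on a template garment. The network shape, layer sizes, and the mean fusion of hypotheses are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class GarmentDeformationNet:
    """Toy pose-driven deformation: maps a pose vector to per-vertex
    offsets on a template garment mesh. Multiple hypothesis heads model
    the ambiguity of monocular observation (illustrative only)."""

    def __init__(self, pose_dim=72, n_vertices=100, n_hypotheses=3, hidden=64):
        self.W1 = rng.normal(0, 0.1, (pose_dim, hidden))
        # one linear head per deformation hypothesis
        self.heads = [rng.normal(0, 0.01, (hidden, n_vertices * 3))
                      for _ in range(n_hypotheses)]
        self.n_vertices = n_vertices

    def __call__(self, pose):
        h = np.tanh(pose @ self.W1)  # shared pose encoding
        hyps = [(h @ Wh).reshape(self.n_vertices, 3) for Wh in self.heads]
        # fuse hypotheses (here a simple mean; a learned fusion in practice)
        return np.mean(hyps, axis=0), hyps

template = rng.normal(0, 1, (100, 3))            # template garment vertices
net = GarmentDeformationNet()
offsets, hypotheses = net(rng.normal(0, 1, 72))  # an unseen pose vector
deformed = template + offsets
```

    Because deformation is a function of pose, any new pose vector yields a new garment shape, which is what makes the reconstruction animatable.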

    Infinitely many solutions to quasilinear Schrödinger equations with critical exponent

    This paper is concerned with the following quasilinear Schrödinger equations with critical exponent: $-\Delta_p u + V(x)|u|^{p-2}u - \Delta_p(|u|^{2\omega})|u|^{2\omega-2}u = a k(x)|u|^{q-2}u + b|u|^{2\omega p^*-2}u$, $x \in \mathbb{R}^N$. Here $\Delta_p u = \mathrm{div}(|\nabla u|^{p-2}\nabla u)$ is the $p$-Laplacian operator with $1 < p < N$, and $p^* = \frac{Np}{N-p}$ is the critical Sobolev exponent; $1 \le 2\omega < q < 2\omega p$, $a$ and $b$ are suitable positive parameters, $V \in C(\mathbb{R}^N, [0, \infty))$, and $k \in C(\mathbb{R}^N, \mathbb{R})$. With the help of the concentration-compactness principle and R. Kajikiya's new version of the symmetric Mountain Pass Lemma, we obtain infinitely many solutions which tend to zero under mild assumptions on $V$ and $k$.

    Learning to infer inner-body under clothing from monocular video

    Accurately estimating the human inner-body under clothing is very important for body measurement, virtual try-on and VR/AR applications. In this paper, we propose the first method to allow everyone to easily reconstruct their own 3D inner-body under daily clothing from a self-captured video, with a mean reconstruction error of 0.73 cm within 15 s. This avoids privacy concerns arising from nudity or minimal clothing. Specifically, we propose a novel two-stage framework with a Semantic-guided Undressing Network (SUNet) and an Intra-Inter Transformer Network (IITNet). SUNet learns semantically related body features to alleviate the complexity and uncertainty of directly estimating 3D inner-bodies under clothing. IITNet reconstructs the 3D inner-body model by making full use of intra-frame and inter-frame information, which addresses the misalignment of inconsistent poses in different frames. Experimental results on both public datasets and our collected dataset demonstrate the effectiveness of the proposed method. The code and dataset are available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/Inner-Body
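    The inter-frame aggregation idea can be illustrated with plain dot-product self-attention over per-frame features, pooled into a single body code. This is a minimal stand-in sketch; the feature dimensions and the mean-pooling fusion are assumptions, not IITNet's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_frame_attention(frame_feats):
    """Fuse per-frame features into one pose-consistent body code via
    plain dot-product self-attention over frames, then mean-pool.
    A toy stand-in for the inter-frame stage described in the abstract."""
    q = k = v = frame_feats                         # (T, D)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # each frame attends to all frames
    fused = attn @ v
    return fused.mean(axis=0)                       # single inner-body code

rng = np.random.default_rng(1)
feats = rng.normal(size=(15, 32))  # e.g. 15 video frames, 32-dim features each
body_code = inter_frame_attention(feats)
```

    Attending across frames lets evidence from well-posed frames compensate for frames where clothing occludes the body, which is the motivation for using inter-frame information.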

    Image-guided human reconstruction via multi-scale graph transformation networks

    3D human reconstruction from a single image is a challenging problem. Existing methods have difficulty inferring 3D clothed human models with consistent topologies for various poses. In this paper, we propose an efficient and effective method using a hierarchical graph transformation network. To deal with large deformations and avoid distorted geometries, rather than using Euclidean coordinates directly, 3D human shapes are represented by a vertex-based deformation representation that effectively encodes the deformation and copes well with large deformations. To infer a 3D human mesh consistent with the input real image, we also use a perspective projection layer to incorporate perceptual image features into the deformation representation. Our model is easy to train, converges quickly, and has a short test time. Besides, we present the D2Human (Dynamic Detailed Human) dataset, including variously posed 3D human meshes with consistent topologies and rich geometry details, together with the captured color images and SMPL models, which is useful for training and evaluation of deep frameworks, particularly for graph neural networks. Experimental results demonstrate that our method achieves more plausible and complete 3D human reconstruction from a single image, compared with several state-of-the-art methods. The code and dataset are available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/MGTnet
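    The perspective projection layer mentioned above is, at its core, a pinhole projection that maps predicted 3D vertices to image coordinates so that image features can be sampled per vertex. A minimal sketch, with toy camera intrinsics chosen for illustration:

```python
import numpy as np

def perspective_project(vertices, f=500.0, cx=112.0, cy=112.0):
    """Pinhole perspective projection of mesh vertices into image space.
    f is the focal length and (cx, cy) the principal point; these
    intrinsics are illustrative, not the paper's calibration."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = f * x / z + cx
    v = f * y / z + cy
    return np.stack([u, v], axis=1)

# two vertices in camera space (metres); depths must be positive
verts = np.array([[0.0, 0.0, 2.0],
                  [0.1, -0.1, 2.5]])
uv = perspective_project(verts)  # 2D locations to sample image features at
```

    Dividing by depth is what makes the alignment perspective-correct; an orthographic projection would misplace features for vertices at different depths.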

    Multi-target landmark detection with incomplete images via reinforcement learning and shape prior embedding

    Medical images are generally acquired with a limited field-of-view (FOV), which can lead to incomplete regions of interest (ROI) and thus poses a great challenge for medical image analysis. This is particularly evident for learning-based multi-target landmark detection, where algorithms can be misled into primarily learning the variation of the background due to the varying FOV, failing to detect the targets. By learning a navigation policy instead of predicting targets directly, reinforcement learning (RL)-based methods have the potential to tackle this challenge in an efficient manner. Inspired by this, in this work we propose a multi-agent RL framework for simultaneous multi-target landmark detection. This framework aims to learn from incomplete and/or complete images to form an implicit knowledge of global structure, which is consolidated during the training stage for the detection of targets from either complete or incomplete test images. To further explicitly exploit the global structural information from incomplete images, we propose to embed a shape model into the RL process. With this prior knowledge, the proposed RL model can not only localize dozens of targets simultaneously, but also work effectively and robustly in the presence of incomplete images. We validated the applicability and efficacy of the proposed method on various multi-target detection tasks with incomplete images from clinical practice, using body dual-energy X-ray absorptiometry (DXA), cardiac MRI and head CT datasets. Results showed that our method could predict the whole set of landmarks with incomplete training images of up to 80% missing proportion (average distance error 2.29 cm on body DXA), and could detect unseen landmarks in regions with missing image information outside the FOV of target images (average distance error 6.84 mm on 3D half-head CT). Our code will be released via https://zmiclab.github.io/projects.html.
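    The navigation-policy formulation can be sketched as agents stepping through a grid toward their landmarks. In the real method the policy is a learned Q-network conditioned on local image patches and a shape prior; here a greedy distance-reducing rule stands in for it, and the grid, targets, and step budget are invented for illustration.

```python
import numpy as np

ACTIONS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}  # grid moves

def greedy_policy(pos, target):
    """Stand-in for a learned Q-network: pick the move that most reduces
    distance to the (here known) target. A trained agent would instead
    score actions from the image patch around `pos`."""
    scores = [-np.linalg.norm(np.add(pos, d) - target) for d in ACTIONS.values()]
    return int(np.argmax(scores))

def detect(starts, targets, max_steps=200):
    """One agent per landmark, stepping simultaneously; in the full
    method the agents share encoder weights and a shape-prior term."""
    positions = [np.array(s, float) for s in starts]
    for _ in range(max_steps):
        for i, tgt in enumerate(targets):
            a = greedy_policy(positions[i], np.array(tgt, float))
            positions[i] = positions[i] + ACTIONS[a]
    return positions

found = detect(starts=[(0, 0), (50, 10)], targets=[(30, 40), (5, 5)])
```

    Because each agent only ever reasons about its next step, the policy can still navigate when part of the image is missing, which is the appeal of RL over direct coordinate regression here.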

    iTRAQ-Based Proteomic Analysis reveals possible target-related proteins and signal networks in human osteoblasts overexpressing FGFR2

    Abstract Background Fibroblast growth factor receptor 2 (FGFR2) plays a vital role in skeletogenesis. However, the molecular mechanisms triggered by FGFR2 in osteoblasts are still not fully understood. In this study, proteomic and bioinformatic analyses were performed to investigate changes in the protein profiles regulated by FGFR2, with the goal of characterizing the molecular mechanisms of FGFR2 function in osteoblasts. Methods An FGFR2-overexpressing cell line was established using a lentivirus-packaging vector in human osteoblasts (hFOB1.19). Next, isobaric tags for relative and absolute quantitation (iTRAQ) combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS) were used to compare the proteomic changes between control and FGFR2-overexpressing cells. Thresholds (fold change ≥ 1.5 and P-value < 0.05) were selected to determine differentially expressed proteins (DEPs). Bioinformatics analysis, including GO and pathway analysis, was performed to identify the key pathways underlying the molecular mechanism. Results A total of 149 DEPs were identified. The DEPs were mainly located within organelles and were involved in protein binding and extracellular regulation of signal transduction. ColI, TNC, FN1 and CDKN1A were strikingly downregulated, while UBE2E3, ADNP2 and HSP70 were significantly upregulated in FGFR2-overexpressing cells. KEGG analysis suggested that the key pathways included cell death, PI3K-Akt signaling, focal adhesion and cell cycle. Conclusions To our knowledge, this is the first proteomic study to investigate alterations in protein levels and affected pathways in FGFR2-overexpressing osteoblasts. Thus, this study not only provides a comprehensive dataset on the overall protein changes regulated by FGFR2, but also sheds light on its potential molecular mechanism in human osteoblasts.
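    The DEP selection rule stated in the Methods (fold change ≥ 1.5 in either direction, P < 0.05) is a simple filter. A sketch with hypothetical quantification values (the numbers below are invented for illustration, not the paper's data; protein names are taken from the abstract):

```python
# toy protein quantification: (protein, fold_change, p_value)
# fold changes < 1 indicate downregulation relative to control
proteins = [
    ("FN1",    0.45, 0.003),  # downregulated per the abstract
    ("TNC",    0.52, 0.010),
    ("HSP70",  2.10, 0.020),  # upregulated per the abstract
    ("ACTB",   1.05, 0.400),  # hypothetical unchanged housekeeping protein
    ("UBE2E3", 1.80, 0.004),
]

def differentially_expressed(fc, p, fc_cut=1.5, p_cut=0.05):
    """Apply the abstract's thresholds: fold change >= 1.5 in either
    direction (fc >= 1.5 or fc <= 1/1.5) and P-value < 0.05."""
    return p < p_cut and (fc >= fc_cut or fc <= 1.0 / fc_cut)

deps = [name for name, fc, p in proteins if differentially_expressed(fc, p)]
```

    Applying a symmetric cutoff on the ratio scale (≥ 1.5 or ≤ 1/1.5) is what lets a single rule capture both up- and downregulated proteins.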