123 research outputs found

    Influence of fixed orthodontic appliances on the change in oral Candida strains among adolescents

    Abstract
    Background/purpose: The aim of this study was to explore the presence and variability of oral Candida in adolescents before and during treatment with fixed orthodontic appliances.
    Materials and methods: A total of 50 patients aged 10–18 years were randomly selected for this study. Microorganism samples were obtained before and after orthodontic treatment and identified by culture methods. Molecular biology techniques were used to investigate the samples further, and the effect of the orthodontic appliance on oral pathogenic yeasts was studied longitudinally.
    Results: The percentage of patients with candidiasis and the total number of colony-forming units increased significantly 2 months after orthodontic treatment. Changes in the type of oral candidiasis before and after treatment were significant.
    Conclusion: Fixed orthodontic appliances can influence the growth of oral pathogenic yeasts among adolescents.

    Adaptive Graphical Model Network for 2D Handpose Estimation

    In this paper, we propose a new architecture called the Adaptive Graphical Model Network (AGMN) to tackle the task of 2D hand pose estimation from a monocular RGB image. The AGMN consists of two branches of deep convolutional neural networks for computing unary and pairwise potential functions, followed by a graphical model inference module that integrates the unary and pairwise potentials. Unlike existing architectures that combine DCNNs with graphical models, our AGMN is novel in that the parameters of its graphical model are conditioned on and fully adaptive to individual input images. Experiments show that our approach outperforms the state-of-the-art method in 2D hand keypoint estimation by a notable margin on two public datasets.
    Comment: 30th British Machine Vision Conference (BMVC)
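The core idea of combining per-keypoint (unary) scores with inter-keypoint (pairwise) compatibilities via graphical-model inference can be sketched with a toy sum-product pass over a kinematic chain. This is an illustration of the inference step only, with arbitrary array shapes; the actual AGMN predicts both potentials with two CNN branches and operates on 2D heatmaps.

```python
import numpy as np

def chain_inference(unary, pairwise):
    """Toy sum-product pass along a chain of K keypoints.

    unary:    (K, S) non-negative scores for K keypoints over S candidate locations
    pairwise: (K-1, S, S) compatibility between neighbouring keypoints
    Returns (K, S) refined, normalized per-keypoint beliefs.
    """
    K, S = unary.shape
    msg = np.ones(S)          # message arriving at the first keypoint
    beliefs = []
    for k in range(K):
        belief = unary[k] * msg          # fuse unary score with incoming message
        beliefs.append(belief / belief.sum())
        if k < K - 1:
            # pass a message to the next keypoint through the pairwise term
            msg = pairwise[k].T @ beliefs[-1]
    return np.stack(beliefs)
```

A single forward pass like this already shows how a confident neighbour can sharpen an ambiguous keypoint's belief; full inference would also run a backward pass.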

    CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer

    Reconstructing personalized animatable head avatars has significant implications in the field of AR/VR. Existing methods for achieving explicit face control of 3D Morphable Models (3DMM) typically rely on multi-view images or videos of a single subject, making the reconstruction process complex. Additionally, the traditional rendering pipeline is time-consuming, limiting real-time animation possibilities. In this paper, we introduce CVTHead, a novel approach that generates controllable neural head avatars from a single reference image using point-based neural rendering. CVTHead treats the sparse vertices of the mesh as a point set and employs the proposed Vertex-feature Transformer to learn a local feature descriptor for each vertex, enabling the modeling of long-range dependencies among all vertices. Experimental results on the VoxCeleb dataset demonstrate that CVTHead achieves performance comparable to state-of-the-art graphics-based methods. Moreover, it enables efficient rendering of novel human heads with various expressions, head poses, and camera views. These attributes can be explicitly controlled using the coefficients of 3DMMs, facilitating versatile and realistic animation in real-time scenarios.
    Comment: WACV202
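The idea of letting mesh vertices gather local feature descriptors from image features can be sketched as cross-attention, with vertices as queries over image tokens. This is a minimal stand-in for the transformer component only; the function name and shapes are illustrative, not CVTHead's actual architecture.

```python
import numpy as np

def vertex_feature_attention(vertex_queries, image_tokens):
    """Scaled dot-product cross-attention: each mesh vertex (query)
    attends over image feature tokens (keys/values) to produce a
    per-vertex feature descriptor.

    vertex_queries: (V, d) one embedding per mesh vertex
    image_tokens:   (N, d) flattened image feature tokens
    Returns (V, d) aggregated features, one per vertex.
    """
    d = vertex_queries.shape[-1]
    scores = vertex_queries @ image_tokens.T / np.sqrt(d)   # (V, N)
    scores -= scores.max(axis=-1, keepdims=True)            # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)                # softmax over tokens
    return attn @ image_tokens
```

Because every vertex attends over all tokens, dependencies between distant vertices can be captured in a single layer, which is the "long-range" property the abstract refers to.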

    MedGen3D: A Deep Generative Framework for Paired 3D Image and Mask Generation

    Acquiring and annotating sufficient labeled data is crucial in developing accurate and robust learning-based models, but obtaining such data can be challenging in many medical image segmentation tasks. One promising solution is to synthesize realistic data with ground-truth mask annotations. However, no prior studies have explored generating complete 3D volumetric images with masks. In this paper, we present MedGen3D, a deep generative framework that can generate paired 3D medical images and masks. First, we represent the 3D medical data as 2D sequences and propose the Multi-Condition Diffusion Probabilistic Model (MC-DPM) to generate multi-label mask sequences adhering to anatomical geometry. Then, we use an image sequence generator and a semantic diffusion refiner conditioned on the generated mask sequences to produce realistic 3D medical images that align with the generated masks. Our proposed framework guarantees accurate alignment between synthetic images and segmentation maps. Experiments on 3D thoracic CT and brain MRI datasets show that our synthetic data is both diverse and faithful to the original data, and demonstrate its benefits for downstream segmentation tasks. We anticipate that MedGen3D's ability to synthesize paired 3D medical images and masks will prove valuable in training deep learning models for medical imaging tasks.
    Comment: Submitted to MICCAI 2023. Project Page: https://krishan999.github.io/MedGen3D
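The first step, representing a 3D volume as 2D slice sequences, can be sketched as extracting overlapping windows along the depth axis; overlap lets a sequence model keep neighbouring slices consistent. The window and stride values here are arbitrary illustrations, not MedGen3D's settings.

```python
import numpy as np

def volume_to_slice_windows(volume, win=4, stride=2):
    """Split a 3D volume (D, H, W) into overlapping subsequences of
    2D slices, each of shape (win, H, W), stepping by `stride` along
    the depth axis.
    """
    depth = volume.shape[0]
    return [volume[s:s + win] for s in range(0, depth - win + 1, stride)]
```

A generator trained on such subsequences can then synthesize a full volume window by window, blending the overlapping slices at reassembly time.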

    Neo-sex chromosomes in the black muntjac recapitulate incipient evolution of mammalian sex chromosomes

    The nascent neo-sex chromosomes of black muntjacs show that regulatory mutations could accelerate the degeneration of the Y chromosome and contribute to the further evolution of dosage compensation.

    PPT: token-Pruned Pose Transformer for monocular and multi-view human pose estimation

    Recently, the vision transformer and its variants have played an increasingly important role in both monocular and multi-view human pose estimation. Considering image patches as tokens, transformers can model the global dependencies within the entire image or across images from other views. However, global attention is computationally expensive. As a consequence, it is difficult to scale these transformer-based methods up to high-resolution features and many views. In this paper, we propose the token-Pruned Pose Transformer (PPT) for 2D human pose estimation, which locates a rough human mask and performs self-attention only within the selected tokens. Furthermore, we extend our PPT to multi-view human pose estimation. Built upon PPT, we propose a new cross-view fusion strategy, called human area fusion, which considers all human foreground pixels as corresponding candidates. Experimental results on COCO and MPII demonstrate that our PPT can match the accuracy of previous pose transformer methods while reducing the computation. Moreover, experiments on Human3.6M and Ski-Pose demonstrate that our multi-view PPT can efficiently fuse cues from multiple views and achieve new state-of-the-art results.
    Comment: ECCV 2022. Code is available at https://github.com/HowieMa/PP
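Token pruning itself is simple to illustrate: score every token (e.g. by how likely it is to lie inside the human mask) and keep only the top-scoring subset before running further self-attention. The scoring rule and keep ratio here are placeholders, not PPT's exact mechanism.

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the highest-scoring tokens.

    tokens: (N, d) token embeddings
    scores: (N,) relevance score per token (higher = keep)
    Returns (kept_tokens, kept_indices), indices in original order.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    top = np.argsort(scores)[::-1][:k]       # indices of the k best tokens
    keep = np.sort(top)                      # restore spatial ordering
    return tokens[keep], keep
```

Since self-attention costs scale quadratically with the token count, halving the tokens roughly quarters the attention cost, which is why pruning background tokens makes high-resolution and multi-view settings tractable.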

    Identity-Aware Hand Mesh Estimation and Personalization from RGB Images

    Reconstructing 3D hand meshes from monocular RGB images has attracted an increasing amount of attention due to its enormous potential applications in the field of AR/VR. Most state-of-the-art methods attempt to tackle this task in an anonymous manner. Specifically, the identity of the subject is ignored even though it is practically available in real applications, where the user is unchanged throughout a continuous recording session. In this paper, we propose an identity-aware hand mesh estimation model, which can incorporate the identity information represented by the intrinsic shape parameters of the subject. We demonstrate the importance of the identity information by comparing the proposed identity-aware model to a baseline which treats the subject anonymously. Furthermore, to handle the use case where the test subject is unseen, we propose a novel personalization pipeline to calibrate the intrinsic shape parameters using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.
    Comment: ECCV 2022. GitHub: https://github.com/deyingk/PersonalizedHandMeshEstimatio
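The calibration idea, fusing noisy per-image shape estimates for one subject into a single fixed vector, can be sketched as a weighted average; this is a deliberately simple stand-in, as the paper's personalization pipeline involves an optimization not reproduced here.

```python
import numpy as np

def calibrate_shape(estimates, weights=None):
    """Fuse per-image intrinsic shape estimates into one vector.

    estimates: (M, d) one shape-parameter estimate per unlabeled image
    weights:   (M,) optional per-image confidences (uniform if None)
    Returns the (d,) calibrated shape vector.
    """
    estimates = np.asarray(estimates, dtype=float)
    if weights is None:
        weights = np.ones(len(estimates))
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * estimates).sum(axis=0) / w.sum()
```

Because intrinsic hand shape is constant for a subject, averaging over several frames cancels per-frame estimation noise, which is what makes a few unlabeled images sufficient.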