Influence of fixed orthodontic appliances on the change in oral Candida strains among adolescents
Abstract
Background/purpose: The aim of this study was to explore the presence and variability of oral Candida in adolescents before and during treatment with fixed orthodontic appliances.
Materials and methods: A total of 50 patients aged 10–18 years were randomly selected for this study. Microorganism samples were obtained before and after orthodontic treatment and identified by culture methods. Molecular biology techniques were used to investigate the samples further, and the effect of the orthodontic appliance on oral pathogenic yeasts was studied longitudinally.
Results: The percentage of patients with candidiasis and the total number of colony-forming units increased significantly 2 months after orthodontic treatment. Changes in the type of oral candidiasis before and after treatment were also significant.
Conclusion: Fixed orthodontic appliances can influence the growth of oral pathogenic yeasts among adolescents.
Adaptive Graphical Model Network for 2D Handpose Estimation
In this paper, we propose a new architecture called Adaptive Graphical Model
Network (AGMN) to tackle the task of 2D hand pose estimation from a monocular
RGB image. The AGMN consists of two branches of deep convolutional neural
networks for calculating unary and pairwise potential functions, followed by a
graphical model inference module for integrating unary and pairwise potentials.
Unlike existing architectures proposed to combine DCNNs with graphical models,
our AGMN is novel in that the parameters of its graphical model are conditioned
on and fully adaptive to individual input images. Experiments show that our
approach outperforms the state-of-the-art method for 2D hand keypoint
estimation by a notable margin on two public datasets.
Comment: 30th British Machine Vision Conference (BMVC)
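The abstract describes unary heatmaps refined by image-adaptive pairwise potentials through graphical-model inference. As a rough toy illustration of that idea (not the authors' code; kernel sizes, the message-passing scheme, and all names here are illustrative assumptions), one pass of belief refinement over keypoint heatmaps might look like:

```python
import numpy as np

def softmax2d(h):
    """Normalize a heatmap into a probability map."""
    e = np.exp(h - h.max())
    return e / e.sum()

def correlate2d_same(a, k):
    """Naive 'same'-mode 2D correlation (loop-based, fine for toy sizes)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    ap = np.pad(a, ((ph, ph), (pw, pw)))
    out = np.zeros_like(a, dtype=float)
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            out[y, x] = (ap[y:y + kh, x:x + kw] * k).sum()
    return out

def refine_keypoints(unary, pairwise, edges):
    """One message-passing pass: for each directed edge (i, j), keypoint j's
    belief is its unary heatmap multiplied by keypoint i's heatmap correlated
    with the (per-image, adaptive) pairwise kernel for that edge."""
    beliefs = [u.astype(float).copy() for u in unary]
    for (i, j), kernel in zip(edges, pairwise):
        beliefs[j] *= correlate2d_same(unary[i], kernel)
    return [softmax2d(b) for b in beliefs]
```

In the paper both the unary heatmaps and the pairwise kernels come from CNN branches conditioned on the input image; here they are just arrays passed in.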
CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer
Reconstructing personalized animatable head avatars has significant
implications in the fields of AR/VR. Existing methods for achieving explicit
face control of 3D Morphable Models (3DMM) typically rely on multi-view images
or videos of a single subject, making the reconstruction process complex.
Additionally, the traditional rendering pipeline is time-consuming, limiting
real-time animation possibilities. In this paper, we introduce CVTHead, a novel
approach that generates controllable neural head avatars from a single
reference image using point-based neural rendering. CVTHead considers the
sparse vertices of mesh as the point set and employs the proposed
Vertex-feature Transformer to learn local feature descriptors for each vertex.
This enables the modeling of long-range dependencies among all the vertices.
Experimental results on the VoxCeleb dataset demonstrate that CVTHead achieves
comparable performance to state-of-the-art graphics-based methods. Moreover, it
enables efficient rendering of novel human heads with various expressions, head
poses, and camera views. These attributes can be explicitly controlled using
the coefficients of 3DMMs, facilitating versatile and realistic animation in
real-time scenarios.
Comment: WACV202
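The key mechanism described above is a transformer that treats sparse mesh vertices as tokens so every vertex can attend to every other. A minimal single-head numpy sketch of attention over vertex tokens (dimensions and weight matrices are illustrative, not the paper's architecture):

```python
import numpy as np

def vertex_self_attention(feats, Wq, Wk, Wv):
    """Single-head self-attention over mesh-vertex tokens: every vertex
    attends to every other vertex, so each local descriptor can pick up
    long-range dependencies across the whole head mesh."""
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # each row is a set of convex weights
    return w @ V
```

With identity value weights, each output row is a convex combination of the input vertex features, which is what lets distant vertices influence one another.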
MedGen3D: A Deep Generative Framework for Paired 3D Image and Mask Generation
Acquiring and annotating sufficient labeled data is crucial in developing
accurate and robust learning-based models, but obtaining such data can be
challenging in many medical image segmentation tasks. One promising solution is
to synthesize realistic data with ground-truth mask annotations. However, no
prior studies have explored generating complete 3D volumetric images with
masks. In this paper, we present MedGen3D, a deep generative framework that can
generate paired 3D medical images and masks. First, we represent the 3D medical
data as 2D sequences and propose the Multi-Condition Diffusion Probabilistic
Model (MC-DPM) to generate multi-label mask sequences adhering to anatomical
geometry. Then, we use an image sequence generator and semantic diffusion
refiner conditioned on the generated mask sequences to produce realistic 3D
medical images that align with the generated masks. Our proposed framework
guarantees accurate alignment between synthetic images and segmentation maps.
Experiments on 3D thoracic CT and brain MRI datasets show that our synthetic
data is both diverse and faithful to the original data, and demonstrate the
benefits for downstream segmentation tasks. We anticipate that MedGen3D's
ability to synthesize paired 3D medical images and masks will prove valuable in
training deep learning models for medical imaging tasks.
Comment: Submitted to MICCAI 2023. Project page: https://krishan999.github.io/MedGen3D
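The framework's first step, as described, represents 3D data as ordered 2D slice sequences whose masks must stay anatomically coherent. A toy numpy sketch of that representation, plus a crude adjacent-slice Dice check as a stand-in for the coherence the mask generator must maintain (the function names are mine, not the paper's):

```python
import numpy as np

def volume_to_slices(vol, axis=0):
    """Represent a 3D volume as an ordered sequence of 2D slices."""
    return [np.take(vol, i, axis=axis) for i in range(vol.shape[axis])]

def slices_to_volume(slices, axis=0):
    """Reassemble a generated slice sequence back into a 3D volume."""
    return np.stack(slices, axis=axis)

def adjacent_consistency(mask_slices):
    """Mean Dice overlap between consecutive multi-label mask slices: a
    crude proxy for anatomical coherence along the slicing axis."""
    scores = []
    for a, b in zip(mask_slices, mask_slices[1:]):
        inter = np.logical_and(a > 0, b > 0).sum()
        denom = (a > 0).sum() + (b > 0).sum()
        scores.append(2.0 * inter / denom if denom else 1.0)
    return float(np.mean(scores))
```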
Neo-sex chromosomes in the black muntjac recapitulate incipient evolution of mammalian sex chromosomes
The nascent neo-sex chromosomes of black muntjacs show that regulatory mutations could accelerate the degeneration of the Y chromosome and contribute to the further evolution of dosage compensation.
PPT: token-Pruned Pose Transformer for monocular and multi-view human pose estimation
Recently, the vision transformer and its variants have played an increasingly
important role in both monocular and multi-view human pose estimation.
Considering image patches as tokens, transformers can model the global
dependencies within the entire image or across images from other views.
However, global attention is computationally expensive. As a consequence, it is
difficult to scale up these transformer-based methods to high-resolution
features and many views.
In this paper, we propose the token-Pruned Pose Transformer (PPT) for 2D
human pose estimation, which locates a rough human mask and performs
self-attention only within the selected tokens. Furthermore, we extend our PPT to
multi-view human pose estimation. Built upon PPT, we propose a new cross-view
fusion strategy, called human area fusion, which considers all human foreground
pixels as corresponding candidates. Experimental results on COCO and MPII
demonstrate that our PPT can match the accuracy of previous pose transformer
methods while reducing the computation. Moreover, experiments on Human 3.6M and
Ski-Pose demonstrate that our Multi-view PPT can efficiently fuse cues from
multiple views and achieve new state-of-the-art results.
Comment: ECCV 2022. Code is available at https://github.com/HowieMa/PP
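The pruning idea above is simple to state: score each patch token (e.g. by a coarse human mask), keep the top fraction, and run self-attention on the survivors, cutting attention cost from O(N^2) to O(k^2). A minimal numpy sketch of the selection step (a toy, not the released implementation):

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep only the highest-scoring patch tokens (e.g. those a coarse
    human mask marks as foreground), preserving their original order so
    positional structure survives the pruning."""
    k = max(1, int(round(len(tokens) * keep_ratio)))
    keep = np.sort(np.argsort(scores)[::-1][:k])  # top-k, original order
    return tokens[keep], keep
```

Self-attention then runs on `tokens[keep]` only, which is where the computational savings come from.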
Identity-Aware Hand Mesh Estimation and Personalization from RGB Images
Reconstructing 3D hand meshes from monocular RGB images has attracted an
increasing amount of attention due to its enormous potential applications in
the field of AR/VR. Most state-of-the-art methods attempt to tackle this task
in an anonymous manner. Specifically, the identity of the subject is ignored
even though it is practically available in real applications where the user is
unchanged in a continuous recording session. In this paper, we propose an
identity-aware hand mesh estimation model, which can incorporate the identity
information represented by the intrinsic shape parameters of the subject. We
demonstrate the importance of the identity information by comparing the
proposed identity-aware model to a baseline that treats the subject anonymously.
Furthermore, to handle the use case where the test subject is unseen, we
propose a novel personalization pipeline to calibrate the intrinsic shape
parameters using only a few unlabeled RGB images of the subject. Experiments on
two large scale public datasets validate the state-of-the-art performance of
our proposed method.
Comment: ECCV 2022. GitHub: https://github.com/deyingk/PersonalizedHandMeshEstimatio
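The personalization step fuses evidence from a few frames of the same subject into one set of intrinsic shape parameters. As a toy stand-in for the paper's optimization-based pipeline (the fusion rule here, a coordinate-wise median, is my assumption for illustration, chosen for robustness to occasional bad frames):

```python
import numpy as np

def calibrate_shape(per_frame_betas):
    """Fuse noisy per-frame shape estimates of one subject into a single
    identity code via a coordinate-wise median, which tolerates a few
    badly estimated frames."""
    return np.median(np.stack(per_frame_betas), axis=0)
```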
Diffeomorphic Image Registration with Neural Velocity Field
Diffeomorphic image registration, offering smooth transformation and topology
preservation, is required in many medical image analysis tasks. Traditional
methods impose certain modeling constraints on the space of admissible
transformations and use optimization to find the optimal transformation between
two images. Specifying the right space of admissible transformations is
challenging: the registration quality can be poor if the space is too
restrictive, while the optimization can be hard to solve if the space is too
general. Recent learning-based methods, which use deep neural networks to
learn the transformation directly, achieve fast inference but struggle with
accuracy, owing to the difficulty of capturing small local deformations and
to limited generalization ability. Here we propose a new optimization-based method named
DNVF (Diffeomorphic Image Registration with Neural Velocity Field) which
utilizes a deep neural network to model the space of admissible transformations.
A multilayer perceptron (MLP) with sinusoidal activation function is used to
represent the continuous velocity field and assigns a velocity vector to every
point in space, providing the flexibility of modeling complex deformations as
well as the convenience of optimization. Moreover, we propose a cascaded image
registration framework (Cas-DNVF) by combining the benefits of both
optimization and learning based methods, where a fully convolutional neural
network (FCN) is trained to predict the initial deformation, followed by DNVF
for further refinement. Experiments on two large-scale 3D MR brain scan
datasets demonstrate that our proposed methods significantly outperform the
state-of-the-art registration methods.
Comment: WACV 202
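The core construction above, an MLP with sinusoidal activations assigning a velocity to every point, then integrated to form the deformation, can be sketched in a few lines of numpy. This is a toy analogue (hidden width, initialization scales, and forward-Euler integration are illustrative choices, not the paper's exact setup):

```python
import numpy as np

class SirenVelocityField:
    """Tiny MLP with a sinusoidal activation mapping a 2D point to a 2D
    velocity vector: a toy analogue of a neural velocity field."""
    def __init__(self, hidden=16, dim=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(size=(dim, hidden))
        self.b1 = rng.normal(size=hidden)
        self.W2 = rng.normal(scale=0.05, size=(hidden, dim))

    def __call__(self, x):
        return np.sin(x @ self.W1 + self.b1) @ self.W2

def integrate(field, points, steps=8, T=1.0):
    """Forward-Euler integration of points along the stationary velocity
    field; the resulting map approximates a smooth deformation."""
    x = points.astype(float).copy()
    dt = T / steps
    for _ in range(steps):
        x = x + dt * field(x)
    return x
```

In the actual method the field's weights are what gets optimized per image pair, with the registration loss backpropagated through the integration.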
Hybrid-CSR: Coupling Explicit and Implicit Shape Representation for Cortical Surface Reconstruction
We present Hybrid-CSR, a geometric deep-learning model that combines explicit
and implicit shape representations for cortical surface reconstruction.
Specifically, Hybrid-CSR begins with explicit deformations of template meshes
to obtain coarsely reconstructed cortical surfaces, based on which the oriented
point clouds are estimated for the subsequent differentiable Poisson surface
reconstruction. By doing so, our method unifies explicit (oriented point
clouds) and implicit (indicator function) cortical surface reconstruction.
Compared to explicit representation-based methods, our hybrid approach is
better suited to capturing detailed structures, while compared with implicit
representation-based methods, our method can be topology-aware thanks to
end-to-end training with a mesh-based deformation module. In order to address
topology defects, we propose a new topology correction pipeline that relies on
optimization-based diffeomorphic surface registration. Experimental results on
three brain datasets show that our approach surpasses existing implicit and
explicit cortical surface reconstruction methods on numeric metrics of
accuracy, regularity, and consistency.
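The bridge between the explicit and implicit stages above is an oriented point cloud derived from the deformed template mesh. A minimal numpy sketch of that conversion via area-weighted per-vertex normals (a standard construction, used here as an illustration rather than the paper's exact procedure):

```python
import numpy as np

def vertex_normals(verts, faces):
    """Area-weighted per-vertex normals for a triangle mesh: turning an
    explicitly deformed surface into the oriented point cloud that a
    (differentiable) Poisson surface reconstruction consumes."""
    n = np.zeros_like(verts, dtype=float)
    for f in faces:
        a, b, c = verts[f[0]], verts[f[1]], verts[f[2]]
        fn = np.cross(b - a, c - a)  # length is twice the face area
        for v in f:
            n[v] += fn
    lengths = np.linalg.norm(n, axis=1, keepdims=True)
    return n / np.clip(lengths, 1e-12, None)
```

Accumulating unnormalized face normals weights each face by its area, so large faces dominate the orientation of the vertices they touch.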