A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm
Biofilm is a formation of microbial material on tooth substrata. Several
methods to quantify dental biofilm coverage have recently been reported in the
literature, but at best they provide a semi-automated approach to
quantification, with significant input from a human grader that carries the
grader's bias about what constitutes foreground, background, biofilm, and tooth.
Additionally, human assessment indices limit the resolution of the
quantification scale; most commercial scales use five levels of quantification
for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current
state-of-the-art techniques in automatic plaque quantification fail to make
their way into practical applications owing to their inability to incorporate
human input to handle misclassifications. This paper proposes a new interactive
method for biofilm quantification in quantitative light-induced fluorescence
(QLF) images of canine teeth that is independent of the perceptual bias of the
grader. The method partitions a QLF image into segments of uniform texture and
intensity called superpixels; every superpixel is statistically modeled as a
realization of a single 2D Gaussian Markov random field (GMRF) whose parameters
are estimated; the superpixel is then assigned to one of three classes
(background, biofilm, tooth substratum) based on the training set of data. The
quantification results show a high degree of consistency and precision. At the
same time, the proposed method gives pathologists full control to post-process
the automatic quantification by flipping misclassified superpixels to a
different state (background, tooth, biofilm) with a single click, providing
greater usability than simply marking the boundaries of biofilm and tooth as
done by current state-of-the-art methods.
Comment: 10 pages, 7 figures, Journal of Biomedical and Health Informatics,
2014. Keywords: Biomedical imaging; Calibration; Dentistry; Estimation; Image
segmentation; Manuals; Teeth.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758338&isnumber=636350
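As a rough illustration of the final step, the per-superpixel classification can be sketched as a nearest-mean rule over estimated GMRF parameters, with a one-click flip for post-processing. The feature layout and class means below are hypothetical placeholders, not the paper's trained values:

```python
import math

# Hypothetical training-set class means in GMRF-parameter feature space.
# Each feature vector might summarize, e.g., the estimated field variance
# and an interaction coefficient for a superpixel (illustrative values).
CLASS_MEANS = {
    "background": (0.10, 0.05),
    "biofilm":    (0.60, 0.30),
    "tooth":      (0.90, 0.10),
}

def classify_superpixel(features):
    """Assign a superpixel to the class with the nearest mean feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CLASS_MEANS, key=lambda c: dist(features, CLASS_MEANS[c]))

def flip_superpixel(labels, idx, new_class):
    """One-click post-processing: re-assign a misclassified superpixel."""
    labels = list(labels)
    labels[idx] = new_class
    return labels
```

For example, `classify_superpixel((0.58, 0.28))` falls nearest the biofilm mean, and a grader who disagrees can flip that single superpixel without redrawing any boundary.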
One Step before 3D Printing—Evaluation of Imaging Software Accuracy for 3-Dimensional Analysis of the Mandible: A Comparative Study Using a Surface-to-Surface Matching Technique
Abstract: The accuracy of 3D reconstructions of the craniomaxillofacial region using cone beam
computed tomography (CBCT) is important for the morphological evaluation of specific anatomical
structures. Moreover, an accurate segmentation process is fundamental for the physical reconstruction
of the anatomy (3D printing) when a preliminary simulation of the therapy is required. In this
regard, the objective of this study is to evaluate the accuracy of four different types of software for the
semiautomatic segmentation of the mandibular jaw compared to manual segmentation, used as a
gold standard. Twenty cone beam computed tomography (CBCT) scans were segmented in the
present study with a manual approach (Mimics) and four semi-automatic approaches
(Invesalius, ITK-Snap, Dolphin 3D, Slicer 3D). The accuracy of semi-automatic segmentation was
evaluated: (1) by comparing the mandibular volumes obtained with semi-automatic 3D rendering and
manual segmentation and (2) by deviation analysis between the two mandibular models. An analysis
of variance (ANOVA) was used to evaluate differences in mandibular volumetric recordings and for
a deviation analysis among the different software types used. Linear regression was also performed
between manual and semi-automatic methods. No significant differences were found in the total
volumes among the obtained 3D mandibular models (Mimics = 40.85 cm³, ITK-Snap = 40.81 cm³,
Invesalius = 40.04 cm³, Dolphin 3D = 42.03 cm³, Slicer 3D = 40.58 cm³). High correlations were found
between the semi-automatic segmentation and manual segmentation approach, with R coefficients
ranging from 0.960 to 0.992. According to the deviation analysis, the mandibular models obtained
with ITK-Snap showed the highest matching percentage (Tolerance A = 88.44%, Tolerance B = 97.30%),
while those obtained with Dolphin 3D showed the lowest matching percentage (Tolerance A = 60.01%,
Tolerance B = 87.76%) (p < 0.05). Colour-coded maps showed that the area of greatest mismatch
between semi-automatic and manual segmentation was the condylar region and the region proximate
to the dental roots. Despite the fact that the semi-automatic segmentation of the mandible showed,
in general, high reliability and high correlation with the manual segmentation, caution should be
taken when evaluating the morphological and dimensional characteristics of the condyles either on
CBCT-derived digital models or physical models (3D printing).
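The two evaluation steps above, correlating manual and semi-automatic volumes and computing a tolerance-based matching percentage from per-point deviations, can be sketched as follows. The data passed in are illustrative, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two volume series,
    e.g. manual vs. semi-automatic mandibular volumes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def matching_percentage(deviations_mm, tolerance_mm):
    """Share of surface points whose manual-vs-semi-automatic deviation
    falls within a given tolerance, as in the deviation analysis."""
    within = sum(1 for d in deviations_mm if abs(d) <= tolerance_mm)
    return 100.0 * within / len(deviations_mm)
```

A perfectly linear relationship between the two methods would give r = 1.0; the study's reported R values of 0.960 to 0.992 indicate the semi-automatic volumes track the manual ones closely, while the tolerance percentages separate the software packages.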
MSFormer: A Skeleton-multiview Fusion Method For Tooth Instance Segmentation
Recently, deep learning-based tooth segmentation methods have been limited by
the expensive and time-consuming processes of data collection and labeling.
Achieving high-precision segmentation with limited datasets is critical. A
viable solution to this entails fine-tuning pre-trained multiview-based models,
thereby enhancing performance with limited data. However, relying solely on
two-dimensional (2D) images for three-dimensional (3D) tooth segmentation can
produce suboptimal outcomes because of occlusion and deformation, i.e.,
incomplete and distorted shape perception. To improve this fine-tuning-based
solution, this paper advocates 2D-3D joint perception. The fundamental
challenge in employing 2D-3D joint perception with limited data is that the
3D-related inputs and modules must follow a lightweight policy instead of using
huge 3D data and parameter-rich modules that require extensive training data.
Following this lightweight policy, this paper selects skeletons as the 3D
inputs and introduces MSFormer, a novel method for tooth segmentation. MSFormer
incorporates two lightweight modules into existing multiview-based models: a
3D-skeleton perception module to extract 3D perception from skeletons and a
skeleton-image contrastive learning module to obtain the 2D-3D joint perception
by fusing both multiview and skeleton perceptions. The experimental results
reveal that MSFormer paired with large pre-trained multiview models achieves
state-of-the-art performance, requiring only 100 training meshes. Furthermore,
the segmentation accuracy is improved by 2.4%-5.5% with the increasing volume
of training data.
Comment: Under review.
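The skeleton-image contrastive learning module presumably aligns each skeleton embedding with its paired multiview embedding. A minimal InfoNCE-style sketch in plain Python, with hypothetical embeddings (a real implementation would use a tensor library and learned encoders):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(skeleton_emb, multiview_emb, temperature=0.1):
    """Contrastive loss: pull each skeleton embedding toward its paired
    multiview embedding (same index) and push it away from the other
    multiview embeddings in the batch."""
    losses = []
    for i, s in enumerate(skeleton_emb):
        logits = [dot(s, m) / temperature for m in multiview_emb]
        top = max(logits)  # subtract max for numerical stability
        denom = sum(math.exp(l - top) for l in logits)
        # cross-entropy with the i-th entry as the positive pair
        losses.append(-(logits[i] - top) + math.log(denom))
    return sum(losses) / len(losses)
```

Correctly paired embeddings drive the loss toward zero, while mismatched pairs are penalized, which is the mechanism that fuses skeleton and multiview perceptions.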
3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
Teeth localization, segmentation, and labeling from intra-oral 3D scans are
essential tasks in modern dentistry to enhance dental diagnostics, treatment
planning, and population-based studies on oral health. However, developing
automated algorithms for teeth analysis presents significant challenges due to
variations in dental anatomy, imaging protocols, and limited availability of
publicly accessible data. To address these challenges, the 3DTeethSeg'22
challenge was organized in conjunction with the International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022,
with a call for algorithms tackling teeth localization, segmentation, and
labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans
from 900 patients was prepared, and each tooth was individually annotated by a
human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this
dataset. In this study, we present the evaluation results of the 3DTeethSeg'22
challenge. The 3DTeethSeg'22 challenge code can be accessed at:
https://github.com/abenhamadou/3DTeethSeg22_challenge
Comment: 29 pages, MICCAI 2022 Singapore, Satellite Event, Challenge
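Challenge evaluations of this kind typically score per-tooth overlap and label agreement between predictions and the hybrid annotations. A minimal sketch under that assumption (these metrics are illustrative, not the challenge's official definitions):

```python
def tooth_iou(pred_cells, gt_cells):
    """IoU between the predicted and ground-truth sets of mesh cell
    indices for one tooth instance."""
    pred, gt = set(pred_cells), set(gt_cells)
    union = pred | gt
    return len(pred & gt) / len(union) if union else 0.0

def labeling_accuracy(pred_labels, gt_labels):
    """Fraction of teeth assigned the correct label (e.g. FDI notation),
    comparing predictions and annotations tooth by tooth."""
    correct = sum(p == g for p, g in zip(pred_labels, gt_labels))
    return correct / len(gt_labels)
```

In practice a benchmark would aggregate such scores over all 1800 scans and rank the six submitted algorithms on the combined localization, segmentation, and labeling performance.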
EXPERIMENTAL STUDY ON LIP AND SMILE DETECTION
This paper presents a lip and smile detection method based on the normalized RGB chromaticity diagram. The method employs the popular Viola-Jones detector to locate the face. To avoid false positives, an eye detector is introduced in the detection stage: only face candidates with detected eyes are accepted as faces. Once the face is detected, the lip region is localized using a simple geometric rule. Red-color thresholding based on the normalized RGB chromaticity diagram is then applied to extract the lip, and a projection technique is employed to detect the smile state. In the experiments, the proposed method achieves a lip detection rate of 97% and a smile detection rate of 94%.
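The chromaticity-based red threshold described above can be sketched as follows; the threshold values here are illustrative assumptions, not the paper's tuned parameters:

```python
def normalized_rgb(pixel):
    """Map an (R, G, B) pixel to normalized chromaticity (r, g),
    removing overall intensity so the red test is lighting-invariant."""
    r_, g_, b_ = pixel
    s = r_ + g_ + b_
    if s == 0:
        return 0.0, 0.0
    return r_ / s, g_ / s

def is_lip_pixel(pixel, r_min=0.45, g_max=0.33):
    """Red-dominance test in the chromaticity diagram.
    The r_min / g_max thresholds are hypothetical, for illustration."""
    r_, g_ = normalized_rgb(pixel)
    return r_ >= r_min and g_ <= g_max
```

A reddish pixel such as (200, 80, 80) passes the test, while a gray pixel such as (100, 100, 100) maps to r = g = 1/3 and is rejected; summing the accepted pixels row- and column-wise gives the projection profiles used for the smile state.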
Automatic Generation of Facial Expression Using Triangular Geometric Deformation
Abstract: This paper presents an image deformation algorithm and constructs an automatic facial expression generation system that generates new facial expressions from a face in a neutral state. After the user inputs a neutral face image into the system, the system separates the candidate facial areas from the image background by skin-color segmentation. It then uses morphological operations to remove noise and to capture the organs of facial expression, such as the eyes, mouth, eyebrows, and nose. The feature control points are labeled according to the feature points (FPs) defined by MPEG-4. After the deformation expression is designated, the system also adds image correction points based on the obtained FP coordinates. The FPs are used as image deformation units through triangular segmentation. Each triangle is split into two vectors, and the triangle points are regarded as linear combinations of these two vectors, with the coefficients of the linear combinations carried over from the triangular vectors of the original image. The corresponding coordinates are then obtained to complete the image correction by image interpolation, generating the new expression. For the proposed deformation algorithm, 10 additional correction points are generated at the positions corresponding to the FPs obtained according to MPEG-4, and these correction points can be obtained within a very short operation time. Using a particular triangulation for deformation can extend the material area without narrowing the unwanted material area, thus saving the material-filling operation in some areas.
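The linear-combination step described above can be sketched as follows: a point is expressed in the two edge vectors of its source triangle, then rebuilt with the same coefficients in the deformed triangle. This is a minimal sketch of that geometric idea, not the authors' implementation:

```python
def warp_point(p, src_tri, dst_tri):
    """Express p as a linear combination of the two edge vectors of the
    source triangle, then rebuild it with the same coefficients in the
    deformed (destination) triangle."""
    (ax, ay), (bx, by), (cx, cy) = src_tri
    e1 = (bx - ax, by - ay)          # first edge vector of source triangle
    e2 = (cx - ax, cy - ay)          # second edge vector of source triangle
    px, py = p[0] - ax, p[1] - ay
    det = e1[0] * e2[1] - e1[1] * e2[0]
    # solve p - A = u*e1 + v*e2 by Cramer's rule
    u = (px * e2[1] - py * e2[0]) / det
    v = (e1[0] * py - e1[1] * px) / det
    (dax, day), (dbx, dby), (dcx, dcy) = dst_tri
    f1 = (dbx - dax, dby - day)      # edge vectors of deformed triangle
    f2 = (dcx - dax, dcy - day)
    return (dax + u * f1[0] + v * f2[0], day + u * f1[1] + v * f2[1])
```

Applying this per triangle of the FP triangulation, followed by image interpolation at the warped coordinates, yields the deformed expression; triangle vertices map exactly to their deformed positions, so adjacent triangles stay stitched together.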