Clinical Evaluation of Denture Retention by Multi-suction Cup and Denture Adhesive
AIM: The aim of the study was to compare the retention provided by two modalities, a multi-suction cup denture and a denture adhesive, and to evaluate the change in retention over different time intervals.
PATIENTS AND METHODS: Twelve completely edentulous patients were selected. Each patient received two dentures: one conventional denture and one fitted with multi-suction cups. Retention was measured with a universal testing machine at insertion and at 15 min, 30 min, 1 h, 2 h, and 4 h; all values were recorded in newtons. Statistical analysis was carried out using two-way analysis of variance with a post hoc Tukey's test (a sketch of this analysis follows the abstract).
RESULTS: Retention was higher with the denture adhesive than with the multi-suction cup denture, and the change in retention over time was not statistically significant.
CONCLUSION: The denture adhesive showed better clinical retention and simpler laboratory procedures than the multi-suction cup denture.
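
The abstract above reports a two-way analysis of variance with a post hoc Tukey's test on retention forces. A minimal Python sketch of that kind of analysis, assuming pandas and statsmodels are available; the data-frame layout, column names, and placeholder retention values are illustrative, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Long-format table: one row per patient x denture modality x time point
# (placeholder values standing in for the measured retention forces in N).
rows = []
for patient in range(1, 13):                          # twelve edentulous patients
    for modality in ("adhesive", "suction_cup"):
        for t in ("0", "15", "30", "60", "120", "240"):  # minutes after insertion
            rows.append({"patient": patient,
                         "modality": modality,
                         "time_min": t,
                         "retention_N": rng.normal(10.0, 2.0)})
df = pd.DataFrame(rows)

# Two-way ANOVA: retention as a function of modality, time, and their interaction.
model = ols("retention_N ~ C(modality) * C(time_min)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post hoc Tukey HSD comparing the time points.
print(pairwise_tukeyhsd(df["retention_N"], df["time_min"]))

A design in which each patient is measured repeatedly would often also be handled with a repeated-measures or mixed model; the plain two-way ANOVA here simply mirrors the method named in the abstract.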
Automated Grain Boundary (GB) Segmentation and Microstructural Analysis in 347H Stainless Steel Using Deep Learning and Multimodal Microscopy
Austenitic 347H stainless steel offers superior mechanical properties and
corrosion resistance required for extreme operating conditions such as high
temperature. The change in microstructure due to composition and process
variations is expected to impact material properties. Identifying
microstructural features such as grain boundaries thus becomes an important
task in the process-microstructure-properties loop. Applying convolutional
neural network (CNN) based deep-learning models is a powerful technique to
detect features from material micrographs in an automated manner. Manual
labeling of the images for the segmentation task poses a major bottleneck for
generating training data and labels in a reliable and reproducible way within a
reasonable timeframe. In this study, we attempt to overcome such limitations by
utilizing multi-modal microscopy to generate labels directly instead of manual
labeling. We use scanning electron microscopy (SEM) images of 347H stainless steel as training data and electron backscatter diffraction (EBSD) micrographs as pixel-wise labels, framing grain boundary detection as a semantic segmentation task. We demonstrate that, despite the instrumentation drift introduced during data collection between the two microscopy modes, this method performs comparably to similar segmentation tasks that used manual labeling.
Additionally, we find that naïve pixel-wise segmentation results in small gaps and missing boundaries in the predicted grain boundary map. By incorporating topological information during model training, the connectivity of the grain boundary network and the segmentation performance are improved (a simplified training sketch in this spirit follows the abstract).
Finally, our approach is validated by accurate computation of downstream quantities: the underlying grain morphology distributions, which are the ultimate quantities of interest for microstructural characterization.
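
A minimal, hypothetical sketch of the pipeline described above, assuming PyTorch: SEM images serve as inputs and EBSD-derived binary boundary masks as pixel-wise labels for semantic segmentation. The tiny CNN, tensor shapes, and random placeholder data are illustrative, and a soft-Dice term stands in for the paper's unspecified topology-aware objective rather than reproducing it.

import torch
import torch.nn as nn

class BoundaryNet(nn.Module):
    # Tiny fully convolutional network mapping a 1-channel SEM image to a
    # per-pixel grain-boundary logit map (an illustrative stand-in).
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.body(x)

def soft_dice_loss(logits, target, eps=1e-6):
    # Overlap-based term that penalizes broken or missing boundaries;
    # used here as a simple surrogate for a topology-aware loss.
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

model = BoundaryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Placeholder batch: SEM intensity images and binary boundary masks that
# would, in practice, be derived from registered EBSD orientation maps.
sem_batch = torch.rand(4, 1, 128, 128)
ebsd_mask = (torch.rand(4, 1, 128, 128) > 0.9).float()

for step in range(20):
    logits = model(sem_batch)
    loss = bce(logits, ebsd_mask) + 0.5 * soft_dice_loss(logits, ebsd_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

For the downstream grain morphology distributions, a predicted boundary map can be turned into grain statistics with connected-component analysis; a short sketch assuming scikit-image, with a synthetic boundary map in place of a real prediction:

import numpy as np
from skimage import measure

# Hypothetical binary prediction: 1 = grain boundary pixel, 0 = grain interior.
boundary_map = np.zeros((128, 128), dtype=np.uint8)
boundary_map[::32, :] = 1
boundary_map[:, ::32] = 1

# Grains are the connected regions of non-boundary pixels.
grains = measure.label(boundary_map == 0, connectivity=1)
areas = [region.area for region in measure.regionprops(grains)]
print("grain count:", len(areas), "mean grain area (px):", float(np.mean(areas)))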