    Analysis of Hydroquinone Content in Whitening Cream by UV-Vis Spectrophotometry

    Hydroquinone is an ingredient often added to whitening creams to lighten the skin. Under BPOM Regulation No. 23 of 2019, hydroquinone is banned as a bleaching or lightening agent in cosmetics; it may only be used in nail products at a level of 0.02% and in hair dye oxidizers at a maximum level of 0.3%. Hydroquinone whitens the skin through its toxic effect on melanocytes and by inhibiting melanin formation. This study aimed to identify and quantify hydroquinone in whitening creams circulating in Pontianak City. The study was descriptive and non-experimental: 10 samples of whitening cream circulating in Pontianak City were obtained by purposive sampling. Hydroquinone was identified qualitatively using 1% FeCl3 and Benedict's reagents, and its levels were determined quantitatively with a UV-Vis spectrophotometer. Method validation yielded the calibration equation y = 0.0251x − 0.0029, linearity with a correlation coefficient (r) of 0.9999, a recovery of 100.15%, a precision (%RSD) of 0.4899%, an LOD of 0.2742 ppm, and an LOQ of 0.9140 ppm. Of the 10 samples, 8 tested positive for hydroquinone, at levels of 3.1630% (sample A), 2.7322% (B), 2.5335% (C), 2.3216% (D), 2.5118% (G), 3.3171% (H), 2.4409% (I), and 0.5571% (J).
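
    The calibration line above converts measured absorbance into concentration. As a minimal sketch of how such a line is applied, assuming Beer-Lambert linearity, the Python snippet below back-calculates the hydroquinone percentage in a cream; the absorbance, dilution factor, sample mass, and extract volume are hypothetical, not values from the study.

        # Applying the reported calibration line y = 0.0251x - 0.0029
        # (y = absorbance, x = concentration in ppm) to a diluted extract.
        SLOPE, INTERCEPT = 0.0251, -0.0029

        def concentration_ppm(absorbance: float) -> float:
            """Invert the calibration line to obtain concentration in ppm (ug/mL)."""
            return (absorbance - INTERCEPT) / SLOPE

        def percent_in_cream(absorbance: float, dilution_factor: float,
                             sample_mass_g: float, extract_volume_ml: float) -> float:
            """Back-calculate the w/w percentage in the original cream."""
            ppm = concentration_ppm(absorbance) * dilution_factor  # ug/mL in extract
            total_mg = ppm * extract_volume_ml / 1000.0            # total hydroquinone
            return total_mg / (sample_mass_g * 1000.0) * 100.0     # percent w/w

        # Hypothetical measurement: 0.5 g of cream extracted into 10 mL, diluted 100x.
        print(percent_in_cream(0.42, dilution_factor=100,
                               sample_mass_g=0.5, extract_volume_ml=10.0))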

    Load-bearing capacity of screw-retained CAD/CAM-produced titanium implant frameworks (I-Bridge®2) before and after cyclic mechanical loading

    Implant-supported screw-retained fixed dental prostheses (FDPs) produced by CAD/CAM have been introduced in recent years for the rehabilitation of partially or totally edentulous jaws. However, there is a lack of data on their long-term mechanical characteristics. OBJECTIVE: The aim of this study was to investigate the failure mode and the influence of extended cyclic mechanical loading on the load-bearing capacity of these frameworks. MATERIAL AND METHODS: Ten five-unit FDP frameworks simulating a free-end situation in the mandible were manufactured according to the I-Bridge®2 concept (I-Bridge®2, Biomain AB, Helsingborg, Sweden) and each was screw-retained on three differently angulated Astra Tech implants (30° buccal angulation/0° angulation/30° lingual angulation). One half of the specimens was tested for static load-bearing capacity without any further treatment (control), whereas the other half underwent five million cycles of mechanical loading with 100 N as the upper load limit (test). All specimens were loaded until failure in a universal testing machine with an occlusal force applied at the pontics. Load-displacement curves were recorded and the failure mode was analyzed macro- and microscopically. The statistical analysis was performed using a t-test (p=0.05). RESULTS: All specimens survived cyclic mechanical loading and no obvious failure could be observed. After cyclic mechanical loading, the load-bearing capacity decreased from 8,496 ± 196 N (control) to 7,592 ± 901 N (test); this difference was not statistically significant (p=0.060). The failure mode was almost identical in all specimens: large deformations of the framework at the implant connection area were evident. CONCLUSION: The load-bearing capacity of the I-Bridge®2 frameworks is much higher than clinically relevant occlusal forces, even with considerably angulated implants. However, the performance under functional loading in vivo depends on additional aspects, and further studies are needed to address them.
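
    The group comparison above can be reproduced from the reported summary statistics alone. The sketch below uses SciPy's two-sample t-test from summary data with five specimens per group (ten frameworks split in half); the equal-variance assumption is ours, but the resulting p-value matches the reported p = 0.060.

        # Two-sample t-test from means, SDs, and group sizes (n = 5 per group).
        from scipy.stats import ttest_ind_from_stats

        t_stat, p_value = ttest_ind_from_stats(mean1=8496, std1=196, nobs1=5,
                                               mean2=7592, std2=901, nobs2=5)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p ~ 0.060, not significant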

    Fully-automated Body Composition Analysis in Routine CT Imaging Using 3D Semantic Segmentation Convolutional Neural Networks

    Body tissue composition is a long-known biomarker with high diagnostic and prognostic value in cardiovascular, oncological and orthopaedic diseases, but also in rehabilitation medicine and drug dosage. The aim of this study was to develop a fully automated, reproducible and quantitative 3D volumetry of body tissue composition from standard CT examinations of the abdomen, in order to offer such valuable biomarkers as part of routine clinical imaging. To this end, an in-house dataset of 40 CTs for training and 10 CTs for testing was fully annotated on every fifth axial slice with five different semantic body regions: abdominal cavity, bones, muscle, subcutaneous tissue, and thoracic cavity. Multi-resolution U-Net 3D neural networks were employed for segmenting these body regions, followed by subclassification of adipose tissue and muscle using known Hounsfield unit limits. The Sørensen–Dice score averaged over all semantic regions was 0.9553, and the intra-class correlation coefficients for the subclassified tissues were above 0.99. Our results show that fully automated body composition analysis on routine CT imaging can provide stable biomarkers across the whole abdomen, and not just on L3 slices, which is historically the reference location for analysing body composition in the clinical routine.
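
    The subclassification step can be illustrated with simple Hounsfield unit (HU) windowing inside the segmented regions. The HU ranges below are common literature values for adipose tissue and muscle; the exact limits used by the authors are not given in this abstract, so treat them as assumptions.

        # HU-window subclassification of adipose tissue and muscle within a
        # previously segmented semantic region (e.g., the abdominal cavity).
        import numpy as np

        ADIPOSE_HU = (-190, -30)   # assumed adipose window
        MUSCLE_HU = (-29, 150)     # assumed muscle window

        def subclassify(ct_hu: np.ndarray, region: np.ndarray):
            """Return boolean adipose and muscle masks inside a region mask."""
            adipose = region & (ct_hu >= ADIPOSE_HU[0]) & (ct_hu <= ADIPOSE_HU[1])
            muscle = region & (ct_hu >= MUSCLE_HU[0]) & (ct_hu <= MUSCLE_HU[1])
            return adipose, muscle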

    Clinical applications of PET/MRI: current status and future perspectives

    Fully integrated positron emission tomography (PET)/magnetic resonance imaging (MRI) scanners have been available for a few years, and since then the numbers of scanner installations and published studies have been growing. While the feasibility of integrated PET/MRI has been demonstrated for many clinical and preclinical imaging applications, the applications in which PET/MRI provides a clear benefit over the established reference standards now need to be identified. The current data show that applications demanding multiparametric imaging capabilities, high soft tissue contrast and/or a lower radiation dose seem to benefit from this novel hybrid modality. Promising results have been obtained in whole-body cancer staging in non-small cell lung cancer and in multiparametric tumor imaging. Furthermore, integrated PET/MRI appears to have added value in oncologic applications requiring high soft tissue contrast, such as the assessment of liver metastases of neuroendocrine tumors or prostate cancer imaging. A potential benefit of integrated PET/MRI has also been demonstrated for cardiac (i.e., myocardial viability, cardiac sarcoidosis) and brain (i.e., glioma grading, Alzheimer’s disease) imaging, where MRI is the predominant modality. The lower radiation dose compared to PET/computed tomography will be particularly valuable in the imaging of young patients with potentially curable diseases. However, further clinical studies and technical innovation in scanner hardware and software are needed, and agreements on adequate reimbursement of PET/MRI examinations need to be reached. Finally, the translation of new PET tracers from preclinical evaluation into clinical applications is expected to foster the entire field of hybrid PET imaging, including PET/MRI.

    k-strip: A novel segmentation algorithm in k-space for the application of skull stripping

    Objectives: To present a novel deep learning-based skull stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich k-space. Materials and Methods: Using two datasets from different institutions with a total of 36,900 MRI slices, we trained a deep learning-based model to work directly with the complex raw k-space data. Skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain was used as the ground truth. Results: On both datasets, the outputs were very similar to the ground truth (Dice scores of 92%-98% and Hausdorff distances of under 5.5 mm). Results on slices above the eye region reach Dice scores of up to 99%, while accuracy drops in regions around and below the eyes, with partially blurred output. The output of k-strip often smoothed edges at the demarcation to the skull. Binary masks are created with an appropriate threshold. Conclusion: With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
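
    The distinctive point of k-strip is that it consumes complex k-space data rather than magnitude images. The sketch below only illustrates the domain transform involved, assuming a 2D slice and NumPy's FFT routines; it does not reproduce the network itself.

        # Round trip between image space and complex k-space for one MRI slice.
        import numpy as np

        image = np.random.rand(256, 256)                     # stand-in for an MRI slice
        kspace = np.fft.fftshift(np.fft.fft2(image))         # complex k-space data
        magnitude, phase = np.abs(kspace), np.angle(kspace)  # phase is preserved

        recon = np.fft.ifft2(np.fft.ifftshift(kspace)).real
        assert np.allclose(image, recon)                     # lossless round trip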

    Open-Source Skull Reconstruction with MONAI

    We present a deep learning-based approach for skull reconstruction for MONAI, pre-trained on the MUG500+ skull dataset. The implementation follows the MONAI contribution guidelines; hence, it can easily be tried out, used, and extended by MONAI users. The primary goal of this paper lies in the investigation of open-sourcing code and pre-trained deep learning models under the MONAI framework. Nowadays, open-sourcing software, and especially (pre-trained) deep learning models, has become increasingly important. Over the years, medical image analysis has experienced a tremendous transformation. Over a decade ago, algorithms had to be implemented and optimized in low-level programming languages like C or C++ to run in a reasonable time on a desktop PC, which was not as powerful as today's computers. Nowadays, users have high-level scripting languages like Python, frameworks like PyTorch and TensorFlow, and a sea of public code repositories at hand. As a result, implementations that required thousands of lines of C or C++ code in the past can now be scripted in a few lines and, in addition, executed in a fraction of the time. Taking this a level higher, the Medical Open Network for Artificial Intelligence (MONAI) framework tailors medical imaging research to an even more convenient process, which can boost and push the whole field. The MONAI framework is a freely available, community-supported, open-source, PyTorch-based framework that also enables researchers to provide their contributions, with pre-trained models, to others. Code and pre-trained weights for skull reconstruction are publicly available at: https://github.com/Project-MONAI/research-contributions/tree/master/SkullRe
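
    In the spirit of the contribution described above, loading a MONAI network with pre-trained weights takes only a few lines. The architecture parameters and checkpoint filename below are illustrative placeholders, not the actual configuration of the skull reconstruction model.

        # Instantiate a 3D U-Net from MONAI and load hypothetical pre-trained weights.
        import torch
        from monai.networks.nets import UNet

        model = UNet(
            spatial_dims=3,              # volumetric input
            in_channels=1,
            out_channels=1,
            channels=(16, 32, 64, 128),  # assumed feature sizes
            strides=(2, 2, 2),
        )
        model.load_state_dict(torch.load("skull_reconstruction.pt"))  # hypothetical file
        model.eval()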

    Fully automated preoperative liver volumetry incorporating the anatomical location of the central hepatic vein

    The precise preoperative calculation of functional liver volumes is essential prior to major liver resections, as well as for the evaluation of a suitable donor for living donor liver transplantation. The aim of this study was to develop a fully automated, reproducible, and quantitative 3D volumetry of the liver from standard CT examinations of the abdomen as part of routine clinical imaging. To this end, an in-house dataset of 100 venous-phase CT examinations for training and 30 venous-phase ex-house CT examinations with a slice thickness of 5 mm for testing and validation were fully annotated with the right and left liver lobes. Multi-resolution U-Net 3D neural networks were employed for segmenting these liver regions. Sørensen–Dice coefficients of 0.9726 ± 0.0058, 0.9639 ± 0.0088, and 0.9223 ± 0.0187 and mean volume differences of 32.12 ± 19.40 ml, 22.68 ± 21.67 ml, and 9.44 ± 27.08 ml were achieved compared to the standard of reference (SoR) annotations of the liver, right lobe, and left lobe, respectively. Our results show that fully automated 3D volumetry of the liver on routine CT imaging can provide reproducible, quantitative, fast and accurate results without needing any examiner in the preoperative work-up for hepatobiliary surgery, and especially for living donor liver transplantation.
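
    The two evaluation figures reported above, Dice overlap and volume difference, are straightforward to compute from binary masks. The sketch below assumes NumPy boolean arrays; only the 5 mm slice thickness is taken from the abstract, the in-plane spacing is illustrative.

        # Dice coefficient and volume difference between predicted and reference masks.
        import numpy as np

        def dice(pred: np.ndarray, ref: np.ndarray) -> float:
            """Sørensen–Dice coefficient of two boolean masks."""
            return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

        def volume_diff_ml(pred: np.ndarray, ref: np.ndarray,
                           spacing_mm=(0.8, 0.8, 5.0)) -> float:
            """Absolute volume difference in ml; in-plane spacing is assumed."""
            voxel_ml = np.prod(spacing_mm) / 1000.0
            return abs(int(pred.sum()) - int(ref.sum())) * voxel_ml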

    Structural Similarity based Anatomical and Functional Brain Imaging Fusion

    Multimodal medical image fusion combines contrasting features from two or more input imaging modalities into a single fused image. One of the pivotal clinical applications of medical image fusion is the merging of anatomical and functional modalities for fast diagnosis of malignant tissues. In this paper, we present a novel end-to-end unsupervised learning-based Convolutional Neural Network (CNN) for fusing the high- and low-frequency components of MRI-PET grayscale image pairs, publicly available from ADNI, exploiting the Structural Similarity Index (SSIM) as the loss function during training. We then apply color coding for the visualization of the fused image by quantifying the contribution of each input image in terms of the partial derivatives of the fused image. We find that our fusion and visualization approach results in better visual perception of the fused image, while also comparing favorably to previous methods on various quantitative assessment metrics.
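
    The SSIM loss mentioned above rewards structural agreement between the fused image and each input. Below is a compact, global (single-window) form of SSIM, assuming images scaled to [0, 1] and the usual constants from Wang et al.; practical implementations compute it locally over a sliding Gaussian window.

        # Global SSIM between two images; 1 - SSIM serves as a training loss.
        import numpy as np

        def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
            c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            return ((2 * mx * my + c1) * (2 * cov + c2)) / \
                   ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

        # loss = 1.0 - ssim_global(fused, reference)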

    Prediction of left lobe hypertrophy after right lobe radioembolization of the liver using a clinical data model with external validation

    In cirrhotic patients with hepatocellular carcinoma (HCC), right-sided radioembolization (RE) with yttrium-90-loaded microspheres is an established palliative therapy and can be considered a “curative intention” treatment when aiming for sequential tumor resection. To become a surgical candidate, hypertrophy of the left liver lobe to a future liver remnant (FLR) of > 40% is mandatory; such hypertrophy can develop after RE. The amount of radiation-induced shrinkage of the right lobe and compensatory hypertrophy of the left lobe is difficult for clinicians to predict. This study aimed to use machine learning to predict left-lobe liver hypertrophy in patients with HCC and cirrhosis scheduled for right-lobe RE, with external validation. The results revealed that machine learning can accurately predict relative and absolute volume changes of the left liver lobe after right-lobe RE. This prediction algorithm could help to estimate the chances of conversion from palliative RE to curative major hepatectomy following significant FLR hypertrophy.
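
    The FLR criterion driving the study is a simple ratio: the left lobe must exceed 40% of total liver volume before a right-sided resection. A minimal sketch with illustrative volumes follows; only the 40% cut-off comes from the abstract.

        # Future liver remnant (FLR) as a percentage of total liver volume.
        def flr_percent(left_lobe_ml: float, right_lobe_ml: float) -> float:
            return left_lobe_ml / (left_lobe_ml + right_lobe_ml) * 100.0

        pre = flr_percent(left_lobe_ml=450, right_lobe_ml=1100)   # before RE
        post = flr_percent(left_lobe_ml=640, right_lobe_ml=880)   # after hypertrophy
        print(f"FLR: {pre:.1f}% -> {post:.1f}%, resectable: {post > 40}")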