
    Towards fully automated third molar development staging in panoramic radiographs

    Staging third molar development is commonly used for age assessment in sub-adults. Current staging techniques are, at most, semi-automated and rely on manual interactions prone to operator variability. The aim of this study was to fully automate the staging process by employing the full potential of deep learning, using convolutional neural networks (CNNs) in every step of the procedure. The dataset used to train the CNNs consisted of 400 panoramic radiographs (OPGs), with 20 OPGs per developmental stage per sex, staged in consensus between three observers. Transfer learning with pre-trained CNNs and data augmentation were used to mitigate the issues associated with a limited dataset. A three-step procedure was proposed and the results were validated using fivefold cross-validation. First, a CNN localized the geometrical center of the lower left third molar, around which a square region of interest (ROI) was extracted. Second, another CNN segmented the third molar within the ROI. Third, a final CNN used both the ROI and the segmentation to classify the third molar into its developmental stage. The geometrical center of the third molar was found with an average Euclidean distance of 63 pixels. Third molars were segmented with an average Dice score of 93%. Finally, the developmental stages were classified with an accuracy of 54%, a mean absolute error of 0.69 stages, and a linear weighted Cohen’s kappa coefficient of 0.79. The entire automated workflow took on average 2.72 s to compute, which is substantially faster than manual staging starting from the OPG. Taking the limited dataset size into account, this pilot study shows that the proposed fully automated approach yields promising results compared with manual staging. Funding: Internal Funds KU Leuven.
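
    The three-step procedure described above (localize the tooth center, crop a square ROI, segment, classify) can be sketched as a simple pipeline. The sketch below is illustrative only: it uses NumPy with stand-in callables for the three CNNs, and the 256-pixel ROI size and stub models are assumptions, not details taken from the study.

```python
import numpy as np

def extract_roi(opg, center, size=256):
    """Crop a square region of interest around a predicted tooth center,
    clamping the crop so it stays inside the radiograph."""
    h, w = opg.shape
    half = size // 2
    cy = int(np.clip(center[0], half, h - half))
    cx = int(np.clip(center[1], half, w - half))
    return opg[cy - half:cy + half, cx - half:cx + half]

def stage_pipeline(opg, locate, segment, classify, roi_size=256):
    """Three-step staging: localize center, crop ROI, segment, classify."""
    center = locate(opg)                      # step 1: (row, col) of tooth center
    roi = extract_roi(opg, center, roi_size)  # square crop around that center
    mask = segment(roi)                       # step 2: binary third-molar mask
    return classify(np.stack([roi, mask]))    # step 3: developmental stage

# Toy usage with stand-in models in place of the trained CNNs
opg = np.zeros((1000, 2000))
stage = stage_pipeline(
    opg,
    locate=lambda im: (500, 1500),
    segment=lambda roi: (roi > 0).astype(float),
    classify=lambda x: int(x.sum()) % 10,
)
```

The clamping in `extract_roi` mirrors the practical constraint that a fixed-size crop near the image border must be shifted inward rather than padded.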

    Magnetic resonance imaging for forensic age estimation in living children and young adults : a systematic review

    Background The use of magnetic resonance imaging (MRI) in forensic age estimation has been explored extensively during the past decade. Objective To synthesize the available MRI data for forensic age estimation in living children and young adults, and to provide a comprehensive overview that can guide age estimation practice and future research. Materials and Methods MEDLINE, Embase and Web of Science were searched. Additionally, cited and citing articles and study registers were searched. Two authors independently selected articles, conducted data extraction, and assessed risk of bias. Study populations including living subjects up to 30 years of age were considered. Results Fifty-five studies were included in the qualitative analysis and 33 in the quantitative analysis. Most studies suffered from bias, including relatively small European (Caucasian) populations, varying MR approaches, and varying staging techniques. Therefore, pooling of the age distribution data was not appropriate. Reproducibility of staging was markedly lower in clavicles than in any other anatomical structure. Age estimation performance was in line with that of the gold standard, which uses radiographs, with mean absolute errors ranging from 0.85 to 2.0 years. The proportion of correctly classified minors ranged from 65% to 91%. Multi-factorial age estimation performed better than estimation based on a single anatomical site. Conclusion More multi-factorial age estimation studies are necessary, together with studies testing whether the MRI data can safely be pooled. The current review results can guide future studies, help medical professionals decide on the preferred approach for specific cases, and help judicial professionals interpret the evidential value of age estimation results.
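
    The headline metrics in the review — mean absolute error of the estimated age and the proportion of correctly classified minors — are simple to compute. The sketch below is an illustrative NumPy implementation; the 18-year cutoff and the toy ages are assumptions, not data from the review.

```python
import numpy as np

def age_estimation_metrics(true_age, est_age, cutoff=18.0):
    """Mean absolute error of the age estimates, plus the proportion of
    true minors (below the cutoff) correctly classified as minors."""
    true_age = np.asarray(true_age, dtype=float)
    est_age = np.asarray(est_age, dtype=float)
    mae = float(np.mean(np.abs(est_age - true_age)))
    minors = true_age < cutoff
    correct = float(np.mean(est_age[minors] < cutoff)) if minors.any() else float("nan")
    return mae, correct

mae, p_minor = age_estimation_metrics(
    true_age=[14.0, 16.5, 17.9, 21.0, 25.0],
    est_age=[15.0, 16.0, 18.5, 20.0, 24.0],
)
# mae is 0.82 years; 2 of the 3 true minors are classified as minors
```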

    Improving the Clinical Use of Magnetic Resonance Spectroscopy for the Analysis of Brain Tumours using Machine Learning and Novel Post-Processing Methods

    Magnetic Resonance Spectroscopy (MRS) provides unique and clinically relevant information for the assessment of several diseases. However, using the currently available tools, MRS processing and analysis is time-consuming and requires profound expert knowledge. For these two reasons, MRS has not yet gained general acceptance as a mainstream diagnostic technique, and the currently available clinical tools have seen little progress in recent years. MRS provides localized chemical information non-invasively, making it a valuable technique for the assessment of various diseases and conditions, including brain, prostate and breast cancer, and metabolic diseases affecting the brain. In brain cancer, MRS is normally used for: (1.) differentiation between tumors and non-cancerous lesions, (2.) tumor typing and grading, (3.) differentiation between tumor progression and radiation necrosis, and (4.) identification of tumor infiltration. Despite the value of MRS for these tasks, susceptibility differences associated with tissue-bone and tissue-air interfaces, as well as with the presence of post-operative paramagnetic particles, affect the quality of brain MR spectra and consequently reduce their clinical value. Therefore, proper quality management of MRS acquisition and processing is essential to achieve unambiguous and reproducible results. In this thesis, special emphasis was placed on this topic. This thesis addresses some of the major problems that limit the use of MRS in brain tumors and focuses on the use of machine learning for the automation of the MRS processing pipeline and for assisting the interpretation of MRS data. Three main topics were investigated: (1.) automatic quality control of MRS data, (2.) identification of spectroscopic patterns characteristic of different tissue types in brain tumors, and (3.) development of a new approach for the detection of tumor-related changes in GBM using MRSI data.
The first topic tackles the problem of MR spectra being frequently affected by signal artifacts that obscure their clinical information content. Manual identification of these artifacts is subjective and is only practically feasible for single-voxel acquisitions, and only if the user has extensive experience with MRS. Therefore, the automatic distinction between data of good and bad quality is an essential step for the automation of MRS processing and routine reporting. The second topic addresses the difficulties that arise while interpreting MRS results: the interpretation requires expert knowledge, which is not available at every site. Consequently, the development of methods that enable the easy comparison of new spectra with known spectroscopic patterns is of utmost importance for clinical applications of MRS. The third and last topic focuses on the use of MRSI information for the detection of tumor-related effects in the periphery of brain tumors. Several research groups have shown that MRSI information enables the detection of tumor infiltration in regions where structural MRI appears normal. However, many of the approaches described in the literature make use of only a very limited amount of the total information contained in each MR spectrum. Thus, a better way to exploit MRSI information should enable an improvement in the detection of tumor borders, and consequently improve the treatment of brain tumor patients. The development of the methods described was made possible by a novel software tool for the combined processing of MRS and MRI: SpectrIm. This tool, which is currently distributed as part of the jMRUI software suite (www.jmrui.eu), underpins all of the methods presented and was one of the main outputs of the doctoral work. Overall, this thesis presents different methods that, when combined, enable the full automation of MRS processing and assist the analysis of MRS data in brain tumors.
By allowing clinical users to obtain more information from MRS with less effort, this thesis contributes to the transformation of MRS into an important clinical tool that may be available whenever its information is relevant to patient management.
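
    The first topic above — automatic quality control of MRS data — is described only at a high level. As a minimal illustration of the underlying idea (not the machine-learning method developed in the thesis), the sketch below flags a spectrum as unusable when its peak signal-to-noise ratio, estimated from a signal-free region, falls below a threshold; the noise region, threshold, and test signals are all assumptions.

```python
import numpy as np

def spectrum_quality_ok(spectrum, noise_slice=slice(0, 100), snr_threshold=5.0):
    """Accept a magnitude spectrum when the ratio of its largest peak to the
    noise standard deviation (from a signal-free region) exceeds a threshold."""
    spectrum = np.asarray(spectrum, dtype=float)
    noise_sd = np.std(spectrum[noise_slice])
    snr = spectrum.max() / noise_sd if noise_sd > 0 else np.inf
    return bool(snr >= snr_threshold)

# A flat, low-SNR pattern vs. the same pattern with one strong peak added
t = np.linspace(0, 20 * np.pi, 1024)
noise_like = np.abs(np.sin(t))
peaky = noise_like.copy()
peaky[600] += 20.0
```

A learned classifier, as used in the thesis, can pick up artifact shapes that a single SNR number misses; this threshold rule only shows where automation slots into the pipeline.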

    eXplainable Artificial Intelligence (XAI) in aging clock models

    eXplainable Artificial Intelligence (XAI) is a rapidly progressing field of machine learning that aims to unravel the predictions of complex models. XAI is especially needed in sensitive applications, e.g. in health care, where diagnoses, recommendations and treatment choices may rely on decisions made by artificial intelligence systems. AI approaches have also become widely used in aging research, in particular in developing biological clock models and identifying biomarkers of aging and age-related diseases. However, the potential of XAI here has yet to be fully appreciated. We discuss the application of XAI to the development of "aging clocks" and present a comprehensive analysis of the literature, categorized by its focus on particular physiological systems.
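
    As a concrete, deliberately simple instance of the model-agnostic explanations the abstract refers to, permutation importance measures how much a clock's error grows when a single biomarker is shuffled. The linear "clock", its coefficients, and the synthetic biomarkers below are illustrative assumptions only.

```python
import numpy as np

def permutation_importance(model, X, y, rng):
    """Increase in mean absolute error when each feature column is shuffled:
    a model-agnostic score of how much each biomarker drives predictions."""
    base = np.mean(np.abs(model(X) - y))
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean(np.abs(model(Xp) - y)) - base)
    return np.array(scores)

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                      # three synthetic biomarkers
age = 50 + 10 * X[:, 0] + 1 * X[:, 1]              # biomarker 2 is irrelevant
clock = lambda X: 50 + 10 * X[:, 0] + 1 * X[:, 1]  # a toy linear aging clock
imp = permutation_importance(clock, X, age, rng)
# imp ranks biomarker 0 highest; shuffling the unused biomarker changes nothing
```

The same scheme applies unchanged to a deep-learning clock, which is what makes it attractive for the black-box models discussed in the abstract.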

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively over the last few decades for disease diagnosis and monitoring, as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase the morbidity and mortality of treatment. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy treatment.
These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions, registration of consecutive respiratory phases, and estimation of elasticity, ventilation, and texture features yield discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today’s clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that functionality features can be accurately extracted for the lung fields. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functionality features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues’ elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
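
    As an illustration of the ventilation descriptor mentioned above — the Jacobian of the deformation field — the sketch below computes the voxel-wise determinant of F = I + ∇u for a 2-D displacement field with NumPy. The field and grid size are toy assumptions; the dissertation works with 3-D fields from 4D-CT registration.

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 2-D displacement field
    disp[c, y, x] (c = 0: y-component, c = 1: x-component).
    det(F) > 1 indicates local expansion (inhalation), < 1 compression."""
    dy_dy, dy_dx = np.gradient(disp[0])
    dx_dy, dx_dx = np.gradient(disp[1])
    # Deformation gradient F = I + grad(u); 2x2 determinant per voxel
    return (1 + dy_dy) * (1 + dx_dx) - dy_dx * dx_dy

# Uniform 10% expansion along both axes: det(F) = 1.1 * 1.1 = 1.21 everywhere
y, x = np.mgrid[0:32, 0:32].astype(float)
disp = np.stack([0.1 * y, 0.1 * x])
jac = jacobian_determinant(disp)
```

The strain components used for the elasticity features come from the same gradients: the symmetric part of ∇u, e.g. e_yx = 0.5 * (dy_dx + dx_dy).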

    Advanced Sensing and Image Processing Techniques for Healthcare Applications

    This Special Issue aims to attract the latest research and findings in the design, development and experimentation of healthcare-related technologies. This includes, but is not limited to, using novel sensing, imaging, data processing, machine learning, and artificially intelligent devices and algorithms to assist/monitor the elderly, patients, and the disabled population.

    Deep Learning Techniques for Multi-Dimensional Medical Image Analysis

