372 research outputs found

    Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning

    Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or in a semi-automatic way, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. This automation of distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education. In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the results' accuracy. After finding a baseline method and having enlarged the dataset, we set out to eliminate the most prevalent types of error. To do so, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, separating the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth of additional pixels in check. Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans in an end-to-end trained manner, quickly enough to be used in interactive software.
    Our algorithm has been included in our group's virtual reality medical image visualisation software SpectoVR, with the plan that it will serve as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
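The BEM merge step described in the abstract above can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction of the general idea, not the thesis's actual implementation: the array layout, the threshold, and the merge rule are all assumptions.

```python
import numpy as np

def bem_merge(multiclass_probs, bone_prob, threshold=0.5):
    """Merge a binary bone/non-bone prediction with a multi-class one.

    multiclass_probs: (C+1, H, W) per-class scores, channel 0 = background.
    bone_prob: (H, W) probability that a voxel is bone.
    Where the binary branch detects bone, the background label is
    suppressed so one of the C distinct bone labels must win.
    """
    labels = multiclass_probs.argmax(axis=0)
    bone_mask = bone_prob >= threshold
    # Best non-background class per voxel (offset by 1 to skip background).
    best_bone_class = multiclass_probs[1:].argmax(axis=0) + 1
    # Voxels the binary branch calls bone but the multi-class branch
    # left as background: relabel with the best bone class.
    disagreement = bone_mask & (labels == 0)
    labels[disagreement] = best_bone_class[disagreement]
    # Voxels the binary branch calls non-bone are forced to background.
    labels[~bone_mask] = 0
    return labels
```

In this sketch the binary prediction acts as a veto on the background channel, which matches the abstract's claim that the two separately trained predictions are merged into a single labelling.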

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
    Comment: Under Review
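As an illustration of the feature-alignment family of UDA methods mentioned above, here is a minimal sketch of a CORAL-style statistic-matching penalty in NumPy. The function name and the choice of matching only second-order statistics are ours for illustration; individual surveyed methods differ in the statistics and distances they align.

```python
import numpy as np

def coral_loss(source_feats, target_feats):
    """CORAL-style alignment penalty: squared Frobenius distance between
    the feature covariances of a labeled-source batch and an unlabeled-
    target batch, each of shape (n_samples, d)."""
    d = source_feats.shape[1]
    cov_source = np.cov(source_feats, rowvar=False)
    cov_target = np.cov(target_feats, rowvar=False)
    return np.sum((cov_source - cov_target) ** 2) / (4 * d * d)
```

In practice such a penalty is minimized jointly with the supervised loss on the labeled domain, nudging the network to produce features whose distribution on the unlabeled domain matches that on the labeled one.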

    Artificial intelligence and automation in endoscopy and surgery

    Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems that assist procedures, leading to computer-assisted interventions that enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.

    Machine learning models for diagnosis and prognosis of Parkinson's disease using brain imaging: general overview, main challenges, and future directions

    Parkinson’s disease (PD) is a progressive and complex neurodegenerative disorder associated with age that affects motor and cognitive functions. As there is currently no cure, early diagnosis and accurate prognosis are essential to increase the effectiveness of treatment and control its symptoms. Medical imaging, specifically magnetic resonance imaging (MRI), has emerged as a valuable tool for developing support systems to assist in diagnosis and prognosis. The current literature aims to improve understanding of the disease’s structural and functional manifestations in the brain. By applying artificial intelligence to neuroimaging, such as deep learning (DL) and other machine learning (ML) techniques, previously unknown relationships and patterns can be revealed in this high-dimensional data. However, several issues must be addressed before these solutions can be safely integrated into clinical practice. This review provides a comprehensive overview of recent ML techniques analyzed for the automatic diagnosis and prognosis of PD in brain MRI. The main challenges in applying ML to medical diagnosis and its implications for PD are also addressed, including current limitations for safe translation into hospitals. These challenges are analyzed at three levels: disease-specific, task-specific, and technology-specific. Finally, potential future directions for each challenge and future perspectives are discussed.

    Foundational Models in Medical Imaging: A Comprehensive Survey and Future Vision

    Foundation models — large-scale, pre-trained deep-learning models adapted to a wide range of downstream tasks — have gained significant interest lately, and various deep-learning problems are undergoing a paradigm shift with the rise of these models. Trained on large-scale datasets to bridge the gap between different modalities, foundation models facilitate contextual reasoning, generalization, and prompt capabilities at test time. The predictions of these models can be adjusted for new tasks by augmenting the model input with task-specific hints called prompts, without requiring extensive labeled data and retraining. Capitalizing on the advances in computer vision, the medical imaging community has also shown growing interest in these models. To assist researchers in navigating this direction, this survey intends to provide a comprehensive overview of foundation models in the domain of medical imaging. Specifically, we initiate our exploration by providing an exposition of the fundamental concepts forming the basis of foundation models. Subsequently, we offer a methodical taxonomy of foundation models within the medical domain, proposing a classification system primarily structured around training strategies, while also incorporating additional facets such as application domains, imaging modalities, specific organs of interest, and the algorithms integral to these models. Furthermore, we emphasize the practical use case of some selected approaches and then discuss the opportunities, applications, and future directions of these large-scale pre-trained models for analyzing medical images. In the same vein, we address the prevailing challenges and research pathways associated with foundational models in medical imaging. These encompass the areas of interpretability, data management, computational requirements, and the nuanced issue of contextual comprehension.
    Comment: The paper is currently in the process of being prepared for submission to MI

    A survey, review, and future trends of skin lesion segmentation and classification

    The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently indicated increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool to dermatologists to reduce the challenges encountered or associated with manual inspection. This article aims to provide a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the methods for the development of CAD systems. These ways include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentations, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We investigate a variety of performance-enhancing approaches, including ensembling and post-processing. We also discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as the potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.

    A Deep Learning Approach to Evaluating Disease Risk in Coronary Bifurcations

    Cardiovascular disease represents a large burden on modern healthcare systems, requiring significant resources for patient monitoring and clinical interventions. It has been shown that the blood flow through coronary arteries, shaped by the artery geometry unique to each patient, plays a critical role in the development and progression of heart disease. However, popular and well-tested cardiovascular risk models such as Framingham and QRISK3 are unable to take these differences into account when predicting disease risk. Over the last decade, medical imaging and image processing have advanced to the point that non-invasive high-resolution 3D imaging is routinely performed for any patient suspected of coronary artery disease. This allows for the construction of virtual 3D models of the coronary anatomy, and in-silico analysis of blood flow within the coronaries. However, several challenges still exist which preclude large-scale patient-specific simulations, necessary for incorporating haemodynamic risk metrics as part of disease risk prediction. In particular, despite a large amount of available coronary medical imaging, extraction of the structures of interest from medical images remains a manual and laborious task. There is significant variation in how geometric features of the coronary arteries are measured, which makes comparisons between different studies difficult. Modelling blood flow conditions in the coronary arteries likewise requires manual preparation of the simulations and significant computational cost. This thesis aims to solve these challenges.
    The "Automated Segmentation of Coronary Arteries" (ASOCA) work establishes a benchmark dataset of coronary arteries and their associated 3D reconstructions, which is currently the largest openly available dataset of coronary artery models and offers a wide range of applications, such as computational modelling, 3D printing for experiments, developing and testing medical devices such as stents, and Virtual Reality applications for education and training. An automated computational modelling workflow is developed to set up, run and postprocess simulations on the Left Main Bifurcation and calculate relevant shape metrics. A convolutional neural network model is developed to replace the computational fluid dynamics process; it can predict haemodynamic metrics such as wall shear stress in minutes, compared to several hours using traditional computational modelling, reducing the computation and labour cost involved in performing such simulations.

    Digital solutions for self-monitoring physical health and wellbeing during pregnancy

    Perinatal disorders were among the top ten causes of global burden of disease in 2019. Better access to perinatal healthcare would help to reduce preventable morbidity. The increase in access to and use of smartphones presents a unique opportunity to transform and improve how women monitor their own health during pregnancy. This thesis aims to investigate the quality and usage of currently available pregnancy digital health tools for self-monitoring and to validate a newly developed, custom-built pregnancy self-monitoring tool. In Chapter 2, the most popular, commercially available pregnancy apps and their monitoring tools were evaluated for their quality by conducting a pregnancy app scoping review. In Chapters 3 and 4, pregnant women and healthcare professionals were surveyed and interviewed to better understand their usage of and attitudes towards digital health, as well as their thoughts about two hypothetical app features (a direct patient-to-healthcare professional communication tool and a novel body measurement tool). In Chapter 5, we test the performance of a first-generation, custom-built body measurement tool (which we called BMT-1) by comparing the digital measurements extracted from photos taken on smartphones to physical measurements taken with measuring tape. The performance of BMT-1 was also assessed on a longitudinal set of digitally constructed pregnancy models. Collectively, the findings from Chapters 2, 3 and 4 provide evidence that there is both opportunity and scope for the development of new digital health tools to support and enhance the quality of care during pregnancy. The results from Chapter 5 indicate that BMT-1 successfully extracted body measurements from both photos and digitally constructed pregnancy models, though it would require refinement before it could be launched. Finally, in Chapter 6, I outline how these findings could help to guide the design, development and implementation of new pregnancy digital health tools.