
    Raman Spectroscopy Techniques for the Detection and Management of Breast Cancer

    Breast cancer has recently become the most common cancer worldwide, and with increased incidence comes increased pressure on health services to diagnose and treat many more patients. Mortality and survival rates for this disease are better than for other cancer types, due in part to the early diagnosis facilitated by screening programmes, including the National Health Service breast screening programme in the UK. Despite the programme's benefits, some patients have negative experiences in the form of false negative mammograms, overdiagnosis and subsequent overtreatment, and a small number of cancers are even induced by the use of ionising radiation. In addition, false positive mammograms cause a large number of unnecessary biopsies, which incur significant costs, both financial and in clinicians' time, and discourage patients from attending further screening. Improvement is also needed along the treatment pathway. Surgery is usually the first line of treatment for early breast cancer, with breast conserving surgery being the preferred option over mastectomy. This operation achieves the same outcome as mastectomy, removal of the tumour, while allowing the patient to retain the majority of their normal breast tissue for improved aesthetic and psychological results. Yet re-excision operations are often required when clear margins are not achieved, i.e. not all of the tumour is removed. This again has implications for cost and time, and increases the risk to the patient through additional surgery.
    Currently lacking in both the screening and surgical contexts is the ability to discern specific chemicals present in the breast tissue being assessed or removed. Specifically relevant to mammography is the presence of calcifications, whose chemistry holds information indicative of pathology that cannot be accessed through x-rays. In addition, the chemical composition of breast tumour tissue has been shown to differ from that of normal tissue in a variety of ways, one particular difference being a significant increase in water content. Raman spectroscopy is a rapid, non-ionising, non-destructive technique based on light scattering. It has been shown to discern between chemical types of calcification, to detect subtleties within their spectra that indicate the malignancy status of the surrounding tissue, and to differentiate between cancerous and normal breast tissue based on relative water content.
    This thesis presents work exploring deep Raman techniques to probe breast calcifications at depth within tissue, and using a high wavenumber Raman probe to discriminate tumour from normal tissue predominantly via changes in tissue water content. The ability of transmission Raman spectroscopy to detect different masses and distributions of calcified powder inclusions within tissue phantoms was tested, and a signal profile of a similar inclusion was elucidated through a tissue phantom of clinically relevant thickness. The technique was then applied to the measurement of clinical samples of bulk breast tissue from patients who gave informed consent, in an attempt to measure calcifications. Ex vivo specimens were also measured with a high wavenumber Raman probe, which found significant differences between tumour and normal tissue, largely due to water content, resulting in a classification model that achieved 77.1% sensitivity and 90.8% specificity. While calcifications were harder to detect in the ex vivo specimens, promising results were still achieved, potentially indicating a much more widespread influence of calcification in breast tissue, and obtaining useful signal from bulk human tissue is encouraging in itself. Consequently, this work demonstrates the potential value of both deep Raman techniques and high wavenumber Raman for future breast screening and tumour margin assessment methods.
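
    The sensitivity and specificity quoted above summarise a binary tumour-versus-normal classifier's confusion matrix. As a minimal illustration of how those two figures are computed (a generic sketch with hypothetical labels, not the thesis code):

```python
# Minimal sketch (not the thesis code): computing sensitivity and
# specificity for a binary tumour-vs-normal classifier from
# hypothetical predictions. Labels: 1 = tumour (positive), 0 = normal.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # fraction of tumour samples detected
    specificity = tn / (tn + fp)  # fraction of normal samples cleared
    return sensitivity, specificity

# Hypothetical example: 4 tumour and 4 normal spectra
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # prints (0.75, 0.75)
```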

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process: it is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable recent shift toward approaches that leverage only raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models on many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks move beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalises to real-world transformations. This begins to explain the differing empirical performance of self-supervised learners on different downstream tasks, and it showcases the advantage of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with the downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
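
    The contrastive learners discussed above are typically trained with an InfoNCE-style objective, where two augmented views of each image are pulled together and all other samples in the batch are pushed apart; the choice of augmentations then dictates which invariances the representation acquires. A minimal PyTorch sketch of the NT-Xent loss used by SimCLR-style methods (a generic illustration, not the thesis implementation):

```python
# Minimal sketch of the NT-Xent (InfoNCE) contrastive loss used by
# SimCLR-style self-supervised learners; illustrative only.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                       # (2N, 2N) similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                   # mask self-similarity
    # The positive for sample i is its other view at index (i + n) mod 2n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Hypothetical usage with random embeddings
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```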

    Medical Image Analysis using Deep Relational Learning

    In the past ten years, with the help of deep learning, especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images remains a challenging and under-studied problem. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We conduct experiments on the UCL Fetoscopy Placenta dataset, where our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
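
    A planar homography is the 3x3 transform the mosaicing network learns to predict between adjacent frames; warping one frame's points into the next frame's coordinates is what allows the frames to be stitched. A minimal numpy sketch of applying a homography to image points (illustrative only; the hierarchical estimator itself is learned, and the matrix below is hypothetical):

```python
# Minimal sketch of applying a 3x3 planar homography H to image points,
# the geometric relation a homography-estimation network predicts
# between adjacent frames. Illustrative only, not the thesis code.
import numpy as np

def warp_points(H, pts):
    """H: (3, 3) homography; pts: (N, 2) pixel coordinates."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# Hypothetical homography: small rotation plus translation
theta = np.deg2rad(5)
H = np.array([[np.cos(theta), -np.sin(theta), 10.0],
              [np.sin(theta),  np.cos(theta),  4.0],
              [0.0,            0.0,            1.0]])
print(warp_points(H, np.array([[0.0, 0.0], [100.0, 50.0]])))
```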

    Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review

    Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but traditional techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed using deep generative models to generate more realistic and diverse data that conform to the true distribution. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art in each of these models and discuss their potential for different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research in this field. Our goal is to provide a comprehensive review of the use of deep generative models for medical image augmentation and to highlight their potential for improving the performance of deep learning algorithms in medical image analysis.
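
    Of the three model families covered, the variational autoencoder has the simplest training objective: a reconstruction term plus a KL divergence that keeps the latent posterior close to a standard normal, so new synthetic samples can be drawn by decoding latent noise. A minimal PyTorch sketch of this objective (a generic illustration with made-up dimensions, not code from any surveyed paper):

```python
# Minimal sketch of the variational autoencoder (VAE) training objective,
# one of the three generative-model families covered in the review;
# generic illustration only, with hypothetical dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, dim=784, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)  # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x, reduction='sum')                    # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return rec + kld

model = TinyVAE()
x = torch.rand(4, 784)  # hypothetical flattened images
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar).item())
# New synthetic samples: decode latent noise, e.g. model.dec(torch.randn(1, 16))
```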

    Artificial Intelligence: Development and Applications in Neurosurgery

    The last decade has witnessed a significant increase in the relevance of artificial intelligence (AI) in neuroscience. Gaining prominence for its potential to revolutionize medical decision making, data analytics, and clinical workflows, AI is poised to be increasingly implemented into neurosurgical practice. However, certain considerations pose significant challenges to its immediate and widespread implementation. Hence, this chapter explores current developments in AI as they pertain to the field of clinical neuroscience, with a primary focus on neurosurgery. Also included is a brief discussion of important economic and ethical considerations related to the feasibility and implementation of AI-based technologies in the neurosciences, including future horizons such as the operational integration of human and non-human capabilities.

    Segmentation of Pathology Images: A Deep Learning Strategy with Annotated Data

    Cancer has significantly threatened human life and health for many years. In the clinic, histopathology image segmentation is the gold standard for evaluating patient prognosis and predicting treatment outcome. Manually labelling tumour regions in hundreds of high-resolution histopathological images is time-consuming and expensive for pathologists. Recently, advancements in hardware and computer vision have allowed deep-learning-based methods to become the mainstream approach to segmenting tumours automatically, significantly reducing the workload of pathologists. However, most current methods rely on large-scale labelled histopathological images. This research therefore studies label-effective tumour segmentation methods using deep-learning paradigms to relieve the annotation limitations.
    Chapter 3 proposes an ensemble framework for fully-supervised tumour segmentation. The performance of an individually trained network is usually limited by the significant morphological variance in histopathological images. We propose a fully-supervised ensemble fusion model that uses both shallow and deep U-Nets, trained with images of different resolutions and subsets of images, for robust prediction of tumour regions. Noise elimination is achieved with convolutional Conditional Random Fields. Two open datasets are used to evaluate the proposed method: the ACDC@LungHP challenge at ISBI2019 and the DigestPath challenge at MICCAI2019. With a Dice coefficient of 79.7%, the proposed method took third place in ACDC@LungHP; in DigestPath 2019, it achieves a Dice coefficient of 77.3%.
    Well-annotated images are an indispensable part of training fully-supervised segmentation strategies, yet large-scale histopathology images are rarely finely annotated in clinical practice: it is common for labels to be of poor quality or for only a few images to be manually marked by experts, so fully-supervised methods cannot perform well in these cases. Chapter 4 therefore proposes self-supervised contrastive learning for tumour segmentation, a framework designed to reduce label dependency. An innovative contrastive learning scheme is developed to represent tumour features from unlabelled images. Unlike a standard U-Net, the backbone is a patch-based segmentation network, and data augmentation and contrastive losses are applied to improve the discriminability of tumour features. A convolutional Conditional Random Field is used to smooth the output and eliminate noise. Three labelled and fourteen unlabelled images were collected from a private skin cancer dataset called BSS. Experimental results show that the proposed method achieves better tumour segmentation performance than other popular self-supervised methods; however, when evaluated on the same public dataset as Chapter 3, it struggles with fine-grained segmentation around tumour boundaries compared with our supervised method.
    Chapter 5 proposes a sketch-based weakly-supervised tumour segmentation method. To segment tumour regions precisely from coarse annotations, we propose a sketch-supervised method comprising a dual CNN-Transformer network and a global normalised class activation map. The CNN-Transformer network simultaneously models global and local tumour features, and with the global normalised class activation map, a gradient-based tumour representation can be obtained from the dual network predictions. We invited experts to mark fine and coarse annotations in the private BSS and the public PAIP2019 datasets to facilitate reproducible performance comparisons. On the BSS dataset, the proposed method achieves 76.686% IoU and 86.6% Dice scores, outperforming state-of-the-art methods; it also achieves a Dice gain of 8.372% over U-Net on the PAIP2019 dataset.
    In summary, the thesis presents three approaches to segmenting cancers from histology images: fully-supervised, self-supervised, and weakly-supervised methods. This research effectively segments tumour regions based on histopathological annotations and well-designed modules, and comprehensively demonstrates label-effective automatic histopathological image segmentation. Experimental results show that our methods achieve state-of-the-art segmentation performance on private and public datasets. In the future, we plan to integrate more tumour feature representation technologies with other medical modalities and apply them in clinical research.
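
    The Dice coefficient and IoU reported throughout are overlap measures between predicted and reference masks. As a minimal sketch of how both are computed (on small hypothetical binary masks, not the BSS or PAIP2019 data):

```python
# Minimal sketch of the Dice coefficient and IoU used to report the
# segmentation results above; illustrative only, computed on
# hypothetical binary masks.
import numpy as np

def dice_iou(pred, target):
    """pred, target: binary numpy arrays of the same shape."""
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_iou(pred, target))  # Dice = 2*2/(3+3) = 0.666..., IoU = 2/4 = 0.5
```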

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the tasks they perform. We also discuss the datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussion on future research directions to conclude this survey.
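
    Feature alignment, the first family of methods mentioned, often minimises a distribution-distance penalty between source and target features; maximum mean discrepancy (MMD) is one common choice. A minimal PyTorch sketch with a Gaussian kernel (a generic illustration, not taken from any surveyed method):

```python
# Minimal sketch of a maximum mean discrepancy (MMD) penalty, a common
# feature-alignment objective in unsupervised domain adaptation;
# generic illustration with a Gaussian kernel.
import torch

def gaussian_mmd(xs, xt, sigma=1.0):
    """xs: (Ns, D) source features; xt: (Nt, D) target features."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)       # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(xs, xs).mean() + kernel(xt, xt).mean() - 2 * kernel(xs, xt).mean()

# Hypothetical features from source and target encoders
xs, xt = torch.randn(32, 64), torch.randn(32, 64) + 0.5
print(gaussian_mmd(xs, xt).item())  # larger when the domains are misaligned
```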

    The stochastic digital human is now enrolling for in silico imaging trials -- Methods and tools for generating digital cohorts

    Randomized clinical trials, while often viewed as the highest evidentiary bar by which to judge the quality of a medical intervention, are far from perfect. In silico imaging trials are computational studies that seek to ascertain the performance of a medical device by collecting this information entirely via computer simulations. The benefits of in silico trials for evaluating new technology include significant resource and time savings, minimization of subject risk, the ability to study devices that are not achievable in the physical world, rapid and effective investigation of new technologies, and guaranteed representation from all relevant subgroups. To conduct in silico trials, digital representations of humans are needed. We review the latest developments in methods and tools for obtaining digital humans for in silico imaging studies. First, we introduce terminology and a classification of digital human models. Second, we survey available methodologies for generating digital humans with healthy and diseased status, and briefly examine the role of augmentation methods. Finally, we discuss the trade-offs of four approaches for sampling digital cohorts and the associated potential for study bias when selecting specific patient distributions.
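
    One straightforward way to sample a digital cohort while controlling for study bias is stratified sampling, drawing digital humans so that chosen subgroups match target prevalences. A minimal sketch under assumed inputs (the phantom library, subgroup labels, and fractions below are all hypothetical):

```python
# Minimal sketch of stratified sampling for assembling a digital cohort
# so that subgroups match target prevalences; a generic illustration of
# one cohort-sampling idea, with made-up subgroup labels.
import random

def stratified_cohort(models, target_fractions, n, seed=0):
    """models: list of (model_id, subgroup); target_fractions: subgroup -> fraction."""
    rng = random.Random(seed)
    cohort = []
    for group, frac in target_fractions.items():
        pool = [m for m, g in models if g == group]
        cohort += rng.sample(pool, round(frac * n))
    return cohort

# Hypothetical library of digital humans tagged by breast-density subgroup
library = [(f"phantom_{i}", random.choice(["dense", "fatty"])) for i in range(200)]
print(stratified_cohort(library, {"dense": 0.4, "fatty": 0.6}, n=10))
```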

    Performance Analysis of Clustering Algorithms in Brain Tumor Detection from PET Images

    Brain metastases remain fatal and challenging, and their early detection is imperative. With the advancement of non-invasive imaging techniques, positron emission tomography (PET), as a functional imaging modality, has been widely employed in oncological studies, including investigations of the pathophysiological mechanisms of tumors. However, manual analysis and integration of dynamic 4D PET images are challenging and inefficient, so automated segmentation is adopted to improve efficiency and accuracy. In recent years, clustering-based image segmentation has been gaining popularity for detecting tumors. This thesis applies three clustering-based algorithms to automatically identify and segment metastatic brain tumors from dynamic 4D PET images of mice. The clustering algorithms used include K-means and Gaussian mixture model clustering, in combination with principal component analysis and independent component analysis for pre-processing and connected component analysis for post-processing. The performance of the three clustering algorithms, in terms of execution time and accuracy, was evaluated by the Jaccard index and validated against time-activity curves. The results indicate that K-means is the best-performing of the three clustering methods when combined with independent component analysis, and that connected component analysis as a post-processing step significantly improves the performance of K-means clustering.
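
    A minimal sketch of the kind of pipeline described, on synthetic data standing in for the mouse dynamic PET volumes: K-means clustering of PCA-reduced voxel time-activity curves, connected-component post-processing, and a Jaccard score against a reference mask (illustrative only, not the thesis implementation):

```python
# Minimal sketch: K-means on PCA-reduced voxel time-activity curves,
# connected-component cleanup, and a Jaccard score against a reference
# mask. Synthetic data stand in for the dynamic PET images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.ndimage import label

rng = np.random.default_rng(0)
vol = rng.normal(1.0, 0.1, size=(20, 20, 20, 12))   # x, y, z, time
vol[8:12, 8:12, 8:12, :] += 2.0                     # hypothetical "tumour" uptake

tacs = vol.reshape(-1, vol.shape[-1])               # one time-activity curve per voxel
feats = PCA(n_components=3).fit_transform(tacs)     # reduce the temporal dimension
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
mask = labels.reshape(vol.shape[:3])
tumour = mask == mask[10, 10, 10]                   # cluster containing the hot region

comps, n = label(tumour)                            # connected-component cleanup:
sizes = np.bincount(comps.ravel()); sizes[0] = 0    # keep only the largest component
tumour = comps == sizes.argmax()

truth = np.zeros(vol.shape[:3], bool); truth[8:12, 8:12, 8:12] = True
jaccard = np.logical_and(tumour, truth).sum() / np.logical_or(tumour, truth).sum()
print(f"Jaccard index: {jaccard:.2f}")
```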
