Applying machine learning to automated segmentation of head and neck tumour volumes and organs at risk on radiotherapy planning CT and MRI scans
Radiotherapy is one of the main ways head and neck cancers are treated;
radiation is used to kill cancerous cells and prevent their recurrence.
Complex treatment planning is required to ensure that enough radiation is given
to the tumour, and as little as possible to sensitive structures (known as organs at risk),
such as the eyes and nerves, which might otherwise be damaged. This is
especially difficult in the head and neck, where multiple at-risk structures often
lie in extremely close proximity to the tumour. It can take radiotherapy experts
four hours or more to pick out the important areas on planning scans (known as
segmentation).
This research will focus on applying machine learning algorithms to automatic
segmentation of head and neck planning computed tomography (CT) and
magnetic resonance imaging (MRI) scans in University College London
Hospital NHS Foundation Trust patients. Through analysis of the images used
in radiotherapy, DeepMind Health will investigate improvements in the efficiency of
cancer treatment pathways.
A Distributed Trust Framework for Privacy-Preserving Machine Learning
When training a machine learning model, it is standard procedure for the
researcher to have full knowledge of both the data and model. However, this
engenders a lack of trust between data owners and data scientists. Data owners
are justifiably reluctant to relinquish control of private information to third
parties. Privacy-preserving techniques distribute computation in order to
ensure that data remains in the control of the owner while learning takes
place. However, architectures distributed amongst multiple agents introduce an
entirely new set of security and trust complications. These include data
poisoning and model theft. This paper outlines a distributed infrastructure
that facilitates peer-to-peer trust between distributed agents
collaboratively performing a privacy-preserving workflow. Our outlined
prototype sets industry gatekeepers and governance bodies as credential
issuers. Before participating in the distributed learning workflow, malicious
actors must first negotiate valid credentials. We detail a proof of concept
using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable
Credentials (VCs) to establish a distributed trust architecture during a
privacy-preserving machine learning experiment. Specifically, we utilise secure
and authenticated DID communication channels in order to facilitate a federated
learning workflow related to mental health care data.
Comment: To be published in the proceedings of the 17th International Conference on Trust, Privacy and Security in Digital Business - TrustBus202
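The paper's trust layer (Hyperledger Aries, DIDs, VCs) is beyond a short sketch, but the federated learning idea it protects — raw data never leaves its owner, only model updates are shared and averaged — can be illustrated in a few lines. This is a hypothetical linear-regression example, not the paper's code:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a data owner's private shard
    (linear regression for illustration); X and y never leave the owner."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, owners):
    """Each owner trains locally; only weight vectors are shared and averaged."""
    updates = [local_update(weights, X, y) for X, y in owners]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
owners = []
for _ in range(3):  # three data owners, each holding a private shard
    X = rng.normal(size=(50, 2))
    owners.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, owners)
# w converges towards true_w although no shard was ever pooled centrally
```

In the paper's setting, each participant in such a round would additionally have to present a verifiable credential before its updates are accepted.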
Automated analysis of retinal imaging using machine learning techniques for computer vision
There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases.
Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the "back" of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations, and the changing pattern of chronic diseases, create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges.
This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients.
Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.
AI-powered transmitted light microscopy for functional analysis of live cells
Transmitted light microscopy can readily visualize the morphology of living cells. Here, we introduce artificial-intelligence-powered transmitted light microscopy (AIM) for subcellular structure identification and labeling-free functional analysis of live cells. AIM provides accurate images of subcellular organelles; allows identification of cellular and functional characteristics (cell type, viability, and maturation stage); and facilitates live cell tracking and multimodality analysis of immune cells in their native form without labeling.
Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study
BACKGROUND: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires manual time to delineate radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice and with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, reflecting different centers and countries to model training. CONCLUSIONS: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. 
With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
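The surface Dice similarity coefficient introduced above scores agreement between contour surfaces at a distance tolerance, rather than volumetric overlap. A minimal point-cloud sketch of the idea (the published metric operates on voxelised organ surfaces with tolerances in millimetres; the data here is illustrative):

```python
import numpy as np

def surface_dice(surf_a, surf_b, tol):
    """Surface Dice at tolerance tol: the fraction of each surface lying
    within tol of the other surface, combined symmetrically."""
    def nearest_dists(p, q):
        # distance from each point in p to its nearest point in q
        return np.min(np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1), axis=1)
    within = np.sum(nearest_dists(surf_a, surf_b) <= tol) \
           + np.sum(nearest_dists(surf_b, surf_a) <= tol)
    return within / (len(surf_a) + len(surf_b))

# Two circles differing in radius by 0.5: every point of one surface is
# exactly 0.5 from the other, so agreement depends entirely on the tolerance.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
a, b = 10.0 * circle, 10.5 * circle

surface_dice(a, b, tol=1.0)  # -> 1.0 (all surface points within tolerance)
surface_dice(a, b, tol=0.1)  # -> 0.0 (the 0.5 gap exceeds the tolerance)
```

This captures why the metric better reflects the clinical task: a small, uniform offset between contours is either acceptable or not depending on the clinically chosen tolerance, whereas volumetric Dice would report high overlap regardless.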
International evaluation of an AI system for breast cancer screening.
Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful [1]. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives [2]. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.
Professor Fiona Gilbert receives funding from the National Institute for Health Research (Senior Investigator award)
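The AUC-ROC margins quoted above can be read as differences in the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch of that computation, on made-up scores rather than the study's data:

```python
def auc_roc(scores, labels):
    """AUC-ROC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical scores from a model and a single reader on six cases
labels = [1, 1, 1, 0, 0, 0]
model  = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
reader = [0.7, 0.6, 0.2, 0.6, 0.4, 0.3]

auc_roc(model, labels)   # -> 0.888...  (8 of 9 positive/negative pairs ranked correctly)
auc_roc(reader, labels)  # -> 0.611...  (5.5 of 9, including one tie)
```

The study's comparison is of course over thousands of cases and multiple readers; this only shows what the reported quantity measures.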
Instrumental Perspectivism: Is AI Machine Learning Technology like NMR Spectroscopy?
The question, "Will science remain human?" expresses a worry that deep learning algorithms will replace scientists in making crucial judgments of classification and inference, and that something crucial will be lost if that happens. Ever since the introduction of telescopes and microscopes, humans have relied on technologies to "extend" beyond human sensory perception in acquiring scientific knowledge. In this paper I explore whether the ways in which new learning technologies "extend" beyond human cognitive aspects of science can be treated instrumentally. I will consider the norms for determining the reliability of a detection instrument, nuclear magnetic resonance spectroscopy, in predicting models of protein atomic structure. Can the same norms that apply in that case be used to judge the reliability of artificial intelligence deep learning algorithms?
Clinically applicable deep learning for diagnosis and referral in retinal disease
The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting.