Computational intelligence margin models for radiotherapeutic cancer treatment
The derivation of margins for use in external beam radiotherapy involves a complex balance between ensuring adequate tumour dose coverage that will lead to cure of the cancer whilst sufficiently sparing the surrounding organs at risk (OARs). The treatment of cancer using ionising radiation is currently witnessing unprecedented levels of new treatment techniques and equipment being introduced. These new treatment strategies, with improved imaging during treatment, aim at improved radiation dose conformity to dynamic targets and better sparing of healthy tissues. However, with the adoption of these new techniques, the continued use of the recommended statistical-model-based margin formulations to calculate treatment margins is being questioned more than ever before.

To derive margins for treatment planning that address these shortcomings, this study applied fuzzy logic and neural network techniques to the PTV margin problem in novel ways. As an extension of this work, a new hybrid fuzzy network technique was also adopted for margin derivation, a novel application that required new rule formulations and rule-base manipulations. The margin models developed in this study combined the radiotherapy errors with their radiobiological effects, a combination that was previously difficult to establish using mathematical methods; this was achieved using fuzzy rules and neural network input layers.

An advantage of the neural network procedure was that fewer computational steps were needed to calculate the final result, whereas the fuzzy-based techniques required a significant number of iterative computational steps, including the definition of the fuzzy rules and membership functions, prior to computation of the final result. An advantage of the fuzzy techniques was their ability to use fewer data points to deduce the relationship between the output and input parameters; in contrast, the neural network model requires a large amount of training data.

The previously stated limitations of currently recommended statistical techniques were addressed by the fuzzy and neural network models. A major advantage of the computational intelligence methods in this study is that they allow the calculation of patient-specific margins. Radiotherapy planning currently relies on 'one size fits all' class solutions for margins at each tumour site, and with the large variability in patient physiology these margins may not be suitable in some cases. The models from this study can be applied to other treatment sites, including brain, lung and gastric tumours.
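The fuzzy approach described above (rules plus membership functions, then defuzzification to a crisp margin) can be illustrated with a minimal sketch. The membership breakpoints, rule base, and consequent margins below are illustrative assumptions, not the thesis's models:

```python
# Hypothetical sketch of a fuzzy-rule PTV margin model (illustrative values only).
# Inputs: systematic setup error Sigma and random error sigma, in millimetres.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_margin(Sigma, sigma):
    # Fuzzify each error into 'small'/'large' sets (assumed breakpoints, mm).
    small_S, large_S = tri(Sigma, -2, 0, 4), tri(Sigma, 2, 6, 10)
    small_s, large_s = tri(sigma, -2, 0, 4), tri(sigma, 2, 6, 10)
    # Each rule pairs an antecedent strength with a crisp consequent margin (mm).
    rules = [
        (min(small_S, small_s), 3.0),   # both errors small -> tight margin
        (min(small_S, large_s), 6.0),
        (min(large_S, small_s), 9.0),
        (min(large_S, large_s), 12.0),  # both errors large -> generous margin
    ]
    # Defuzzify with a weighted average of the rule consequents.
    w = sum(strength for strength, _ in rules)
    return sum(strength * margin for strength, margin in rules) / w if w else 0.0

print(round(fuzzy_margin(1.0, 1.0), 2))
```

With only four rules this already shows the appeal noted above: the relationship between errors and margin is encoded directly in human-readable rules rather than learned from a large training set.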
Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing
neighboring organs at risk (OARs). Increasingly complex treatment techniques
such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery
(SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been
developed to deliver doses more precisely to the target. While such
technologies have improved dose delivery, the implementation of intra-fraction
motion management to verify tumor position at the time of treatment has become
increasingly relevant. Recently, artificial intelligence (AI) has demonstrated
great potential for real-time tracking of tumors during treatment. However,
AI-based motion management faces several challenges including bias in training
data, poor transparency, difficult data collection, complex workflows and
quality assurance, and limited sample sizes. This review serves to present the
AI algorithms used for chest, abdomen, and pelvic tumor motion
management/tracking for radiotherapy and provide a literature summary on the
topic. We will also discuss the limitations of these algorithms and propose
potential improvements.
Comment: 36 pages, 5 figures, 4 tables
A systematic review of the applications of Expert Systems (ES) and machine learning (ML) in clinical urology.
Background: Testing a hypothesis for a 'factors-outcome effect' is a common quest, but standard statistical regression analysis tools are rendered ineffective by data contaminated with too many noisy variables. Expert Systems (ES) can provide an alternative methodology for analysing data to identify the variables with the highest correlation to the outcome. By applying their effective machine learning (ML) abilities, significant research time and costs can be saved. The study aims to systematically review the applications of ES in urological research and their methodological models for effective multivariate analysis. Their domains, development and validity will be identified.

Methods: The PRISMA methodology was applied to formulate an effective method for data gathering and analysis. The search included the seven most relevant information sources: WEB OF SCIENCE, EMBASE, BIOSIS CITATION INDEX, SCOPUS, PUBMED, Google Scholar and MEDLINE. Eligible articles were included if they applied one of the known ML models to a clear urological research question involving multivariate analysis. Only articles with pertinent research methods in ES models were included. The analysed data included the system model, applications, input/output variables, target user, validation, and outcomes. Both the ML models and the variable analysis were comparatively reported for each system.

Results: The search identified n = 1087 articles from all databases, of which n = 712 were eligible for examination against the inclusion criteria. A total of 168 systems were finally included and systematically analysed, demonstrating a recent increase in the uptake of ES in academic urology, in particular artificial neural networks with 31 systems. Most of the systems were applied in urological oncology (prostate cancer = 15, bladder cancer = 13), where diagnostic, prognostic and survival predictor markers were investigated. Due to the heterogeneity of the models and their statistical tests, a meta-analysis was not feasible.

Conclusion: ES offer effective ML potential, and their applications in research have demonstrated a valid model for multivariate analysis. The complexity of their development can challenge their uptake in urological clinics, whilst the limitation of the statistical tools in this domain has created a gap for further research studies. Integration of computer scientists in academic units has promoted the use of ES in clinical urological research.
Optimising outcomes for potentially resectable pancreatic cancer through personalised predictive medicine : the application of complexity theory to probabilistic statistical modeling
Survival outcomes for pancreatic cancer remain poor. Surgical resection with adjuvant therapy is the only potentially curative treatment, but for many people surgery is of limited benefit. Neoadjuvant therapy has emerged as an alternative treatment pathway; however, the evidence base surrounding the treatment of potentially resectable pancreatic cancer is highly heterogeneous and fraught with uncertainty and controversy.
This research seeks to engage with conjunctive theorising by avoiding simplification and abstraction, drawing on different kinds of data from multiple sources to move research towards a theory that can build a rich picture of pancreatic cancer management pathways as a complex system. The overall aim is to move research towards personalised realistic medicine by using personalised predictive modelling to facilitate better decision making and so optimise outcomes.
This research is theory driven and empirically focused from a complexity perspective. Combining operational and healthcare research methodology, and drawing on influences from the complementary paradigms of critical realism and systems theory, then enhancing their impact by using Cilliers' complexity theory 'lean ontology', an open-world ontology is held and both epistemic reality and judgmental relativity are accepted. The use of imperfect data within statistical simulation models is explored to attempt to expand our capabilities for handling emergence and uncertainty and to find other ways of relating to complexity within the field of pancreatic cancer research.
Markov and discrete-event simulation modelling uncovered new insights and added a further dimension to the current debate by demonstrating that superior treatment pathway selection depended on individual patient and tumour factors. A Bayesian Belief Network was developed that modelled the dynamic nature of this complex system to make personalised prognostic predictions across competing treatment pathways throughout the patient journey, facilitating better shared clinical decision making with an accuracy exceeding existing predictive models.
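The Markov-style pathway comparison can be illustrated with a toy cohort simulation. The states, monthly cycle structure, and transition probabilities below are invented placeholders for illustration, not the thesis's calibrated model:

```python
import random

# Toy Markov cohort sketch: compare two hypothetical treatment pathways by the
# mean number of monthly cycles spent alive over a fixed horizon.

def simulate(transitions, start="diagnosed", horizon=60, n=5000, seed=0):
    rng = random.Random(seed)
    total_alive = 0
    for _ in range(n):
        state = start
        for _ in range(horizon):           # monthly cycles
            if state == "dead":
                break
            total_alive += 1
            r, cum = rng.random(), 0.0
            for nxt, p in transitions[state]:
                cum += p
                if r < cum:
                    state = nxt
                    break
    return total_alive / n                 # mean months alive per patient

# Pathway A: upfront surgery; Pathway B: neoadjuvant therapy first.
# All probabilities are illustrative placeholders.
pathway_a = {
    "diagnosed": [("post_surgery", 0.9), ("dead", 0.1)],
    "post_surgery": [("post_surgery", 0.97), ("dead", 0.03)],
}
pathway_b = {
    "diagnosed": [("neoadjuvant", 0.98), ("dead", 0.02)],
    "neoadjuvant": [("post_surgery", 0.3), ("neoadjuvant", 0.65), ("dead", 0.05)],
    "post_surgery": [("post_surgery", 0.98), ("dead", 0.02)],
}
print(simulate(pathway_a), simulate(pathway_b))
```

Conditioning the transition probabilities on patient and tumour factors is what lets such a model show pathway superiority varying by individual, as the abstract describes.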
Estimating current and future demands for stereotactic ablative body radiotherapy (SABR) in the Australian lung cancer population
Stereotactic ablative body radiotherapy (SABR) is the current standard of care for inoperable
early-stage non-small cell lung carcinoma (NSCLC). It is a curative treatment option that offers
excellent survival rates through non-invasive out-patient visits. SABR can be offered to frail,
elderly patients and those with comorbidities or poor performance status, who may be ineligible
for surgery or radical radiotherapy and would otherwise be referred to palliative treatments or
(sometimes) left untreated. While strong evidence from randomised trials has supported
SABR use for peripherally located tumours (>2 cm from the proximal bronchial tree (PBT)),
treatment of central tumours with SABR remains controversial due to increased risks of severe
toxicities.
Determining the total demand for lung SABR, also known as the optimal rate of utilisation, is
an important step in ensuring adequate and efficient provision of radiotherapy services. Once
established, it can be used as a benchmark against which actual SABR utilisation rates can be
compared and any shortfalls in service provision identified. This optimal SABR utilisation rate
can be calculated using an evidence-based approach involving first identifying all
indications/clinical situations for which lung SABR is a guideline-recommended treatment,
then obtaining data on the proportion of each indication within the lung cancer population. This,
however, has so far been hindered by a lack of published data on the proportions of peripheral
versus centrally located lung tumours.
The difficulty in determining the distribution of central and peripheral tumours is related to how
these tumours are distinguished in clinical practice: based on clinicians' manual delineations
(i.e. contours) of the PBT. Manual contouring is a well-known source of uncertainty caused by
inter- and intra-observer variabilities. Such uncertainties preclude relying on retrospective
records of patients (assessed by multiple clinicians) to establish reliable estimates of the
proportions of central and peripheral tumours.
To overcome this, a novel, fully automatic tool for PBT contouring and measuring distance to
the tumour was developed as part of this thesis. The tool relies on an intensity-based algorithm
that detects the bronchial airways based on pre-determined Hounsfield Unit thresholds. Manual
PBT contours generated by different clinicians were used to assess inter-observer variabilities
and to assess the accuracy of the automatically generated contours. Results from this investigation
validated the tool's ability to generate contours within the accuracy of expert-generated ones
without the need for manual intervention.
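The intensity-based airway detection idea can be sketched as a simple region growing over Hounsfield Unit (HU) thresholds: starting from a seed voxel in the trachea, keep connected voxels whose intensity looks like air. The threshold value and 6-connectivity below are illustrative assumptions, not the published tool:

```python
import numpy as np
from collections import deque

# Minimal sketch of HU-threshold airway extraction by region growing.

def grow_airway(ct_hu, seed, hu_max=-950):
    """Return a boolean mask of air-like voxels connected to `seed`."""
    mask = np.zeros(ct_hu.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or ct_hu[z, y, x] > hu_max:
            continue                      # already visited, or not air-like
        mask[z, y, x] = True
        # Visit the six face-adjacent neighbours (6-connectivity).
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < ct_hu.shape[0] and 0 <= ny < ct_hu.shape[1]
                    and 0 <= nx < ct_hu.shape[2]):
                queue.append((nz, ny, nx))
    return mask

# Toy volume: soft tissue (+40 HU) with a vertical air-filled tube (-1000 HU).
vol = np.full((5, 5, 5), 40, dtype=np.int16)
vol[:, 2, 2] = -1000
print(grow_airway(vol, (0, 2, 2)).sum())  # number of voxels in the tube
```

Once the airway mask is available, the minimum tumour-to-PBT distance reduces to a nearest-voxel distance query between the mask and the tumour contour.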
Subsequently, this tool was applied to a retrospective dataset (N=234) of Stage I and II NSCLC
patients treated with radiotherapy at Liverpool and Macarthur Cancer Therapy Centre in
Sydney, Australia. This allowed for patients’ tumour centrality to be assessed efficiently and,
more importantly, with less influence from observer variabilities. The tool successfully
generated PBT contours and measured the minimum distance to the tumour for all patients
within the obtained dataset. Patients were then stratified based on the tumour proximity to the
PBT, allowing the distribution of peripheral and central tumours to be determined. Previous
studies reporting this distribution have relied on manual PBT contours, which are largely
affected by observer variabilities as shown in this work.
To calculate the total demand for lung SABR, epidemiological data on the proportions of all
clinical attributes where SABR is recommended (including the proportion of peripheral versus
central tumours) were incorporated into an evidence-based optimal utilisation model developed
as part of this work. Based on most recent evidence and guidelines, it was estimated that a total
of 6% of all new patients diagnosed with lung cancer in Australia will require SABR at least
once during the course of their illness. In those with early-stage NSCLC, this rate was estimated
to be at 24%. This is the first report of evidence-based optimal rates of lung SABR utilisation.
The utilisation model can be easily modified and updated with new data to ensure accurate and
up-to-date estimates of lung SABR demands within the population.
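The utilisation arithmetic described above reduces to summing, over the guideline-recommended indications, the product of conditional proportions along each branch of the epidemiological tree. The proportions below are illustrative placeholders, not the data behind the 6% and 24% estimates:

```python
# Sketch of an evidence-based optimal utilisation model: each branch lists the
# conditional proportions from 'all lung cancer patients' down to a
# SABR-recommended indication; the optimal rate sums the branch products.

def optimal_utilisation(branches):
    rate = 0.0
    for props in branches:
        p = 1.0
        for x in props:
            p *= x          # multiply conditional proportions along the branch
        rate += p           # sum across mutually exclusive indications
    return rate

# Hypothetical branches, e.g. P(NSCLC) * P(early stage) * P(inoperable) *
# P(peripheral), plus a second branch for protocol-eligible central tumours.
branches = [
    [0.85, 0.20, 0.25, 0.80],        # NSCLC, early stage, inoperable, peripheral
    [0.85, 0.20, 0.25, 0.20, 0.5],   # ... central, protocol-eligible
]
print(round(optimal_utilisation(branches), 3))
```

Updating any single proportion (for instance, the peripheral-versus-central split measured with the automatic PBT tool) propagates directly through the tree, which is why the model is easy to keep current as new evidence appears.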
Finally, this work also provided an investigation into the potential impact of upcoming
technologies on future demands for lung SABR. Magnetic resonance imaging (MRI) guidance,
for example, promises to significantly improve treatment accuracy and transform how
radiotherapy is delivered. A planning study was conducted to simulate the dosimetric gains
expected by such technologies, in particular, the potential reductions in planning safety
margins. Results from this study indicated the potential for such technologies to extend SABR
treatments to a substantial proportion of patients currently deemed too high-risk to receive it.
As such, it is expected that the demand for lung SABR may increase in the near future as such
technologies become more widely available.
Mammography
In this volume, the topics are constructed from a variety of contents: the bases of mammography systems, optimization of screening mammography with reference to evidence-based research, new technologies of image acquisition and its surrounding systems, and case reports with reference to up-to-date multimodality images of breast cancer. Mammography lagged in the transition to digital imaging systems because of the high resolution required for diagnosis. However, in the past ten years, technical improvement has resolved these difficulties and enabled new diagnostic systems. We hope that the reader will learn the essentials of mammography and look forward to the new technologies. We want to express our sincere gratitude and appreciation to all the co-authors who have contributed their work to this volume.
Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning
International Mention in the doctoral degree.
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.)
that produces pulmonary damage due to its airborne nature. This fact facilitates the fast
spread of the disease, which, according to the World Health Organization (WHO), in 2021 caused
1.2 million deaths and 9.9 million new cases.
Traditionally, TB has been considered a binary disease (latent/active) due to the limited
specificity of the traditional diagnostic tests. Such a simple model causes difficulties in the
longitudinal assessment of the pulmonary involvement needed for the development of novel drugs
and for controlling the spread of the disease.
Fortunately, X-Ray Computed Tomography (CT) images enable capturing specific manifestations
of TB that are undetectable using regular diagnostic tests, which suffer from
limited specificity. In conventional workflows, expert radiologists inspect the CT images.
However, this procedure is unfeasible for processing the thousands of volumetric images
from the different TB animal models and humans required for a suitable (pre-)clinical trial.
To achieve suitable results, automation of the different image analysis processes is a
must to quantify TB. It is also advisable to measure the uncertainty associated with this
process and model causal relationships between the specific mechanisms that characterize
each animal model and its level of damage. Thus, in this thesis, we introduce a set of novel
methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV).
Initially, we present an algorithm for Pathological Lung Segmentation (PLS) employing
an unsupervised rule-based model, a step traditionally considered necessary
before biomarker extraction. This procedure allows robust segmentation in an Mtb. infection
model (Dice Similarity Coefficient, DSC, 94%±4%, Hausdorff Distance, HD,
8.64mm±7.36mm) of damaged lungs with lesions attached to the parenchyma and affected
by respiratory movement artefacts.
Next, a Gaussian Mixture Model ruled by an Expectation-Maximization (EM) algorithm
is employed to automatically quantify the burden of Mtb. using biomarkers extracted from the
segmented CT images. This approach achieves a strong correlation (R2 ≈ 0.8) between our
automatic method and manual extraction.
Consequently, Chapter 3 introduces a model to automate the identification of TB lesions
and the characterization of disease progression. To this aim, the method employs the
Statistical Region Merging algorithm to detect lesions subsequently characterized by texture
features that feed a Random Forest (RF) estimator. The proposed procedure enables a
selection of a simple but powerful model able to classify abnormal tissue.
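The EM-fitted Gaussian Mixture Model used above for burden quantification can be sketched in one dimension: two Gaussian components are fitted to an intensity histogram by alternating responsibility estimation (E-step) and parameter re-estimation (M-step). The data, initialisation, and component count below are toy assumptions, not the thesis's implementation:

```python
import numpy as np

# Toy 1-D Gaussian Mixture Model fitted by Expectation-Maximization.

def em_gmm(x, iters=50):
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialisation
    sd = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each data point.
        d = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and spreads.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sd

# Synthetic intensities: an air-like population and a tissue-like population.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-800, 30, 500), rng.normal(-100, 40, 500)])
pi, mu, sd = em_gmm(x)
print(mu.round())  # recovered component means
```

The recovered component parameters can then serve as the biomarkers correlated against manual extraction, as in the Chapter 2 result above.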
The latest works base their methodology on Deep Learning (DL). Chapter 4 extends
the classification of TB lesions. Namely, we introduce a computational model to infer
TB manifestations present in each lung lobe of CT scans by employing the associated
radiologist reports as ground truth. We do so instead of using the classical manually delimited
segmentation masks. The model adapts the three-dimensional architecture V-Net to a multitask
classification context in which the loss function is weighted by homoscedastic uncertainty.
In addition, the method employs Self-Normalizing Neural Networks (SNNs) for regularization.
Our results are promising, with a Root Mean Square Error of 1.14 in the number of nodules
and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations,
consolidations, trees in bud) when considering the whole lung.
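The homoscedastic-uncertainty weighting mentioned above is commonly formulated as L = sum_i exp(-s_i) * L_i + s_i, where s_i is a learned log-variance per task, so that noisier tasks are automatically down-weighted. A plain-numpy sketch (the task losses and values are illustrative, not the thesis's V-Net training code):

```python
import numpy as np

# Multitask loss with homoscedastic uncertainty weighting:
#   L = sum_i exp(-s_i) * L_i + s_i
# where s_i is the learned log-variance of task i (here given directly).

def multitask_loss(task_losses, log_vars):
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    # exp(-s_i) scales each task loss down as its uncertainty grows;
    # the additive s_i term prevents driving all s_i to +infinity.
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Two illustrative tasks, e.g. nodule counting and lesion classification.
print(multitask_loss([1.3, 0.4], [0.0, 0.0]))  # equal weighting
print(multitask_loss([1.3, 0.4], [1.0, 0.0]))  # task 1 down-weighted
```

In training, the log-variances would be free parameters optimised jointly with the network weights, which is what makes the task balancing automatic.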
In Chapter 5, we present a DL model capable of extracting disentangled information from
images of different animal models, as well as information of the mechanisms that generate
the CT volumes. The method provides the segmentation mask of axial slices from three
animal models of different species employing a single trained architecture. It also infers the
level of TB damage and generates counterfactual images. With this methodology, we thus
offer an alternative that promotes generalization and explainable AI models.
To sum up, the thesis presents a collection of valuable tools to automate the quantification
of pathological lungs and, moreover, extends the methodology to provide more explainable
results, which are vital for drug development purposes. Chapter 6 elaborates on these
conclusions.
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: President: María Jesús Ledesma Carbayo; Secretary: David Expósito Singh; Member: Clarisa Sánchez Gutiérrez