27 research outputs found
Fully automated 3D segmentation of dopamine transporter SPECT images using an estimation-based approach
Quantitative measures of uptake in caudate, putamen, and globus pallidus in
dopamine transporter (DaT) brain SPECT have potential as biomarkers for the
severity of Parkinson disease. Reliable quantification of uptake requires
accurate segmentation of these regions. However, segmentation is challenging in
DaT SPECT due to partial-volume effects, system noise, physiological
variability, and the small size of these regions. To address these challenges,
we propose an estimation-based approach to segmentation. This approach
estimates the posterior mean of the fractional volume occupied by caudate,
putamen, and globus pallidus within each voxel of a 3D SPECT image. The
estimate is obtained by minimizing a cost function based on the binary
cross-entropy loss between the true and estimated fractional volumes over a
population of SPECT images, where the distribution of the true fractional
volumes is obtained from magnetic resonance images from clinical populations.
The proposed method accounts for both sources of partial-volume effects in
SPECT, namely the limited system resolution and tissue-fraction effects. The
method was implemented using an encoder-decoder network and evaluated using
realistic clinically guided SPECT simulation studies, where the ground-truth
fractional volumes were known. The method significantly outperformed all other
considered segmentation methods and yielded accurate segmentation, with Dice
similarity coefficients of ~0.80 for all regions. The method was relatively
insensitive to changes in voxel size. Further, the method was relatively robust
up to ±10 degrees of patient head tilt along the transaxial, sagittal, and
coronal planes. Overall, the results demonstrate the efficacy of the proposed
method to yield accurate fully automated segmentation of caudate, putamen, and
globus pallidus in 3D DaT-SPECT images
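The estimation objective described above can be sketched in a few lines; the arrays below are hypothetical toy "voxels", and this is only a minimal illustration of a binary cross-entropy loss over continuous fractional volumes together with a soft Dice score, not the authors' implementation:

```python
import numpy as np

def fractional_bce(true_fv, est_fv, eps=1e-7):
    """Binary cross-entropy between true and estimated fractional volumes.

    Unlike hard-label BCE, the targets here are continuous values in [0, 1]:
    the fraction of each voxel occupied by the region of interest.
    """
    est = np.clip(est_fv, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(true_fv * np.log(est) + (1 - true_fv) * np.log(1 - est))

def dice_coefficient(true_fv, est_fv):
    """Soft Dice similarity between two fractional-volume maps."""
    intersection = 2.0 * np.sum(true_fv * est_fv)
    return intersection / (np.sum(true_fv) + np.sum(est_fv))

# Toy 1-D example with hypothetical fractional volumes for one region
true_fv = np.array([0.0, 0.3, 1.0, 0.7])
est_fv = np.array([0.1, 0.4, 0.9, 0.6])
print(round(dice_coefficient(true_fv, est_fv), 3))  # 0.72
```

In the paper's setting the estimated map would come from the encoder-decoder network, and the BCE loss would be averaged over the training population of SPECT images.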
Statistical Neuroimage Modeling, Processing and Synthesis based on Texture and Component Analysis: Tackling the Small Sample Size Problem
The rise of neuroimaging in recent years has provided physicians and radiologists with the ability to study the brain with unprecedented ease. This has led to a new biological perspective in the study of neurodegenerative diseases, allowing the characterization of the different anatomical and functional patterns associated with them. Computer-aided diagnosis (CAD) systems use statistical techniques for preparing, processing, and extracting information from neuroimaging data in pursuit of a major goal: optimizing the analysis and diagnosis of neurodegenerative diseases and mental conditions. In this thesis we focus on three stages of the CAD pipeline: preprocessing, feature extraction, and validation. For preprocessing, we have developed a method that targets a relatively recent concern: the confounding effect of false positives due to differences in acquisition across multiple sites. Our method can effectively merge datasets while reducing acquisition-site effects. Regarding feature extraction, we have studied decomposition algorithms (independent component analysis, factor analysis), texture features, and a complete framework called Spherical Brain Mapping, which reduces three-dimensional brain images to two-dimensional statistical maps. This allowed us to improve the performance of automatic systems for detecting Alzheimer's and Parkinson's diseases. Finally, we developed a brain simulation technique that can be used to validate new functional datasets as well as for educational purposes
4-D Tomographic Inference: Application to SPECT and MR-driven PET
Emission tomographic imaging is framed in a Bayesian and information-theoretic setting. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference
Feature Extraction
Feature extraction is a procedure aimed at selecting and transforming a data set in order to increase the performance of a pattern recognition or machine learning system. Nowadays, since the amount of available data and its dimensionality are growing exponentially, it is a fundamental procedure for avoiding overfitting and the curse of dimensionality, while in some cases also allowing an interpretative analysis of the data. The topic itself is a thriving discipline of study, and it is difficult to address every single feature extraction algorithm. Therefore, we provide an overview of the topic, introducing widely used techniques while also presenting some domain-specific feature extraction algorithms. Finally, as a case study, we illustrate the vastness of the field by analysing the usage and impact of feature extraction in neuroimaging
SPECT imaging and Automatic Classification Methods in Movement Disorders
This work investigates neuroimaging as applied to movement disorders by the
use of radionuclide imaging techniques. There are two focuses in this work:
1) The optimisation of the SPECT imaging process including acquisition and
image reconstruction.
2) The development and optimisation of automated analysis techniques.
The first part has included practical measurements of camera performance using
a range of phantoms. Filtered back projection and iterative methods of image
reconstruction were compared and optimised. Compensation methods for
attenuation and scatter were assessed.
Iterative methods are shown to improve image quality over filtered back
projection for a range of image quality indices. Quantitative improvements are
shown when attenuation and scatter compensation techniques are applied, but
at the expense of increased noise.
The clinical acquisition and processing procedures were adjusted accordingly.
A large database of clinical studies was used to compare commercially available
DaTSCAN quantification software programs.
A novel automatic analysis technique was then developed by combining
Principal Component Analysis (PCA) and machine learning techniques (including
Support Vector Machines and Naive Bayes).
The accuracy of the various classification methods under different conditions is
investigated and discussed.
The thesis concludes that the described method allows automatic
classification of clinical images with accuracy equal to or greater than that
of commercially available systems
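A pipeline of the kind described above (dimensionality reduction with PCA followed by a classifier such as a linear SVM) can be sketched with scikit-learn; the data below are synthetic stand-ins, and the feature counts and labels are hypothetical, not taken from the thesis:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for flattened striatal uptake features:
# 100 "patients", 500 voxel-level features, binary label (normal/abnormal)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))
y = rng.integers(0, 2, size=100)
X[y == 1, :20] += 1.0  # slight mean shift so the classes are separable

# PCA to a few components, then a linear SVM, evaluated by cross-validation
clf = make_pipeline(PCA(n_components=5), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.shape)  # one accuracy per fold
```

Fitting the PCA inside the pipeline ensures the components are re-estimated on each training fold, avoiding leakage from the held-out fold into the feature reduction step.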
Assessing brain functional connectivity in Parkinson’s disease using explainable Artificial Intelligence methods
Master's Thesis, Biomedical Engineering and Biophysics, 2023, Universidade de Lisboa, Faculdade de Ciências.
Parkinson's disease (PD) is a neurodegenerative pathology characterised by the loss of dopaminergic
neurons, particularly in the basal ganglia, and by the accumulation of the protein α-synuclein. PD is
characterised by four cardinal motor signs: tremor, bradykinesia, muscular rigidity, and postural
instability. The disease also manifests through non-motor symptoms such as loss of smell,
neuropsychiatric conditions such as depression and anxiety, and sleep disturbances.
This progressive disease has no cure, and treatments seek to improve patients' quality of life by
attenuating symptoms. Diagnosis is still based mainly on analysis of the clinical presentation of
symptoms. Bodies such as the Movement Disorder Society provide a set of clinical criteria for
establishing a PD diagnosis. Since no imaging examination or analytical test can confirm a diagnosis,
neuroimaging techniques have emerged as complementary tools for detecting PD-related neurochemical
alterations. The most common imaging examination is the DaTScan, a type of single-photon emission
computed tomography acquisition that targets the dopamine transporter, a biomarker of the degeneration
of dopaminergic neurons. Given the insufficient accuracy and reliability of the clinical diagnostic
criteria, as well as the lack of consistency of the DaTScan, alternative neuroimaging methods have been
considered for investigating PD-related functional brain alterations, for example magnetic resonance
imaging (MRI). In particular, cerebral blood flow and brain connectivity are analysed through functional
MRI (fMRI), an MRI technique that measures brain activity, at rest or during a task, by detecting
changes in blood flow.
Accordingly, several studies have pointed to the use of deep learning (DL), based on neuroimaging
data such as fMRI, as a potential and innovative approach to assist and automate the diagnosis of
neurological diseases such as Parkinson's disease. Nevertheless, to the best of our knowledge, this
research at the intersection of PD, DL, and fMRI does not yet include large-scale studies: subject
counts remain considerably low, on the order of tens. Moreover, DL models have a "black box" nature,
i.e., it is not possible to determine how the algorithm reached the decisions that led to the resulting
classification. Explainable artificial intelligence (XAI), a set of methods that aims to explain and
interpret the decisions made by artificial intelligence models, therefore emerges as an appropriate
tool for overcoming the lack of transparency of DL models.
The work of this dissertation thus aims to develop methods to study and detect PD-related alterations
in the brain's functional connectivity (FC), using a classification model based on the convolutional
neural network (CNN) architecture together with XAI methods. Additionally, it aims to identify potential
functional biomarkers of PD.
To this end, fMRI acquisitions from the PPMI data set were used, comprising 120 scans of PD patients
and 22 of healthy controls. Because this set was imbalanced due to the low number of control scans, a
further 131 control scans were collected from the ADNI data set. This adjustment was made on the
assumption that the difference in fMRI acquisition parameters between the two consortia, in particular
the repetition time, does not lead to significant changes in the FC assessment.
The fMRI data were pre-processed with a sequence of methods that included: functional realignment
and unwarping, slice-timing correction, outlier identification, segmentation and normalisation, and
functional smoothing. The data were additionally denoised by regressing out potential confounding
effects and applying a band-pass filter between 0.008 Hz and 0.09 Hz. The data were parcellated
according to an atlas comprising 14 resting-state networks.
Each subject's functional connectivity was assessed by computing FC matrices, which correspond to
correlation matrices between the 14 resting-state networks, obtained with the Pearson correlation
coefficient followed by the Fisher transform.
The connectivity matrices were fed into a CNN named ExtendedConnectomeCNN, a network inspired by the
ConnectomeCNN. It consists of three convolutional layers and one fully connected layer. The filter
window size is 3 by 3 and the stride is 2. The number of filters decreases across the convolutional
layers, from 256 to 128 and then to 64. As training parameters, 200 epochs and a batch size of 16 were
selected. The hyperparameters to optimise were the dropout rate, the learning rate, and the presence of
a batch-normalisation layer in each convolutional layer. Hyperparameter optimisation was performed
through 10-fold cross-validation on the development set, which corresponds to 90% of the full set of FC
matrices. From this optimisation, the hyperparameter set with the best performance was selected, i.e.,
the one with satisfactory and balanced mean evaluation metrics. The best-performing set had a dropout
rate of 0.1 in the convolutional layers and 0.4 in the fully connected layer, a learning rate of
0.00001, and no batch-normalisation layers. Notable values were a training accuracy of 0.8814, a
validation accuracy of 0.7760, and an area under the receiver operating characteristic curve (ROC AUC)
of 0.7496. These values reflect generalisable models that detect both the positive (PD) and negative
(control) classes.
A final model with the best hyperparameters was then developed, trained on the development set, and
tested on the held-out test set. This yielded a training accuracy of 0.8776, a test accuracy of 0.8214,
and a ROC AUC of 0.8230. The resulting model therefore presents satisfactory and balanced performance
values, as well as interpretability potential, which enables the application of XAI methods.
Three XAI methods were applied to the final model: layer-wise relevance propagation (LRP),
deconvolutional network, and guided backpropagation. For each method, the area over the MoRF (most
relevant first) perturbation curve (AOPC) was computed, which evaluates how relevant the explanations
provided by the XAI methods are. Considering that the LRP method produced more specific, less scattered
explanation maps, and also showed higher and better-distributed AOPC values, it was deemed the method
that best explains the PD classification.
From the explanations provided by the LRP method, the resting-state networks most relevant to the PD
classification were extracted. No alterations were identified in the basal ganglia network, although
this was expected. However, the dorsal default mode, ventral default mode, and posterior salience
networks, essentially involved in non-motor manifestations of the disease, were identified as potential
functional biomarkers of PD.
Considering that (1) the pre-processing of the fMRI data followed appropriate methods and produced
satisfactory results, (2) the CNN model for PD classification proved sufficiently generalisable, with
satisfactory and balanced evaluation metrics, and (3) the XAI analysis appears to be reliable and
consistent with the literature on resting-state network alterations in PD, it is concluded that the
approach taken to studying PD-related FC using XAI methods was successful. The objectives of the
dissertation were thus fulfilled, with the expectation that this study will contribute to progress in
the development of innovative, artificial-intelligence-assisted PD diagnosis techniques.
Parkinson's disease (PD) is a neurodegenerative disease characterised by dopaminergic neuron loss
and α-synuclein accumulation. It exhibits both motor symptoms (such as tremors, bradykinesia, and
rigidity) and non-motor symptoms. Diagnosis relies on clinical presentation and DaTScan, though their
reliability varies. Functional magnetic resonance imaging (fMRI) and brain connectivity analysis have
aided PD assessment. Studies have shown promise in diagnosing PD using deep learning (DL) but lack
large-scale studies and transparency due to their black-box nature. Explainable AI (XAI) aims to provide
understandable explanations for AI model decisions.
This dissertation proposes methods to assess functional connectivity in PD using a convolutional
neural network (CNN) classifier and XAI.
Resting-state fMRI scans from the PPMI and ADNI data sets were pre-processed and parcellated
according to an atlas composed of 14 resting-state networks. The FC matrices were computed through the
Pearson correlation coefficient and the Fisher transform.
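The FC computation described here (Pearson correlation between network time series, followed by the Fisher r-to-z transform) can be sketched as follows; the time series are random stand-ins, not PPMI/ADNI data:

```python
import numpy as np

def fc_matrix(timeseries):
    """Functional-connectivity matrix from network time series.

    timeseries: array of shape (n_networks, n_timepoints).
    Returns the Fisher z-transformed Pearson correlation matrix,
    with the diagonal zeroed (arctanh(1) would be infinite).
    """
    r = np.corrcoef(timeseries)  # Pearson correlation, n_networks x n_networks
    np.fill_diagonal(r, 0.0)     # drop uninformative self-correlations
    return np.arctanh(r)         # Fisher r-to-z transform

# Hypothetical example: 14 resting-state networks, 200 time points
rng = np.random.default_rng(42)
ts = rng.normal(size=(14, 200))
z = fc_matrix(ts)
print(z.shape)  # (14, 14)
```

Each subject's 14x14 matrix of this form is what gets fed to the CNN classifier.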
The FC matrices were fed to the ExtendedConnectomeCNN, optimised through 10-fold cross-validation,
and tested, yielding a final model with 0.8214 test accuracy, satisfactory and balanced performance
metrics, and interpretability potential.
Three XAI methods were applied: layer-wise relevance propagation (LRP), deconvolutional network
(DeconvNet), and guided backpropagation. The LRP method provided more specific explanations and
achieved higher AOPC values; it is therefore the method that best explains the classification of PD.
No basal ganglia network alterations were found, but changes in dorsal and ventral default mode, and
posterior salience networks – involved in PD pathophysiology – were identified as potential biomarkers.
Transfer learning was also attempted by training a model on the larger ABIDE data set. The model
performed poorly and did not generalise, so this possibility was discarded.
The approach to assessing functional connectivity changes in PD using XAI methods was fairly
successful. The objectives of the dissertation were fulfilled, with the hope that it will contribute
to novel PD diagnosis techniques
Robust identification of Parkinson's disease subtypes using radiomics and hybrid machine learning
OBJECTIVES: It is important to subdivide Parkinson's disease (PD) into subtypes, enabling potentially earlier disease recognition and tailored treatment strategies. We aimed to identify reproducible PD subtypes robust to variations in the number of patients and features.
METHODS: We applied multiple feature-reduction and cluster-analysis methods to cross-sectional and timeless data, extracted from longitudinal datasets (years 0, 1, 2 & 4; Parkinson's Progression Markers Initiative; 885 PD/163 healthy-control visits; 35 datasets with combinations of non-imaging, conventional-imaging, and radiomics features from DAT-SPECT images). Hybrid machine-learning systems were constructed invoking 16 feature-reduction algorithms, 8 clustering algorithms, and 16 classifiers (C-index clustering evaluation used on each trajectory). We subsequently performed: i) identification of optimal subtypes, ii) multiple independent tests to assess reproducibility, iii) further confirmation by a statistical approach, iv) a test of reproducibility with respect to sample size.
RESULTS: When using no radiomics features, the clusters were not robust to variations in features, whereas utilizing radiomics information enabled consistent generation of clusters through ensemble analysis of trajectories. We arrived at 3 distinct subtypes, confirmed using the training and testing process of k-means, as well as Hotelling's T2 test. The 3 identified PD subtypes were 1) mild, 2) intermediate, and 3) severe, especially in terms of dopaminergic deficit (imaging), with some escalating motor and non-motor manifestations.
CONCLUSION: Appropriate hybrid systems and independent statistical tests enable robust identification of 3 distinct PD subtypes. This was assisted by utilizing radiomics features from SPECT images (segmented using MRI). The identified PD subtypes were robust to the number of subjects and features
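As a minimal, hypothetical illustration of the clustering stage described above (k-means on standardised synthetic feature vectors; the full hybrid system with 16 feature-reduction and 8 clustering algorithms is not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for radiomics feature vectors: three underlying
# groups of 50 "visits", 10 features each (all values are hypothetical)
rng = np.random.default_rng(1)
features = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(50, 10)) for c in (-2.0, 0.0, 2.0)
])

# Standardise features, then cluster into 3 candidate subtypes
X = StandardScaler().fit_transform(features)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(len(set(km.labels_)))  # 3 subtypes
```

In the study, reproducibility of such cluster assignments was then checked across feature subsets, sample sizes, and independent statistical tests.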
[123I]FP-CIT reporting: Machine Learning, Effectiveness and Clinical Integration
[123I]FP-CIT imaging is used for the differential diagnosis of clinically uncertain Parkinsonian syndromes. Conventional reporting relies on visual interpretation of images and analysis of semi-quantification results. However, this form of reporting is associated with variable diagnostic accuracy. The first half of this thesis clarifies whether machine learning classification algorithms, used as a computer-aided diagnosis (CADx) tool, can offer improved performance.
Candidate machine learning classification algorithms were developed and compared to a range of semi-quantitative methods, which showed the superiority of machine learning tools in terms of binary classification performance. The best of the machine learning algorithms, based on 5 principal components and a linear Support Vector Machine classifier, was then integrated into clinical software for a reporting exercise (pilot and main study).
Results demonstrated that the CADx software had a consistently high standalone accuracy. In general, CADx caused reporters to give more consistent decisions and resulted in improved diagnostic accuracy when viewing images with unfamiliar appearances.
However, although these results were undoubtedly impressive, it was also clear that a number of additional, significant hurdles remained that needed to be overcome before widespread clinical adoption could be achieved.
Consequently, the second half of this thesis focuses on addressing one particular aspect of the remaining translation gap for [123I]FP-CIT classification software, namely the heterogeneity of the clinical environment. Introducing new technology, such as machine learning, may require new metrics, which in this work were informed by novel methods (such as the use of innovative phantoms) and strategies, enabling sensitivity testing to be developed, applied, and evaluated.
The pathway to acceptance of novel and progressive technology in the clinic is a tortuous one, and this thesis emphasises the importance of many factors in addition to the core technology that need to be addressed if such tools are ever to achieve clinical adoption
The nuclear medicine technologist will see you now
Background: It has been estimated that an additional 3,500 radiographers alone will be needed over the next 5 years, with Assistant Practitioners, Advanced Practitioners, and Radiologists accounting for a further 2,500 positions. A major expansion of the imaging workforce is needed to meet the increasing demand for radiology services. Recruitment from within the existing radiology workforce and training in Nuclear Medicine has proven insufficient. Developing the Nuclear Medicine degree apprenticeship at Cumbria University was therefore essential. Registration with The Academy for Healthcare Science (AHCS) was guaranteed upon completion.
Methods used: Analysis of data from the first university intake in 2017 through the 2018, 2019, and particularly challenging 2020 cohorts of apprentices.
Assessment of the recruitment process including candidate background, experience and education.
Students’ journey and feedback from their degree level 6 studies.
Data for the number of graduating students across cohorts.
Retention data of newly qualified professionals in training departments.
Summary: Recruiting candidates internally and ensuring they have healthcare experience facilitates retention post-qualification.
Fulfilment of university requirements regarding UCAS points proved a valuable tool for ensuring completion of studies.
UHS alone managed to recruit four candidates. Two have already qualified with first-class honours degrees and are working at Band 5 level, and the other two are determined to progress within the profession upon graduation.
Conclusion: Candidates with prior healthcare experience proved more likely to complete their studies successfully. They perform well in the role and progress, helping to guarantee retention. Structured training with university input ensured a highly qualified workforce registered with the AHCS