Modelling the interpretation of digital mammography using high order statistics and deep machine learning
Visual search is an inhomogeneous yet efficient sampling process accomplished by saccades and central (foveal) vision. Areas that attract central vision have been studied for errors in the interpretation of medical images. In this study, we extend existing visual search studies to understand the features of areas that receive direct visual attention and elicit a mark by the radiologist (True and False Positive decisions), distinguishing them from areas that elicit a mark but were captured by peripheral vision. We also investigate whether there are any differences between these areas and those that are never fixated by radiologists. Extending these investigations, we further explore the possibility of modelling radiologists' search behaviour and their interpretation of mammograms using deep machine learning techniques. We demonstrated that the energy profiles of foveated (FC), peripherally fixated (PC), and never fixated (NFC) areas are distinct. FCs were shown to be selected on the basis of being most informative, while never fixated regions were found to be least informative. Evidence that the energy profiles and dwell times of these areas influence radiologists' decisions (and their confidence in those decisions) was also presented. High-order features provided additional information to the radiologists; however, their effect on decisions (and confidence in those decisions) was not significant. We also showed that a deep convolutional neural network can successfully model radiologists' attentional level, decisions, and confidence in their decisions. High accuracy and high agreement (between true and predicted values) can be achieved in modelling the attentional level (accuracy: 0.90, kappa: 0.82) and decisions (accuracy: 0.92, kappa: 0.86) of radiologists. Our results indicated that an ensembled model of a radiologist's search behaviour and decisions can successfully be built. Convolutional networks, however, failed to model missed cancers.
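The abstract pairs raw accuracy with Cohen's kappa, which corrects agreement for chance. A minimal sketch of how the two metrics relate (the function and toy labels below are illustrative, not the study's data or code):

```python
from collections import Counter

def accuracy_and_kappa(true_labels, pred_labels):
    """Raw accuracy plus Cohen's kappa (agreement corrected for chance)."""
    n = len(true_labels)
    observed = sum(t == p for t, p in zip(true_labels, pred_labels)) / n
    # Expected agreement if the two label distributions were independent.
    true_counts = Counter(true_labels)
    pred_counts = Counter(pred_labels)
    expected = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Toy decision labels, not the study's data.
true_y = ["TP", "TP", "FP", "TN", "TN", "TN", "FP", "TP"]
pred_y = ["TP", "TP", "FP", "TN", "TN", "FP", "FP", "TP"]
acc, kappa = accuracy_and_kappa(true_y, pred_y)
```

Because kappa discounts agreement expected by chance, it is always at most the raw accuracy, which is why the paper's kappa values (0.82, 0.86) sit below the corresponding accuracies (0.90, 0.92).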
Medical image retrieval for augmenting diagnostic radiology
Even though the use of medical imaging to diagnose patients is ubiquitous in clinical settings, interpreting these images remains challenging for radiologists. Many factors make this interpretation task difficult; one is that medical images sometimes present clues that are subtle yet crucial for diagnosis. Worse, similar clues can indicate multiple diseases, making it challenging to arrive at a definitive diagnosis. To help radiologists quickly and accurately interpret medical images, there is a need for a tool that can augment their diagnostic procedures and increase efficiency in their daily workflow. A general-purpose medical image retrieval system can be such a tool, as it allows them to search and retrieve similar, already diagnosed cases and make comparative analyses that complement their diagnostic decisions. In this thesis, we contribute to developing such a system by proposing approaches to be integrated as modules of a single system, enabling it to handle various information needs of radiologists and thus augment their diagnostic processes during the interpretation of medical images.
We have mainly studied the following retrieval approaches to handle radiologists' different information needs: i) Retrieval Based on Contents; ii) Retrieval Based on Contents, Patients' Demographics, and Disease Predictions; and iii) Retrieval Based on Contents and Radiologists' Text Descriptions. In the first study, we aimed to find an effective feature representation method to distinguish medical images by their semantics and modalities. To that end, we experimented with different representation techniques based on handcrafted methods (mainly texture features) and deep learning (deep features). Based on the experimental results, we propose an effective feature representation approach and deep learning architectures for learning and extracting medical image contents. In the second study, we present a multi-faceted method that complements image contents with patients' demographics and deep learning-based disease predictions, enabling it to identify similar cases accurately with respect to the clinical context the radiologists seek.
In the last study, we propose a guided search method that integrates an image with a radiologist's text description to guide the retrieval process. This method ensures that the retrieved images are suitable for the comparative analysis needed to confirm or rule out initial diagnoses (the differential diagnosis procedure). Furthermore, our method is based on a deep metric learning technique and outperforms traditional content-based approaches that rely only on image features and thus sometimes retrieve irrelevant images.
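The guided-search idea of combining an image with a text description can be sketched as ranking stored cases against a blended query embedding. This is a minimal illustration under assumed 2-D embeddings and a simple linear blend (the function names, vectors, and weighting are hypothetical; the thesis uses learned deep metric embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def guided_retrieve(image_emb, text_emb, database, alpha=0.5, k=2):
    """Blend image and text embeddings into one query vector, then rank
    stored cases by cosine similarity to it and return the top k ids."""
    query = [alpha * i + (1 - alpha) * t for i, t in zip(image_emb, text_emb)]
    ranked = sorted(database, key=lambda case: cosine(query, database[case]), reverse=True)
    return ranked[:k]

# Toy 2-D embeddings (illustrative only; a real system would use learned features).
cases = {"case_a": [1.0, 0.1], "case_b": [0.1, 1.0], "case_c": [0.7, 0.7]}
top = guided_retrieve([1.0, 0.0], [0.0, 1.0], cases)
```

Here the case whose embedding matches both modalities ranks first, which is the intuition behind letting the text description steer a content-based query.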
Enhanced algorithms for lesion detection and recognition in ultrasound breast images
Mammography is the gold standard for breast cancer detection. However, it has very
high false positive rates and is based on ionizing radiation. This has led to interest in
using multi-modal approaches. One modality is diagnostic ultrasound, which is based
on non-ionizing radiation and picks up many of the cancers that are generally missed
by mammography. However, the presence of speckle noise in ultrasound images has a
negative effect on image interpretation. Noise reduction, inconsistencies in image
capture, and segmentation of lesions remain challenging open research problems in
ultrasound imaging.
The target of the proposed research is to enhance the state-of-the-art computer vision
algorithms used in ultrasound imaging and to investigate the role of computer-processed
images in human diagnostic performance. [Continues.]
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
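The attention mechanism in models like ABiLSTM typically pools the BiLSTM's per-timestep hidden states into a single context vector before classification. A minimal sketch of that pooling step (the toy vectors and scoring weights below are illustrative, not real CSI features or the paper's architecture):

```python
import math

def attention_pool(hidden_states, w):
    """Attention pooling: score each timestep's hidden state with a scoring
    vector w, softmax the scores, and return the weighted sum (the context
    vector passed on to the classifier) plus the attention weights."""
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in hidden_states]
    m = max(scores)                      # subtract max for numerical stability
    exp_s = [math.exp(s - m) for s in scores]
    z = sum(exp_s)
    weights = [e / z for e in exp_s]
    dim = len(hidden_states[0])
    context = [sum(weights[t] * hidden_states[t][d] for t in range(len(hidden_states)))
               for d in range(dim)]
    return context, weights

# Toy sequence of 2-D hidden states and a toy scoring vector.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context, weights = attention_pool(states, [1.0, 1.0])
```

The softmax lets the model weight the timesteps where the activity's CSI signature is strongest, instead of treating the whole time series uniformly.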
Immersive analytics for oncology patient cohorts
This thesis proposes a novel interactive immersive analytics tool and methods to interrogate cancer patient cohorts in an immersive virtual environment, namely Virtual Reality to Observe Oncology data Models (VROOM). The overall objective is to develop an immersive analytics platform that includes a data analytics pipeline from raw gene expression data to immersive visualisation on virtual and augmented reality platforms, utilising a game engine; Unity3D has been used to implement the visualisation. Work in this thesis could provide oncologists and clinicians with an interactive visualisation and visual analytics platform that helps them drive their analysis of treatment efficacy and achieve the goal of evidence-based personalised medicine. The thesis integrates the latest discoveries and developments in cancer patient prognosis, immersive technologies, machine learning, decision support systems, and interactive visualisation to form an immersive analytics platform for complex genomic data. The experimental paradigm followed in this thesis is understanding transcriptomics in cancer samples. It specifically investigates gene expression data to determine the biological similarity revealed by patients' tumour samples' transcriptomic profiles, which reveal the genes active in different patients.
In summary, the thesis contributes: i) a novel immersive analytics platform for patient cohort data interrogation in a similarity space based on patients' biological and genomic similarity; ii) an effective immersive environment optimisation design based on a usability study of exocentric and egocentric visualisation, together with audio and sound design optimisation; iii) an integration of trusted and familiar 2D biomedical visual analytics methods into the immersive environment; iv) a novel use of game theory as the decision-making engine to support the analytics process, and an application of optimal transport theory to missing data imputation to preserve the data distribution; and v) case studies that showcase the real-world application of the visualisation and its effectiveness.
Translation of quantitative MRI analysis tools for clinical neuroradiology application
Quantification of imaging features can assist radiologists by reducing subjectivity, aiding detection of subtle pathology, and increasing reporting consistency. Translation of quantitative image analysis techniques to clinical use is currently uncommon and challenging. This thesis explores translation of quantitative imaging support tools for clinical neuroradiology use. I have proposed a translational framework for development of quantitative imaging tools, using dementia as an exemplar application. This framework emphasises the importance of clinical validation, which is not currently prioritised. Aspects of the framework were then applied to four disease areas: hippocampal sclerosis (HS) as a cause of epilepsy; dementia; multiple sclerosis (MS) and gliomas. A clinical validation study for an HS quantitative report showed that when image interpreters used the report, they were more accurate and confident in their assessments, particularly for challenging bilateral cases. A similar clinical validation study for a dementia reporting tool found improved sensitivity for all image interpreters and increased assessment accuracy for consultant radiologists. These studies indicated benefits from quantitative reports that contextualise a patient’s results with appropriate normative reference data. For MS, I addressed a technical translational challenge by applying lesion and brain quantification tools to standard clinical image acquisitions which do not include a conventional T1-weighted sequence. Results were consistent with those from conventional sequence inputs and therefore I pursued this concept to establish a clinically applicable normative reference dataset for development of a quantitative reporting tool for clinical use. I focused on current radiology reporting of gliomas to establish which features are commonly missed and may be important for clinical management decisions. 
This informs both the potential utility of a quantitative report for gliomas and its design and content. I have identified numerous translational challenges for quantitative reporting and explored how to address them for several applications across clinical neuroradiology.
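The idea of contextualising a patient's result against normative reference data, as in the quantitative reports above, amounts to expressing the measurement as a deviation from a healthy reference sample. A minimal sketch (the volumes below are hypothetical placeholder values, not data from the thesis):

```python
import math

def normative_z(value, reference):
    """Z-score of a patient's measurement against a normative reference sample."""
    n = len(reference)
    mean = sum(reference) / n
    var = sum((x - mean) ** 2 for x in reference) / (n - 1)  # sample variance
    return (value - mean) / math.sqrt(var)

# Hypothetical hippocampal volumes (mL) from healthy controls, and one patient value.
controls = [3.0, 3.2, 2.8, 3.1, 2.9]
z = normative_z(2.4, controls)   # strongly negative: well below the normative range
```

In practice such reports stratify the reference data by age and other covariates, but the principle is the same: the raw number only becomes interpretable once placed against an appropriate normative distribution.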
Inclusive health: Medtech innovations for the early detection of cancer in India
This interdisciplinary study focuses on understanding advances in conceptualising inclusive health innovations in low-resource healthcare settings. To this end, this research responds to two main gaps. First, there is an empirical gap in conceptualising inclusiveness in high-technology health innovations in low-resource healthcare settings. Second, no theoretical framework enables the study of technical change in the health sector by linking unmet needs with industrial and health systems. In this research, I propose a novel Inclusive Health Innovation (IHI) framework, integrating and extending the sectoral system of innovation approach (Malerba & Mani, 2009) and the qualitative heuristics of the institutional triad of healthcare (Srinivas, 2012). This research employs the IHI framework to conceptualise inclusive innovations using cases of MedTech innovations for the early detection of cancer offered by startups in India. It identifies and investigates the actors and factors influencing the various stages of the innovation process, including development, diffusion, and adoption. The research uses qualitative methods, comprising both primary and secondary data, for a landscape study and four case studies of point-of-care MedTech innovations for the early detection of breast, oral, and cervical cancer in India. The research finds that MedTech innovations are driving inclusiveness in the early detection of cancer, in both process and outcomes, in low-resource healthcare settings. The analysis reveals a strong alignment of STI policy with industrial and health policies, in the form of a robust MedTech ecosystem supporting the development of these innovations. As regards diffusion, this thesis pinpoints that startup firms choose various business models, partnerships, and stakeholder interactions to create new markets and generate demand for the early detection of cancer.
These are 'pocket wins' in increasing the availability of locally relevant solutions for cancer screening and early diagnosis. The last-mile adoption of these innovations in the healthcare delivery system hinges on stronger policy alignment and regulatory changes in the health and industrial sectors. The thesis contributes a novel theoretical framework and original analysis of rich empirical case studies. It further contributes observable characteristics of inclusive health innovations in the early detection of cancer in India. The research findings are relevant for designing targeted policy instruments for (i) cancer screening and early diagnosis using high-technology solutions in low-resource healthcare settings, (ii) digital infrastructure and regulations to support the adoption of innovations in the public healthcare system, and (iii) data privacy and security for MedTech based on AI and ML.
Instruction with 3D Computer Generated Anatomy
Research objectives. 1) To create an original and useful software application; 2) to
investigate the utility of dyna-linking for teaching upper limb anatomy. Dyna-linking
is an arrangement whereby interaction with one representation automatically drives the
behaviour of another representation.
Method. An iterative user-centred software development methodology was used to build,
test and refine successive prototypes of an upper limb software tutorial. A randomised
trial then tested the null hypothesis: There will be no significant difference in learning
outcomes between participants using dyna-linked 2D and 3D representations of the upper
limb and those using non dyna-linked representations. Data was analysed in SPSS using
factorial analysis of variance (ANOVA).
Results and analysis. The study failed to reject the null hypothesis, as there was no
significant difference between experimental conditions. Post-hoc analysis revealed that
participants with low prior knowledge performed significantly better (p = 0.036) without
dyna-linking (mean gain = 7.45) than with dyna-linking (mean gain = 4.58). Participants with high prior knowledge performed equally well with or without dyna-linking.
These findings reveal an aptitude by treatment interaction (ATI) whereby the effectiveness of dyna-linking varies according to learner ability. On average, participants using
the non dyna-linked system spent 3 minutes and 4 seconds longer studying the tutorial.
Participants using the non dyna-linked system clicked 30% more on the representations.
Dyna-linking had a high perceived value in questionnaire surveys (n=48) and a focus
group (n=7).
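The aptitude-by-treatment interaction reported above can be read off the cell means of the 2x2 design (prior knowledge x dyna-linking). In the sketch below, the low-prior-knowledge means come from the abstract, while the high-prior-knowledge means are placeholders, since the abstract states only that those groups performed equally well:

```python
def interaction_contrast(means):
    """Size of the 2x2 interaction from cell means:
    (low,linked - low,unlinked) - (high,linked - high,unlinked)."""
    return ((means[("low", "linked")] - means[("low", "unlinked")])
            - (means[("high", "linked")] - means[("high", "unlinked")]))

cell_means = {
    ("low", "linked"): 4.58,     # mean gain, from the abstract
    ("low", "unlinked"): 7.45,   # mean gain, from the abstract
    ("high", "linked"): 6.0,     # hypothetical placeholder
    ("high", "unlinked"): 6.0,   # hypothetical placeholder
}
contrast = interaction_contrast(cell_means)  # negative: linking hurts novices
```

A nonzero contrast with a flat high-prior-knowledge row is exactly the ATI pattern: the treatment effect exists for one aptitude group but not the other, which is why the overall main effect can be null while the interaction is significant.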
Conclusion. Dyna-linking has a high perceived value but may actually over-automate
learning by prematurely giving novice learners a fully worked solution. Further research
is required to confirm whether this finding is repeated in other domains, with different
learners, and with more sophisticated implementations of dyna-linking.