27 research outputs found

    Intelligent computing applications to assist perceptual training in medical imaging

    Get PDF
    The research presented in this thesis addresses issues in medical imaging, primarily as they apply to breast cancer screening and laparoscopic surgery. The concern here is how computer-based methods can aid medical practitioners in these tasks. Research is therefore presented which develops both new techniques for analysing radiologists' performance data and new approaches for examining surgeons' visual behaviour during laparoscopic training. Initially, a new chest X-ray self-assessment application is described, developed to assess and improve radiologists' performance in detecting lung cancer. Then, in breast cancer screening, a method of identifying potential poor-performance outliers at an early stage in a national self-assessment scheme is demonstrated. Additionally, a method is presented to determine whether a radiologist using this scheme has correctly localised and identified an abnormality or made an error. One issue in appropriately measuring radiological performance in breast screening is that both the size of the clinical monitors used and the difficulty of linking the medical image to the observer's line of sight hinder suitable eye tracking; consequently, a new method is presented which links these two elements. Laparoscopic surgeons face similar issues to radiologists in interpreting a medical display, with the added complication of hand-eye co-ordination. Work is presented which examines whether visual search feedback on surgeons' operations can serve as a useful training aid.

    Modelling the interpretation of digital mammography using high order statistics and deep machine learning

    Get PDF
    Visual search is an inhomogeneous yet efficient sampling process accomplished by saccades and central (foveal) vision. Areas that attract central vision have been studied in relation to errors in the interpretation of medical images. In this study, we extend existing visual search studies to distinguish the features of areas that receive direct visual attention and elicit a mark by the radiologist (True and False Positive decisions) from those that elicit a mark but were captured by peripheral vision. We also investigate whether there are any differences between these areas and those that are never fixated by radiologists. Extending these investigations, we further explore the possibility of modelling radiologists' search behaviour and their interpretation of mammograms using deep machine learning techniques. We demonstrated that the energy profiles of foveated (FC), peripherally fixated (PC), and never fixated (NFC) areas are distinct: FCs are selected on the basis of being most informative, while never fixated regions were found to be least informative. Evidence was also found that the energy profiles and dwell times of these areas influence radiologists' decisions (and their confidence in those decisions). High-order features provided additional information to the radiologists; however, their effect on decisions (and confidence in those decisions) was not significant. We also showed that deep convolutional neural networks can successfully be used to model radiologists' attentional level, decisions, and confidence in their decisions. High accuracy and high agreement between true and predicted values can be achieved when modelling radiologists' attentional level (accuracy: 0.90, kappa: 0.82) and decisions (accuracy: 0.92, kappa: 0.86). Our results indicate that an ensemble model of a radiologist's search behaviour and decisions can successfully be built; however, the convolutional networks failed to model missed cancers.
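    The abstract above describes convolutional networks that predict a reader's decision from the image content around fixated regions, with agreement reported as accuracy and Cohen's kappa. The sketch below is a rough, hypothetical illustration of such a pipeline rather than the authors' model: the PyTorch class `PatchDecisionNet`, the grayscale patch input and the two-class labelling are all assumptions.

```python
# Minimal sketch (PyTorch) of a patch-level decision model: a small
# convolutional network that takes an image patch around a fixated region
# and predicts the reader's decision. Architecture and labels are illustrative.
import torch
import torch.nn as nn

class PatchDecisionNet(nn.Module):
    def __init__(self, n_classes: int = 2):  # e.g. mark vs. no-mark (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale mammogram patches
        return self.classifier(self.features(x).flatten(1))

# Agreement between true and predicted labels can be summarised with
# accuracy and Cohen's kappa, the measures quoted in the abstract.
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate(y_true, y_pred):
    return accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred)
```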

    Intelligent computing applications based on eye gaze: their role in mammographic interpretation training

    Get PDF
    Early breast cancer in women is best identified through high-quality mammographic screening. This is achieved by well trained health professionals and appropriate imaging. Traditionally this has used X-ray film, but it is rapidly changing to digital imaging, with the resultant mammograms visually examined on high-resolution clinical workstations. These digital images can also be viewed on a range of display devices, such as standard computer monitors or PDAs. In this thesis the potential of using such non-clinical display devices for training purposes in breast screening has been investigated. The research introduces and reviews breast screening both in the UK and internationally, concentrating on China, which is beginning screening. Various imaging technologies used to examine the breast are described, concentrating on the move from X-ray film to digital mammograms. Training in screening in the UK is detailed, and it is argued that there is a need to extend this. Initially, a national survey of all UK mammography screeners within the NHS Breast Screening Programme (NHSBSP) was undertaken. This highlighted that the main current difficulty is that mammographic (film) interpretation training is tied to the device used for inspecting the images. The screeners perceived the need for future digital imaging training that could take place outside the breast screening centre; namely 3W training (Whatever training required, Whenever and Wherever). This is largely because the clinical workstations would logistically not be available for training purposes due to the daily screening demand. Whilst these workstations must be used for screening and diagnostic purposes to allow visualisation of very small detail in the images, it is argued here that training to identify such features can be undertaken on other devices, where the time constraints that exist during breast screening do not apply. A series of small pilot studies was then undertaken, trialling experienced radiologists with potential displays (PDAs and laptops) for mammographic image examination. These studies demonstrated that even on a PDA, and with only a very limited HCI manipulation tool, small mammographic features could be identified, albeit with difficulty. For training purposes the laptop, studied here with no HCI tool, was supported. Such promising results on display acceptability led to an investigation of mammographic inspection on displays of various sizes and resolutions. This study employed radiography students, potential future screeners, who were eye tracked as they examined images on displays of various sizes. It showed that it could be possible to use even a small PDA to deliver training. A detailed study then investigated whether aspects of an expert radiologist's visual inspection behaviour could be used to develop various training approaches. Four approaches were developed and examined using naïve observers, who were eye tracked as they were trained and tested. All four approaches were found to be feasible to implement but of variable usefulness for delivering mammographic interpretation training; this was confirmed by opinions from a focus group of screeners. On the basis of the previous studies, a large-scale study was conducted over a period of eight months involving 15 film readers from major breast screening centres, who examined series of digital mammograms on a clinical workstation, a standard monitor and an iPhone. Overall results on individuals' performance, image manipulation behaviour and visual search data indicated that a standard monitor could successfully be employed as an alternative to the digital workstation to deliver on-demand mammographic interpretation training using full mammographic case images. The small iPhone elicited poor performance and was therefore judged not suitable for delivering training with the software employed here; however, future software developments may well overcome its shortcomings. The potential to implement training in China was examined by studying the current skill level of some practising radiologists and how they responded to the developed training approaches. The results suggest that such an approach would also be applicable in other countries with different levels of screening skill. Ongoing further work is also discussed: the improvement of performance evaluation in mammography, new visual research on other breast imaging modalities, and the use of visual search with computer-aided detection to assist mammographic interpretation training.

    Eye-tracking the moving medical image: Development and investigation of a novel investigational tool for CT Colonography

    Get PDF
    Colorectal cancer remains the third most common cancer in the UK but the second leading cause of cancer death, with more than 16,000 deaths per year. Many advances have been made in recent years in all areas of investigation for colorectal cancer, one of the more notable being the widespread introduction of CT Colonography (CTC). CTC has rapidly established itself as a cornerstone of diagnosis for colonic neoplasia, and much work has been done to standardise and assure quality in practice in both the acquisition and the interpretation of the technique. A novel feature of CTC is the presentation of imaging in both the traditional 2D and the 'virtual' 3D endoluminal formats. This thesis looks at expanding our understanding of, and improving our performance in, utilizing the endoluminal 3D view. We present and develop novel metrics applicable to eye-tracking the moving image, so that the complex dynamic nature of 3D endoluminal fly-through interpretation can be captured. These metrics are then applied to assess the effect of important elements of image interpretation, namely reader experience, the use of Computer Aided Detection (CAD) and the influence of the expected prevalence of abnormality. We review our findings with reference to the literature on eye tracking within medical imaging. In the co-registration section we apply our validated computer-assisted registration algorithm to the matching of 3D endoluminal colonic locations between temporally separate datasets, assessing its accuracy as an aid to colonic polyp surveillance with CTC.
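    The abstract refers to novel metrics for eye-tracking the moving image during the 3D fly-through. As a hedged illustration of what a frame-by-frame gaze metric for a moving target might look like (not the thesis's actual metrics), the sketch below computes time-on-target for a projected, moving region of interest; the `Frame` fields and the circular-target simplification are assumptions.

```python
# Illustrative only: scoring gaze against a moving region of interest,
# frame by frame, during an endoluminal fly-through recording.
from dataclasses import dataclass

@dataclass
class Frame:
    gaze_x: float         # gaze position in screen pixels
    gaze_y: float
    target_x: float       # projected centre of the (moving) candidate lesion
    target_y: float
    target_radius: float  # apparent radius of the target in pixels

def time_on_target(frames: list[Frame], frame_duration_s: float) -> float:
    """Total time the gaze falls within the moving target region."""
    hits = sum(
        1 for f in frames
        if (f.gaze_x - f.target_x) ** 2 + (f.gaze_y - f.target_y) ** 2
        <= f.target_radius ** 2
    )
    return hits * frame_duration_s

# Example: for a 25 fps recording, frame_duration_s = 0.04.
```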

    Boost your career opportunities with the ESSR diploma

    No full text

    Eye Tracking Methods for Analysis of Visuo-Cognitive Behavior in Medical Imaging

    Get PDF
    Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing device technology and machine intelligence. Eye tracking bio-sensors, for example, can measure psycho-physiological response through change events in the configuration of the human eye. These events include positional changes, such as visual fixations, saccadic movements, and scanpaths, and non-positional changes, such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli. In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven radiology residents) for a corpus of 100 screening mammograms comprising cases of varied pathology and breast parenchyma density. We proposed novel features and gaze analysis techniques, which help to encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers' identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision. Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for the characterization of a mammographic case and of image readers' diagnostic performance. Lastly, features characterizing eye movements can be utilized for biometric identification purposes. These findings have implications for real-time performance monitoring and personalized intelligent training and evaluation systems in screening mammography. Further, the developed algorithms are applicable in other domains involving high-risk visual tasks.
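    The study relies on positional measures (fixations, saccades, scanpath) and non-positional measures (blinks, pupil dilation) extracted from eye-tracker output. The sketch below shows one plausible way to compute a handful of such features from raw samples; the function name, field layout and NumPy-based representation are assumptions, not the authors' implementation.

```python
# Hypothetical feature extraction from raw eye-tracker samples:
# positional measures (fixation count, scanpath length) and
# non-positional measures (blink rate, pupil diameter).
import numpy as np

def gaze_features(x, y, pupil, is_blink, fixation_ids, duration_s):
    """x, y: gaze coordinates per sample; pupil: pupil diameter per sample;
    is_blink: boolean per sample; fixation_ids: fixation label per sample
    (-1 for saccade samples); duration_s: length of the recording in seconds."""
    x, y, pupil = map(np.asarray, (x, y, pupil))
    is_blink = np.asarray(is_blink, dtype=bool)
    fixation_ids = np.asarray(fixation_ids)

    # Scanpath length: summed Euclidean distance between consecutive samples.
    scanpath_len = float(np.sum(np.hypot(np.diff(x), np.diff(y))))

    # Count blink events as onsets (False -> True transitions).
    onsets = np.diff(is_blink.astype(int)) == 1
    n_blinks = int(onsets.sum()) + (1 if bool(is_blink[0]) else 0)

    return {
        "n_fixations": int(np.unique(fixation_ids[fixation_ids >= 0]).size),
        "scanpath_length_px": scanpath_len,
        "blink_rate_hz": n_blinks / duration_s,
        "mean_pupil_diameter": float(pupil[~is_blink].mean()),
    }
```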