    State of the art: Eye-tracking studies in medical imaging

    Eye-tracking – the process of measuring where people look in a visual field – has been widely used to study how humans process visual information. In medical imaging, eye-tracking has become a popular technique in many applications to reveal how visual search and recognition tasks are performed, providing information that can improve human performance. In this paper, we present a comprehensive review of eye-tracking studies conducted with medical images and videos for diverse research purposes, including identification of degree of expertise, development of training, and understanding and modelling of visual search patterns. In addition, we present our recent eye-tracking study that involves a large number of screening mammograms viewed by experienced breast radiologists. Based on the eye-tracking data, we evaluate the plausibility of predicting visual attention by computational models.

    Intelligent computing applications to assist perceptual training in medical imaging

    The research presented in this thesis represents a body of work which addresses issues in medical imaging, primarily as it applies to breast cancer screening and laparoscopic surgery. The concern here is how computer-based methods can aid medical practitioners in these tasks. Thus, research is presented which develops both new techniques of analysing radiologists' performance data and new approaches of examining surgeons' visual behaviour when they are undertaking laparoscopic training. Initially, a new chest X-ray self-assessment application is described which has been developed to assess and improve radiologists' performance in detecting lung cancer. Then, in breast cancer screening, a method of identifying potential poor-performance outliers at an early stage in a national self-assessment scheme is demonstrated. Additionally, a method is presented to determine whether a radiologist, in using this scheme, has correctly localised and identified an abnormality or made an error. One issue in appropriately measuring radiological performance in breast screening is that both the size of the clinical monitors used and the difficulty in linking the medical image to the observer's line of sight hinder suitable eye tracking. Consequently, a new method is presented which links these two items. Laparoscopic surgeons have similar issues to radiologists in interpreting a medical display, but with the added complication of hand-eye co-ordination. Work is presented which examines whether visual search feedback of surgeons' operations can serve as a useful training aid.

    Eye Tracking Methods for Analysis of Visuo-Cognitive Behavior in Medical Imaging

    Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing device technology and machine intelligence. Eye tracking bio-sensors, for example, can measure psycho-physiological response through change events in the configuration of the human eye. These events include positional changes such as visual fixations, saccadic movements, and scanpaths, and non-positional changes such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli. In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven Radiology residents) for a corpus of 100 screening mammograms (comprising cases of varied pathology and breast parenchyma density). We proposed novel features and gaze analysis techniques, which help to encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers' identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision. Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for the characterization of a mammographic case and image readers' diagnostic performance. Lastly, features characterizing eye movements can be utilized for biometric identification purposes.
    These findings have implications for real-time performance monitoring and for personalized intelligent training and evaluation systems in screening mammography. Further, the developed algorithms are applicable in other domains involving high-risk visual tasks.
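    The positional events mentioned in this abstract (fixations, saccades) are typically derived from a raw gaze-sample stream. A minimal sketch of the standard dispersion-threshold (I-DT) detector, not the authors' actual pipeline, with illustrative threshold values:

```python
# Illustrative I-DT fixation detection (assumed simplification, not the
# study's pipeline): a window of gaze samples counts as a fixation when
# its spatial dispersion stays under a threshold for long enough.

def dispersion(window):
    """Dispersion of a gaze window: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_samples=5):
    """samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns (start_index, end_index, centroid_x, centroid_y) tuples."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i + min_samples
        if j > n:
            break
        if dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while dispersion stays under the threshold
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            fixations.append((i, j - 1, sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return fixations
```

    Gaps between detected fixations then correspond to saccades, from which scanpath and saccade-amplitude features can be derived.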

    Modelling the interpretation of digital mammography using high order statistics and deep machine learning

    Visual search is an inhomogeneous, yet efficient, sampling process accomplished by saccades and central (foveal) vision. Areas that attract the central vision have been studied for errors in the interpretation of medical images. In this study, we extend existing visual search studies to distinguish features of areas that receive direct visual attention and elicit a mark by the radiologist (True and False Positive decisions) from those that elicit a mark but were captured by the peripheral vision. We also investigate whether there are any differences between these areas and those that are never fixated by radiologists. Extending these investigations, we further explore the possibility of modelling radiologists' search behaviour and their interpretation of mammograms using deep machine learning techniques. We demonstrated that the energy profiles of foveated (FC), peripherally fixated (PC), and never fixated (NFC) areas are distinct. FCs were shown to be selected on the basis of being most informative, while never fixated regions were found to be least informative. Evidence that the energy profiles and dwell times of these areas influence radiologists' decisions (and their confidence in those decisions) was also presented. High-order features provided additional information to the radiologists; however, their effect on decisions (and confidence in those decisions) was not significant. We also showed that a deep convolutional neural network can successfully be used to model radiologists' attentional level, decisions, and confidence in those decisions. High accuracy and high agreement (between true and predicted values) can be achieved in modelling the attentional level (accuracy: 0.90, kappa: 0.82) and decisions (accuracy: 0.92, kappa: 0.86) of radiologists. Our results indicated that an ensembled model of a radiologist's search behaviour and decisions can successfully be built. However, the convolutional networks failed to model missed cancers.
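    The agreement figures quoted above pair raw accuracy with Cohen's kappa, which corrects for chance agreement between true and predicted labels. A minimal sketch of the computation (the label values shown are illustrative, not from the study's data):

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
# agreement). Chance agreement is estimated from the label marginals.
from collections import Counter

def cohens_kappa(true_labels, pred_labels):
    n = len(true_labels)
    observed = sum(t == p for t, p in zip(true_labels, pred_labels)) / n
    t_counts = Counter(true_labels)
    p_counts = Counter(pred_labels)
    # chance agreement: sum over labels of the product of marginal rates
    expected = sum(t_counts[c] * p_counts.get(c, 0) for c in t_counts) / (n * n)
    return (observed - expected) / (1 - expected)
```

    A kappa of 0.86 alongside 0.92 accuracy, as reported for decisions, indicates agreement well above what the class distribution alone would produce.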

    New approaches to the analysis of eye movement behaviour across expertise while viewing brain MRIs

    Brain tumour detection and diagnosis require clinicians to inspect and analyse brain magnetic resonance images. Eye-tracking is commonly used to examine observers' gaze behaviour during such medical image interpretation tasks, but analysis of eye movement sequences is limited. We therefore used ScanMatch, a novel technique that compares saccadic eye movement sequences, to examine the effect of expertise and diagnosis on the similarity of scanning patterns. Diagnostic accuracy was also recorded. Thirty-five participants were classified as Novices, Medics and Experts based on their level of expertise. Participants completed two brain tumour detection tasks. The first was a whole-brain task, which consisted of 60 consecutively presented slices from one patient; the second was an independent-slice detection task, which consisted of 32 independent slices from five different patients. Experts displayed the highest accuracy and sensitivity, followed by Medics and then Novices, in the independent-slice task. Experts showed the highest level of scanning pattern similarity, with Medics engaging in the least similar scanning patterns, for both the whole-brain and independent-slice tasks. In the independent-slice task, scanning patterns were the least similar for false negatives across all expertise levels and most similar for Experts when they responded correctly. These results demonstrate the value of using ScanMatch in the medical image perception literature. Future research adopting this tool could, for example, identify cases that yield low scanning similarity and so provide insight into why diagnostic errors occur, ultimately helping in training radiologists.
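    ScanMatch builds on sequence alignment: fixations are binned into grid cells, encoded as letters, and two scanpaths are compared by global alignment. A rough sketch of that core step (a simplification; the published method also uses a spatially informed substitution matrix and temporal binning):

```python
# Needleman-Wunsch global alignment over grid-cell labels, as an
# illustrative stand-in for ScanMatch's scoring step. Scores here
# (match/mismatch/gap) are placeholder values, not the published ones.

def align_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Similarity of two scanpath strings, normalised to 1.0 for
    identical sequences."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,   # substitute
                              score[i - 1][j] + gap,       # gap in seq_b
                              score[i][j - 1] + gap)       # gap in seq_a
    return score[-1][-1] / (match * max(len(seq_a), len(seq_b)))
```

    Averaging such pairwise scores within a group (e.g. Experts) gives the within-group scanning-pattern similarity compared across expertise levels in the study.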

    Development and Validation of Mechatronic Systems for Image-Guided Needle Interventions and Point-of-Care Breast Cancer Screening with Ultrasound (2D and 3D) and Positron Emission Mammography

    The successful intervention of breast cancer relies on effective early detection and definitive diagnosis. While conventional screening mammography has substantially reduced breast cancer-related mortalities, substantial challenges persist in women with dense breasts. Additionally, complex interrelated risk factors and healthcare disparities contribute to breast cancer-related inequities, which restrict accessibility, impose cost constraints, and reduce inclusivity to high-quality healthcare. These limitations predominantly stem from the inadequate sensitivity and clinical utility of currently available approaches in increased-risk populations, including those with dense breasts, underserved and vulnerable populations. This PhD dissertation aims to describe the development and validation of alternative, cost-effective, robust, and high-resolution systems for point-of-care (POC) breast cancer screening and image-guided needle interventions. Specifically, 2D and 3D ultrasound (US) and positron emission mammography (PEM) were employed to improve detection, independent of breast density, in conjunction with mechatronic and automated approaches for accurate image acquisition and precise interventional workflow. First, a mechatronic guidance system for US-guided biopsy under high-resolution PEM localization was developed to improve spatial sampling of early-stage breast cancers. Validation and phantom studies showed accurate needle positioning and 3D spatial sampling under simulated PEM localization. Subsequently, a whole-breast spatially-tracked 3DUS system for point-of-care screening was developed, optimized, and validated within a clinically-relevant workspace and healthy volunteer studies. To improve robust image acquisition and adaptability to diverse patient populations, an alternative, cost-effective, portable, and patient-dedicated 3D automated breast (AB) US system for point-of-care screening was developed. 
    Validation showed accurate geometric reconstruction, feasible clinical workflow, and proof-of-concept utility across healthy volunteers and acquisition conditions. Lastly, an orthogonal acquisition and 3D complementary breast (CB) US generation approach was described and experimentally validated to improve spatial resolution uniformity by recovering poor out-of-plane resolution. The systems developed and described throughout this dissertation show promise as alternative, cost-effective, robust, and high-resolution approaches for improving early detection and definitive diagnosis. Consequently, these contributions may advance equity in breast cancer care and improve outcomes in increased-risk populations and limited-resource settings.

    Doctor of Philosophy

    Using eye-tracking technology to capture the visual scanpaths of a sample of laypersons (N = 92), the current study employed a 2 (training condition: ABCDE vs. Ugly Duckling Sign) × 2 (visual condition: photorealistic images vs. illustrations) factorial design to assess whether SSE training succeeds or fails in facilitating increases in sensitivity and specificity. Self-efficacy and perceived importance were tested as moderators, and eye-tracking fixation metrics as mediators, within the framework of Visual Skill Acquisition Theory (VSAT). For sensitivity, results indicated a significant main effect for visual condition, F(1,88) = 7.102, p = .009, wherein illustrations (M = .524, SD = .197) resulted in greater sensitivity than photos (M = .425, SD = .159, d = .55). For specificity, the main effect for training was not significant, F(1,88) = 2.120, p = .149; however, results indicated a significant main effect for visual condition, F(1,88) = 4.079, p = .046, wherein photos (M = .821, SD = .108) resulted in greater specificity than illustrations (M = .770, SD = .137, d = .41). The interaction for training × visual condition, F(1,88) = 3.554, p = .063, was significant within a 90% confidence interval, such that those within the UDS Photo condition displayed greater specificity than all other combinations of training and visual condition. No significant moderated mediation manifested for sensitivity, but for specificity, the model was significant, r = .59, R² = .34, F(9,82) = 4.7783, p = .001, with Percent of Time in Lookzone serving as a significant mediator, and both self-efficacy and visual condition significantly moderating the mediation. For those in the photo condition with very high self-efficacy, UDS increased specificity directly.
    For those in the photo condition with self-efficacy levels at the mean or lower, there was a conditional indirect effect through Percent of Time in Lookzone; that is, these individuals spent a larger amount of their viewing time on target (observing the atypical nevi), and time on target is positively related to specificity. Findings suggest that existing SSE training techniques may be enhanced by maximizing visual processing efficiency.
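    The effect sizes reported in this abstract can be recovered from its descriptive statistics, assuming Cohen's d with a pooled standard deviation and roughly equal group sizes:

```python
# Reproducing the abstract's reported effect sizes (d = .55 for
# sensitivity, d = .41 for specificity) from its means and SDs,
# assuming pooled-SD Cohen's d with equal group sizes.
import math

def cohens_d(m1, sd1, m2, sd2):
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# sensitivity: illustrations vs photos
d_sensitivity = cohens_d(0.524, 0.197, 0.425, 0.159)
# specificity: photos vs illustrations
d_specificity = cohens_d(0.821, 0.108, 0.770, 0.137)
```

    Rounding both values to two decimals matches the reported d = .55 and d = .41, consistent with a pooled-SD definition of the effect size.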