3 research outputs found

    Evaluating the Microsoft Kinect compared to the mouse as an effective interaction device for medical imaging manipulations

    Volume-rendered medical images give medical professionals more information, supporting more advanced diagnoses than 2D slices previously allowed. Three-dimensional (3D) images enable a non-invasive depiction of a patient's body as a surgeon would expect to see it during invasive surgery. These generated 3D representations convey information about the patient to the surgeon more effectively and efficiently, bypassing the mental reconstruction radiologists must perform to interpret the same patient's data displayed as a two-dimensional (2D) array of images. Time demands on doctors prohibit mastering complicated software packages with steep learning curves; medical imaging software must be easy to learn and offer effective functionality if it is to be used by and accessible to medical professionals. Interacting with the software is a key component of usability and accessibility. Commercial off-the-shelf (COTS) interaction devices provide new opportunities to manipulate 3D medical imaging software and further reduce its traditionally steep learning curve, although introducing these devices into medical environments raises new concerns about sterilization and effective utilization. Certain COTS devices offer sterile, touch-less interaction that would be ideal for medical operating rooms (ORs), anatomy labs, or clinics, allowing medical professionals direct control of the patient data being examined. This thesis explores the usability and functionality of the Microsoft Kinect as an interaction device for medical imaging technology. A user study was conducted to evaluate participants' performance and experience while completing a task called windowing, i.e., changing the tissue densities displayed in an anatomical image.
Participants completed four rounds of five tasks to view particular anatomical features across two datasets. Across both devices, participants identified the anatomy correctly with 75% accuracy, while those using the Kinect (μ = 9.739 minutes) spent on average about 2 minutes less completing the series of 20 tasks than those using the mouse (μ = 11.709 minutes). Participants using the Kinect also chose larger window-width values than mouse users; however, this did not appear to affect their accuracy in identifying the anatomy.
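The windowing task studied above has a standard linear formulation: a window defined by a center and a width maps raw intensities (e.g., Hounsfield units in CT) to display gray levels, with values below the window clipped to black and values above it clipped to white. The abstract does not give the exact mapping used in the thesis; the sketch below assumes the common linear center/width form.

```python
import numpy as np

def apply_window(hu, center, width):
    """Map raw intensities to 8-bit display values with a linear window:
    values below (center - width/2) clip to 0, values above
    (center + width/2) clip to 255."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = (np.asarray(hu, dtype=float) - lo) / (hi - lo)  # normalize to [0, 1]
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# Illustrative soft-tissue window (center 40 HU, width 400 HU)
pixels = np.array([-1000, -160, 40, 240, 1000])  # air ... dense bone
print(apply_window(pixels, center=40, width=400))  # -> [  0   0 127 255 255]
```

Narrowing the width increases contrast within a tissue range; shifting the center selects which densities are visible, which is why Kinect users' larger window widths changed the rendered image without necessarily changing what anatomy was identifiable.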

    An Interactive Visual Analytics Framework for Gaze Data from 3D Medical Image Reading

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016. Advisor: Jinwook Seo.
    We propose an interactive visual analytics framework for diagnostic gaze data on volumetric medical images. The framework is designed to compare gaze data from multiple readers using effective visualizations tailored for volumetric gaze data with additional contextual information. Gaze pattern comparison is essential to understand how radiologists examine medical images and to identify factors influencing the examination. However, prior work on diagnostic gaze data for medical images acquired from volumetric imaging systems (e.g., computed tomography or magnetic resonance imaging) showed a number of limitations in comparative analysis. During diagnosis, radiologists scroll through a stack of images to build a 3D understanding of organs and lesions, so the resulting gaze patterns contain depth information absent from gaze-tracking studies with 2D stimuli. This additional spatial dimension aggravates the complexity of visually representing the gaze data. A recent work proposed a visualization design based on direct volume rendering (DVR) for gaze patterns in volumetric images; however, effective and comprehensive gaze pattern comparison remains challenging due to the lack of interactive visualization tools for comparative gaze analysis. In this dissertation, we first present an effective visual representation, and then propose an interactive analytics framework for multiple volumetric gaze datasets. We also take on the challenge of integrating crucial contextual information, such as pupil size and windowing (i.e., adjusting the brightness and contrast of the image), into the analysis process for more in-depth and ecologically valid findings.
Among the interactive visualization components, a context-embedded interactive scatterplot (CIS) is specially designed to help users examine abstract gaze data in diverse contexts by embedding medical imaging representations well known to radiologists. We also present results from case studies with chest and abdominal radiologists.

Table of contents:
Chapter 1 Introduction
  1.1 Background
  1.2 Research Components
  1.3 Radiological Practice
  1.4 Organization of the Dissertation
Chapter 2 Related Work
  2.1 Visualization Combining 2D and 3D
  2.2 Eye Tracking Data Visualization
  2.3 Comparative Data Analysis
  2.4 Gaze Analysis in the Medical Field
Chapter 3 GazeVis: Volumetric Gaze Data
  3.1 Visualization of Stimuli and Gaze Data
    3.1.1 Computation of Gaze Field
    3.1.2 Visualization of Gaze Field
    3.1.3 Gaze Field for Interactive Information Seeking
  3.2 Interactions and Dynamic Queries
    3.2.1 Interaction Design
    3.2.2 Spatial Filtering
    3.2.3 Temporal Filtering
    3.2.4 Transfer Function Control
    3.2.5 Gaussian Blur Control
  3.3 Implementation
  3.4 Evaluation with Radiologists
    3.4.1 Case Study Protocol
    3.4.2 Datasets
    3.4.3 Apparatus
    3.4.4 Chest Radiologists
    3.4.5 Abdominal Radiologists
  3.5 Discussion
    3.5.1 Spatial Data Structure and Flexibility
    3.5.2 Interacting with Contextual Data
Chapter 4 GazeDx: Interactive Gaze Analysis Framework
  4.1 Design Rationale
  4.2 Overviews for Comparative Gaze Analysis
    4.2.1 Spatial Similarity
    4.2.2 Qualitative Similarity Overview
    4.2.3 Multi-level Temporal Overview
  4.3 In-depth Comparison of Gaze Patterns
    4.3.1 Detail Views for Individual Readers
    4.3.2 Aggregation for Group Comparison
  4.4 CIS: Context-embedded Interactive Scatterplot
    4.4.1 Flexible Axis Configuration
    4.4.2 Focus Attention with Familiar Representations
    4.4.3 Scatterplot Matrix with CIS
  4.5 Interactive Selection and Filtering
    4.5.1 Selection by Freehand Drawing
    4.5.2 Selection by Human Anatomy
  4.6 Implementation
  4.7 Case Studies
    4.7.1 Case Study Protocol
    4.7.2 Apparatus
    4.7.3 Case Study 1: Chest Radiologists
    4.7.4 Case Study 2: Abdominal Radiologists
  4.8 Discussion
Chapter 5 Conclusion
Bibliography
Abstract in Korean
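The table of contents refers to computing a "gaze field" over the volume, with Gaussian blur control. The dissertation's exact definition is not reproduced in this abstract; the sketch below assumes a common construction in which per-voxel dwell time is accumulated from fixation samples and then smoothed with a separable Gaussian kernel.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaze_field(fixations, shape, sigma=1.0):
    """Hypothetical gaze field: accumulate (x, y, z) fixation samples,
    weighted by dwell time, into a 3D array and blur it axis by axis."""
    field = np.zeros(shape, dtype=float)
    for (x, y, z), duration in fixations:
        field[z, y, x] += duration  # dwell seconds at each voxel
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    for axis in range(3):  # separable Gaussian smoothing
        field = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, field)
    return field

# Illustrative usage: two fixations in a small 16^3 volume
fixes = [((8, 8, 8), 0.8), ((9, 8, 8), 0.2)]  # (voxel, dwell seconds)
field = gaze_field(fixes, shape=(16, 16, 16), sigma=1.0)
```

The smoothed field can then drive direct volume rendering or dynamic queries (spatial and temporal filtering, transfer-function control) as the chapter outline suggests.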

    Evaluating Interaction Techniques for Stack Mode Viewing

    Three interaction techniques were evaluated for scrolling stack mode displays of volumetric data. Two used a scroll-wheel mouse: one used only the wheel, while another used a "click and drag" technique for fast scrolling, leaving the wheel for fine adjustments. The third technique used a Shuttle Xpress jog wheel. In a within-subjects design, nine radiologists searched stacked images for simulated hyper-intense regions on brain, knee, and thigh MR studies. Dependent measures were speed, accuracy, navigation path, and user preference. The radiologists considered the task realistic. They had high inter-subject variability in completion times, far larger than the differences between techniques. Most radiologists (eight out of nine) preferred familiar mouse-based techniques. Most participants scanned the data in two passes, first locating anomalies, then scanning for omissions. Participants spent a mean 10.4 s/trial exploring anomalies, with only mild variation between participants. Their rates of forward navigation searching for anomalies varied much more. Interaction technique significantly affected forward navigation rate (scroll wheel 5.4 slices/s, click and drag 9.4, and jog wheel 6.9). It is not clear what constrained the slowest navigators. The fastest navigator used a unique strategy of moving quickly just beyond an anomaly, then backing up. Eight naïve students performed a similar protocol. Their times and variability were similar to the radiologists, but more (three out of eight) students preferred the jog wheel. It may be worthwhile to introduce techniques such as the jog wheel to radiologists during training, and several techniques might be provided on workstations, allowing individuals to choose their preferred method.
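The forward navigation rates reported above (slices per second) can be derived from a logged navigation path. The abstract does not describe the logging format; the sketch below assumes a hypothetical log of (timestamp, slice index) samples and counts only forward steps, excluding pauses and backtracking.

```python
def forward_navigation_rate(path):
    """Estimate forward scrolling speed from a navigation path of
    (timestamp_seconds, slice_index) samples: slices advanced divided
    by time spent moving forward. Hypothetical log format."""
    slices = 0
    seconds = 0.0
    for (t0, s0), (t1, s1) in zip(path, path[1:]):
        if s1 > s0:  # forward step only; skip pauses and backtracking
            slices += s1 - s0
            seconds += t1 - t0
    return slices / seconds if seconds else 0.0

log = [(0.0, 0), (0.5, 3), (1.0, 6), (1.5, 5), (2.0, 9)]
rate = forward_navigation_rate(log)
# 10 slices advanced over 1.5 s of forward motion, i.e. about 6.67 slices/s
```

Separating forward-search rate from backtracking in this way matches the two-pass behavior the study observed, where participants first located anomalies and then re-scanned for omissions.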