
    Evaluation of gesture based interfaces for medical volume visualization tasks

    Interactive systems are increasingly used in medical applications with the widespread availability of various imaging modalities. Gesture-based interfaces can be beneficial for interacting with these kinds of systems in a variety of settings, as they can be easier to learn and can eliminate several shortcomings of traditional tactile systems, especially for surgical applications. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, where the performance of a gesture-based interface (using the Microsoft Kinect) was compared to using the mouse. The second experiment studied the localization of internal structures, comparing slice-based visualizations controlled via gestures and the mouse, in addition to a 3D Magic Lens visualization. The results of the user studies showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy in the orientation matching task. The traditional mouse was the superior interface for the second experiment in terms of accuracy. However, the gesture-based Magic Lens interface had the fastest target localization time. We discuss these findings and their implications for the use of gesture-based interfaces in medical volume visualization, and discuss the possible underlying psychological mechanisms by which these methods can outperform traditional interaction methods.

    2D-3D registration using gradient-based MI for image guided surgery systems

    Registration of preoperative CT data to intra-operative video images is necessary not only to compare the postoperative shape of the vocal fold with the preplanned shape but also to provide image guidance through the fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera producing the endoscopic images and the virtual camera rendering the CT scan. Even though mutual information has been used successfully to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to varying light patterns and the shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. This emphasizes the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which makes the result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to more accurate registration than a single-resolution scheme. © 2011 SPIE
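    The abstract describes the cost function and optimizer but gives no implementation. Below is a minimal Python sketch of one plausible reading of the gradient-weighted mutual information cost, paired with scipy's downhill simplex (Nelder-Mead) optimizer. The render_ct callable, the 6-DoF pose vector, and the exact weighting scheme are assumptions made for illustration, and the paper's conditional multi-resolution scheme is omitted.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import minimize

def mutual_information(a, b, weights=None, bins=32):
    """MI from an (optionally weighted) joint histogram of two images."""
    hist, _, _ = np.histogram2d(
        a.ravel(), b.ravel(), bins=bins,
        weights=None if weights is None else weights.ravel())
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def gradient_weighted_mi(fixed, moving):
    """MI of the original images plus MI of their gradient-magnitude
    images, with histogram samples weighted toward strong edges."""
    gf = ndimage.gaussian_gradient_magnitude(fixed, sigma=1.0)
    gm = ndimage.gaussian_gradient_magnitude(moving, sigma=1.0)
    w = gf + gm  # more weight where either image has a high gradient
    return (mutual_information(fixed, moving)
            + mutual_information(gf, gm, weights=w))

def register(endoscopic_img, render_ct, pose0):
    """Find the virtual-camera pose maximizing gradient-weighted MI.

    render_ct is a hypothetical callable that renders the CT volume
    from a 6-DoF pose vector; pose0 is the initial viewpoint guess.
    """
    cost = lambda p: -gradient_weighted_mi(endoscopic_img, render_ct(p))
    # Nelder-Mead is scipy's downhill simplex implementation.
    return minimize(cost, pose0, method='Nelder-Mead').x
```

    Nelder-Mead is derivative-free, which suits this problem because the rendering step makes the cost effectively non-differentiable; the paper additionally wraps the search in a conditional multi-resolution scheme to avoid local maxima, which this sketch leaves out.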
