    Complexity Reduction in Image-Based Breast Cancer Care

    The diversity of breast malignancies requires personalized diagnostic and therapeutic decision making in a complex clinical setting. This thesis contributes to three clinical areas: (1) For diagnostic image evaluation, computer-aided detection and diagnosis of mass and non-mass lesions in breast MRI is developed. 4D texture features characterize mass lesions; for non-mass lesions, a combined detection/characterisation method utilizes the bilateral symmetry of the breast's contrast agent uptake. (2) To improve clinical workflows, a breast MRI reading paradigm is proposed and exemplified by a breast MRI reading workstation prototype that is operated with multi-touch gestures instead of mouse and keyboard. The concept is extended to mammography screening, introducing efficient navigation aids. (3) Contributions to finite element modeling of breast tissue deformations tackle two clinical problems: surgery planning and the prediction of breast deformation in an MRI biopsy device.
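
    The bilateral-symmetry idea can be pictured with a minimal sketch (hypothetical array layout, mirroring axis, and threshold; not the thesis implementation): subtract the mirrored contrast-enhancement map from the original and flag regions whose uptake is markedly stronger than at the contralateral position.

        import numpy as np

        def asymmetry_candidates(enhancement_map, threshold=0.3):
            """Flag voxels whose contrast enhancement clearly exceeds that of the
            mirrored (contralateral) position. `enhancement_map` is a hypothetical
            array of relative signal enhancement covering both breasts, with the
            left-right axis as the last dimension."""
            mirrored = np.flip(enhancement_map, axis=-1)   # contralateral counterpart
            asymmetry = enhancement_map - mirrored         # positive where this side enhances more
            return asymmetry > threshold                   # boolean candidate mask

        # Toy 4x6 enhancement map with a single focal, one-sided uptake:
        toy = np.zeros((4, 6))
        toy[1, 1] = 0.8
        mask = asymmetry_candidates(toy)
        print(mask[1, 1], mask[1, 4])                      # True False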

    Review of three-dimensional human-computer interaction with focus on the leap motion controller

    Modern hardware and software development has led to an evolution of user interfaces from the command line to natural user interfaces for immersive virtual environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on the Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application, and the underlying methods for gesture design and recognition, with particular attention to interfaces based on the Leap Motion Controller (LMC). Finally, a review of evaluation methods for the proposed natural user interfaces is given.
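
    As a minimal, hedged illustration of the kind of gesture recognition such interfaces rely on (hypothetical fingertip coordinates and threshold; no vendor SDK calls are assumed), a pinch gesture can be classified from the distance between two tracked fingertips:

        import numpy as np

        def detect_pinch(thumb_tip, index_tip, threshold_mm=25.0):
            """Classify a pinch from two 3D fingertip positions (in millimetres),
            as reported by a hand tracker such as the LMC. A pinch is assumed
            whenever the fingertips are closer than `threshold_mm`."""
            distance = np.linalg.norm(np.asarray(thumb_tip, dtype=float)
                                      - np.asarray(index_tip, dtype=float))
            return distance < threshold_mm

        print(detect_pinch([0, 0, 0], [10, 5, 0]))   # True: fingertips ~11 mm apart
        print(detect_pinch([0, 0, 0], [60, 0, 0]))   # False: open hand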

    The State of the Art of Spatial Interfaces for 3D Visualization

    We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand, and the visualization task supported on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.
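
    The two-axis structure of such a taxonomy can be sketched as a simple grid keyed by input source and visualization task (the axis values and example techniques below are hypothetical placeholders, not the survey's actual categories):

        from dataclasses import dataclass

        # Placeholder axis values for illustration only.
        INPUT_SOURCES = {"touch", "tangible", "mid-air gesture", "controller"}
        TASKS = {"navigation", "selection", "manipulation", "annotation"}

        @dataclass(frozen=True)
        class TechniqueEntry:
            name: str
            input_source: str   # one of INPUT_SOURCES
            task: str           # one of TASKS

        entries = [
            TechniqueEntry("mid-air slicing plane", "mid-air gesture", "manipulation"),
            TechniqueEntry("tangible cutting prop", "tangible", "manipulation"),
        ]

        # Group techniques into the two-axis grid.
        grid = {}
        for e in entries:
            grid.setdefault((e.input_source, e.task), []).append(e.name)
        print(grid)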

    Advanced Visualization and Intuitive User Interface Systems for Biomedical Applications

    Modern scientific research produces data at rates that far outpace our ability to comprehend and analyze it. Such sources include medical imaging data and computer simulations, where technological advancements and increasing spatiotemporal resolution generate ever-larger amounts of data from each scan or simulation. A bottleneck has developed whereby medical professionals and researchers are unable to fully use the advanced information available to them. By integrating computer science, computer graphics, artistic ability and medical expertise, scientific visualization of medical data has become a new field of study. The objective of this thesis is to develop two visualization systems that use advanced visualization, natural user interface technologies and the large amount of biomedical data available to produce results of clinical utility and overcome this data bottleneck. Computational Fluid Dynamics (CFD) is a tool used to study the quantities associated with the movement of blood by computer simulation. We developed methods of processing spatiotemporal CFD data and displaying it in stereoscopic 3D with the ability to spatially navigate through the data. We used this method with two sets of display hardware: a full-scale visualization environment and a small-scale desktop system. The advanced display and data navigation abilities provide the user with the means to better understand the relationship between the vessel's form and function. Low-cost 3D depth-sensing cameras capture and process user body motion to recognize motions and gestures. Such devices allow users to use hand motions as an intuitive interface to computer applications. We developed algorithms to process and prepare the biomedical and scientific data for use with a custom control application. The application interprets user gestures as commands to a visualization tool and allows the user to control the visualization of multi-dimensional data. The intuitive interface allows the user to control the visualization of data without manual contact with an interaction device. In developing these methods and software tools we have leveraged recent trends in advanced visualization and intuitive interfaces in order to efficiently visualize biomedical data in a way that provides meaningful information and supports a deeper understanding of the data.
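
    The gesture-to-command layer described above can be pictured with a minimal dispatch sketch (the gesture names, commands, and view-state fields are assumptions for illustration, not the thesis software):

        def handle_gesture(gesture, view_state):
            """Map a recognized gesture name to an update of a simple
            visualization view state (rotation angle, zoom factor, time step)."""
            if gesture == "swipe_left":
                view_state["rotation_deg"] -= 15      # rotate the model
            elif gesture == "swipe_right":
                view_state["rotation_deg"] += 15
            elif gesture == "push":
                view_state["zoom"] *= 1.1             # move closer to the vessel
            elif gesture == "pull":
                view_state["zoom"] /= 1.1
            elif gesture == "wave":
                view_state["time_step"] += 1          # advance the simulation frame
            return view_state

        state = {"rotation_deg": 0, "zoom": 1.0, "time_step": 0}
        state = handle_gesture("push", state)
        print(state)   # {'rotation_deg': 0, 'zoom': 1.1, 'time_step': 0}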

    Bodies of Seeing: A video ethnography of academic x-ray image interpretation training and professional vision in undergraduate radiology and radiography education

    This thesis reports on a UK-based video ethnography of academic x-ray image interpretation training across two undergraduate courses in radiology and radiography. By studying the teaching and learning practices of the classroom, I initially explore the professional vision of x-ray image interpretation and how its relation to normal radiographic anatomy founds the practice of being ‘critical’. This criticality accomplishes a faculty of perceptual norms that is coded and organised, and therefore a specific radiological vision. Professionals’ commitment to the cognitivist rhetoric of ‘looking at’/‘pattern recognition’ builds this critical perception, a perception that deepens in organisation when professionals endorse a ‘systematic approach’ that mediates matter-of-fact thoroughness and offers a helpful critical commentary towards the image. In what follows, I explore how x-ray image interpretation is constituted in case presentations. During training, x-ray images are treated with suspicion, as misleading, and are aligned with a commitment to discursive contexts of ‘missed abnormality’, ‘interpretive risk’, and ‘technical error’. The image is subsequently constructed as ambiguous, such that what is shown cannot be taken at face value. This interconnects with reenacting ideals around ‘seeing clearly’ that are explained through the teaching practices and material world of the academic setting and how, if misinterpretation is established, the ambiguity of the image is reduced by embodied gestures and technoscientific knowledge. By making this correction, the ambiguous image is reenacted and the misinterpretation of image content is explained. To conclude, I highlight how the professional vision of academic x-ray image interpretation prepares students for the workplace, shapes the classificatory interpretation of ab(normal) anatomy, manages ambiguity through embodied expectations and bodily norms, and cultivates body-machine relations.

    Toward New Ecologies of Cyberphysical Representational Forms, Scales, and Modalities

    Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with screen-based multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established mainstream interaction paradigms. This dissertation describes interfaces that support complementary information mediations, representational forms, and scales toward an ecology of systems embodying hybrid interaction modalities. I investigate systems combining tangible and multi-touch interaction, as well as systems combining tangible and virtual reality interaction. For each of them, I describe work focusing on design and fabrication aspects, as well as work focusing on reproducibility, engagement, legibility, and perception aspects.

    Evaluation of haptic virtual reality user interfaces for medical marking on 3D models

    Three-dimensional (3D) visualization has been widely used in computer-aided medical diagnosis and planning. To interact with 3D models, current user interfaces in medical systems mainly rely on traditional 2D interaction techniques employing a mouse and a 2D display. Haptic virtual reality (VR) interfaces that use VR equipment and haptic devices promise intuitive and realistic 3D interaction, but their practical usability in this medical field remains unexplored. In this study, we propose two haptic VR interfaces, a vibrotactile VR interface and a kinesthetic VR interface, for medical diagnosis and planning on volumetric medical images. The vibrotactile VR interface used a head-mounted VR display as the visual output channel and a VR controller with vibrotactile feedback as the manipulation tool. Similarly, the kinesthetic VR interface used a head-mounted VR display as the visual output channel and a kinesthetic force-feedback device as the manipulation tool. We evaluated the two VR interfaces in an experiment involving medical marking on 3D models, comparing them with a state-of-the-art 2D interface as the baseline. The results showed that the kinesthetic VR interface performed best in terms of marking accuracy, whereas the vibrotactile VR interface performed best in terms of task completion time. Overall, the participants preferred the kinesthetic VR interface for the medical task.
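
    How the two reported measures might be computed can be sketched as follows (hypothetical metric definitions for illustration; the study's exact protocol is not given in the abstract):

        import numpy as np

        def marking_error_mm(marked_points, target_points):
            """Mean Euclidean distance (mm) between the points a participant marked
            and the corresponding ground-truth targets on the 3D model."""
            marked = np.asarray(marked_points, dtype=float)
            target = np.asarray(target_points, dtype=float)
            return float(np.mean(np.linalg.norm(marked - target, axis=1)))

        def completion_time_s(start_timestamp, end_timestamp):
            """Task completion time in seconds from trial start/end timestamps."""
            return end_timestamp - start_timestamp

        # Two markings, each 1 mm off along one axis -> mean error of 1.0 mm.
        print(marking_error_mm([[0, 0, 1], [5, 5, 5]],
                               [[0, 0, 0], [5, 4, 5]]))   # 1.0
        print(completion_time_s(12.0, 47.5))              # 35.5 seconds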

    Visual interaction: between vision and action
