    Immersive Visualization for Enhanced Computational Fluid Dynamics Analysis

    Modern biomedical computer simulations produce spatiotemporal results that are often viewed at a single point in time on standard 2D displays. An immersive visualization environment (IVE) with 3D stereoscopic capability can mitigate some shortcomings of 2D displays via improved depth cues and active movement, helping viewers appreciate the spatial localization of imaging data alongside temporal computational fluid dynamics (CFD) results. We present a semi-automatic workflow for the import, processing, rendering, and stereoscopic visualization of high-resolution, patient-specific imaging data and CFD results in an IVE. The versatility of the workflow is highlighted with current clinical sequelae known to be influenced by adverse hemodynamics, illustrating its potential clinical utility.

    A framework for applying the principles of depth perception to information visualization

    During the visualization of 3D content, selectively using depth cues to support the design goals and enabling a user to perceive the spatial relationships between objects are important concerns. We automate this process by proposing a framework that determines the important depth cues for the input scene and the rendering methods that provide these cues. While determining the importance of the cues, we consider the user's tasks and the scene's spatial layout. The importance of each depth cue is calculated using a fuzzy logic-based decision system. Then, suitable rendering methods that provide the important cues are selected by performing a cost-profit analysis of the methods' rendering costs and their contributions to depth perception. Possible cue conflicts are considered and handled by the system. We also provide formal experimental studies designed for several visualization tasks. A statistical analysis of the experiments verifies the success of our framework.
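    The cost-profit selection step this abstract describes can be caricatured in a few lines. The sketch below is illustrative only, not the authors' actual system: the cue importances (as a fuzzy decision system might output them), the per-method costs and contributions, the rendering budget, and the greedy selection rule are all hypothetical placeholders.

```python
# Illustrative sketch of cost-profit rendering-method selection:
# each method provides some depth cues at a rendering cost; given
# cue importances, greedily pick methods whose depth-perception
# profit exceeds their cost, within a rendering budget.
# All names and values below are hypothetical.
from dataclasses import dataclass

@dataclass
class Method:
    name: str
    cost: float   # relative rendering cost
    cues: dict    # cue name -> contribution to that cue, in [0, 1]

def select_methods(methods, importance, budget):
    """Greedy selection: profit(m) = sum(importance[cue] * contribution)."""
    def profit(m):
        return sum(importance.get(c, 0.0) * w for c, w in m.cues.items())
    chosen, spent = [], 0.0
    # Rank by profit per unit cost, best first.
    for m in sorted(methods, key=lambda m: profit(m) / m.cost, reverse=True):
        if profit(m) > m.cost and spent + m.cost <= budget:
            chosen.append(m.name)
            spent += m.cost
    return chosen

# Example importances, as a fuzzy decision system might output them.
importance = {"occlusion": 0.9, "shadow": 0.6, "aerial_perspective": 0.2}
methods = [
    Method("shadow_mapping", cost=0.5, cues={"shadow": 0.8, "occlusion": 0.2}),
    Method("ssao",           cost=0.4, cues={"occlusion": 0.7}),
    Method("fog",            cost=0.1, cues={"aerial_perspective": 0.9}),
]
print(select_methods(methods, importance, budget=1.0))
```

    With a tighter budget the same rule drops the costlier methods first, which is the essential trade-off the framework automates.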

    The Effects of Task, Task Mapping, and Layout Space on User Performance in Information-Rich Virtual Environments

    How should abstract information be displayed in Information-Rich Virtual Environments (IRVEs)? There are a variety of techniques available, and it is important to determine which techniques help foster a user’s understanding both within and between abstract and spatial information types. Our evaluation compared two such techniques: Object Space and Display Space. Users strongly prefer Display Space over Object Space, and those who use Display Space may perform better. Display Space was faster and more accurate than Object Space for tasks comparing abstract information. Object Space was more accurate for comparisons of spatial information. These results suggest that for abstract criteria, visibility is a more important requirement than perceptual coupling by depth and association cues. They also support the value of perceptual coupling for tasks with spatial criteria.

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

    Supporting Memorization and Problem Solving with Spatial Information Presentations in Virtual Environments

    While it has been suggested that immersive virtual environments could provide benefits for educational applications, few studies have formally evaluated how the enhanced perceptual displays of such systems might improve learning. Using simplified memorization and problem-solving tasks as representative approximations of more advanced types of learning, we are investigating the effects of providing supplemental spatial information on the performance of learning-based activities within virtual environments. We performed two experiments to investigate whether users can take advantage of a spatial information presentation to improve performance on cognitive processing activities. In both experiments, information was presented either directly in front of the participant or wrapped around the participant along the walls of a surround display. In our first experiment, we found that the spatial presentation caused better performance on a memorization and recall task. To investigate whether the advantages of spatial information presentation extend beyond memorization to higher-level cognitive activities, our second experiment employed a puzzle-like task that required critical thinking using the presented information. The results indicate that no performance improvements or mental workload reductions were gained from the spatial presentation method compared to a non-spatial layout for our problem-solving task. Together, these two experiments suggest that supplemental spatial information can support performance improvements for cognitive processing and learning-based activities, but its effectiveness depends on the nature of the task and a meaningful use of space.

    Collaborative Spatio-temporal Feature Learning for Video Action Recognition

    Spatio-temporal feature learning is of central importance for action recognition in videos. Existing deep neural network models either learn spatial and temporal features independently (C2D) or jointly with unconstrained parameters (C3D). In this paper, we propose a novel neural operation which encodes spatio-temporal features collaboratively by imposing a weight-sharing constraint on the learnable parameters. In particular, we perform 2D convolution along three orthogonal views of volumetric video data, which learns spatial appearance and temporal motion cues respectively. By sharing the convolution kernels across the views, spatial and temporal features are learned collaboratively and thus benefit from each other. The complementary features are subsequently fused by a weighted summation whose coefficients are learned end-to-end. Our approach achieves state-of-the-art performance on large-scale benchmarks and won first place in the Moments in Time Challenge 2018. Moreover, based on the learned coefficients of the different views, we can quantify the contributions of spatial and temporal features. This analysis sheds light on the interpretability of the model and may also guide the future design of algorithms for video recognition. Comment: CVPR 2019.
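    The core operation in this abstract, one shared 2D kernel convolved over three orthogonal views of the video volume and fused by a weighted sum, can be sketched minimally in NumPy. This is an illustrative single-channel toy, not the paper's implementation: the kernel and fusion weights below are fixed placeholder values rather than learned parameters.

```python
# Minimal sketch of a CoST-style shared-kernel operation: convolve the
# SAME 2D kernel over the H-W, T-W, and T-H views of a (T, H, W) video
# volume, then fuse the three responses by normalized weights.
# Kernel and weights here are toy values, not learned parameters.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_same(x, k):
    """'Same'-padded 2D cross-correlation (as in deep learning frameworks)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    win = sliding_window_view(xp, k.shape)   # (H, W, kh, kw) windows
    return np.einsum("ijkl,kl->ij", win, k)

def cost_op(video, kernel, weights):
    """Shared-kernel conv over three orthogonal views, fused by weights."""
    T, H, W = video.shape
    hw = np.stack([conv2d_same(video[t], kernel) for t in range(T)], axis=0)        # spatial view
    tw = np.stack([conv2d_same(video[:, h], kernel) for h in range(H)], axis=1)     # T-W view
    th = np.stack([conv2d_same(video[:, :, w], kernel) for w in range(W)], axis=2)  # T-H view
    c = np.asarray(weights, dtype=float)
    c = c / c.sum()                          # normalized fusion coefficients
    return c[0] * hw + c[1] * tw + c[2] * th

video = np.random.rand(8, 16, 16)            # (T, H, W), single channel
kernel = np.ones((3, 3)) / 9.0               # toy shared 3x3 kernel
out = cost_op(video, kernel, [0.5, 0.3, 0.2])
print(out.shape)  # (8, 16, 16)
```

    Because the kernel is shared, the same parameters respond to spatial appearance (H-W slices) and temporal motion patterns (T-W and T-H slices); in the paper the per-view fusion coefficients are learned, which is what makes the spatial/temporal contribution analysis possible.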

    Cascaded Scene Flow Prediction using Semantic Segmentation

    Given two consecutive frames from a pair of stereo cameras, 3D scene flow methods simultaneously estimate the 3D geometry and motion of the observed scene. Many existing approaches use superpixels for regularization, but may predict inconsistent shapes and motions inside rigidly moving objects. We instead assume that scenes consist of foreground objects rigidly moving in front of a static background, and use semantic cues to produce pixel-accurate scene flow estimates. Our cascaded classification framework accurately models 3D scenes by iteratively refining semantic segmentation masks, stereo correspondences, 3D rigid motion estimates, and optical flow fields. We evaluate our method on the challenging KITTI autonomous driving benchmark, and show that accounting for the motion of segmented vehicles leads to state-of-the-art performance. Comment: International Conference on 3D Vision (3DV), 2017 (oral presentation).
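    The cascade described here, iteratively refining four coupled estimates, each conditioned on the current values of the others, has the shape of coordinate descent. The sketch below shows only that control flow; the update rule (pulling each estimate toward the others) is a hypothetical stand-in for the paper's learned segmentation, stereo, motion, and flow models.

```python
# Illustrative-only sketch of a cascaded refinement loop: four coupled
# estimates are updated in turn, each given the others' current values.
# The numeric update is a placeholder, NOT the authors' models; it just
# damps each estimate toward the others to show the alternating scheme.
def refine(state, num_iters=3, damping=0.5):
    """Alternately update each estimate given the others (coordinate descent)."""
    order = ["segmentation", "stereo", "rigid_motion", "optical_flow"]
    for _ in range(num_iters):
        for key in order:
            # Placeholder update: move this estimate toward the mean of
            # the others (a stand-in for "refine given current context").
            others = [v for k, v in state.items() if k != key]
            context = sum(others) / len(others)
            state[key] += damping * (context - state[key])
    return state

state = {"segmentation": 1.0, "stereo": 0.0, "rigid_motion": 4.0, "optical_flow": 2.0}
print(refine(dict(state)))
```

    The point of the sketch is the coupling: each pass tightens the agreement among the four estimates, which is what lets segmentation masks constrain rigid motion (and vice versa) in the full system.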