4 research outputs found

    ScaleTrotter: Illustrative Visual Travels Across Negative Scales

    We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains, such as astronomy, that also require a multi-scale representation. First, genome data has intertwined scale levels---the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out---instead, the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter roams between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work for the general illustrative depiction of multi-scale data.
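    The abstract's idea of rendering only a subset of the scale hierarchy, with finer levels fading in as the camera descends, can be sketched as follows. This is a minimal illustration, not the paper's method: the level names, the continuous scale parameter `s`, and the `window` blending width are all assumptions introduced here.

    ```python
    # Hypothetical genome scale hierarchy, coarse to fine.
    LEVELS = ["nucleus", "chromosome", "nucleosome", "double helix", "atoms"]

    def visible_levels(s, window=1.5):
        """Given a continuous scale parameter s (0 = coarsest level,
        len(LEVELS) - 1 = finest), return the levels to render and a
        per-level opacity, so a finer scale fades in as the camera
        zooms toward it and fades out again as it passes."""
        out = []
        for i, name in enumerate(LEVELS):
            d = abs(s - i)
            if d < window:
                out.append((name, max(0.0, 1.0 - d / window)))
        return out
    ```

    At `s = 0` only the nucleus is fully opaque, with the chromosome level partially visible; as `s` grows, the camera's "current" scale shifts down the hierarchy while neighbors blend in and out.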

    Stereoscopic bimanual interaction for 3D visualization

    Virtual Environments (VEs) have been widely used for several decades in research fields such as 3D visualization, education, training, and games. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation evaluates the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques for auto-adjusting stereo view parameters to mitigate stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface that combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
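    The 7-DOF navigation the abstract mentions (x, y, z + yaw, pitch, roll + scale) can be sketched as composing a single homogeneous view-manipulation matrix. This is an illustrative sketch, not the dissertation's implementation; the rotation order and axis conventions (yaw about y, pitch about x, roll about z, applied as Ry * Rx * Rz) are assumptions made here.

    ```python
    import math

    def _matmul(a, b):
        # 4x4 row-major matrix product.
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    def seven_dof_matrix(tx, ty, tz, yaw, pitch, roll, s):
        """Compose translation, yaw/pitch/roll rotation (radians), and
        uniform scale into one 4x4 matrix: M = T * Ry * Rx * Rz * S."""
        cy, sy = math.cos(yaw), math.sin(yaw)
        cp, sp = math.cos(pitch), math.sin(pitch)
        cr, sr = math.cos(roll), math.sin(roll)
        Ry = [[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]]
        Rx = [[1, 0, 0, 0], [0, cp, -sp, 0], [0, sp, cp, 0], [0, 0, 0, 1]]
        Rz = [[cr, -sr, 0, 0], [sr, cr, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
        S  = [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]
        T  = [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]
        return _matmul(T, _matmul(Ry, _matmul(Rx, _matmul(Rz, S))))

    def apply(m, p):
        """Transform a 3D point p by the 4x4 matrix m."""
        v = [p[0], p[1], p[2], 1.0]
        return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
    ```

    Treating scale as a seventh DOF alongside translation and rotation is what makes navigation in a multi-scale environment feel continuous: zooming is just another axis of the same composed transform.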

    Distance: a framework for improving spatial cognition within digital architectural models

    This research investigates the need for improvements to navigation tools and locational awareness within digital architectural models so that users’ spatial cognition can be enhanced. Evidence shows that navigation and disorientation are common problems within digital architectural models, often impairing spatial cognition. When a designer or contractor explores a completed digital architectural model for the first time, it can be a progressively frustrating experience, often leading to the creation of an incorrect cognitive map of the building design. A reflective practice research method across three project-based design investigations is used, drawing on aspects of architectural communication, digital interaction, and spatial cognition. The first investigation, Translation projects, explores the transformation of two-dimensional drawing conventions into three-dimensional interactive digital models, exposing the need for improved navigation and wayfinding. The second investigation, a series of artificial intelligence navigation projects, explores navigation methods that aid spatial cognition by providing tools that help to visualise the navigation process, paths to travel, and paths travelled. The third and final investigation, Distance projects, demonstrates the benefits of productive transition in the creation of cognitive maps. During the transition, assistance is given to aid the estimation of distance. The original contribution to knowledge that this research establishes is a framework of navigation tools and wayshowing strategies for improving spatial cognition within digital architectural models. The consideration of wayshowing methods, focusing on spatial transitions beyond predefined views of the digital model, provides a strong method for helping users construct comprehensive cognitive maps. This research addresses the undeveloped field of aiding distance estimation inside digital architectural models. There is a need to improve spatial cognition by understanding distance, detail, data, and design when reviewing digital architectural models.