1,095 research outputs found

    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interaction via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where users can bring multiple devices together in-air - with 2-5 armatures poseable in 7DoF within the same workspace - to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of large amounts and a wide variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a single day. As gentler and higher-performance imaging becomes possible, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data with multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies in which scenery provides tangible benefit in developmental and systems biology: With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude.
We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
Table of contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction (1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction); Part II - VR and AR for Systems Biology (5 scenery — VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions); Part III - Case Studies (11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview — Integrating scenery into ImageJ2 & Fiji); Part IV - Conclusion (15 Conclusions and Outlook); Backmatter & Appendices (A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study); List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung (Declaration of Authorship)

    The screen as boundary object in the realm of imagination

    As an object at the boundary between virtual and physical reality, the screen exists both as a displayer and as a thing displayed, thus functioning as a mediator. The screen's virtual imagery produces a sense of immersion in its viewer, yet at the same time the materiality of the screen produces a sense of rejection from the viewer's complete involvement in the virtual world. The experience of the screen is thus an oscillation between these two states of immersion and rejection. Nowadays, as interactivity becomes a central component of the relationship between viewers and many artworks, the viewer experience of the screen is changing. Unlike the screen experience in non-interactive artworks, such as the traditional static screen of painting or the moving screen of video art in the 1970s, interactive media screen experiences can provide viewers with a more immersive, immediate, and therefore more intense experience. For example, many digital media artworks provide an interactive experience for viewers by capturing their face or body through real-time computer vision techniques. In this situation, as the camera and the monitor in the artwork encapsulate the interactor's body in an instant feedback loop, the interactor becomes a part of the interface mechanism and responds to the artwork as the system leads or even provokes them. This thesis claims that this kind of direct mirroring in interactive screen-based media artworks does not allow the viewer the critical distance or time needed for self-reflection. The thesis examines earlier aesthetics of spatial and temporal perception, such as presentness and instantaneousness, and notions of passage and of psychological perception, such as reflection, reflexiveness, and auratic experience, looking at how these aesthetics can be integrated into new media screen experiences.
Based on this theoretical research, the thesis claims that interactive screen spaces can act as a site for expression and representation, both through a doubling effect between the physical and virtual worlds, and through manifold spatial and temporal mappings with the screen experience. These claims are further supported through exploration of screen-based media installations created by the author since 2003. Ph.D. Committee Chair: Mazalek, Ali; Committee Member: Bolter, Jay David; Committee Member: Do, Ellen Yi-Luen; Committee Member: Nitsche, Michael; Committee Member: Winegarden, Claudia R

    Detection of changes through visual alerts and comparisons using a multi-layered display.

    The Multi-Layered Display (MLD) comprises two LCD screens mounted one in front of the other, allowing the presentation of information on both screens. This physical separation produces depth without requiring glasses. This research evaluated the utility of the MLD for change detection tasks, particularly in operational environments. Change blindness refers to the failure to detect changes when the change happens during a visual disruption. The literature equates these visual disruptions with the types of interruptions that occur regularly in work situations. Change blindness is more likely to occur when operators monitor dynamic situations spread over several screens, when there are popup messages, and when there are frequent interruptions which are likely to block the visual transients that signal a change. Four laboratory experiments were conducted to evaluate the utility of the MLD for change detection tasks. The results from the experiments revealed that, when depth is used as a visual cue, the depth of the MLD has a different effect on the detection of expected changes and unexpected events. When the depth of the MLD is used as a comparison tool, the detection of differences is limited to translation differences in simple stimuli with a white background. These results call into question previous claims made for the MLD regarding operational change detection. In addition, observations and interviews were used to explore whether change blindness occurred in an operational command room. The results suggested that operators develop strategies to recover from interruptions and multitasking. These results call into doubt the wisdom of applying change detection theories to real-world operational settings. More importantly, the research serves as a reminder that cognitive limitations found in the laboratory are not always found in real-world environments.


    DESIGN OF A GAIT ACQUISITION AND ANALYSIS SYSTEM FOR ASSESSING THE RECOVERY OF MICE POST-SPINAL CORD INJURY

    Current methods of determining spinal cord recovery in mice, post-directed injury, are qualitative measures. This is due to the small size and quickness of mice. This thesis presents a design for a gait acquisition and analysis system able to capture the footfalls of a mouse, extract position and timing data, and report quantitative gait metrics to the operator. These metrics can then be used to evaluate the recovery of the mouse. This work presents the design evolution of the system, from initial sensor design concepts through prototyping and testing to the final implementation. The system utilizes a machine vision camera, a purpose-built walkway enclosure, and image processing techniques to capture and analyze paw strikes. Quantitative results gained from live animal experiments are presented, and it is shown how the measurements can be used to distinguish healthy, injured, and recovered gait.
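    To make the idea of quantitative gait metrics concrete, the sketch below derives two common measures, stride length and stride time, from timestamped paw-strike positions of the kind such a system would extract from camera frames. The function name, units, and data layout are illustrative assumptions, not the thesis's actual pipeline.

    ```python
    # Hypothetical sketch: compute gait metrics from successive footfalls
    # of a single paw, given as (time in seconds, position along the
    # walkway in millimetres). Not the thesis's real implementation.
    def stride_metrics(strikes):
        """strikes: list of (time_s, x_mm) tuples for one paw, in order."""
        # A stride spans two successive strikes of the same paw.
        lengths = [b[1] - a[1] for a, b in zip(strikes, strikes[1:])]
        times = [b[0] - a[0] for a, b in zip(strikes, strikes[1:])]
        return {
            "mean_stride_length_mm": sum(lengths) / len(lengths),
            "mean_stride_time_s": sum(times) / len(times),
        }

    # Three strikes, evenly spaced: 60 mm strides every 0.25 s.
    m = stride_metrics([(0.0, 0.0), (0.25, 60.0), (0.5, 120.0)])
    print(m)  # → {'mean_stride_length_mm': 60.0, 'mean_stride_time_s': 0.25}
    ```

    Comparing such metrics before injury, after injury, and during recovery is what turns the captured footfalls into the quantitative assessment the abstract describes.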

    A Methodology for the Design of Greenhouses with Semi-Transparent Photovoltaic Cladding and Artificial Lighting

    Greenhouse construction is on the rise in response to a growing demand for fresh local produce and the need for a climate resilient food web. In mid-to-high latitude locations, greenhouses that control light to a consistent daily integral can produce crops year-round by employing heating, horticultural lighting and movable screens. Their energy consumption represents a major production cost and is largely dictated by the envelope design. As an increasing number of envelope materials (including energy generating photovoltaic cladding) become available, methods for determining the most efficient design are needed. A methodology was developed to assist in identifying the most suitable envelope design from a set of alternatives. First, the energy performance was assessed by conducting integrated thermal-daylighting analysis using building energy simulation software. Then, life cycle cost analysis was employed to determine the most cost-effective design. The methodology was applied to the following three case studies for a mid-latitude (Ottawa (45.4°N), Canada) and a high-latitude location (Whitehorse (60.7°N), Canada): 1) semi-transparent photovoltaic cladding (STPV) applied to the roof; 2) comparison of a glass, polycarbonate and opaque insulation on the walls and roof; and 3) design of ground thermal insulation. For Ottawa, the STPV cladding caused internal shading that was counteracted by augmenting supplemental lighting by as much as 84%, which in turn reduced heating energy use by up to 12%. Although STPV cladding increased lighting electricity use, it generated 44% of the electricity that was consumed for supplemental lighting in the present study and 107% in the future projection study. Currently, STPV cladding is not an economically attractive investment unless time-of-use (TOU) electricity pricing is available. However, in the future, a 23% and 37% reduction in life cycle cost (LCC) was achieved for constant and TOU electricity pricing, respectively. 
STPV will increasingly become a promising cladding alternative for improving the energy efficiency and economics of greenhouse operations. By reflecting light onto the crops, an insulated and reflective opaque north wall can lower both lighting electricity and heating energy consumption, while reducing the LCC by 2.6%. The use of ground insulation had a positive albeit negligible impact on energy and economic performance.
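    The life cycle cost comparison at the heart of the methodology can be illustrated with a minimal present-value calculation. All numbers, the discount rate, and the simple annuity-style formula below are invented for illustration; the thesis's actual model accounts for many more cost streams (lighting, heating, PV generation revenue, time-of-use pricing).

    ```python
    # Hedged sketch of a life cycle cost comparison between two envelope
    # designs: capital cost plus the present value of annual energy costs.
    def life_cycle_cost(capital, annual_energy_cost, years, discount_rate):
        present_value = sum(
            annual_energy_cost / (1 + discount_rate) ** t
            for t in range(1, years + 1)
        )
        return capital + present_value

    # Illustrative inputs only: a cheaper glass envelope with higher energy
    # use versus a pricier STPV envelope that offsets some electricity.
    glass = life_cycle_cost(100_000, 20_000, years=25, discount_rate=0.05)
    stpv = life_cycle_cost(160_000, 14_000, years=25, discount_rate=0.05)
    print(glass > stpv)  # the higher capital cost can pay off over the life cycle
    ```

    Ranking candidate envelopes by such a discounted total is what lets the methodology pick the most cost-effective design after the energy simulation step.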

    Surface Shape Perception in Volumetric Stereo Displays

    In complex volume visualization applications, understanding the displayed objects and their spatial relationships is challenging for several reasons. One of the most important obstacles is that these objects can be translucent and can overlap spatially, making it difficult to understand their spatial structures. However, in many applications, for example medical visualization, it is crucial to have an accurate understanding of the spatial relationships among objects. The addition of visual cues has the potential to help human perception in these visualization tasks. Descriptive line elements, in particular, have been found to be effective in conveying shape information in surface-based graphics, as they sparsely cover a geometrical surface while consistently following the geometry. We present two approaches to applying such line elements to a volume rendering process and verifying their effectiveness in volume-based graphics. This thesis reviews our progress to date in this area and discusses its effects and limitations. Specifically, it examines the volume renderer implementation that formed the foundation of this research, the design of the pilot study conducted to investigate the effectiveness of this technique, and the results obtained. It further discusses improvements designed to address the issues revealed by the statistical analysis. The improved approach is able to handle visualization targets with general shapes, making it better suited to real visualization applications involving complex objects.
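    Line elements that "consistently follow the geometry" are commonly oriented by the local surface normal, which in a scalar volume can be estimated as the normalized gradient computed with central differences. The sketch below shows that standard estimate; the helper name and test volume are hypothetical and are not taken from the thesis.

    ```python
    # Illustrative sketch: estimate the surface normal at a voxel of a
    # scalar volume as the normalized central-difference gradient. Such a
    # normal is a typical orientation cue for shape-conveying line elements.
    import numpy as np

    def normal_at(vol, x, y, z):
        """Normalized gradient of vol at interior voxel (x, y, z)."""
        g = np.array([
            vol[x + 1, y, z] - vol[x - 1, y, z],
            vol[x, y + 1, z] - vol[x, y - 1, z],
            vol[x, y, z + 1] - vol[x, y, z - 1],
        ], dtype=float) / 2.0
        n = np.linalg.norm(g)
        return g / n if n > 0 else g

    # A linear ramp along x: the estimated normal points along +x everywhere.
    vol = np.fromfunction(lambda x, y, z: x, (5, 5, 5))
    print(normal_at(vol, 2, 2, 2))  # → [1. 0. 0.]
    ```

    In a full renderer such normals would be evaluated along each cast ray, but the per-voxel estimate above is the building block.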

    The matrix revisited: A critical assessment of virtual reality technologies for modeling, simulation, and training

    A convergence of affordable hardware, current events, and decades of research has advanced virtual reality (VR) from the research lab into the commercial marketplace. From its inception in the 1960s and over the next three decades, the technology was portrayed as a rarely used, high-end novelty for special applications. Despite the high cost, applications have expanded into defense, education, manufacturing, and medicine. The promise of VR for entertainment arose in the early 1990s, and by 2016 several consumer VR platforms had been released. With VR now accessible in the home and the isolationist lifestyle adopted due to the COVID-19 global pandemic, VR is now viewed as a potential tool to enhance remote education. Drawing upon over 17 years of experience across numerous VR applications, this dissertation examines the optimal use of VR technologies in the areas of visualization, simulation, training, education, art, and entertainment. It is demonstrated that VR is well suited for education and training applications, with modest advantages in simulation. Using this context, the case is made that VR can play a pivotal role in the future of education and training in a globally connected world.