
    Stereoscopic bimanual interaction for 3D visualization

    Virtual Environments (VEs) have been widely used for several decades in research fields such as 3D visualization, education, training, and games. VEs have the potential to enhance visualization and to act as a general medium for human-computer interaction (HCI). However, limited research has evaluated how virtual reality (VR) display technologies, and their monocular and binocular depth cues, affect human depth perception of volumetric (non-polygonal) datasets. In addition, the lack of standardization of three-dimensional (3D) user interfaces (UIs) makes many VE systems difficult to interact with. To address these issues, this dissertation evaluates the effects of stereoscopic and head-coupled displays on depth judgments of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, it evaluates techniques for automatically adjusting stereo view parameters to resolve stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, the dissertation provides guidelines for designing studies that evaluate UIs and interaction techniques.
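    The two-handed 7-DOF navigation described above can be sketched in miniature with a grab-the-world metaphor: hand separation drives scale and the midpoint of the hands drives translation. This is a hypothetical simplification for illustration only (the dissertation's actual technique also derives rotation from the inter-hand vector); all function names are assumptions.

```python
import math

def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def dist(a, b): return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
def midpoint(a, b): return tuple((x + y) / 2 for x, y in zip(a, b))

def two_handed_update(l0, r0, l1, r1):
    """Derive a translation and uniform scale update from the motion of two
    tracked 3D hand positions.
    l0/r0: left/right hand positions at grab time; l1/r1: current positions."""
    scale = dist(l1, r1) / dist(l0, r0)                      # separation -> zoom
    translation = vsub(midpoint(l1, r1), midpoint(l0, r0))   # midpoint drag -> pan
    return translation, scale

# Spreading the hands to double their separation yields a scale factor of 2.0
# while the unchanged midpoint yields zero translation.
t, s = two_handed_update((0, 0, 0), (1, 0, 0), (-0.5, 0, 0), (1.5, 0, 0))
```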

    Interaction and locomotion techniques for the exploration of massive 3D point clouds in VR environments

    Emerging virtual reality (VR) technology allows immersively exploring digital 3D content on standard consumer hardware. Using in-situ or remote sensing technology, such content can be automatically derived from real-world sites. External memory algorithms allow for the non-immersive exploration of the resulting 3D point clouds on a diverse set of devices with vastly different rendering capabilities. Applications for VR environments raise additional challenges for those algorithms, as they are highly sensitive to visual artifacts that are typical of point cloud depictions (i.e., overdraw and underdraw) while simultaneously requiring higher frame rates (i.e., around 90 fps instead of 30–60 fps). We present a rendering system for the immersive exploration and inspection of massive 3D point clouds on state-of-the-art VR devices. Based on a multi-pass rendering pipeline, we combine point-based and image-based rendering techniques to simultaneously improve the rendering performance and the visual quality. A set of interaction and locomotion techniques allows users to inspect a 3D point cloud in detail, for example by measuring distances and areas or by scaling and rotating visualized data sets. All rendering, interaction, and locomotion techniques can be selected and configured dynamically, making it possible to adapt the rendering system to different use cases. Tests on data sets with up to 2.6 billion points show the feasibility and scalability of our approach.
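    The roughly 90 fps requirement mentioned above suggests a frame-time-driven rendering budget. The following is a minimal, generic sketch of such a controller, not the authors' pipeline; the function name, damping bounds, and budget limits are all assumptions.

```python
def adapt_point_budget(budget, frame_ms, target_ms=1000.0 / 90.0,
                       min_budget=100_000, max_budget=500_000_000):
    """Proportionally grow or shrink the number of points rendered per frame
    so that the measured frame time converges on the target (~11.1 ms)."""
    ratio = target_ms / frame_ms        # >1: headroom to render more, <1: over budget
    ratio = max(0.5, min(2.0, ratio))   # damp the correction to avoid oscillation
    return int(max(min_budget, min(max_budget, budget * ratio)))

# A 22.2 ms frame (only ~45 fps) shrinks a 1M-point budget toward the target.
b = adapt_point_budget(1_000_000, 22.2)
```

One controller step per frame keeps the adaptation cheap; the clamping bounds trade convergence speed against visible popping as the budget changes.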

    Interactive Visual Analytics for Large-scale Particle Simulations

    Particle-based model simulations are widely used in scientific visualization. In cosmology, particles are used to simulate the evolution of dark matter in the universe. Clusters of particles with special statistical properties are called halos. From a visualization point of view, halos are clusters of particles, each having a position, mass, and velocity in three-dimensional space; they can be represented as point clouds that contain various structures of geometric interest, such as filaments, membranes, satellites of points, clusters, and clusters of clusters. This thesis investigates methods for interacting with large-scale datasets represented as point clouds, aiming mainly at the interactive visualization of cosmological simulations based on large particle systems. The study consists of three components: a) two human-factors experiments into the perceptual factors that make it possible to see features in point clouds; b) the design and implementation of a user interface making it possible to rapidly navigate through and visualize features in the point cloud; and c) software development and integration to support visualization.
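    Halos, described above as clusters of particles, are commonly identified in cosmology with the friends-of-friends algorithm: particles closer than a linking length are joined into the same group. The thesis does not specify its halo finder, so the following is a generic, deliberately naive O(n²) sketch using union-find.

```python
import math

def friends_of_friends(points, linking_length):
    """Naive friends-of-friends clustering over 3D points: any two particles
    within the linking length belong to the same group (halo candidate).
    Returns one group label per input point."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving keeps trees shallow
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= linking_length:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj          # merge the two groups
    return [find(i) for i in range(len(points))]

# Two tight pairs far apart form two separate groups.
labels = friends_of_friends([(0, 0, 0), (0.1, 0, 0), (5, 5, 5), (5.1, 5, 5)], 0.2)
```

Production halo finders replace the double loop with a spatial grid or k-d tree to reach the billions of particles such simulations contain.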

    Creation of a Virtual Atlas of Neuroanatomy and Neurosurgical Techniques Using 3D Scanning Techniques

    Neuroanatomy is one of the most challenging and fascinating topics within human anatomy, owing to the complexity and interconnection of the entire nervous system. The gold standard for learning neurosurgical anatomy is cadaveric dissection. Nevertheless, it has high costs (a laboratory, acquisition of cadavers, and fixation), is time-consuming, and is limited by sociocultural restrictions. Because of these disadvantages, other tools have been investigated to improve neuroanatomy learning. Three-dimensional modalities have gradually begun to supplement traditional two-dimensional representations of dissections and illustrations. Volumetric models (VMs) are the new frontier for neurosurgical education and training. Different workflows have been described for creating these VMs: photogrammetry (PGM) and structured-light scanning (SLS). In this study, we aimed to describe and use the currently available 3D scanning techniques to create a virtual atlas of neurosurgical anatomy. Dissections of post-mortem human heads and brains were performed at the skull base laboratories of Stanford University (NeuroTraIn Center) and the University of California, San Francisco (Skull Base and Cerebrovascular Laboratory, SBCVL). VMs were then created following either the SLS or the PGM workflow. Fiber-tract reconstructions were also generated from DICOM data using DSI-studio and incorporated into VMs from dissections. Moreover, Creative Commons licensed models were used to simplify the understanding of specific anatomical regions. Both methods yielded VMs with suitable clarity and structural integrity for anatomical education, surgical illustration, and procedural simulation. We describe the roadmap of SLS and PGM for creating volumetric models, including the required equipment and software, and provide step-by-step procedures for post-processing and refining these models according to users' specifications.
The VMs generated were used in several publications to describe specific neurosurgical approaches step by step and to enhance the understanding of anatomical regions and their function. These models were also used in neuroanatomical education and research (workshops and publications). VMs offer a new, immersive, and innovative way to accurately visualize neuroanatomy. Given the straightforward workflow, the techniques described here may serve as a reference point for an entirely new way of capturing and depicting neuroanatomy, and they offer new opportunities for applying VMs in education, simulation, and surgical planning. The virtual atlas, divided into areas covering different neurosurgical approaches (such as the skull base, cortex and fiber tracts, and spine operative anatomy), will increase the viewer's understanding of neurosurgical anatomy. It is the first surgical collection of VMs from cadaveric dissections available in the medical field and could be used as a reference for creating analogous collections in other medical subspecialties.

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    The family of displays that aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by human vision as artefacts. In the absence of a visual reference (i.e., when the original scene is not available for comparison), one can improve the perceived quality of the representation by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties (depth cues) of a scene that the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed, classifying artefacts by their origin and by the way the human visual system interprets them. The principles of operation of the most popular types of 3D displays are explained. Based on these principles, 3D displays are modelled as a signal processing channel. The model is used to explain how distortions are introduced and to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can either be simulated using the angular brightness function or measured directly from a series of photographs. A comparative study presents measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types.
Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays. The shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses the Moiré, fixed-pattern-noise, and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework that allows the user to select a so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, the combination of precise light redirection and less precise face tracking is used to extend the head parallax. For some user-tracking algorithms, implementation details are given for execution on a mobile device or on a desktop computer with a graphics accelerator.
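    Crosstalk, one of the optical properties measured above, is conventionally defined as the leakage luminance from the unintended view relative to the intended view, with both corrected for the display's black level. A minimal sketch of that standard formula follows; it is not necessarily the thesis' exact measurement procedure, and the example luminances are invented.

```python
def crosstalk_percent(luma_intended, luma_unintended, luma_black):
    """Estimate view crosstalk (in percent) from three photometric readings:
    the intended view at full white, the leakage measured when only the
    other view shows white, and the black level (all in cd/m^2)."""
    return 100.0 * (luma_unintended - luma_black) / (luma_intended - luma_black)

# 6 cd/m^2 of leakage over a 1 cd/m^2 black level, against 101 cd/m^2 white,
# corresponds to 5% crosstalk.
x = crosstalk_percent(101.0, 6.0, 1.0)
```

Subtracting the black level matters in practice: LCD panels leak light even when showing black, which would otherwise inflate the crosstalk estimate.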

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, though it is not free from human factors and other restrictions. AR applications also demand less time and effort, because there is no need to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical and biological applications and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Interactive Three-Dimensional Simulation and Visualisation of Real Time Blood Flow in Vascular Networks

    One of the challenges in cardiovascular disease management is the clinical decision-making process. When a clinician is dealing with complex and uncertain situations, the decision on whether or how to intervene is made based on distinct information from diverse sources. Several variables can affect how the vascular system responds to treatment, including the extent of the damage and scarring, the efficiency of blood-flow remodelling, and any associated pathology. Moreover, an intervention may lead to further unforeseen complications (e.g., another stenosis may be "hidden" further along the vessel). Currently, there is no tool for predicting or exploring such scenarios. This thesis explores the development of a highly adaptive, real-time simulation of blood flow that takes into account patient-specific data and clinician interaction. The simulation models blood realistically and accurately through complex vascular networks in real time, with the goal of developing robust flow scenarios that can be incorporated into the clinical decision-making and planning tool set. The focus is on specific regions of the anatomy where accuracy is of the utmost importance and the flow can develop into specific patterns, with the aim of better understanding a patient's condition and predicting factors of its future evolution. Results from the validation of the simulation showed promising agreement with the literature and demonstrated viability for clinical use.
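    A common starting point for vascular network models of this kind is the hydraulic analogue of Ohm's law, with each vessel segment assigned a Poiseuille resistance. The sketch below is a generic textbook model, not the thesis' simulation; it illustrates the "hidden stenosis" point above, since halving a segment's radius raises its resistance sixteen-fold and can dominate the whole path.

```python
import math

def poiseuille_resistance(radius_m, length_m, viscosity=3.5e-3):
    """Hydraulic resistance of a cylindrical vessel segment,
    R = 8*mu*L / (pi * r^4), with blood viscosity mu in Pa*s."""
    return 8.0 * viscosity * length_m / (math.pi * radius_m ** 4)

def series_flow(pressure_drop_pa, segments):
    """Volumetric flow (m^3/s) through vessel segments in series:
    Q = dP / sum(R_i), the hydraulic analogue of Ohm's law.
    segments: list of (radius_m, length_m) pairs."""
    return pressure_drop_pa / sum(poiseuille_resistance(r, l) for r, l in segments)

# Two healthy 2 mm segments versus the same path with one segment
# narrowed to 1 mm: the stenosed segment's 16x resistance slashes the flow.
healthy = series_flow(2000.0, [(2e-3, 0.05), (2e-3, 0.05)])
stenosed = series_flow(2000.0, [(2e-3, 0.05), (1e-3, 0.05)])
```

Real-time solvers generalize this to branching networks (Kirchhoff-style node equations) and add pulsatile, compliant-wall terms, but the r⁴ sensitivity already explains why a small change in lumen radius has outsized clinical effect.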

    Multimodal metaphors for generic interaction tasks in virtual environments

    Virtual reality (VR) systems provide additional input and output channels for human-computer interaction in virtual environments. Such VR technologies give users better insight into highly complex datasets, but they also place high demands on the user's ability to interact with virtual objects. This thesis presents and discusses the development and evaluation of new multimodal interaction metaphors for generic interaction tasks in virtual environments. Using a VR system, the application of these concepts is demonstrated in two case studies from the domains of 3D city visualization and seismic volume rendering.

    Creating a Virtual Mirror for Motor Learning in Virtual Reality

    Waltemate T. Creating a Virtual Mirror for Motor Learning in Virtual Reality. Bielefeld: Universität Bielefeld; 2018

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine these areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Some development content and installations were completed to prove and evaluate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity but also solving and combining technological hurdles in motion tracking, pattern recognition, force-feedback control, etc., with the available documentary footage on film, video, or images, and text, via a variety of devices [....] and programming and installing all the needed interfaces so that everything works in real time. Thus, the contribution to knowledge lies in solving these interfacing problems and in the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields, such as computer graphics, documentary film, interactive media, and theatre performance.
    Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms