671 research outputs found

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments on mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and to transition seamlessly from the physical world to a mixed world with digital entities. These systems use MAR devices to provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.

    Immersive Visualization in Biomedical Computational Fluid Dynamics and Didactic Teaching and Learning

    Virtual reality (VR) can stimulate active learning, critical thinking, decision making, and improved performance. It requires a medium to show virtual content, called a virtual environment (VE). The MARquette Visualization Lab (MARVL) is an example of a VE. Robust processes and workflows that allow for the creation of content for use within MARVL further increase the userbase for this valuable resource. A workflow was created to display biomedical computational fluid dynamics (CFD) results and complementary data in a wide range of VEs. This allows a researcher to study a simulation in its natural three-dimensional (3D) morphology. It is also an exciting way to extract more information from CFD results by taking advantage of improved depth cues, a larger display canvas, custom interactivity, and an immersive approach that surrounds the researcher. The CFD-to-VR workflow was designed to be simple enough for a novice user and also serves as a tool to foster collaboration between engineers and clinicians. The workflow aimed to support results from common CFD software packages and across clinical research areas. ParaView, Blender, and Unity were used in the workflow to take standard CFD files and process them for viewing in VR, with dedicated scripts written to automate the steps in each software package. The workflow was successfully completed across multiple biomedical vessels, scales, and applications, including the aorta with application to congenital cardiovascular disease, the Circle of Willis with respect to cerebral aneurysms, and the airway for surgical treatment planning. The workflow was completed by novice users in approximately an hour. Bringing VR further into didactic teaching within academia allows students to be fully immersed in their subject matter, increasing their sense of presence, understanding, and enthusiasm. MARVL is a space for collaborative learning that also offers an immersive, virtual experience. A workflow was created to view PowerPoint presentations in 3D using MARVL; the resulting Immersive PowerPoint workflow uses PowerPoint, Unity, and other open-source software packages to display presentations in 3D and can be completed in under thirty minutes.
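    The abstract names ParaView, Blender, and Unity as the pipeline stages. As a rough illustration of the first stage only, the pvpython sketch below reads a CFD result and exports a polygonal surface that Blender or Unity can import; the file names and the choice of filters are assumptions for illustration, not MARVL's actual scripts.

        # Run with: pvpython export_surface.py
        from paraview.simple import OpenDataFile, ExtractSurface, SaveData

        reader = OpenDataFile('aorta_cfd_result.vtu')   # hypothetical CFD output file
        surface = ExtractSurface(Input=reader)          # unstructured grid -> polygonal surface
        SaveData('aorta_surface.ply', proxy=surface)    # PLY imports cleanly into Blender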

    Creation of a Virtual Atlas of Neuroanatomy and Neurosurgical Techniques Using 3D Scanning Techniques

    Neuroanatomy is one of the most challenging and fascinating topics within human anatomy, due to the complexity and interconnection of the entire nervous system. The gold standard for learning neurosurgical anatomy is cadaveric dissection. Nevertheless, it has a high cost (the need for a laboratory, cadaver acquisition, and fixation), is time-consuming, and is limited by sociocultural restrictions. Because of these disadvantages, other tools have been investigated to improve neuroanatomy learning. Three-dimensional modalities have gradually begun to supplement traditional 2-dimensional representations of dissections and illustrations. Volumetric models (VMs) are the new frontier for neurosurgical education and training. Different workflows have been described to create these VMs: photogrammetry (PGM) and structured light scanning (SLS). In this study, we aimed to describe and use the currently available 3D scanning techniques to create a virtual atlas of neurosurgical anatomy. Dissections on post-mortem human heads and brains were performed at the skull base laboratories of Stanford University (NeuroTraIn Center) and the University of California, San Francisco (Skull Base and Cerebrovascular Laboratory, SBCVL). VMs were then created following either the SLS or the PGM workflow. Fiber tract reconstructions were also generated from DICOM data using DSI-studio and incorporated into VMs from dissections. Moreover, models available under common creative licenses were used to simplify the understanding of specific anatomical regions. Both methods yielded VMs with suitable clarity and structural integrity for anatomical education, surgical illustration, and procedural simulation. We describe the roadmap of SLS and PGM for creating volumetric models, including the required equipment and software, and provide step-by-step procedures on how users can post-process and refine these models according to their specifications. The VMs generated were used in several publications, to describe specific neurosurgical approaches step by step and to enhance the understanding of anatomical regions and their function. These models were also used in neuroanatomical education and research (workshops and publications). VMs offer a new, immersive, and innovative way to accurately visualize neuroanatomy. Given the straightforward workflow, the techniques described here may serve as a reference point for an entirely new way of capturing and depicting neuroanatomy, and offer new opportunities for applying VMs in education, simulation, and surgical planning. The virtual atlas, divided into specific areas concerning different neurosurgical approaches (such as skull base, cortex and fiber tracts, and spine operative anatomy), will increase the viewer's understanding of neurosurgical anatomy. The described atlas is the first surgical collection of VMs from cadaveric dissections available in the medical field and could be used as a reference for the future creation of analogous collections in other medical subspecialties.
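    The abstract does not name its post-processing software; as a purely illustrative sketch of the kind of refinement step it describes, the Blender (bpy, 3.x API) snippet below imports a scanned mesh and reduces its polygon count with a Decimate modifier. The file names and the decimation ratio are assumptions, not the study's actual pipeline settings.

        import bpy

        # Import a hypothetical OBJ export of a photogrammetry/SLS scan (Blender 3.x importer).
        bpy.ops.import_scene.obj(filepath='dissection_scan.obj')
        obj = bpy.context.selected_objects[0]
        bpy.context.view_layer.objects.active = obj

        # Reduce the face count so the model stays responsive in a web or VR viewer.
        mod = obj.modifiers.new(name='Decimate', type='DECIMATE')
        mod.ratio = 0.25  # keep roughly 25% of the faces
        bpy.ops.object.modifier_apply(modifier=mod.name)

        bpy.ops.wm.save_as_mainfile(filepath='dissection_scan_decimated.blend')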

    Augmented Reality in Industry 4.0 and Future Innovation Programs

    Augmented Reality (AR) is recognized worldwide as one of the leading technologies of the 21st century and one of the pillars of the new industrial revolution envisaged by the Industry 4.0 international program. Several papers describe, in detail, specific applications of Augmented Reality developed to test its potential in a variety of fields. However, there is a lack of sources detailing the current limits of this technology in the event of its introduction in a real working environment, where everyday tasks could be carried out by operators using an AR-based approach. A literature analysis to identify AR strengths and weaknesses has been carried out, and a set of case studies has been implemented by the authors to find the limits of current AR technologies in industrial applications outside the laboratory-protected environment. The outcome of this paper is that, even though Augmented Reality is a well-consolidated computer graphics technique in research applications, several improvements, from both a software and a hardware point of view, are needed before it can be introduced in industrial operations. The originality of this paper lies in the identification of guidelines to improve the potential of Augmented Reality in factories and industries.

    lifex-ep: a robust and efficient software for cardiac electrophysiology simulations

    Background: Simulating cardiac function requires the numerical solution of multi-physics and multi-scale mathematical models. This underscores the need for streamlined, accurate, and high-performance computational tools. Despite the dedicated endeavors of various research teams, comprehensive and user-friendly software programs for cardiac simulations, capable of accurately replicating both normal and pathological conditions, are still in the process of achieving full maturity within the scientific community. Results: This work introduces lifex-ep, a publicly available software for numerical simulation of the electrophysiological activity of the cardiac muscle, under both normal and pathological conditions. lifex-ep employs the monodomain equation to model the heart's electrical activity, incorporating both phenomenological and second-generation ionic models. These models are discretized using the Finite Element method on tetrahedral or hexahedral meshes. Additionally, lifex-ep integrates the generation of myocardial fibers based on Laplace-Dirichlet Rule-Based Methods, previously released in Africa et al., 2023, within lifex-fiber; as an alternative, users can import myofibers from a file. This paper provides a concise overview of the mathematical models and numerical methods underlying lifex-ep, along with comprehensive implementation details and instructions for users. lifex-ep features exceptional parallel speedup, scaling efficiently up to thousands of cores, and its implementation has been verified against an established benchmark problem for computational electrophysiology. We showcase the key features of lifex-ep through various idealized and realistic simulations conducted in both normal and pathological scenarios. Furthermore, the software offers a user-friendly and flexible interface, simplifying the setup of simulations using self-documenting parameter files. Conclusions: lifex-ep provides easy access to cardiac electrophysiology simulations for a wide user community. It offers a computational tool that integrates models and accurate methods for simulating cardiac electrophysiology within a high-performance framework, while maintaining a user-friendly interface. lifex-ep represents a valuable tool for conducting in silico patient-specific simulations.
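    For context, the monodomain model mentioned above takes the following standard form in the computational electrophysiology literature (the notation here is the conventional one, not necessarily lifex-ep's exact formulation):

        % Transmembrane potential V coupled to ionic/gating variables w.
        \begin{aligned}
        \chi \left( C_m\, \partial_t V + I_{\mathrm{ion}}(V,\mathbf{w}) \right)
          &= \nabla \cdot \left( \boldsymbol{\Sigma}_m \nabla V \right) + I_{\mathrm{app}}
          && \text{in } \Omega \times (0,T], \\
        \partial_t \mathbf{w} &= \mathbf{F}(V,\mathbf{w}) && \text{(ionic model)}, \\
        (\boldsymbol{\Sigma}_m \nabla V)\cdot\mathbf{n} &= 0 && \text{on } \partial\Omega,
        \end{aligned}

    where χ is the membrane surface-to-volume ratio, C_m the membrane capacitance, Σ_m the conductivity tensor (anisotropic along the myofiber direction, which is why the fiber-generation step matters), and I_app an applied stimulus current.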

    Web-based Stereo Rendering for Visualization and Annotation of Scientific Volumetric Data

    Advancements in high-throughput microscopy technology, such as Knife-Edge Scanning Microscopy (KESM), are enabling the production of massive amounts of high-resolution, high-quality volumetric data of biological microstructures. To fully utilize these data, they should be efficiently distributed to the scientific research community through the Internet and should be easy to visualize, annotate, and analyze. Given the volumetric nature of the data, visualizing them in 3D is important. However, since we cannot assume that every end user has high-end hardware, an approach with minimal hardware and software requirements is necessary, such as a standard web browser running on a typical personal computer. Several web applications facilitate the viewing of large collections of images: Google Maps and Google Maps-like interfaces such as Brainmaps.org allow users to pan and zoom 2D images efficiently, but they do not yet support the rendering of volumetric data in their standard web interface. The goal of this thesis is to develop a light-weight volumetric image viewer using existing web technologies such as HTML, CSS, and JavaScript, while exploiting the properties of stereo vision to facilitate the viewing and annotation of volumetric data. Stereograms were chosen over other techniques because they allow the use of the raw image stacks produced by the 3D microscope without any extra computation on the data. Generating stereo images from 2D image stacks involves distance attenuation and binocular disparity. Because HTML and JavaScript are computationally cheap, both tasks can be accomplished dynamically in a standard web browser by overlaying the images with intervening semi-opaque layers. An annotation framework has also been implemented and tested. For annotation to work in this environment, it must itself be rendered as a stereogram and must aid the merging of stereo pairs. The current technique allows users to place a mark (dot) on one image stack, and its projected position on the other image stack is calculated dynamically on the client side. Other metadata, such as textual descriptions, can be entered by the user as well. To cope with the occlusion problem caused by changes in the z direction, the structure traced by the user is displayed on the side, together with the data stacks; using the same stereogram-creation techniques, the traces made by the user are dynamically generated and shown as stereograms. We expect the approach presented in this thesis to be applicable to broader scientific domains, including geology and meteorology.
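    As a rough sketch of the geometry involved, the Python snippet below computes, for each slice in a stack, the horizontal binocular-disparity offset for the left/right views and a distance-attenuation opacity; the annotation projection mentioned above uses the same offset. The thesis implements this in HTML/CSS/JavaScript in the browser, and all parameter names and values here are illustrative assumptions.

        def slice_params(z, n_slices, max_disparity_px=20.0, min_opacity=0.3):
            """Return (left_dx, right_dx, opacity) for slice index z (0 = nearest)."""
            depth = z / max(n_slices - 1, 1)             # normalized depth in [0, 1]
            disparity = max_disparity_px * depth         # deeper slices shift more
            opacity = 1.0 - (1.0 - min_opacity) * depth  # deeper slices fade (attenuation)
            # Shift the slice oppositely for the two eyes so the pair fuses in depth.
            return (+disparity / 2, -disparity / 2, opacity)

        # A dot annotated at x on the left view of slice z is drawn on the right
        # view at x - disparity, which is the client-side projection described above.
        for z in (0, 4, 9):
            print(z, slice_params(z, 10))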

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded within full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
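    The Bionic Tracking idea can be sketched independently of scenery's API: at each timepoint, the gaze ray from the headset is sampled through the volume and a bright sample along it is taken as the cell position. The numpy sketch below is a loose reconstruction of that concept under assumed data layouts and a maximum-intensity heuristic; it is not scenery's published implementation.

        import numpy as np

        def track_cell(volumes, origins, directions, n_samples=256):
            """For each timepoint, sample the volume along the gaze ray and take
            the brightest sample as the cell position (illustrative heuristic)."""
            track = []
            ts = np.linspace(0.0, 1.0, n_samples)
            for vol, o, d in zip(volumes, origins, directions):
                d = d / np.linalg.norm(d)
                length = np.linalg.norm(vol.shape)       # long enough to cross the volume
                pts = o + ts[:, None] * length * d       # sample points in voxel coords
                idx = np.clip(np.round(pts).astype(int), 0, np.array(vol.shape) - 1)
                samples = vol[idx[:, 0], idx[:, 1], idx[:, 2]]
                track.append(pts[np.argmax(samples)])    # brightest sample = cell centre
            return np.array(track)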