51 research outputs found

    The Design of a Graphics Engine for the Development of Virtual Reality Applications

    This work presents the design and features of a flexible real-time 3D graphics engine aimed at the development of multimedia applications and collaborative virtual environments. The engine, called EnCIMA (Engine for Collaborative and Immersive Multimedia Applications), enables a fast application development process by providing a high-level interface implemented using the C++ object-oriented programming paradigm. The main features of the proposed engine are scene management, the ability to load static and animated 3D models, particle system effects, network connection management to support collaboration, and collision detection. In addition, the engine supports several specialized interaction devices, such as 3D mice, haptic devices, 3D motion trackers, data gloves, and joysticks with and without force feedback. The engine also lets the developer choose how the scene is rendered: on standard display devices, in stereoscopy, or even with several simultaneous projections for spatially immersive devices. As part of the evaluation process, we compared the performance of EnCIMA to a game engine and two scene graph toolkits through a testbed application. The performance results and the wide variety of non-conventional interaction devices supported are evidence that EnCIMA can be considered a real-time virtual reality engine.
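    As an illustration of the kind of high-level interface described above, the following minimal C++ sketch shows a hypothetical engine facade with scene loading and a per-frame update. All class and method names are assumptions made for illustration; they are not EnCIMA's actual API.

        // Hypothetical engine facade: scene loading plus a per-frame update.
        // All names here are illustrative and are not EnCIMA's real API.
        #include <iostream>
        #include <memory>
        #include <string>
        #include <utility>
        #include <vector>

        class SceneNode {
        public:
            explicit SceneNode(std::string name) : name_(std::move(name)) {}
            const std::string& name() const { return name_; }
        private:
            std::string name_;
        };

        class Engine {
        public:
            // Load a static or animated model and attach it to the scene graph.
            std::shared_ptr<SceneNode> loadModel(const std::string& file) {
                auto node = std::make_shared<SceneNode>(file);
                scene_.push_back(node);
                return node;
            }
            // One simulation step; a full engine would also poll interaction
            // devices, run collision detection, and render the frame here.
            void update(double dtSeconds) {
                std::cout << "advanced " << dtSeconds << " s with "
                          << scene_.size() << " node(s) in the scene\n";
            }
        private:
            std::vector<std::shared_ptr<SceneNode>> scene_;
        };

        int main() {
            Engine engine;
            engine.loadModel("avatar.model");   // hypothetical asset name
            engine.update(1.0 / 60.0);
            return 0;
        }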

    Collaborative visualization and virtual reality in construction projects

    In the Colombian construction industry it is common practice for different designers to deliver 2D drawings to the project construction team. Some 3D modeling applications are used, but only for commercial purposes, thus wasting visualization tools that facilitate understanding of the project, allow the coordination of plans between different specialists, and can prevent errors with a high impact on costs during the construction phase. As a continuation of the project "Immersive Virtual Reality for Construction" developed by EAFIT University, the present work aims to demonstrate how a collaborative virtual environment can help improve the visualization of construction projects and achieve interaction between different specialties, evaluating the impact of collaborative work on the design process. The end result of this research is an application created using freely available tools and a use-case scenario showing how it can be used to hold review meetings among different specialists in real time. Initial tests of the system were carried out with civil engineering students, showing that this virtual reality tool eases the burden of performing reviews that traditionally required plans and sharing the same physical space.

    Advanced Visualization and Intuitive User Interface Systems for Biomedical Applications

    Modern scientific research produces data at rates that far outpace our ability to comprehend and analyze it. Such sources include medical imaging data and computer simulations, where technological advances and increasing spatiotemporal resolution generate ever more data from each scan or simulation. A bottleneck has developed whereby medical professionals and researchers are unable to fully use the advanced information available to them. By integrating computer science, computer graphics, artistic ability, and medical expertise, scientific visualization of medical data has become a new field of study. The objective of this thesis is to develop two visualization systems that use advanced visualization, natural user interface technologies, and the large amount of biomedical data available to produce results of clinical utility and overcome the data bottleneck that has developed. Computational Fluid Dynamics (CFD) is a tool used to study the quantities associated with the movement of blood by computer simulation. We developed methods of processing spatiotemporal CFD data and displaying it in stereoscopic 3D with the ability to spatially navigate through the data. We used this method with two sets of display hardware: a full-scale visualization environment and a small-scale desktop system. The advanced display and data navigation abilities provide the user with the means to better understand the relationship between the vessel's form and function. Low-cost 3D depth-sensing cameras capture and process user body motion to recognize motions and gestures. Such devices allow users to use hand motions as an intuitive interface to computer applications. We developed algorithms to process and prepare the biomedical and scientific data for use with a custom control application. The application interprets user gestures as commands to a visualization tool and allows the user to control the visualization of multi-dimensional data without manual contact with an interaction device. In developing these methods and software tools we have leveraged recent trends in advanced visualization and intuitive interfaces to efficiently visualize biomedical data in a way that provides meaningful information that can be used to further understand it.
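    To make the gesture-control idea concrete, the following small C++ sketch maps a tracked hand displacement between two frames to a navigation command, as a depth-camera pipeline might. The Vec3 type, thresholds, and command names are illustrative assumptions, not the thesis software.

        // Maps the displacement of a tracked hand between two frames to a
        // navigation command; thresholds and command strings are assumed.
        #include <cmath>
        #include <iostream>
        #include <string>

        struct Vec3 { double x, y, z; };

        std::string interpretHandMotion(const Vec3& prev, const Vec3& curr) {
            const double dx = curr.x - prev.x;          // lateral motion
            const double dz = curr.z - prev.z;          // motion toward the sensor
            const double threshold = 0.10;              // metres per gesture window (assumed)
            if (std::abs(dz) > std::abs(dx) && std::abs(dz) > threshold)
                return dz < 0 ? "navigate forward through the vessel" : "navigate backward";
            if (std::abs(dx) > threshold)
                return dx > 0 ? "rotate view right" : "rotate view left";
            return "hold position";
        }

        int main() {
            Vec3 before{0.00, 1.20, 0.80};
            Vec3 after {0.02, 1.20, 0.62};              // hand pushed toward the camera
            std::cout << interpretHandMotion(before, after) << "\n";
            return 0;
        }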

    Web-based Stereoscopic Collaboration for Medical Visualization

    Medical volume visualization is a valuable tool for examining volume data in medical practice and teaching. An interactive, stereoscopic, and collaborative real-time presentation is necessary to understand the data fully and in detail. However, such visualization of high-resolution data is possible almost only on special visualization systems because of its high hardware requirements. Remote visualization is used to make such visualization available elsewhere, but it almost always requires complex software deployments, which hampers universal ad-hoc usability. From this situation follows the hypothesis: a high-performance remote visualization system, specialized for stereoscopy and ease of use, can be employed for interactive, stereoscopic, and collaborative medical volume visualization. The recent literature on remote visualization describes applications that require only a plain web browser. However, these place no particular emphasis on performant usability for every participant, nor do they provide the functionality needed to drive multiple stereoscopic presentation systems. Given the familiarity, ease of use, and wide availability of web browsers, the specific question arises: can we develop a system that supports all of these aspects but requires only a plain web browser, without additional software, as the client? A proof of concept was carried out to verify the hypothesis. It comprised the development of a prototype, its practical application, and the measurement and comparison of its performance. The resulting prototype (CoWebViz) is one of the first browser-based systems to provide fluid, interactive remote visualization in real time without additional software. Tests and comparisons show that the approach performs better than other, similar systems that were tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for normally very resource-intensive stereoscopic and collaborative anatomy teaching, together with intercontinental participants, demonstrates the feasibility and the simplifying character of the approach. The feasibility of the approach was also shown by its successful use in other application scenarios, such as grid computing and surgery.
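    One idea behind such browser-only remote visualization is that the server renders the volume, encodes each frame as an image, and adapts the encoding quality to each participant's bandwidth. The C++ sketch below illustrates a simple quality-adaptation rule under assumed numbers; it is not CoWebViz's actual algorithm.

        // Adjusts the image encoding quality for the next frame so that the
        // frame size tracks an assumed per-client bandwidth budget.
        #include <algorithm>
        #include <iostream>

        int nextQuality(int quality, double targetFrameKb, double lastFrameKb) {
            if (lastFrameKb > targetFrameKb * 1.1) quality -= 5;       // frames too large: compress harder
            else if (lastFrameKb < targetFrameKb * 0.9) quality += 5;  // headroom: improve quality
            return std::clamp(quality, 20, 95);
        }

        int main() {
            int quality = 80;
            const double budgetKb = 60.0;               // assumed per-client budget per frame
            const double observed[] = {90.0, 75.0, 55.0, 40.0};
            for (double observedKb : observed) {
                quality = nextQuality(quality, budgetKb, observedKb);
                std::cout << "frame of " << observedKb << " KB -> next quality " << quality << "\n";
            }
            return 0;
        }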

    Contributions to virtual reality

    The thesis contributes in three Virtual Reality areas:
    • Visual perception: a calibration algorithm is proposed to estimate stereo projection parameters in head-mounted displays, so that correct shapes and distances can be perceived, and calibration and control procedures are proposed to obtain the desired accommodation stimuli at different virtual distances (a worked sketch of such projection parameters follows this list).
    • Immersive scenarios: the thesis analyzes several use cases demanding varying degrees of immersion, and special, innovative visualization solutions are proposed to fulfil their requirements. Contributions focus on machinery simulators, weather radar volumetric visualization, and manual arc welding simulation.
    • Ubiquitous visualization: contributions are presented for scenarios where users access interactive 3D applications remotely. The thesis follows the evolution of Web3D standards and technologies to propose original visualization solutions for volume rendering of weather radar data, e-learning on energy efficiency, virtual e-commerce, and visual product configurators.
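    As a worked example of the stereo projection parameters mentioned under visual perception, the C++ sketch below derives a per-eye off-axis (asymmetric) frustum from an assumed interpupillary distance and virtual screen geometry. The numbers are illustrative, not the thesis calibration.

        // Off-axis (asymmetric) view frustum for one eye, at the near plane,
        // derived from half the interpupillary distance and the screen pose.
        #include <iostream>

        struct Frustum { double left, right, bottom, top; };

        Frustum eyeFrustum(double eyeOffsetX,           // +IPD/2 for the right eye, -IPD/2 for the left
                           double screenHalfW, double screenHalfH,
                           double screenDist, double nearPlane) {
            const double s = nearPlane / screenDist;    // project the screen edges onto the near plane
            return { (-screenHalfW - eyeOffsetX) * s,
                     ( screenHalfW - eyeOffsetX) * s,
                      -screenHalfH * s,
                       screenHalfH * s };
        }

        int main() {
            const double ipd = 0.064, halfW = 0.30, halfH = 0.17, dist = 0.60, znear = 0.10;  // metres (assumed)
            Frustum right = eyeFrustum(+ipd / 2.0, halfW, halfH, dist, znear);
            std::cout << "right eye near-plane frustum: x in [" << right.left << ", " << right.right
                      << "], y in [" << right.bottom << ", " << right.top << "]\n";
            return 0;
        }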

    Using natural user interfaces to support learning environments

    Education is a field of research in which Natural User Interfaces (NUI) have not been widely exploited. NUI can help in the learning process, especially when used with children. Nowadays, children grow up playing computer games and using mobile devices and other technological devices, and new learning methods that use these technologies can support the learning process. In this thesis, a new system that uses NUI for learning about a period of history has been developed. The system uses autostereoscopy, which lets the children see themselves as a background in the game and renders the game elements with a 3D sensation without the need for special glasses. The system was developed from scratch, and the Microsoft Kinect is used for interaction. A study comparing the autostereoscopic system with a similar front-projected system was carried out, analyzing aspects such as engagement, increase of knowledge, and preferences. A total of 162 children from 8 to 11 years old participated in the study. From the results, we observed that the different characteristics of the systems did not influence the children's acquired knowledge, engagement, or satisfaction; we also observed that the systems are especially suitable for boys and older children (9 to 11 years old). There were statistically significant differences for depth perception and presence, for which the autostereoscopic system was scored higher; however, of the two systems, the children considered the frontal projection easier to use. Another comparative study was performed to determine the mode in which the children learn more about the topic of the game: a collaborative mode, in which the children played the game in pairs, and an individual mode, in which they played alone. A total of 46 children from 7 to 10 years old participated in this study. From the results, we observed statistically significant differences between the two modes: the children who played the game in pairs in the collaborative mode obtained better knowledge scores than the children who played individually. Finally, we would like to highlight that the scores for all the questions were very high. The results from the two studies suggest that games of this kind could be appropriate educational games and that autostereoscopy is a technology to exploit in their development.
    Martín San José, JF. (2012). Using natural user interfaces to support learning environments. http://hdl.handle.net/10251/44852
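    To illustrate the kind of body-based interaction such a Kinect-driven game relies on, the C++ sketch below tests whether a tracked hand joint lies inside a screen-space answer region. The joint values and region coordinates are assumptions made for illustration, not the thesis implementation.

        // Tests whether a tracked hand joint, in normalised screen coordinates,
        // lies inside an on-screen answer region; all values are assumed.
        #include <iostream>

        struct Joint2D { double x, y; };
        struct Region  { double x0, y0, x1, y1; };

        bool handOverRegion(const Joint2D& hand, const Region& r) {
            return hand.x >= r.x0 && hand.x <= r.x1 &&
                   hand.y >= r.y0 && hand.y <= r.y1;
        }

        int main() {
            Region answerA{0.10, 0.70, 0.30, 0.90};     // a virtual button on screen
            Joint2D rightHand{0.22, 0.81};              // sample tracked joint position
            std::cout << (handOverRegion(rightHand, answerA) ? "answer A selected\n"
                                                             : "no selection\n");
            return 0;
        }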

    A Utility Framework for Selecting Immersive Interactive Capability and Technology for Virtual Laboratories

    There has been an increase in the use of virtual reality (VR) technology in the education community, since VR is emerging as a potent educational tool that offers students a rich source of educational material and makes learning exciting and interactive. With the rise in popularity and market expansion of VR technology in the past few years, a variety of consumer VR devices have boosted educators' and researchers' interest in using these devices for practicing engineering and science laboratory experiments. However, little is known about how well such devices are suited to active learning in a laboratory environment. This research aims to address this gap by formulating a utility framework to help educators and decision-makers efficiently select the type of VR device that matches the design and capability requirements of their virtual laboratory blueprint. Furthermore, a framework use case is demonstrated by surveying five types of VR devices, ranging from low-immersive to fully immersive, along with their capabilities (i.e., hardware specifications, cost, and availability), and by considering the interaction techniques each VR device offers for the desired laboratory task. To validate the framework, a research study was carried out to compare these five VR devices and investigate which device provides the overall best fit for a 3D virtual laboratory we implemented, based on interaction level, usability, and performance effectiveness.
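    A minimal sketch of the weighted-utility scoring step such a selection framework could use is shown below in C++: each device receives criterion scores, the educator supplies weights, and the highest weighted sum marks the candidate best fit. The device names, criteria, and numbers are illustrative assumptions, not the framework's published values.

        // Weighted-sum utility: higher scores mean a better fit for the
        // virtual laboratory; device names, criteria, and weights are assumed.
        #include <iostream>
        #include <numeric>
        #include <string>
        #include <vector>

        struct Device {
            std::string name;
            std::vector<double> scores;   // e.g. {immersion, interaction, cost, availability}, each in [0, 1]
        };

        double utility(const Device& d, const std::vector<double>& weights) {
            return std::inner_product(d.scores.begin(), d.scores.end(), weights.begin(), 0.0);
        }

        int main() {
            const std::vector<double> weights{0.4, 0.3, 0.2, 0.1};   // educator-chosen priorities, summing to 1
            const std::vector<Device> devices{
                {"desktop monitor", {0.2, 0.4, 0.9, 0.9}},
                {"standalone HMD",  {0.8, 0.8, 0.6, 0.7}},
            };
            for (const auto& d : devices)
                std::cout << d.name << ": utility = " << utility(d, weights) << "\n";
            return 0;
        }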

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment and a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry; Virtual Reality (VR) and MR technology; MR within manufacturing; and the Fourth Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows implicit knowledge to be captured from operators within the real manufacturing cell and transferred to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS) and could allow for future optimisations within manufacturing systems, Material Resource Planning (MRP), and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology. This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and, consequently, easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users in one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six, and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b), and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa. Keywords: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System.
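    One step implied by the description above is combining the same joint as seen by several Kinect sensors once each reading has been transformed into a common cell coordinate frame. The C++ sketch below uses confidence-weighted averaging as an assumed fusion rule; it is not the patented method.

        // Confidence-weighted average of one joint reported by several sensors,
        // assuming each sample is already in the shared cell coordinate frame.
        #include <iostream>
        #include <vector>

        struct JointSample { double x, y, z, confidence; };

        JointSample fuseJoint(const std::vector<JointSample>& samples) {
            JointSample fused{0.0, 0.0, 0.0, 0.0};
            for (const auto& s : samples) {
                fused.x += s.x * s.confidence;
                fused.y += s.y * s.confidence;
                fused.z += s.z * s.confidence;
                fused.confidence += s.confidence;
            }
            if (fused.confidence > 0.0) {
                fused.x /= fused.confidence;
                fused.y /= fused.confidence;
                fused.z /= fused.confidence;
            }
            return fused;
        }

        int main() {
            const std::vector<JointSample> rightHand{
                {1.02, 1.31, 2.10, 0.9},   // sensor A, cell coordinates in metres
                {1.05, 1.29, 2.14, 0.6},   // sensor B
            };
            JointSample f = fuseJoint(rightHand);
            std::cout << "fused right hand at (" << f.x << ", " << f.y << ", " << f.z << ")\n";
            return 0;
        }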

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation in cognitive neuroscience and psychology, which is concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.
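    As an illustration of the visuo-tactile loop such an embodiment setup needs, the C++ sketch below fires a haptic pulse when a virtual object comes within an assumed contact radius of the tracked virtual hand. The triggerHaptics function is a stand-in for a real device call, and all values are assumptions rather than the authors' actual stack.

        // Fires a haptic pulse when a virtual object enters an assumed contact
        // radius around the tracked virtual hand; triggerHaptics is a stand-in.
        #include <cmath>
        #include <iostream>

        struct Vec3 { double x, y, z; };

        double distance(const Vec3& a, const Vec3& b) {
            return std::sqrt((a.x - b.x) * (a.x - b.x) +
                             (a.y - b.y) * (a.y - b.y) +
                             (a.z - b.z) * (a.z - b.z));
        }

        void triggerHaptics(double intensity) {         // placeholder for a real device call
            std::cout << "haptic pulse, intensity " << intensity << "\n";
        }

        int main() {
            const Vec3 virtualHand{0.40, 1.10, 0.50};   // from motion capture, metres
            const Vec3 virtualBall{0.42, 1.11, 0.52};   // object rendered touching the hand
            const double contactRadius = 0.05;          // assumed contact threshold
            if (distance(virtualHand, virtualBall) < contactRadius)
                triggerHaptics(0.8);                    // tactile cue synchronised with the visual contact
            return 0;
        }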