246 research outputs found

    Interactive visualization of a thin disc around a Schwarzschild black hole

    In a first course on general relativity, the Schwarzschild spacetime is the most discussed analytic solution to Einstein's field equations. Unfortunately, there is rarely enough time to study the optical consequences of the bending of light in more advanced examples. In this paper, we present how the visual appearance of a thin disc around a Schwarzschild black hole can be determined interactively by means of an analytic solution to the geodesic equation, processed on current high-performance graphics processing units. This approach can, in principle, be customized for any other thin disc in a spacetime whose geodesics are given in closed form. The interactive visualization discussed here can be used either in a first course on general relativity for demonstration purposes only, or as a thesis project for an enthusiastic student in an advanced course with some basic knowledge of OpenGL and a programming language. (9 pages, 4 figures)

    Mobile graphics: SIGGRAPH Asia 2017 course


    GL-Socket: A CG plugin-based framework for teaching and assessment

    In this paper we describe a plugin-based C++ framework for teaching OpenGL and GLSL in introductory Computer Graphics courses. The main strength of the framework architecture is that student assignments are mostly independent and thus can be completed, tested, and evaluated in any order. When students complete a task, the plugin interface forces a clear separation of initialization, interaction, and drawing code, which in turn facilitates code reusability. Plugin code can access scene, camera, and OpenGL window methods through a simple API. The plugin interface is flexible enough to allow students to complete tasks requiring shader development, object drawing, and multiple rendering passes. Students are provided with sample plugins with basic scene drawing and camera control features. One of the plugins that the students receive contains a shader development framework with self-assessment features. We describe the lessons learned after using the tool for four years in a Computer Graphics course involving more than one hundred Computer Science students per year.
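    The paper's abstract does not publish the framework's API, but a minimal sketch of the kind of plugin interface it describes, with separate initialization, interaction, and drawing hooks and scene/camera access only through the host, could look like the following in C++. All names here are illustrative assumptions, not GL-Socket's actual interface:

```cpp
// Illustrative plugin interface in the spirit of the described framework:
// the host forces init / interaction / draw into separate hooks, and
// plugins reach the scene and camera only through this narrow API.
// All names are hypothetical, not GL-Socket's real interface.
class Scene;   // host-owned scene graph
class Camera;  // host-owned camera

class Plugin {
public:
    virtual ~Plugin() = default;

    // Called once, after the OpenGL context exists: compile shaders,
    // create buffers, upload textures.
    virtual void onInit(Scene &scene, Camera &camera) = 0;

    // Called per event: keyboard/mouse interaction, camera control.
    virtual void onKeyPress(int key) {}
    virtual void onMouseMove(float x, float y) {}

    // Called per frame; an implementation may issue several passes.
    virtual void onDraw(const Scene &scene, const Camera &camera) = 0;
};

// A student assignment subclasses Plugin and is loaded by the host,
// so each task can be completed, tested, and graded independently.
```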

    HMI for Interactive 3D Images with Integration of Industrial Process Control

    This paper presents the use of interactive 3D images for an HMI in an industrial process control application. Visualization plays a very important role in industrial processes. An HMI (Human Machine Interface) is a device used in industry for GUI-based display and control of industrial processes. The goal is for the user to view a simple process through interactive 3D images and to use a touch-based GUI to control the process and change its behavior at run time. HMIs are currently implemented using 2D images to display information about the industrial process. This paper describes approaches for interactive 3D images used to control an industrial process. OpenGL is used to render the 3D graphics, and QML (Qt Modeling Language) provides the user-interface functionality within the Qt cross-platform framework. The Qt3D library provides a set of APIs that make 3D graphics programming easy and declarative. The developed system will be extended to integrate an industrial process control application, in which the process communicates with the target hardware using the Modbus protocol. DOI: 10.17762/ijritcc2321-8169.15052
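    As a rough illustration of the Qt3D approach the paper names (the paper itself builds its UI in QML), a minimal C++ scene with an orbit camera might look like the sketch below. The "tank" entity and its dimensions are hypothetical placeholders, not taken from the paper:

```cpp
// Minimal Qt3D (Qt 5) scene sketch: one placeholder "tank" entity
// rendered in a window with a mouse-driven orbit camera.
#include <QGuiApplication>
#include <QVector3D>
#include <Qt3DCore/QEntity>
#include <Qt3DRender/QCamera>
#include <Qt3DExtras/Qt3DWindow>
#include <Qt3DExtras/QCylinderMesh>
#include <Qt3DExtras/QPhongMaterial>
#include <Qt3DExtras/QOrbitCameraController>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    Qt3DExtras::Qt3DWindow view;

    auto *root = new Qt3DCore::QEntity;

    // Hypothetical process-plant object: a cylinder standing in for a tank.
    auto *tank = new Qt3DCore::QEntity(root);
    auto *mesh = new Qt3DExtras::QCylinderMesh;
    mesh->setRadius(2.0f);
    mesh->setLength(5.0f);
    tank->addComponent(mesh);
    tank->addComponent(new Qt3DExtras::QPhongMaterial);

    // Camera plus orbit control for interactive inspection by touch/mouse.
    view.camera()->lens()->setPerspectiveProjection(45.0f, 16.0f / 9.0f, 0.1f, 100.0f);
    view.camera()->setPosition(QVector3D(0.0f, 5.0f, 20.0f));
    view.camera()->setViewCenter(QVector3D(0.0f, 0.0f, 0.0f));
    auto *controller = new Qt3DExtras::QOrbitCameraController(root);
    controller->setCamera(view.camera());

    view.setRootEntity(root);
    view.show();
    return app.exec();
}
```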

    Drishti, a volume exploration and presentation tool

    Among several rendering techniques for volumetric data, direct volume rendering is a powerful visualization tool for a wide variety of applications. This paper describes the major features of a hardware-based volume exploration and presentation tool, Drishti. The word Drishti stands for vision or insight in Sanskrit, an ancient Indian language. Drishti is a cross-platform open-source volume rendering system that delivers high-quality, state-of-the-art renderings. Drishti's features include, but are not limited to, production-quality rendering, volume sculpting, multi-resolution zooming, transfer-function blending, profile generation, measurement tools, mesh generation, and stereo/anaglyph/cross-eye renderings. Ultimately, Drishti provides an intuitive and powerful interface for choreographing animations.

    Accelerated Graphical User Interfaces

    This thesis focuses on multi-platform graphical user interfaces and their hardware acceleration. It describes what user interfaces are, compares the tools used for their creation, and surveys the methods of their realization. The main contribution is a custom design and implementation of a tool for creating cross-platform hardware-accelerated graphical user interfaces. The thesis compares this concept with existing solutions and puts it into practice in a project with an external company.

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages; raycasting is widely regarded as the highest-quality renderer of these methods.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets.

    This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices harnesses the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at application run time, but they require different coding implementations for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting poses unique challenges independent of the platform: specifically, fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations necessarily differed between the three platforms, the raycasting functionality and features were identical; the same fMRI dataset therefore resulted in the same 3D visualization independent of the platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue-density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through the different time steps of the data.

    The prototype applications' data load times and frame rates were tested to determine if they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4; the iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain-activity datasets with different voxel resolutions were used as test datasets.

    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement for 3D mobile volume raycasting, which was previously able to achieve only under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
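    The dissertation's actual loader is not reproduced in the abstract, but a minimal sketch of the kind of NIfTI-1 header parsing it describes could look like the C++ below. The field offsets follow the published NIfTI-1 header layout; the file name is a hypothetical placeholder, endianness is assumed little-endian, and error handling is reduced to a bare minimum:

```cpp
// Minimal NIfTI-1 header reader sketch: pulls the dimension array,
// datatype, and voxel-data offset from the fixed 348-byte header.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

struct NiftiInfo {
    int16_t dim[8];     // dim[0] = number of dimensions, dim[1..7] = sizes
    int16_t datatype;   // NIfTI datatype code (e.g. 4 = signed short)
    int16_t bitpix;     // bits per voxel
    float   vox_offset; // byte offset where the voxel data starts
};

bool readNiftiHeader(const std::string &path, NiftiInfo &info) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;

    int32_t sizeof_hdr = 0;
    in.read(reinterpret_cast<char *>(&sizeof_hdr), 4);
    if (sizeof_hdr != 348) return false; // not NIfTI-1 (or byte-swapped)

    in.seekg(40);  // dim[8] lives at byte offset 40
    in.read(reinterpret_cast<char *>(info.dim), 8 * sizeof(int16_t));
    in.seekg(70);  // datatype at offset 70, bitpix at 72
    in.read(reinterpret_cast<char *>(&info.datatype), sizeof(int16_t));
    in.read(reinterpret_cast<char *>(&info.bitpix), sizeof(int16_t));
    in.seekg(108); // vox_offset at offset 108
    in.read(reinterpret_cast<char *>(&info.vox_offset), sizeof(float));
    return static_cast<bool>(in);
}

int main() {
    NiftiInfo info{};
    if (readNiftiHeader("scan.nii", info)) { // hypothetical file name
        std::cout << info.dim[0] << " dimensions, "
                  << info.dim[1] << "x" << info.dim[2] << "x" << info.dim[3]
                  << " voxels per volume\n"; // dim[4] would be time steps
    }
}
```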

    Interactive Ray Tracing Infrastructure

    In this thesis, I present an approach to developing interactive ray tracing infrastructures for artists. An advantage of ray tracing is that it provides essential global illumination (GI) effects such as reflection, refraction, and shadows, which are important for artistic applications. My approach relies on the massively parallel computing power of the Graphics Processing Unit (GPU), which can achieve interactive rendering by providing computation several orders of magnitude faster than conventional CPU-based (Central Processing Unit) rendering. GPU-based rendering makes real-time manipulation possible, which is also essential for artistic applications. Based on this approach, I have developed an interactive ray tracing infrastructure as a proof of concept. Using this infrastructure, artists can interactively manipulate shading and lighting effects through a provided Graphical User Interface (GUI) with input controls. Additionally, I have developed a data communication channel between my ray tracing infrastructure and commercial modeling and animation software, extending the level of interactivity beyond the infrastructure itself. The infrastructure can also be extended to develop 3D dynamic environments that achieve any specific art style while providing global illumination effects. It has already been used to create a 3D interactive environment that emulates a given artwork with reflections and refractions.
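    The abstract gives no code, but as a generic illustration of the innermost operation any such ray tracer performs (on CPU or GPU alike), a minimal C++ ray-sphere intersection test is sketched below; all names are illustrative, not the thesis's implementation:

```cpp
// Minimal ray-sphere intersection: the primitive hit test at the core
// of a ray tracer. Returns the nearest positive hit distance along the
// ray, or a negative value when the ray misses the sphere.
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
};

// Ray: origin o, unit direction d. Sphere: center c, radius r.
// Solves |o + t*d - c|^2 = r^2 for the smallest positive t.
float intersectSphere(const Vec3 &o, const Vec3 &d,
                      const Vec3 &c, float r) {
    Vec3 oc = o - c;
    float b = oc.dot(d);               // half of the quadratic's b term
    float cc = oc.dot(oc) - r * r;
    float disc = b * b - cc;           // quarter discriminant
    if (disc < 0.0f) return -1.0f;     // no real roots: ray misses
    float sq = std::sqrt(disc);
    float t = -b - sq;                 // nearer root first
    if (t < 0.0f) t = -b + sq;         // ray origin is inside the sphere
    return t;                          // still negative if sphere is behind
}
```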

    Platform-agnostic data visualization in Qt framework

    Abstract. There is a variety of electronic devices making people's lives easier. Many of these devices, such as smart TVs, coffee machines, and electric cars, have high-resolution displays that present a rich interface for human-machine interaction. One common cross-platform framework for developing these visual interfaces is Qt. With its wide range of modules, Qt offers a rich framework for developing different interfaces. Over the years, various modules have been added to and removed from Qt. One such module is the module for visualizing data in 3D. The module was added when Qt relied heavily on the OpenGL rendering backend. Nowadays, in addition to OpenGL, Qt supports rendering 3D graphics on other popular graphics backends; nevertheless, the data visualization module still requires OpenGL as its rendering backend. This thesis investigates the optimal way to change Qt's module dedicated to 3D data visualization so that it works on different rendering backends. Additionally, a design for an extension to the public C++ API of Qt's 3D module, used in implementing the new module, is presented. The requirements for the new implementation of the data visualization module are derived from the current module, and the new implementation is evaluated with a tailor-made test application. The results indicate that the new module can improve performance on devices supporting modern graphics backend features.
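    As a small illustration of what backend-agnostic rendering means in practice for Qt applications (the thesis works on Qt's internal module, not on application code like this), Qt 6 lets an application request a specific graphics API for the Qt Quick scene graph before the first window is created; the QML file name below is a hypothetical placeholder:

```cpp
// Qt 6: request a rendering backend for the Qt Quick scene graph.
// The same application code then runs on OpenGL, Vulkan, Metal, or
// Direct3D, with Qt's rendering hardware interface (RHI) translating.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQuickWindow>
#include <QSGRendererInterface>

int main(int argc, char *argv[])
{
    // Must be called before any QQuickWindow exists. Without this call,
    // Qt picks a platform-appropriate default backend automatically.
    QQuickWindow::setGraphicsApi(QSGRendererInterface::Vulkan);

    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // hypothetical file
    return app.exec();
}
```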