
    Immersive and non immersive 3D virtual city: decision support tool for urban sustainability

    Sustainable urban planning decisions must consider not only the physical structure of the urban development but also its economic, social and environmental factors. Due to the prolonged timescales of major urban development projects, the current and future impacts of any decision must be fully understood. Many key project decisions are made early in the process, with decision makers later seeking agreement for proposals once the key decisions have already been made, leaving many stakeholders, especially the general public, feeling marginalised. Many decision support tools have been developed to aid decision making; however, many of these are expert-oriented, fail to fully address spatial and temporal issues, and do not reflect the interconnectivity of the separate domains and their indicators. This paper outlines a platform that combines computer game techniques with modelling of economic, social and environmental indicators to provide an interface that presents an interactive 3D virtual city with sustainability information overlaid. Creating a virtual 3D urban area using the latest video game techniques ensures real-time rendering of the 3D graphics; novel techniques for presenting complex multivariate data to the user; and immersion in the 3D urban development via first-person navigation, exploration and manipulation of the environment, with consequences updated in real time. These visualisation techniques begin to remove sustainability assessment's reliance on existing expert systems, which are largely inaccessible to many stakeholder groups, especially the general public.
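
    As a concrete illustration of the overlay idea, here is a minimal sketch, not the authors' platform: a hypothetical per-building sustainability score is recomputed after a user edit and mapped to a red-green overlay colour. The Building fields, the score formula, and all names are illustrative assumptions.

    // Minimal sketch, assuming a hypothetical per-building indicator model.
    // The fields and the composite score formula are placeholders, not the
    // platform described in the paper.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Building {
        float energyUse;   // kWh/m^2/year (hypothetical indicator input)
        float greenSpace;  // fraction of plot area, 0..1
    };

    // Map a normalised score (0 = worst, 1 = best) to a red-green overlay colour.
    void scoreToColour(float s, float rgb[3]) {
        rgb[0] = 1.0f - s;  // red fades out as the score improves
        rgb[1] = s;         // green fades in
        rgb[2] = 0.0f;
    }

    int main() {
        std::vector<Building> city = {{250.f, 0.1f}, {120.f, 0.4f}, {60.f, 0.8f}};
        for (const Building& b : city) {
            // Hypothetical composite score: lower energy use and more green
            // space both raise it; clamped into [0, 1] for the colour ramp.
            float s = std::clamp(1.0f - b.energyUse / 300.0f, 0.0f, 1.0f) * 0.5f
                    + b.greenSpace * 0.5f;
            float rgb[3];
            scoreToColour(s, rgb);
            std::printf("score %.2f -> overlay (%.2f, %.2f, %.2f)\n",
                        s, rgb[0], rgb[1], rgb[2]);
        }
    }

    In a real-time setting such a recomputation would run on every user edit, so the overlay reflects the consequences immediately.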

    Volumetric cloud generation using a Chinese brush calligraphy style

    Clouds are an important feature of any real or simulated environment in which the sky is visible. Their amorphous, ever-changing and illuminated features make the sky vivid and beautiful, but they also increase the complexity of both real-time rendering and modelling. It is difficult to design and build volumetric clouds in an easy and intuitive way, particularly if the interface is intended for artists rather than programmers. We propose a novel modelling system motivated by an ancient painting style, Chinese Landscape Painting, to address this problem. With the use of only one brush and one colour, an artist can paint a vivid and detailed landscape efficiently. In this research, we develop three emulations of a Chinese brush: a skeleton-based brush, a 2D texture footprint and a dynamic 3D footprint, all driven by the motion and pressure of a stylus pen. We propose a hybrid mapping to generate both the body and surface of volumetric clouds from the brush footprints. Our interface integrates these components, along with 3D canvas control and GPU-based volumetric rendering, into an interactive cloud modelling system able to create the various types of clouds occurring in nature. User tests indicate that our brush calligraphy approach is preferred to conventional volumetric cloud modelling and that it produces convincing 3D cloud formations in an intuitive and interactive fashion. While traditional modelling systems focus on surface generation of 3D objects, our brush calligraphy technique constructs the interior structure, forming the basis of a new modelling style for objects with amorphous shapes.
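
    To make the footprint idea concrete, the following is a minimal sketch, not the paper's implementation: a spherical 3D brush footprint is stamped into a voxel density grid, with the deposited density scaled by stylus pressure. The grid size, falloff, and stamp shape are assumptions; the paper's skeleton-based brush and hybrid mapping are more involved.

    // Minimal sketch: stamping a spherical brush footprint into a voxel
    // density field, scaled by stylus pressure (illustrative assumptions).
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    constexpr int N = 32;                      // voxel grid resolution
    std::vector<float> grid(N * N * N, 0.0f);  // cloud density field

    // Deposit density around (cx, cy, cz); 'pressure' is in [0, 1].
    void stamp(float cx, float cy, float cz, float radius, float pressure) {
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    float dx = x - cx, dy = y - cy, dz = z - cz;
                    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
                    if (d < radius) {
                        float w = 1.0f - d / radius;  // smooth edge falloff
                        int i = (z * N + y) * N + x;
                        grid[i] = std::min(1.0f, grid[i] + pressure * w);
                    }
                }
    }

    int main() {
        // A stroke is a sequence of stamps along the stylus path.
        for (int t = 0; t < 8; ++t)
            stamp(8.0f + 2.0f * t, 16.0f, 16.0f, 6.0f, 0.6f);
        std::printf("density at stroke centre: %.2f\n",
                    grid[(16 * N + 16) * N + 16]);
    }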

    Crepuscular Rays for Tumor Accessibility Planning


    A Sparse Voxel Octree-Based Framework for Computing Solar Radiation Using 3D City Models

    An effective three-dimensional (3D) data representation is required to assess the spatial distribution of photovoltaic potential over urban building roofs and facades using 3D city models. Voxels have long been used as a spatial data representation, but practical applications of the voxel representation have been limited compared with rasters in traditional two-dimensional (2D) geographic information systems (GIS). We propose using a sparse voxel octree (SVO) as the data representation to extend the GRASS GIS r.sun solar radiation model from 2D to 3D. The GRASS GIS r.sun model is nested in an SVO-based computing framework. The presented 3D solar radiation computing framework was applied to 3D building groups of different geometric complexities to demonstrate its efficiency and scalability. We presented a method to explicitly compute diffuse shading losses in r.sun and found that diffuse shading losses can reduce annual global radiation by up to 10% under clear-sky conditions. Hence, diffuse shading losses are of significant importance, especially in complex urban environments.
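
    For readers unfamiliar with the data structure, here is a minimal sketch of a sparse voxel octree used only to mark which cells of a 3D city volume are occupied; it is not the r.sun extension itself, and the depth and coordinates are illustrative. Absent children compactly encode empty space, which is what makes the representation sparse.

    // Minimal SVO sketch: occupied building voxels in a 2^depth grid.
    #include <cstdio>
    #include <memory>

    struct SVONode {
        std::unique_ptr<SVONode> child[8];  // absent children = empty space
        bool leafOccupied = false;
    };

    // Insert an occupied unit voxel at integer coords within a 2^depth grid.
    void insert(SVONode& node, int x, int y, int z, int depth) {
        if (depth == 0) { node.leafOccupied = true; return; }
        int half = 1 << (depth - 1);
        int i = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2);
        if (!node.child[i]) node.child[i] = std::make_unique<SVONode>();
        insert(*node.child[i], x % half, y % half, z % half, depth - 1);
    }

    bool occupied(const SVONode& node, int x, int y, int z, int depth) {
        if (depth == 0) return node.leafOccupied;
        int half = 1 << (depth - 1);
        int i = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2);
        return node.child[i] &&
               occupied(*node.child[i], x % half, y % half, z % half, depth - 1);
    }

    int main() {
        SVONode root;
        insert(root, 5, 3, 7, 4);  // mark one building voxel in a 16^3 grid
        std::printf("occupied: %d, empty neighbour: %d\n",
                    occupied(root, 5, 3, 7, 4), occupied(root, 6, 3, 7, 4));
    }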

    Mobile graphics: SIGGRAPH Asia 2017 course


    A hybrid representation for modeling, interactive editing, and real-time visualization of terrains with volumetric features

    Terrain rendering is a crucial part of many real-time applications. The easiest way to process and visualize terrain data in real time is to constrain the terrain model in several ways. This decreases the amount of data to be processed and the processing power needed, but at the cost of expressivity and the ability to create complex terrains. The most popular terrain representation is a regular 2D grid whose vertices are displaced in the third dimension by a displacement map called a heightmap. This is the simplest way to represent terrain, and although it allows fast processing, it cannot model terrains with volumetric features. Volumetric approaches sample 3D space by subdividing it into a 3D grid and represent the terrain as occupied voxels. They can represent volumetric features, but they require computationally intensive rendering algorithms and their memory requirements are high. We propose a novel representation that combines the voxel and heightmap approaches and is expressive enough to allow creating terrains with caves, overhangs, cliffs and arches, yet efficient enough to allow terrain editing, deformation and rendering in real time.
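
    One way to picture such a hybrid, as a minimal sketch rather than the authors' scheme: a regular heightmap answers most solidity queries, and a sparse set of voxel overrides carves caves or adds overhangs where the heightmap cannot express them. The sizes and sample edits below are assumptions.

    // Minimal hybrid sketch: heightmap base plus sparse voxel overrides.
    #include <cstdio>
    #include <unordered_map>

    constexpr int W = 8;  // heightmap width/depth
    int height[W][W];     // terrain surface height per column

    // Sparse overrides: key = packed (x,y,z); value = solid (true) / air (false).
    std::unordered_map<long long, bool> overrides;
    long long key(int x, int y, int z) {
        return ((long long)x * 1024 + y) * 1024 + z;
    }

    bool solid(int x, int y, int z) {
        auto it = overrides.find(key(x, y, z));
        if (it != overrides.end()) return it->second;  // volumetric feature wins
        return z <= height[x][y];                      // fall back to heightmap
    }

    int main() {
        for (int x = 0; x < W; ++x)
            for (int y = 0; y < W; ++y)
                height[x][y] = 4;
        overrides[key(3, 3, 2)] = false;  // carve a cave cell below the surface
        overrides[key(3, 3, 6)] = true;   // add an overhang cell above it
        std::printf("cave: %d, overhang: %d, plain ground: %d\n",
                    solid(3, 3, 2), solid(3, 3, 6), solid(0, 0, 2));
    }

    The appeal of this split is that the cheap 2D representation still carries most of the terrain, while only the volumetric features pay the memory cost of voxels.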

    Galactica, a digital planetarium that explores the Solar System and the Milky Way

    This paper describes a new digital planetarium system that allows interactive visualization of astrophysical data and phenomena in an immersive virtual reality (VR) setting. Taking advantage of the Cave Hollowspace at Lousal infrastructure, we have created a large-scale immersive VR experience by adopting its OpenSceneGraph (OSG) based VR middleware as a basis for our development. Since our goal was to create an underlying system that could scale to arbitrarily large astrophysical datasets, we split our architecture into offline and runtime subsystems: the former parses the available data sources into an SQL database, which the runtime system then uses to generate the entire VR scene graph environment for the interactive user experience. Real-time computer graphics requirements led us to adopt several visualization optimization techniques, namely GPU calculation of textured billboards representing stars, view-frustum culling with an octree organization of scene objects, and object occlusion culling, to keep the user experience within interactivity limits. We built a storyboard (the "Galactica" storyboard) that describes and narrates a visual and aural user experience while navigating through the Solar System and the Milky Way, and used it to measure and evaluate the performance of our visualization acceleration algorithms. The system was tested with an available dataset of the complete Milky Way (including the Solar System), featuring 100,639 textured billboards representing stars and an additional 104,328 polygons representing constellations and planets of the Solar System. We computed the frame rate, GPU traverse time, cull traverse time and draw traverse time for three visualization conditions: (A) the standard OSG view-frustum culling technique; (B) view-frustum culling with our octree organizing the scene's objects; and (C) view-frustum culling with our octree organizing the scene's objects plus our occlusion-culling algorithm. We conclude that our octree organization and octree-plus-occlusion-culling techniques outperform the standard OSG view-frustum culling when around half or less of the dataset is in view of the virtual camera.
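
    A minimal sketch of the hierarchical culling in condition (B), not the Galactica/OSG code: the frustum is reduced to a set of planes, and a whole octree subtree is rejected as soon as its bounding box lies entirely behind one plane. The scene layout and the single-plane frustum are illustrative assumptions.

    // Minimal sketch: view-frustum culling over an octree of bounding cubes.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Plane { float nx, ny, nz, d; };  // nx*x + ny*y + nz*z + d >= 0 is "inside"
    struct Box   { float cx, cy, cz, half; };  // cube: centre + half-extent

    struct Node {
        Box box;
        std::vector<Node> children;  // empty for leaves
        int objectCount = 0;
    };

    bool outside(const Box& b, const Plane& p) {
        // Projected radius of the box onto the plane normal.
        float r = b.half * (std::abs(p.nx) + std::abs(p.ny) + std::abs(p.nz));
        float s = p.nx * b.cx + p.ny * b.cy + p.nz * b.cz + p.d;
        return s + r < 0.0f;  // even the nearest corner is behind the plane
    }

    int drawVisible(const Node& n, const std::vector<Plane>& frustum) {
        for (const Plane& p : frustum)
            if (outside(n.box, p)) return 0;  // cull the entire subtree
        int drawn = n.objectCount;
        for (const Node& c : n.children) drawn += drawVisible(c, frustum);
        return drawn;
    }

    int main() {
        Node root{{0, 0, 0, 8}, {}, 0};
        root.children.push_back({{-4, 0, 0, 3}, {}, 100});
        root.children.push_back({{ 4, 0, 0, 3}, {}, 100});
        std::vector<Plane> frustum = {{1, 0, 0, 0}};  // keep only x >= 0
        std::printf("objects drawn: %d\n", drawVisible(root, frustum));
    }

    The payoff reported above comes from exactly this early rejection: one box test can discard thousands of billboards without touching them individually.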

    Doctor of Philosophy in Computing

    The aim of direct volume rendering is to facilitate exploration and understanding of three-dimensional scalar fields, referred to as volume datasets. Understanding is improved by improving depth perception, whereas exploration is facilitated by speeding up volume rendering; this dissertation considers both. The impact of depth of field (DoF) on depth perception in direct volume rendering was evaluated in a user study in which test subjects had to choose which of two features, located at different depths, appeared to be in front in a volume-rendered image. Whereas DoF was expected to improve perception in all cases, the study revealed that applying DoF to the back feature reduced depth perception, whereas applying it to the front feature produced a marked improvement. We then worked on improving the speed of volume rendering on distributed-memory machines. Distributed volume rendering has three stages: loading, rendering, and compositing. This dissertation focuses on image compositing, specifically on optimizing communication in image-compositing algorithms. To that end, we developed the Task Overlapped Direct Send Tree image compositing algorithm, which runs on both CPU- and GPU-accelerated supercomputers and focuses on communication avoidance and on overlapping communication with computation; the Dynamically Scheduled Region-Based image compositing algorithm, which uses spatial and temporal awareness to efficiently schedule communication among compositing nodes; and a rendering and compositing pipeline that allows both rendering and image compositing to be done on the GPUs of GPU-accelerated supercomputers. We tested these on CPU- and GPU-accelerated supercomputers and explain how these improvements yield better performance than image-compositing algorithms that focus only on load balancing and algorithms that lack spatial and temporal awareness of the rendering and compositing stages.
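
    The per-pixel operation that every compositing stage performs can be sketched as follows; the dissertation's contribution lies in scheduling and overlapping the communication around this math, which the sketch deliberately omits. The sample fragment values are assumptions.

    // Minimal sketch: blending two partial renderings of the same pixel with
    // the front-to-back "over" operator on premultiplied-alpha colours.
    #include <cstdio>

    struct RGBA { float r, g, b, a; };  // premultiplied alpha

    // Composite 'front' over 'back': C = Cf + (1 - Af) * Cb.
    RGBA over(const RGBA& front, const RGBA& back) {
        float t = 1.0f - front.a;
        return {front.r + t * back.r, front.g + t * back.g,
                front.b + t * back.b, front.a + t * back.a};
    }

    int main() {
        // Fragments of the same pixel rendered by two different nodes.
        RGBA nearSlab{0.4f, 0.0f, 0.0f, 0.4f};  // semi-transparent red, in front
        RGBA farSlab {0.0f, 0.0f, 0.8f, 0.8f};  // denser blue, behind
        RGBA out = over(nearSlab, farSlab);
        std::printf("composited pixel: (%.2f, %.2f, %.2f, %.2f)\n",
                    out.r, out.g, out.b, out.a);
    }

    Because "over" is associative but not commutative, compositing algorithms such as direct send trees are free to reorder which nodes exchange partial images, but must preserve the front-to-back order of the fragments themselves.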