264 research outputs found

    A hybrid representation for modeling, interactive editing, and real-time visualization of terrains with volumetric features

    Terrain rendering is a crucial part of many real-time applications. The easiest way to process and visualize terrain data in real time is to constrain the terrain model in several ways. This decreases the amount of data to be processed and the amount of processing power needed, but at the cost of expressivity and the ability to create complex terrains. The most popular terrain representation is a regular 2D grid, where the vertices are displaced in a third dimension by a displacement map, called a heightmap. This is the simplest way to represent terrain, and although it allows fast processing, it cannot model terrains with volumetric features. Volumetric approaches sample the 3D space by subdividing it into a 3D grid and represent the terrain as occupied voxels. They can represent volumetric features, but they require computationally intensive algorithms for rendering, and their memory requirements are high. We propose a novel representation that combines the voxel and heightmap approaches; it is expressive enough to allow creating terrains with caves, overhangs, cliffs, and arches, and efficient enough to allow terrain editing, deformations, and rendering in real time.
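
    As a rough illustration of the kind of hybrid structure described above (a hypothetical sketch only, not the paper's actual data structure; the names and the column-based layout are assumptions), a regular heightmap grid can be augmented with sparse per-cell voxel columns so that only cells containing caves or overhangs pay the volumetric cost:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical layout: a regular heightmap plus sparse voxel columns for the
// few cells that need volumetric features (caves, overhangs, arches).
struct VoxelColumn {
    float baseHeight;                    // world height where the column starts
    std::vector<uint8_t> occupancy;      // 1 = solid voxel, 0 = empty, bottom-up
};

class HybridTerrain {
public:
    HybridTerrain(int w, int h, float voxelSize)
        : width(w), height(h), voxelSize(voxelSize), heights(w * h, 0.0f) {}

    // Plain heightmap query for cells without volumetric data.
    float surfaceHeight(int x, int z) const { return heights[z * width + x]; }

    // Solidity test that honours voxel overrides where they exist.
    bool isSolid(int x, int z, float y) const {
        auto it = columns.find(key(x, z));
        if (it == columns.end())
            return y <= surfaceHeight(x, z);           // ordinary heightmap cell
        const VoxelColumn& c = it->second;
        if (y < c.baseHeight) return true;             // solid ground below column
        int i = static_cast<int>((y - c.baseHeight) / voxelSize);
        if (i >= static_cast<int>(c.occupancy.size())) return false;
        return c.occupancy[i] != 0;                    // volumetric cell
    }

private:
    static uint64_t key(int x, int z) {
        return (static_cast<uint64_t>(static_cast<uint32_t>(x)) << 32) |
               static_cast<uint32_t>(z);
    }
    int width, height;
    float voxelSize;
    std::vector<float> heights;                        // regular 2D height grid
    std::unordered_map<uint64_t, VoxelColumn> columns; // sparse volumetric cells
};
```

    Cells absent from the sparse map fall back to an ordinary heightmap test, which is what would keep editing and rendering cheap for the bulk of the terrain while still allowing volumetric features locally.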

    Realistic reconstruction and rendering of detailed 3D scenarios from multiple data sources

    During the last years, we have witnessed significant improvements in digital terrain modeling, mainly through photogrammetric techniques based on satellite and aerial photography, as well as laser scanning. These techniques allow the creation of Digital Elevation Models (DEM) and Digital Surface Models (DSM) that can be streamed over the network and explored through virtual globe applications like Google Earth or NASA WorldWind. The resolution of these 3D scenes has improved noticeably in recent years, reaching in some urban areas resolutions of 1 m or less for the DEM and buildings, and less than 10 cm per pixel in the associated aerial imagery. However, in rural, forest, or mountainous areas, the typical resolution of elevation datasets ranges between 5 and 30 meters, and the typical resolution of the corresponding aerial photographs ranges between 25 cm and 1 m. This level of detail is sufficient only for aerial points of view; as the viewpoint approaches the surface, the terrain loses its realistic appearance. One approach to augmenting the detail of currently available datasets is to add synthetic details in a plausible manner, i.e. to include elements that match the features perceived in the aerial view. By combining the real dataset with the instancing of models on the terrain and other procedural detail techniques, the effective resolution can potentially become arbitrary. Several applications do not need an exact reproduction of the real elements but would greatly benefit from plausibly enhanced terrain models: video games and entertainment applications, visual impact assessment (e.g. how a new ski resort would look), virtual tourism, simulations, etc. In this thesis we propose new methods and tools to help the reconstruction and synthesis of high-resolution terrain scenes from currently available data sources, in order to achieve realistic-looking ground-level views. In particular, we focus on rural scenarios, mountains, and forest areas. Our main goal is the combination of plausible synthetic elements and procedural detail with publicly available real data to create detailed 3D scenes of existing locations. Our research has focused on the following contributions:
    - An efficient pipeline for aerial imagery segmentation
    - Plausible terrain enhancement from high-resolution examples
    - Super-resolution of DEM by transferring details from the aerial photograph (see the sketch after this entry)
    - Synthesis of arbitrary tree picture variations from a reduced set of photographs
    - Reconstruction of 3D tree models from a single image
    - A compact and efficient tree representation for real-time rendering of forest landscapes
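
    For the DEM super-resolution contribution listed above, the sketch below shows one simple way detail transfer from an aerial photograph could work: bilinearly upsample the coarse DEM, then add high-frequency detail proportional to the photograph's deviation from its local mean. This is an assumed illustration of the general idea only; the thesis' actual method, the function names, and the gain parameter are not taken from the source.

```cpp
#include <algorithm>
#include <vector>

// Row-major grid of float samples (elevations or photo luminance).
struct Grid {
    int w, h;
    std::vector<float> v;
    float at(int x, int y) const { return v[y * w + x]; }
};

// Bilinear sample of a coarse grid at normalized coordinates u, v in [0, 1].
static float sampleBilinear(const Grid& g, float u, float v) {
    float fx = u * (g.w - 1), fy = v * (g.h - 1);
    int x0 = static_cast<int>(fx), y0 = static_cast<int>(fy);
    int x1 = std::min(x0 + 1, g.w - 1), y1 = std::min(y0 + 1, g.h - 1);
    float tx = fx - x0, ty = fy - y0;
    float a = g.at(x0, y0) * (1 - tx) + g.at(x1, y0) * tx;
    float b = g.at(x0, y1) * (1 - tx) + g.at(x1, y1) * tx;
    return a * (1 - ty) + b * ty;
}

// Mean luminance in a (2r+1)^2 window, clamped at the image borders.
static float boxMean(const Grid& g, int cx, int cy, int r) {
    float sum = 0.0f;
    int n = 0;
    for (int y = std::max(0, cy - r); y <= std::min(g.h - 1, cy + r); ++y)
        for (int x = std::max(0, cx - r); x <= std::min(g.w - 1, cx + r); ++x) {
            sum += g.at(x, y);
            ++n;
        }
    return sum / n;
}

// Output DEM at the photo's resolution: smooth upsampled base plus detail
// taken from the photo's deviation from its local mean. `gain` converts
// intensity differences into metres and would need tuning per dataset.
Grid superResolveDEM(const Grid& coarseDem, const Grid& photoLuma, float gain) {
    Grid out{photoLuma.w, photoLuma.h,
             std::vector<float>(photoLuma.w * photoLuma.h)};
    for (int y = 0; y < out.h; ++y) {
        for (int x = 0; x < out.w; ++x) {
            float u = x / float(out.w - 1), v = y / float(out.h - 1);
            float base = sampleBilinear(coarseDem, u, v);
            float detail = photoLuma.at(x, y) - boxMean(photoLuma, x, y, 4);
            out.v[y * out.w + x] = base + gain * detail;
        }
    }
    return out;
}
```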

    Real-time simulation and visualization of deformations on heightfields

    The applications of computer graphics raise new expectations, such as realistic rendering, real-time dynamic scenes, and physically correct simulations. The aim of this thesis is to investigate these problems on the heightfield structure, an extended 2D model that can be processed efficiently by data-parallel architectures. This thesis presents methods for the simulation of deformations on a heightfield as caused by triangular objects, physical simulation of objects interacting with the heightfield, and advanced visualization of deformations. The heightfield is stored at two different resolutions to support fast rendering and precise physical simulations as required. The methods are implemented as part of a large-scale heightfield management system, which applies additional level-of-detail and culling optimizations for the proposed methods and data structures. The solutions provide real-time interaction, and recent graphics hardware (GPU) capabilities are utilized to achieve real-time results. All the methods described in this thesis are demonstrated by a sample application, and performance characteristics and results are presented to support the conclusions.
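
    A minimal sketch of the kind of heightfield deformation described above, assuming a simple sphere-shaped collider and a CPU-side grid (the thesis itself targets triangular objects, GPU processing, and a dual-resolution heightfield; the names and the carving rule here are illustrative only):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

class Heightfield {
public:
    Heightfield(int w, int h, float spacing)
        : w(w), h(h), spacing(spacing), y(w * h, 0.0f) {}

    // Push the surface down wherever a sphere of radius r centred at
    // (cx, cy, cz) dips below it (cy is the vertical axis here).
    void stampSphere(float cx, float cy, float cz, float r) {
        int x0 = std::max(0, int((cx - r) / spacing));
        int z0 = std::max(0, int((cz - r) / spacing));
        int x1 = std::min(w - 1, int((cx + r) / spacing));
        int z1 = std::min(h - 1, int((cz + r) / spacing));
        for (int z = z0; z <= z1; ++z) {
            for (int x = x0; x <= x1; ++x) {
                float dx = x * spacing - cx, dz = z * spacing - cz;
                float d2 = dx * dx + dz * dz;
                if (d2 > r * r) continue;                  // outside footprint
                float underside = cy - std::sqrt(r * r - d2);
                float& s = y[z * w + x];
                s = std::min(s, underside);                // carve, never raise
            }
        }
    }

private:
    int w, h;
    float spacing;          // metres between grid samples
    std::vector<float> y;   // heights, row-major
};
```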

    Procedural Generation and Rendering of Large-Scale Open-World Environments

    Open-world video games give players a large environment to explore along with increased freedom to navigate and manipulate that environment. These requirements pose several problems that must be addressed by a game's graphics engine. Often there are a large number of visible objects, such as all of the trees in a forest, as well as objects comprised of large amounts of geometry, such as terrain. An open-world graphics engine must be able to render large environments at varying levels of detail and smoothly transition between detail levels to provide a believable experience. Often this involves finding a way to both store and generate the large amounts of geometry that represent the environment. In this thesis we present a system for generating and rendering large exterior environments, with a focus on terrain and vegetation. We use a region-based procedural generation algorithm to create environments of varying types. This algorithm produces content that can be rendered at multiple levels of detail. The terrain is rendered volumetrically to support caves, overhangs, and cliffs, but is also rendered using heightmaps to allow for large view distances. Vegetation is implemented using procedurally generated meshes and impostors. The volumetric terrain is editable in real time, which limits our ability to pre-generate or cache large amounts of geometry, and also limits the number of assumptions we can make with regard to visibility. We support a view distance of at least 25 miles in each direction, though distant objects are rendered at low resolution. The heightmap terrain used to achieve this view distance consists of over 360,000 triangles. Our system runs at 180 frames per second on commodity desktop hardware.
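
    The engine above mixes representations by distance: volumetric terrain near the camera, heightmap terrain far away, and mesh-versus-impostor vegetation. A hedged sketch of such distance-based selection follows; the thresholds, types, and function names are illustrative assumptions, not values from the thesis:

```cpp
#include <cmath>

enum class TerrainLod { Volumetric, Heightmap };
enum class TreeLod    { Mesh, Impostor, Culled };

struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

TerrainLod pickTerrainLod(const Vec3& camera, const Vec3& chunkCentre) {
    // Only chunks the player can plausibly edit stay volumetric.
    return distance(camera, chunkCentre) < 500.0f ? TerrainLod::Volumetric
                                                  : TerrainLod::Heightmap;
}

TreeLod pickTreeLod(const Vec3& camera, const Vec3& tree) {
    float d = distance(camera, tree);
    if (d < 150.0f)  return TreeLod::Mesh;       // full procedural mesh
    if (d < 4000.0f) return TreeLod::Impostor;   // billboard stand-in
    return TreeLod::Culled;                      // beyond useful resolution
}
```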

    Unlimited object instancing in real-time

    In this paper, we propose a novel approach to the efficient rendering of an unlimited number of 3D objects in real time. We present a rendering pipeline based on a new computer graphics programming paradigm that implements a holistic approach to virtual scene definition. Using Signed Distance Functions (SDF) for the virtual scene representation, we can control the content and complexity of the virtual scene with mathematical equations. To address hardware limitations, especially the limited capacity of GPU memory, we propose a scene element repository that extends the idea of data-based amplification. The content of the repository strongly depends on the 3D object visualization method. One of the most important requirements of the developed pipeline is the ability to render 3D objects created by artists. To achieve that, the object visualization method uses Sparse Voxel Octree (SVO) ray casting. The developed rendering pipeline is fully compatible with the available SVO algorithms. We show how to avoid occlusion errors that can occur in a single-pass rendering pipeline integrating SDF and SVO. Finally, in order to control the content and complexity of virtual scenes in an unlimited way, we propose a collection of global operators applicable to the virtual scene distance function. The developed Unlimited Object Instancing rendering pipeline can be easily integrated with traditional visualization methods, e.g. triangle rasterization. The only hardware requirement for our approach is support for compute shaders or any GPGPU API.
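
    The core SDF idea can be made concrete with a small sketch: a single object's distance function wrapped in a global repetition operator, so an unbounded set of instances is described without storing per-instance data. This is a generic CPU illustration of domain repetition, not the paper's GPU pipeline or its SVO-based object visualization:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance to a sphere of radius r centred at the origin: one "scene element".
static float sdSphere(const Vec3& p, float r) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - r;
}

// Global operator: fold space into a cell of size `period`, so the element
// repeats infinitely along every axis without storing any instance data.
static float repeat(float x, float period) {
    return x - period * std::round(x / period);
}

float sceneDistance(const Vec3& p) {
    Vec3 q{repeat(p.x, 10.0f), repeat(p.y, 10.0f), repeat(p.z, 10.0f)};
    return sdSphere(q, 1.0f);   // unlimited instances, constant memory
}
```

    A ray marcher (or, as in the paper, a compute-shader pass feeding SVO ray casting) would call sceneDistance per step; the repetition operator keeps memory use constant regardless of how many instances end up visible.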

    Customizing Experiences for Mobile Virtual Reality

    Manually creating content for a game is a time-consuming and laborious process that requires a diverse skill set (typically designers, artists, and programmers) and the management of different resources (specialized hardware and software). Since budget, time, and resources are often very limited, projects could benefit from a solution that saves effort and lets it be invested in other aspects of development. In the context of this thesis, we address this challenge by proposing the creation of specific packages for the generation of customizable content, focused on mobile Virtual Reality (VR) applications. This approach splits the problem into a two-sided solution: first, Procedural Content Generation, achieved through conventional methods and through the novel use of Large Language Models; second, Content Co-Creation, which emphasizes the collaborative development of content. Additionally, since this work focuses on compatibility with mobile VR, the hardware limitations associated with standalone VR headsets, and ways to overcome them, are also addressed. Content will be generated using current procedural generation methods and by facilitating content co-creation by the user. Using both of these approaches results in environments, objectives, and overall content that are more replayable with far less manual design. This approach is currently being applied in the development of two distinct VR applications. The first, AViR, is intended to offer psychological support to individuals after pregnancy loss. The second, EmotionalVRSystem, aims to measure variations in participants' emotional responses induced by changes in the environment, using EEG technology for accurate readings.