53 research outputs found

    Standardized Virtual Reality, Are We There Yet?

    A heterogeneous data-based proposal for procedural 3D cities visualization and generalization

    This thesis project was born from a collaboration between the VORTEX research team / Visual Objects: from Reality to Expression (now REVA: Real Expression Artificial Life) at IRIT (Institut de Recherche en Informatique de Toulouse) on the one hand, and education professionals, companies, and public entities on the other. The SCOLA collaborative project is essentially an online learning platform based on the use of serious games in schools. It helps users acquire and track predefined skills. The platform provides teachers with a new, flexible tool for creating pedagogical scenarios and personalizing student records. Several contributions were assigned to IRIT. One of them is to propose a solution for the automatic creation of 3D environments to be integrated into the game scenario. This solution aims to spare 3D graphic designers from manually modeling detailed and large 3D environments, which can be very expensive and time-consuming. Various applications and prototypes have been developed to let users generate and visualize their own virtual worlds, primarily from a set of rules. Consequently, there is no single representation scheme for the virtual world, owing to the heterogeneity and diversity of 3D content design, especially city models. This constraint led us to rely heavily, in our project, on real 3D urban data instead of custom data predefined by the game designer. Advances in computer graphics, high computing capabilities, and Web technologies have revolutionized data reconstruction and visualization techniques. These techniques are applied in a variety of areas, from video games and simulations to films that use procedurally generated spaces and character animations. Although modern computer games do not face the same hardware and memory restrictions as older games, procedural generation is frequently used to create games, maps, levels, characters, or other facets that are randomly unique in each playthrough. Currently, the trend is shifting towards GIS (Geographic Information Systems) to create urban worlds, especially after their successful deployment worldwide to support many application domains. GIS are dedicated above all to applications such as simulation, disaster management, and urban planning, with more limited use in games; for example, the latest version of the game "Minecraft" offers maps built from real-world cities (Geodata in Minecraft). The use of existing urban data is becoming more and more widespread in cartographic applications for two main reasons: first, it makes it possible to understand the spatial content of urban objects in a more logical way, and second, it provides a common platform to integrate city-level information from different environments or resources and make it available to users.
A 3D virtual city model is a digital representation of urban space that describes the geometric, topological, semantic, and appearance properties of its components. In general, a 3D city model (MV3D) serves as an integration platform for many facets of an urban information space, as Batty pointed out: "In short, the new models are not just the digital geometry of traditional models, but large-scale databases that can be visualized in 3D. As such, they already represent a way to merge more abstract symbolic or thematic data, even symbolic models, into this mode of representation."
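The automatic-creation idea above can be illustrated with a very small sketch: given 2D building footprints with a height attribute (as found, for instance, in CityGML or OpenStreetMap extracts), produce simple extruded block models. This is only an illustrative Python sketch under assumed data (the footprint and height values are invented); it is not the thesis's actual generation pipeline, which is rule-based and far richer.

```python
# Minimal sketch: extrude GIS building footprints into simple 3D block meshes,
# in the spirit of generating a city model from real urban data.
# The footprint coordinates and height attribute below are illustrative assumptions.

from typing import List, Tuple

Point2D = Tuple[float, float]

def extrude_footprint(footprint: List[Point2D], height: float):
    """Extrude a 2D building footprint into a closed 3D block mesh.

    Returns (vertices, faces), where faces index into vertices.
    """
    n = len(footprint)
    # Ground ring (z = 0) followed by roof ring (z = height).
    vertices = [(x, y, 0.0) for x, y in footprint] + \
               [(x, y, height) for x, y in footprint]

    faces = []
    # One quad per wall segment.
    for i in range(n):
        j = (i + 1) % n
        faces.append((i, j, n + j, n + i))
    # Roof and floor as single polygon faces (adequate for convex footprints).
    faces.append(tuple(range(2 * n - 1, n - 1, -1)))   # roof, reversed winding
    faces.append(tuple(range(n)))                       # floor
    return vertices, faces

if __name__ == "__main__":
    # Hypothetical footprint in metres, e.g. taken from a CityGML or OSM extract.
    footprint = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
    verts, faces = extrude_footprint(footprint, height=12.0)
    print(len(verts), "vertices,", len(faces), "faces")
```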

    Fusing Multimedia Data Into Dynamic Virtual Environments

    In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform to render an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study has identified several use cases for these systems, including immersive social storytelling, experiencing culture, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects to explore the applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality. Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate several applications such as virtual tourism, immersive telepresence, and remote education.
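The "maximal Poisson-disc placement" mentioned above can be approximated, in spirit, by simple dart throwing with a minimum-distance rejection test. The sketch below is an illustration only; the panorama dimensions, radius, and attempt budget are assumptions, and the actual system additionally weights placement by visual saliency and spatiotemporal filters.

```python
# Dart-throwing sketch of (approximately maximal) Poisson-disc placement in 2D,
# of the kind used to spread geo-tagged media over a panorama without clutter.
# All parameters are illustrative, not values from the dissertation.

import math
import random

def poisson_disc_place(width, height, radius, max_attempts=3000):
    """Place points so that no two are closer than `radius` (rejection sampling)."""
    points = []
    for _ in range(max_attempts):
        candidate = (random.uniform(0, width), random.uniform(0, height))
        if all(math.dist(candidate, p) >= radius for p in points):
            points.append(candidate)
    return points

if __name__ == "__main__":
    # E.g. an equirectangular panorama of 4096 x 2048 pixels with 300 px spacing.
    anchors = poisson_disc_place(4096, 2048, radius=300)
    print(f"placed {len(anchors)} media anchors")
```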

    Adaptive 3D web-based environment for heterogeneous volume objects.

    The Internet has grown fast over the last decade. Interaction and visualisation have become essential features online. The demand for online modelling and rendering in a real-time, adaptive and interactive manner has exceeded the growth and development of hardware resources, including computational power and memory. Building and accessing an instant, Web-based, plugin-free platform to generate 3D volumes has become a must. Modelling and rendering complicated heterogeneous volumes using online applications requires good Internet bandwidth and high computational power. A large number of 3D modelling tools designed to create complicated models in an interactive manner are now available online; the problem with using such tools is that the user needs to acquire a certain level of modelling knowledge. In this work, we identify the problem, introduce the theoretical background and discuss the theory of Web-based modelling and rendering, including the client-server approach, scenario optimization by solving a constraint satisfaction problem, and complexity analysis. We address the challenges of designing, implementing and testing an online, Web-based, instant 3D modelling and rendering environment and, after presenting the theoretical part of implementing such an environment, we discuss some of its characteristics, including adaptivity, platform independence, interactivity, and ease of use. We also introduce a platform-independent modelling and rendering environment for complicated heterogeneous volumes with colour attributes, based on a client-server architecture. The work includes analysis and implementation of different rendering approaches suitable for different kinds of users. We also discuss the performance of the proposed environment by comparing the rendering approaches. As an additional feature of our modelling system, we discuss aspects of securing model transfer between the client and the server.
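As a rough illustration of the client-server adaptation described above, a server might pick a rendering path per client from a few measured attributes. The profile fields and thresholds below are invented for the example and are not taken from the thesis.

```python
# Illustrative sketch of choosing a rendering path per client in a Web-based
# modelling/rendering environment. All attributes and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class ClientProfile:
    bandwidth_mbps: float   # measured downstream bandwidth
    has_webgl: bool         # can the browser rasterize locally?
    cpu_score: float        # rough relative compute score

def choose_rendering_path(client: ClientProfile, model_mb: float) -> str:
    """Return 'client' for local rendering, 'server' for server-side rendering."""
    download_seconds = model_mb * 8 / max(client.bandwidth_mbps, 0.1)
    if client.has_webgl and client.cpu_score >= 1.0 and download_seconds < 10:
        return "client"   # ship the polygonised model, render in the browser
    return "server"       # render remotely, stream images to thin clients

print(choose_rendering_path(ClientProfile(50.0, True, 2.0), model_mb=40))   # -> client
print(choose_rendering_path(ClientProfile(2.0, False, 0.5), model_mb=40))   # -> server
```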

    Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains

    In recent times we are witnessing a steep increase in the availability of data coming from real-life environments. Nowadays, virtually everyone connected to the Internet may have instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners, digital cameras, street-level photographs and even cadastral maps. As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to exhibit good performance for interactive purposes, regardless of the dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data with different accuracy are used to selectively distribute the computational power, and consequently the visual accuracy, where it is most needed based on the user's point of view. In particular, we will introduce an efficient multiresolution data compression technique for planar and spherical surfaces applied to terrain datasets, which is able to handle huge amounts of information at a planetary scale. We will also describe a novel data structure for compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we will show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
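The core test behind most multiresolution (level-of-detail) schemes of this kind is whether a node's geometric error still projects to more than a pixel tolerance on screen. The following sketch shows that projection under assumed viewing parameters; it is a generic formulation, not the thesis's exact error metric.

```python
# Screen-space-error test: refine a terrain node only while its geometric error
# projects to more than a pixel tolerance. Viewport and field-of-view values are
# illustrative assumptions.

import math

def projected_error_px(geometric_error_m, distance_m, viewport_height_px, fov_y_rad):
    """Project a world-space error (metres) to screen pixels at a given distance."""
    return geometric_error_m * viewport_height_px / (
        2.0 * distance_m * math.tan(fov_y_rad / 2.0))

def needs_refinement(geometric_error_m, distance_m, tolerance_px=1.0,
                     viewport_height_px=1080, fov_y_rad=math.radians(60)):
    return projected_error_px(geometric_error_m, distance_m,
                              viewport_height_px, fov_y_rad) > tolerance_px

# A coarse tile with 5 m error is fine at 20 km but must be refined at 500 m.
print(needs_refinement(5.0, 20_000.0))  # False
print(needs_refinement(5.0, 500.0))     # True
```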

    A Survey of GPU-Based Large-Scale Volume Visualization

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., “output-sensitive” algorithms and system designs. This leads to recent output-sensitive approaches that are “ray-guided,” “visualization-driven,” or “display-aware.” In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks, i.e., the current subset of data that is minimally required to produce an output image of the desired display resolution. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey.
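The output-sensitive idea, making effort proportional to what is visible at the display resolution, can be sketched as choosing, per brick, the coarsest resolution level at which a voxel still covers about one screen pixel. The numbers and the simple pinhole projection below are illustrative assumptions, not a formula from the survey.

```python
# Sketch of output-sensitive level selection: pick the resolution level at which
# one voxel projects to roughly one screen pixel, so the working set scales with
# what is visible rather than with the dataset size. Values are illustrative.

import math

def required_lod(voxel_size_mm, distance_mm, viewport_height_px,
                 fov_y_rad=math.radians(45)):
    """Return how many times the finest level can be halved before a voxel spans < 1 pixel."""
    pixels_per_voxel = voxel_size_mm * viewport_height_px / (
        2.0 * distance_mm * math.tan(fov_y_rad / 2.0))
    if pixels_per_voxel >= 1.0:
        return 0                                        # finest level needed
    return int(math.floor(-math.log2(pixels_per_voxel)))  # a coarser level suffices

# A 0.5 mm voxel seen from 2 m away on a 1080 px viewport: one level coarser is enough.
print(required_lod(0.5, 2000.0, 1080))  # 1
```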

    Active modelling of virtual humans

    This thesis provides a complete framework that enables the creation of photorealistic 3D human models in real-world environments. The approach allows a non-expert user to use any digital capture device to obtain four images of an individual and create a personalised 3D model for multimedia applications. To achieve this, it is necessary that the system is automatic and that the reconstruction process is flexible enough to account for information that is not available or is incorrectly captured. In this approach the individual is automatically extracted from the environment using constrained active B-spline templates that are scaled and automatically initialised using only image information. These templates incorporate the energy-minimising framework of Active Contour Models, providing a suitable and flexible method to deal with the adjustments in pose that an individual can adopt. The final states of the templates describe the individual’s shape. The contours in each view are combined to form a 3D B-spline surface that characterises an individual’s maximal silhouette equivalent. The surface provides a mould that contains sufficient information to allow for the active deformation of an underlying generic human model. This modelling approach is performed using a novel technique that evolves active meshes to 3D for deforming the underlying human model, while adaptively constraining it to preserve its existing structure. The active-mesh approach incorporates internal constraints that maintain the structural relationships of the vertices of the human model, while external forces deform the model to conform to the 3D surface mould. The strength of the internal constraints can be reduced to allow the model to adopt the exact shape of the bounding volume, or strengthened to preserve the internal structure, particularly in areas of high detail. This novel implementation provides a uniform framework that can be simply and automatically applied to the entire human model.
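For readers unfamiliar with Active Contour Models, the toy sketch below shows one greedy energy-minimisation step: each contour point moves to the neighbouring position that lowers a weighted sum of internal (smoothness) and external (image) energy. The external energy used here is a synthetic stand-in; the thesis works with constrained B-spline templates rather than this pixel-grid snake.

```python
# One greedy update of a closed active contour ("snake"). The external energy is a
# synthetic stand-in for an image-derived term; alpha weights the smoothness term.

def snake_step(points, external_energy, alpha=0.5):
    """Move each point to the neighbouring position with the lowest total energy."""
    new_points = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        prev_pt, next_pt = points[i - 1], points[(i + 1) % n]
        best, best_e = (x, y), float("inf")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = x + dx, y + dy
                # Internal energy: stay close to the midpoint of the neighbours.
                mx, my = (prev_pt[0] + next_pt[0]) / 2, (prev_pt[1] + next_pt[1]) / 2
                internal = (cx - mx) ** 2 + (cy - my) ** 2
                e = alpha * internal + external_energy(cx, cy)
                if e < best_e:
                    best, best_e = (cx, cy), e
        new_points.append(best)
    return new_points

# Toy external energy pulling the contour toward a circle of radius 10 around the origin.
energy = lambda x, y: abs((x * x + y * y) ** 0.5 - 10.0)
contour = [(15, 0), (0, 15), (-15, 0), (0, -15)]
for _ in range(5):
    contour = snake_step(contour, energy)
print(contour)
```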

    3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets

    This research deals with the issues concerning the processing, management, and representation, for further dissemination, of the large amount of 3D data that can today be acquired and stored with modern geomatic techniques of 3D metric survey. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets. Modern geomatic techniques enable the acquisition and storage of large amounts of data, with high metric and radiometric accuracy and precision, also in the very close range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potentialities and is considered one of the principal techniques to produce detailed 3D textured models at low cost. The potential offered by high-resolution textured 3D models is today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies applied to geometry and materials (i.e. for structural and static tests, for planning restoration activities or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as a visualization interface linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that may heighten visitors' emotions, and many recent applications make use of 3D contents (i.e. in virtual or augmented reality applications and through virtual museums). What all of these applications have to deal with is the issue deriving from the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (even tens of GB), which makes them difficult to handle on common and portable devices, publish on the internet, or manage in real-time applications. Even though recent advances produce more and more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize and share such contents, other research aims to define a common pipeline for the generation and optimization of 3D models with a reduced number of polygons that are nevertheless able to satisfy detailed radiometric and geometric requests. This thesis is inserted in this scenario and focuses on the 3D modeling process of photogrammetric data aimed at easy sharing and visualization. In particular, this research tested a 3D model optimization, a process which aims at the generation of low-polygon models, with very small file sizes, processed starting from the data of high-poly ones, that nevertheless offer a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used. For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist, a Roman marble statue preserved in the Civic Archaeological Museum of Torino, and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy).
All the test cases were surveyed by means of a close-range photogrammetric acquisition, and three highly detailed 3D models were generated by means of a Structure from Motion and image matching pipeline. On the final high-poly models generated, different optimization and decimation tools were tested with the final aim of evaluating the quality of the information that can be extracted from the final optimized models, in comparison to that of the original high-polygon ones. This study showed how tools borrowed from Computer Graphics offer great potential also in the Cultural Heritage field. This application, in fact, may meet the needs of multipurpose and multiscale studies, using different levels of optimization, and this procedure could be applied to different kinds of objects, with a variety of different sizes and shapes, also on multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys and so on.
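One of the simplest decimation strategies that such tools implement is vertex clustering on a regular grid, sketched below. Real optimization pipelines of the kind the thesis evaluates (for example quadric edge-collapse decimation followed by normal-map baking) preserve detail far better; this sketch, with invented example data, only shows the basic mechanics of merging vertices and dropping collapsed triangles.

```python
# Vertex-clustering decimation sketch: merge vertices that fall into the same grid
# cell and discard triangles that collapse. Example data are invented for illustration.

def vertex_cluster_decimate(vertices, triangles, cell_size):
    """Return a coarser (vertices, triangles) pair after grid-based clustering."""
    cell_to_index, new_vertices, remap = {}, [], {}
    for i, (x, y, z) in enumerate(vertices):
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cell_to_index:
            cell_to_index[cell] = len(new_vertices)
            new_vertices.append((x, y, z))      # keep the first representative per cell
        remap[i] = cell_to_index[cell]

    new_triangles = []
    for a, b, c in triangles:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if len({ra, rb, rc}) == 3:              # discard triangles that collapsed
            new_triangles.append((ra, rb, rc))
    return new_vertices, new_triangles

# Tiny demo: two vertices 0.2 m apart share a 1.0 m cell, so the sliver triangle collapses.
v = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (3.0, 0.0, 0.0)]
t = [(0, 1, 2)]
print(vertex_cluster_decimate(v, t, cell_size=1.0))
```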