1,087 research outputs found

    Adaptive Transmission of Massive 3D Models

    With the progress of 3D model editing and 3D reconstruction techniques, more and more 3D models are available and their quality is increasing. Moreover, support for 3D visualisation on the web has become standardised in recent years. A major challenge is therefore to transmit massive models remotely and to let users visualise and navigate these virtual environments. This thesis addresses the transmission of and interaction with 3D content and makes three main contributions. First, we develop a navigation interface for 3D scenes based on bookmarks -- small virtual objects added to the scene that the user can click to easily reach a recommended location. We describe a user study in which participants navigate 3D scenes with and without bookmarks. We show that users navigate (and complete a given task) faster when using bookmarks. However, this faster navigation has a drawback for streaming performance: a user who moves through a scene more quickly needs higher transmission capacity to enjoy the same quality of service. This drawback can be mitigated by the fact that bookmark positions are known in advance: by ordering the faces of the 3D model according to their visibility from a bookmark, we optimise transmission and thus reduce latency when users click on bookmarks. Second, we propose an adaptation of the DASH standard (Dynamic Adaptive Streaming over HTTP), widely used for video, to the streaming of textured 3D meshes. To do so, we partition the scene into a k-d tree in which each cell corresponds to a DASH adaptation set. Each cell is further divided into DASH segments of a fixed number of faces, grouping faces of comparable areas. Each texture is indexed in its own adaptation set at different resolutions. All metadata (the k-d tree cells, the texture resolutions, etc.) is referenced in an XML file used by DASH to index the content: the MPD (Media Presentation Description). Our framework thus inherits the scalability offered by DASH. We then propose algorithms that evaluate the utility of each data segment as a function of the client's viewpoint, and streaming policies that decide which segments to download. Finally, we study the deployment of 3D streaming and navigation on mobile devices. We integrate bookmarks into our 3D version of DASH and propose an improved version of our DASH client that takes advantage of bookmarks. A user study shows that with our bookmark-aware loading policy, bookmarks are more likely to be clicked, improving both the quality of service and the users' quality of experience.
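    For illustration, a minimal sketch of the kind of viewpoint-driven segment utility and greedy download policy described above; this is not the thesis's code, and the class names, the utility formula and the byte budget are assumptions.

```python
# Sketch only: score DASH-style 3D segments by how useful they are to the
# current viewpoint, then download greedily within a byte budget.
from dataclasses import dataclass

@dataclass
class Segment:
    cell_center: tuple      # center of the k-d tree cell (x, y, z)
    surface_area: float     # total area of the faces grouped in this segment
    size_bytes: int         # download cost

def utility(seg: Segment, camera_pos: tuple) -> float:
    """Larger, closer segments are assumed more useful to the current viewpoint."""
    d2 = sum((c - p) ** 2 for c, p in zip(seg.cell_center, camera_pos))
    return seg.surface_area / max(d2, 1e-6)

def pick_next_segments(segments, camera_pos, budget_bytes):
    """Greedy policy: best utility per byte until the byte budget is spent."""
    ranked = sorted(segments,
                    key=lambda s: utility(s, camera_pos) / s.size_bytes,
                    reverse=True)
    chosen, spent = [], 0
    for s in ranked:
        if spent + s.size_bytes <= budget_bytes:
            chosen.append(s)
            spent += s.size_bytes
    return chosen
```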

    Interactive Video Game Content Authoring using Procedural Methods

    This thesis explores avenues for improving the quality and detail of game graphics, in the context of constraints that are common to most game development studios. The research begins by identifying two dominant constraints: limitations in the capacity of target gaming hardware/platforms, and processes that hinder the productivity of game art/content creation. From these constraints, themes were derived which directed the research's focus. These include the use of algorithmic or 'procedural' methods in the creation of graphics content for games, and the use of an 'interactive' content creation strategy to better facilitate artist production workflow. Interactive workflow represents an emerging paradigm shift in content creation processes used by the industry, which directly integrates game rendering technology into the content authoring process. The primary motivation for this is to provide 'high frequency' visual feedback that enables artists to see game content in context during the authoring process. By merging these themes, this research develops a production strategy that takes advantage of 'high frequency feedback' in an interactive workflow to directly expose procedural methods to artists for use in the content creation process. Procedural methods have a characteristically small 'memory footprint' and are capable of generating massive volumes of data. Their small 'size to data volume' ratio makes them particularly well suited for use in game rendering situations where capacity constraints are an issue. In addition, an interactive authoring environment is well suited to the task of setting parameters for procedural methods, reducing a major barrier to their acceptance by artists. An interactive content authoring environment was developed during this research. Two algorithms were designed and implemented. These algorithms provide artists with abstract mechanisms which accelerate common game content development processes, namely object placement in game environments and the delivery of variation between similar game objects. In keeping with the theme of this research, the core functionality of these algorithms is delivered via procedural methods. Through this, production overhead associated with these content development processes is essentially offloaded from artists onto the processing capability of modern gaming hardware. This research shows how procedurally based content authoring algorithms not only harmonize with the issue of hardware capacity constraints, but also make the authoring of larger and more detailed volumes of game content more feasible in the game production process. Algorithms and ideas developed during this research demonstrate the use of procedurally based, interactive content creation towards improving detail and complexity in the graphics of games.
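    As a rough illustration of the approach (not the thesis's algorithms), the sketch below shows seeded procedural object placement with per-instance variation; the function name and parameters are hypothetical.

```python
# Sketch: procedural object placement driven by a few artist-facing parameters.
# A fixed seed keeps the result deterministic, so tweaking a parameter in an
# interactive editor regenerates the same layout plus the change, instead of
# every placed object being authored and stored by hand.
import random

def scatter_objects(seed, count, area, min_scale=0.8, max_scale=1.2):
    """Return (x, y, rotation_deg, scale) tuples inside a rectangular area."""
    rng = random.Random(seed)              # deterministic for a given seed
    width, height = area
    placements = []
    for _ in range(count):
        x = rng.uniform(0.0, width)
        y = rng.uniform(0.0, height)
        rotation = rng.uniform(0.0, 360.0)          # per-instance variation
        scale = rng.uniform(min_scale, max_scale)   # per-instance variation
        placements.append((x, y, rotation, scale))
    return placements

# e.g. 500 rocks scattered over a 100 x 100 m terrain tile
rocks = scatter_objects(seed=42, count=500, area=(100.0, 100.0))
```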

    Distributed OpenGL Rendering in Network Bandwidth Constrained Environments

    Display walls made from multiple monitors are often used when very high resolution images are required. To utilise a display wall, rendering information must be sent to each computer that the monitors are connected to. The network is often the performance bottleneck for demanding applications, such as high performance 3D animations. This paper introduces ClusterGL, a distribution library for OpenGL applications. ClusterGL reduces network traffic by using compression, frame differencing and multicast. Existing applications can use ClusterGL without recompilation. Benchmarks show that, for most applications, ClusterGL outperforms other systems that support unmodified OpenGL applications, including Chromium and BroadcastGL. The difference is larger for more complex scene geometries and when there are more display machines. For example, when rendering OpenArena, ClusterGL outperforms Chromium by over 300% on the Symphony display wall at The University of Waikato, New Zealand. This display has 20 monitors supported by five computers connected by gigabit Ethernet, with a full resolution of over 35 megapixels. ClusterGL is freely available via Google Code.
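    A hedged sketch of the frame-differencing idea mentioned above (not ClusterGL's actual source); the chunking scheme and function names are assumptions.

```python
# Sketch: instead of re-sending the full serialized OpenGL command stream
# every frame, send only the chunks whose contents changed since last frame,
# compressed before being multicast to the display machines.
import zlib

def frame_delta(prev_chunks, cur_chunks):
    """Yield (chunk_index, compressed_bytes) for chunks that differ from the previous frame."""
    for i, chunk in enumerate(cur_chunks):
        if i >= len(prev_chunks) or prev_chunks[i] != chunk:
            yield i, zlib.compress(chunk)

# Example: two consecutive "frames" of serialized command chunks
frame1 = [b"glClear...", b"glDrawElements(mesh A)", b"glDrawElements(mesh B)"]
frame2 = [b"glClear...", b"glDrawElements(mesh A)", b"glDrawElements(mesh C)"]
updates = list(frame_delta(frame1, frame2))   # only chunk 2 is re-sent
```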

    Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets

    Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems and mobile games. The current trend of using cameras to capture geometry, material reflectance and other graphics elements means that very high resolution inputs are available for rendering extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited resources such as memory, screen size and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for the target mobile device is often done by trial and error, all of which takes time. Wireless errors can also affect transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework that enables real-time mobile access to high resolution graphics using wavelets. The framework tackles the simplification, transmission, and resource-efficient rendering of graphics content on mobile devices by utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources, 2) Unequal Error Protection (UEP) to improve resilience to wireless errors, 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed and image quality, and 4) an energy-efficient streaming technique. The results facilitate a new class of mobile graphics applications that gracefully adapt the lowest acceptable rendering resolution to wireless network conditions and to the resources and battery energy available on the mobile device.
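    To make the resolution-selection idea concrete, here is a minimal sketch under assumed names and an assumed error proxy; it is not the UbiWave implementation. It picks the coarsest wavelet level whose estimated on-screen error for the target display stays under a threshold, so the device never fetches or renders more detail than it can show.

```python
# Sketch: coarser wavelet levels carry less geometry; stop at the coarsest
# level whose crude screen-error estimate is below the tolerance.

def screen_error(level, full_res_vertices, display_pixels):
    """Crude proxy: detail remaining at this level vs. pixels available on the display."""
    vertices_at_level = full_res_vertices / (4 ** level)   # each level roughly quarters detail
    return max(0.0, 1.0 - vertices_at_level / display_pixels)

def choose_level(full_res_vertices, display_pixels, max_error=0.05, max_level=8):
    for level in range(max_level, -1, -1):                 # coarsest level first
        if screen_error(level, full_res_vertices, display_pixels) <= max_error:
            return level
    return 0                                               # fall back to full resolution

# e.g. a 5M-vertex capture shown on an 800x480 phone screen
level = choose_level(5_000_000, 800 * 480)
```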

    Accelerated volumetric reconstruction from uncalibrated camera views

    While both work with images, computer graphics and computer vision are inverses of each other. Computer graphics traditionally starts with input geometric models and produces image sequences. Computer vision starts with input image sequences and produces geometric models. In the last few years, there has been a convergence of research to bridge the gap between the two fields. This convergence has produced a new field called Image-based Rendering and Modeling (IBMR). IBMR represents the effort of using geometric information recovered from real images to generate new images, with the hope that the synthesized images appear photorealistic, while reducing the time spent on model creation. In this dissertation, the capturing, geometric and photometric aspects of an IBMR system are studied. A versatile framework was developed that enables the reconstruction of scenes from images acquired with a handheld digital camera. The proposed system targets applications in areas such as computer gaming and virtual reality from a low-cost perspective. In the spirit of IBMR, the human operator is allowed to provide the high-level information, while underlying algorithms perform the low-level computational work. Conforming to the latest architecture trends, we propose a streaming voxel carving method, allowing fast GPU-based processing on commodity hardware.
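    The sketch below illustrates classic silhouette-based voxel (space) carving, which the streaming GPU method builds on; it is a simplified NumPy version, not the dissertation's code, and the data layout is an assumption.

```python
# Sketch: a voxel survives only if it projects inside the object silhouette
# in every calibrated view; voxels falling outside any silhouette are carved away.
import numpy as np

def carve(voxels, views):
    """
    voxels: (N, 3) array of voxel centers in world space.
    views:  list of (P, mask) pairs, P a 3x4 projection matrix,
            mask a 2D boolean silhouette image.
    Returns a boolean array marking the voxels kept after carving.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coordinates
    for P, mask in views:
        proj = homo @ P.T                                   # project into the view, (N, 3)
        u = (proj[:, 0] / proj[:, 2]).astype(int)
        v = (proj[:, 1] / proj[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        on_object = np.zeros(len(voxels), dtype=bool)
        on_object[inside] = mask[v[inside], u[inside]]
        keep &= on_object                                   # carve voxels off the silhouette
    return keep
```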

    Network streaming and compression for mixed reality tele-immersion

    Bulterman, D.C.A. [Promotor]; Cesar, P.S. [Copromotor]

    From Capture to Display: A Survey on Volumetric Video

    Volumetric video, which offers immersive viewing experiences, is gaining increasing prominence. With its six degrees of freedom, it provides viewers with greater immersion and interactivity compared to traditional videos. Despite their potential, volumetric video services pose significant challenges. This survey conducts a comprehensive review of the existing literature on volumetric video. We first provide a general framework of volumetric video services, followed by a discussion of the prerequisites for volumetric video, encompassing representations, open datasets, and quality assessment metrics. We then delve into the current methodologies for each stage of the volumetric video service pipeline, detailing capturing, compression, transmission, rendering, and display techniques. Lastly, we explore various applications enabled by this pioneering technology and present an array of research challenges and opportunities in the domain of volumetric video services. This survey aspires to provide a holistic understanding of this burgeoning field and shed light on potential future research trajectories, aiming to bring the vision of volumetric video to fruition.

    Study of Compression Statistics and Prediction of Rate-Distortion Curves for Video Texture

    Encoding textural content remains a challenge for current standardised video codecs. It is therefore beneficial to understand video textures in terms of both their spatio-temporal characteristics and their encoding statistics in order to optimize encoding performance. In this paper, we analyse the spatio-temporal features and statistics of video textures, explore the rate-quality performance of different texture types and investigate models to mathematically describe them. For all considered theoretical models, we employ machine-learning regression to predict the rate-quality curves based solely on selected spatio-temporal features extracted from uncompressed content. All experiments were performed on homogeneous video textures to ensure the validity of the observations. The results of the regression indicate that using an exponential model we can more accurately predict the expected rate-quality curve (with a mean BjĂžntegaard Delta rate of 0.46% over the considered dataset) while maintaining low relative complexity. This is expected to be adopted by in-loop processes for faster encoding decisions such as rate-distortion optimisation, adaptive quantization, partitioning, etc.
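    As an illustration of an exponential rate-quality model (the paper's exact parameterisation, quality metric and features may differ), the sketch below fits quality = a*exp(b*rate) + c to a few hypothetical measured encodes and predicts quality at an unseen rate.

```python
# Sketch: fit an exponential rate-quality curve to sample encodes, then use
# the fitted curve to predict quality at rates that were never encoded.
import numpy as np
from scipy.optimize import curve_fit

def rq_model(rate, a, b, c):
    return a * np.exp(b * rate) + c

# Hypothetical measured points for one homogeneous texture clip (kbps, PSNR in dB)
rates = np.array([500.0, 1000.0, 2000.0, 4000.0])
psnr = np.array([30.1, 33.4, 36.2, 38.0])

params, _ = curve_fit(rq_model, rates, psnr, p0=(-10.0, -1e-3, 40.0), maxfev=10000)
predicted = rq_model(3000.0, *params)   # predicted PSNR at 3000 kbps
```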