
    e-Archeo. A pilot national project to valorize Italian archaeological parks through digital and virtual reality technologies

    Commissioned by the Ministry of Culture (MiC) to ALES S.p.A., the e-Archeo project was created to enhance and promote knowledge of Italian archaeological sites with considerable narrative potential that has not yet been fully expressed. The main principle guiding the choice of sites and contents was to illustrate the various cultures and types of settlement present in the Italian territory. Eight sites were chosen, spread across the national territory from north to south and founded by Etruscans, Greeks, Phoenicians, native peoples and Romans. e-Archeo has developed multimedia, integrated and multi-channel solutions for various uses and types of audiences, adopting scientific as well as narrative and emotional languages. Particular attention was paid to multimedia accessibility, technological sustainability and open science. The e-Archeo project was born from a strong synergy between public entities, research bodies and private industries, thanks to the collaboration of MiC and ALES with CNR ISPC, 10 Italian universities, 12 creative industries and the Italian national television (RAI). This exceptional and unusual arrangement made it possible to realise all of the project's high-quality contents and several outputs in only one and a half years.

    Dynamic Viewport-Adaptive Rendering in Distributed Interactive VR Streaming: Optimizing viewport resolution under latency and viewport orientation constraints

    When streaming Virtual Reality to thin clients, one of the main concerns is the massive bandwidth requirement of VR video. Additionally, streaming VR requires a low latency of less than 25 ms to avoid cybersickness and provide a high Quality of Experience. Since a user only views a portion of the VR content sphere at a time, researchers have leveraged this to increase the relative quality of the user's viewport compared to peripheral areas. This saves bandwidth, since the peripheral areas are streamed at a lower bitrate. In streaming 360° video, this has resulted in the common strategy of tiling a video frame and delivering tiles of different quality based on the currently available bandwidth and the user's viewport location. However, such an approach is not suitable for real-time Interactive VR streaming. Furthermore, streaming only the user's viewport results in the user observing unrendered or very low-quality areas at higher latency values. In order to provide high viewport quality in Interactive VR, we propose the novel method of Dynamic Viewport-Adaptive Rendering (DVAR). By rotating the frontal direction of the content sphere with the user's gaze, we can dynamically render more or less of the peripheral area and thus increase the proportional resolution of the frontal direction in the video frame. We show that DVAR can successfully compensate for different system RTT values while offering a significantly higher viewport resolution than other implementations. We further discuss how DVAR can easily be extended by other optimization methods, and how head-movement prediction can be incorporated to let DVAR optimally determine the amount of peripheral area to render, thus providing an optimal viewport resolution given the system constraints.
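
    The trade-off DVAR manages can be illustrated in a few lines: the higher the motion-to-photon round-trip time, the more peripheral area must be rendered as a safety margin, and the fewer pixels of the fixed video frame remain for the viewport. The minimal sketch below assumes a simple worst-case head-velocity bound; all names and constants are illustrative, not the paper's implementation.

```python
# A minimal sketch of the idea behind Dynamic Viewport-Adaptive Rendering
# (DVAR): render the user viewport plus a safety margin sized to the
# system round-trip time. The head-velocity bound and frame size are
# illustrative assumptions, not values from the paper.

FRAME_PIXELS = 2048           # fixed width (px) of the streamed video frame
VIEWPORT_FOV_DEG = 90.0       # horizontal field of view shown to the user
MAX_HEAD_SPEED_DEG_S = 120.0  # assumed worst-case head angular velocity

def rendered_fov(rtt_ms: float) -> float:
    """Viewport FOV plus the arc the head can sweep during one RTT."""
    margin = MAX_HEAD_SPEED_DEG_S * (rtt_ms / 1000.0)
    return min(360.0, VIEWPORT_FOV_DEG + 2.0 * margin)  # margin on both sides

def viewport_resolution(rtt_ms: float) -> float:
    """Pixels of the fixed frame that end up covering the user viewport."""
    return FRAME_PIXELS * VIEWPORT_FOV_DEG / rendered_fov(rtt_ms)

for rtt in (10, 25, 50):
    print(f"RTT {rtt:3d} ms -> render {rendered_fov(rtt):5.1f} deg, "
          f"viewport gets {viewport_resolution(rtt):6.1f} px")
```

    As the loop output shows, a lower RTT lets DVAR shrink the rendered area and concentrate the frame's fixed pixel budget on the viewport.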

    Compressed Animated Light Fields with Real-time View-dependent Reconstruction

    We propose an end-to-end solution for presenting movie-quality animated graphics to the user while still allowing the sense of presence afforded by free-viewpoint head motion. By transforming offline-rendered movie content into a novel immersive representation, we display the content in real time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) using a sparse set of cameras placed in the vicinity of the potential viewer locations. The cameras are placed via an optimization process so that the rendered data maximize coverage with minimal redundancy, depending on the complexity of the lighting environment. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored to high performance on GPUs for Virtual Reality applications. A view-dependent decompression algorithm decodes only the parts of the compressed video streams that are visible to the user. We detail a real-time rendering algorithm using multi-view ray casting, with a variant that can handle strong view-dependent effects such as mirror surfaces and glass. Compression rates of 150:1 and greater are demonstrated, with quantitative analysis of image reconstruction quality and performance.
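
    To give a feel for the view-dependent decoding step, the toy sketch below culls cubemap faces against the user's view cone, so that only potentially visible faces would be decompressed. The bounding-cone test and all constants are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

# Toy view-dependent decoding: keep only the cubemap faces whose bounding
# cone can overlap the user's view cone. Illustrative sketch, not the
# paper's implementation.

FACE_NORMALS = {
    "+X": np.array([ 1.0, 0.0, 0.0]), "-X": np.array([-1.0, 0.0, 0.0]),
    "+Y": np.array([ 0.0, 1.0, 0.0]), "-Y": np.array([ 0.0,-1.0, 0.0]),
    "+Z": np.array([ 0.0, 0.0, 1.0]), "-Z": np.array([ 0.0, 0.0,-1.0]),
}

# Half-angle of the cone that bounds one cube face, measured from the face
# center to its corners: arccos(1/sqrt(3)) ~= 54.7 degrees.
HALF_FACE = np.arccos(1.0 / np.sqrt(3.0))

def visible_faces(view_dir, fov_deg=110.0):
    """Return the names of faces whose cone overlaps the view cone."""
    v = np.asarray(view_dir, dtype=float)
    v /= np.linalg.norm(v)
    half_view = np.radians(fov_deg) / 2.0
    keep = []
    for name, n in FACE_NORMALS.items():
        angle = np.arccos(np.clip(v @ n, -1.0, 1.0))
        if angle <= half_view + HALF_FACE:  # cones overlap -> may be seen
            keep.append(name)
    return keep

print(visible_faces([0.0, 0.0, 1.0]))  # back face is culled, not decoded
```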

    Silicon Stoke 2023: Developing Film and TV Content in North Staffordshire

    A report commissioned by Stoke-on-Trent City Council to identify areas for growth and approaches for levelling up the screen industries in North Staffordshire.

    Reoriented Illustration: Towards the Networked Image

    In recent years, an increasing theoretical and contextual impetus has emerged within the discipline of illustration to define the practice by authorial approaches to the production and distribution of illustrated content. The priority of this investigation is to imagine a theoretical landscape or environment in which an ‘authorial turn’ within the discipline might emerge and anchor itself to strategies outside of persisting colloquial or industrial notions of illustration practice. Specifically, this paper aims to tie such thinking to existing practices and concepts relevant to the contemporary construction, distribution and exchange of networked images.

    The New King of the Entertainment Industry? The Unlikely Story of Video Games and the First Amendment


    Blickpunktabhängige Computergraphik (Gaze-Contingent Computer Graphics)

    Contemporary digital displays feature millions of pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradations. By measuring or estimating where we look, gaze-contingent algorithms aim to exploit the way we visually perceive in order to remedy visible artifacts. This dissertation presents a variety of novel gaze-contingent algorithms and corresponding perceptual studies. Chapters 4 and 5 present methods to boost the perceived visual quality of conventional video footage when viewed on commodity monitors or projectors. Chapter 6 describes a novel head-mounted display with real-time gaze tracking, which enables a large variety of applications in the context of Virtual Reality and Augmented Reality. Building on this gaze-tracking VR headset, Chapter 7 describes a novel gaze-contingent rendering method that analyzes and adapts shading quality per pixel in real time based on a perceptual model, greatly reducing the computational effort for shading virtual worlds. The described methods and studies show that gaze-contingent algorithms can improve the quality of displayed images and videos, or reduce the computational effort of image generation, while the display quality perceived by the user remains unchanged.
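
    As a rough illustration of the gaze-contingent rendering idea from Chapter 7, the sketch below derives a per-pixel shading level from the angular distance (eccentricity) to the tracked gaze point. The logarithmic falloff and all constants are illustrative assumptions, not the dissertation's perceptual model.

```python
import numpy as np

# Toy gaze-contingent shading: shading effort falls off with eccentricity
# from the gaze point, mirroring the acuity falloff of human vision.
# All constants below are illustrative assumptions.

def shading_level(ecc_deg, fovea_deg=5.0, max_level=4):
    """0 = full shading quality at the fovea; higher = coarser shading."""
    level = np.floor(np.log2(np.maximum(ecc_deg, fovea_deg) / fovea_deg))
    return np.clip(level, 0, max_level).astype(int)

# Eccentricity of every pixel relative to the gaze point, assuming a
# 1920x1080 image at roughly 30 pixels per degree.
gaze_px = np.array([960, 540])
ys, xs = np.mgrid[0:1080, 0:1920]
ecc = np.hypot(xs - gaze_px[0], ys - gaze_px[1]) / 30.0  # pixels -> degrees
levels = shading_level(ecc)
print(np.bincount(levels.ravel()))  # pixel count per shading level
```

    Most pixels land in the coarser levels, which is where the computational savings of gaze-contingent shading come from.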

    Towards Predictive Rendering in Virtual Reality

    The pursuit of predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery remains an unsolved problem for several reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. It first briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Existing techniques targeting these steps are then presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed, covering various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The proposed techniques enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, the thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
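
    One standard way to compress BTF data, in the spirit of the compression step described here, is a low-rank factorization of the texel-by-direction measurement matrix. The sketch below uses a truncated SVD on toy data; the dimensions, rank, and choice of factorization are illustrative assumptions rather than the thesis's actual scheme.

```python
import numpy as np

# Toy BTF compression via truncated SVD: arrange the measured reflectance
# as a (texels x view/light samples) matrix and keep only a few components.
# Random toy data; sizes and rank are illustrative assumptions.

rng = np.random.default_rng(0)
texels, directions = 4096, 356        # toy-sized BTF slice
btf = rng.random((texels, directions)).astype(np.float32)

rank = 16                             # retained components
u, s, vt = np.linalg.svd(btf, full_matrices=False)
u_k, s_k, vt_k = u[:, :rank], s[:rank], vt[:rank]

def shade(texel: int, direction: int) -> float:
    """Per-texel lookup at render time: a rank-sized dot product instead
    of touching the full measurement row."""
    return float((u_k[texel] * s_k) @ vt_k[:, direction])

stored = u_k.size + s_k.size + vt_k.size
print(f"compression ratio ~{btf.size / stored:.1f}:1")
```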