
    Graphics Insertions into Real Video for Market Research


    Negotiating Reality

    Our understanding of research through design is demonstrated by a close examination of the methods used in the project lifeClipper2. This design research project investigates the applicability of immersive outdoor Augmented Reality (AR). lifeClipper2 offers an audiovisual walking experience in a virtually extended public space and focuses on audiovisual perception as well as on the development of the appropriate technology. The project involves contributions from partners in different fields of research, which allows lifeClipper2 to test the potential of AR for visualizing architecture and archaeological information and to challenge our understanding of perception and interaction. Using examples from our research, the paper reflects on how scenario design contributes to the production of design knowledge and explores the possibilities and variations of AR. Finally, the paper outlines our approach to design research. The three tenets of our work are: the use of scenarios as a tool of interdisciplinary research, the experimental exploration of media, and the intention to make design knowledge explicit. Keywords: augmented reality; locative media; hybrid environment; immersion; perception; experience design; research through design; scenario design

    A Game Engine as a Generic Platform for Real-Time Previz-on-Set in Cinema Visual Effects

    We present a complete framework designed for film productions requiring live (pre-)visualization. The framework is based on a well-known game engine, Unity. Game engines possess many advantages that can be exploited directly in real-time previsualization, where real and virtual worlds have to be mixed. In the work presented here, all steps are performed in Unity, from acquisition to rendering. To perform real-time compositing that accounts for occlusions between real and virtual elements, and to manage physical interactions of real characters with virtual elements, we couple a low-resolution depth-map sensor to a high-resolution film camera. The goal of our system is to give the film director's creativity a flexible and powerful tool on stage, long before post-production.
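    The per-pixel occlusion handling described above (comparing sensor depth against rendered depth to decide whether the real or the virtual fragment is visible) can be sketched roughly as follows. This is an illustrative sketch in Python/NumPy, not the actual Unity pipeline; all names are hypothetical, and it assumes the low-resolution sensor depth has already been upsampled and registered to the film camera's viewpoint.

    ```python
    import numpy as np

    def composite(real_rgb, real_depth, virt_rgb, virt_depth):
        # Per-pixel occlusion test: keep whichever fragment is nearer
        # to the camera. Both depth maps are assumed to be metric and
        # expressed in the film camera's view space, so they can be
        # compared directly.
        virtual_in_front = virt_depth < real_depth
        # Broadcast the 2D mask over the RGB channels and select.
        return np.where(virtual_in_front[..., None], virt_rgb, real_rgb)
    ```

    A real pipeline would run this selection in a shader per frame and would also have to fill holes and denoise the sensor depth, but the core of depth-based compositing is just this nearer-fragment test.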

    Blood in the Corridor

    This article examines the significance of the digital aesthetic of violence in the uniquely contemporary action-film “hero run” shoot-out sequence. Using the case studies of Kick-Ass (2010) and Wanted (2008), the article focuses on how the particular stylistic tendencies of these sequences display a link between the onscreen, digitally enabled mastery of the shooter and the offscreen digital mastery of the visual-effects artist.

    Re-animating Climate Change: Abstract Temporalities in Augmented Reality

    This article explores how animation and augmented reality (AR) can compress and redistribute the moving image to convey the temporal scales at play in climate change. Animation inherently fosters experimentation with the expression and understanding of time. AR combines the temporal quality of animation with the physical environment, creating a hybrid space of moving image, technology, and physical objects that operate on different time scales. This presents an opportunity to engage imaginatively with aspects of climate change that science communication research has identified as problematic to comprehend, such as the immense timescale on which it occurs. My practice-based research explores techniques, including limited animation, AR image targets, and the layering of two-dimensional moving image in physical space, to demonstrate how these ideas can be implemented both in a gallery and in the natural environment.

    The Virtual Worlds of Cinema Visual Effects, Simulation, and the Aesthetics of Cinematic Immersion

    This thesis develops a phenomenology of immersive cinematic spectatorship. During an immersive experience in the cinema, the images, sounds, events, emotions, and characters that form a fictional diegesis become so compelling that our conscious experience of the real world is displaced by a virtual world. Theorists and audiences have long recognized cinema’s ability to momentarily substitute for the lived experience of reality, but it remains an under-theorized aspect of cinematic spectatorship. The first aim of this thesis is therefore to examine these immersive responses to cinema from three perspectives – the formal, the technological, and the neuroscientific – to describe the exact mechanisms through which a spectator’s immersion in a cinematic world is achieved. A second aim is to examine the historical development of the technologies of visual simulation that are used to create these immersive diegetic worlds. My analysis shows a consistent increase in the vividness and transparency of simulative technologies, two factors that are crucial determinants of a spectator’s immersion. In contrast to the cultural anxiety that often surrounds immersive responses to simulative technologies, I examine immersive spectatorship as an aesthetic phenomenon that is central to our engagement with cinema. The ubiquity of narrative – written, verbal, cinematic – shows that the ability to achieve immersion is a fundamental property of the human mind, found in cultures diverse in both time and place. This thesis is thus an attempt to illuminate this unique human ability and examine the technologies that allow it to flourish.

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing has opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering, the task of convincingly combining real and virtual parts into a new composite scene, needs a powerful rendering method to obtain a photorealistic result. The ray tracing algorithm provides an excellent basis for photorealistic rendering as well as advantages over other methods, so it is worth exploring its capabilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing to mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow the real world to be included in interactive applications in the form of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be combined with a custom-built real-time lightprobe device to capture the incident light and include it in the rendering process, achieving convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real time. Two methods, video billboards and live 3D visual hull reconstruction, are discussed.
The implementation of live mixed reality systems is based on a number of technologies besides rendering, and a comprehensive understanding of the related methods and hardware is necessary. Large parts of this thesis therefore deal with the discussion of technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
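    The differential rendering step mentioned in the abstract can be sketched in a few lines: a model of the local real scene is rendered twice under the captured lightprobe illumination, once with and once without the virtual object, and the difference (which carries the object plus the shadows and inter-reflections it causes) is added to the live camera frame. The following is a minimal illustrative sketch in Python/NumPy, not the thesis's OpenRT implementation; all function and parameter names are hypothetical, and images are assumed to be float arrays in [0, 1].

    ```python
    import numpy as np

    def differential_render(camera_img, local_with_virtual, local_without_virtual):
        # Difference of the two synthetic renders of the local scene:
        # nonzero only where the virtual object, its shadows, or its
        # inter-reflections change the image.
        delta = local_with_virtual.astype(np.float64) - \
                local_without_virtual.astype(np.float64)
        # Add the difference onto the live camera frame and clamp to
        # the displayable range.
        return np.clip(camera_img.astype(np.float64) + delta, 0.0, 1.0)
    ```

    The key property of this formulation is that modelling errors in the local scene cancel out in the subtraction, so only the virtual object's photometric effect reaches the final composite.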

