
    Application of augmented reality and robotic technology in broadcasting: A survey

    As an innovative technique, Augmented Reality (AR) has gradually been deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on surfaces of the real environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models in a broadcast scene in order to enhance the presentation. Recently, advanced robotic technologies have been deployed in camera shooting systems to create a robotic cameraman, further improving the performance of AR broadcasting; this development is highlighted in the paper.
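
    Such overlays ultimately rest on projecting virtual 3D geometry through the tracked broadcast camera. The sketch below shows the underlying pinhole projection only; the matrix values and the test point are illustrative assumptions, not taken from the survey.

        import numpy as np

        def project_point(K, R, t, X_world):
            # World -> camera coordinates, then perspective projection.
            X_cam = R @ X_world + t
            x = K @ X_cam
            return x[:2] / x[2]  # pixel coordinates (u, v)

        # Assumed 1080p camera: 1000 px focal length, principal point centered.
        K = np.array([[1000.0,    0.0, 960.0],
                      [   0.0, 1000.0, 540.0],
                      [   0.0,    0.0,   1.0]])
        R, t = np.eye(3), np.array([0.0, 0.0, 2.0])  # camera 2 m from origin
        print(project_point(K, R, t, np.array([0.1, -0.2, 0.0])))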

    Synchronized Illumination Modulation for Digital Video Compositing

    The exchange of information is one of humanity's basic needs. While wall paintings, handwriting, printing, and painting once served this purpose, people later began to create sequences of images that, as so-called flip books, convey the impression of an animation. These were soon automated by means of rotating picture discs on which an animation became visible with the help of slit shutters, mirrors, or optics, in devices known as phenakistiscopes, zoetropes, or praxinoscopes. With the invention of photography, scientists such as Eadweard Muybridge, Etienne-Jules Marey, and Ottomar Anschütz began in the second half of the 19th century to take serial photographs and play them back in rapid succession as film. With the beginning of film production came the first attempts to use this new technique to generate special visual effects and thereby further increase the immersion of moving-image productions. While such effects remained quite limited during the analog phase of film production, lasting into the 1980s, and had to be created laboriously with enormous manual effort, they gained ever greater importance with the rapidly accelerating development of semiconductor technology and the simplified digital processing it made possible. The enormous possibilities opened up by lossless post-processing in combination with photorealistic three-dimensional renderings have led to nearly all films produced today containing a wide variety of digital video compositing effects.

    Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-emerging need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video compositing techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied in a scene-recording context to enable a variety of effects that cannot be realized using standard methods such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are not visible to observers on the film set. Using this approach, we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside completely controlled studio environments such as virtual studios. A third, temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency.
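
    The temporal-keying principle can be illustrated with a generic sketch; this is not code from the thesis, and the frame pairing, array format, and threshold value are assumptions. Alternate frames are captured with the synchronized key illumination on and off, and pixels that change strongly between the two exposures are treated as keyed background.

        import numpy as np

        def temporal_key(frame_lit, frame_unlit, threshold=0.08):
            # Per-pixel magnitude of the change caused by the key light.
            # Frames are float arrays in [0, 1] with shape (H, W, 3).
            diff = np.linalg.norm(frame_lit - frame_unlit, axis=-1)
            # Pixels that barely change are foreground (matte = 1);
            # strongly changing pixels saw the flashed background.
            matte = (diff < threshold).astype(np.float32)
            foreground = frame_unlit * matte[..., None]
            return matte, foreground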

    Slipstream, a data rich production environment

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Architecture, 1990. Includes bibliographical references (leaves 93-94). By Alan Lasky.

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing has opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering, the realm of convincingly combining real and virtual parts into a new composite scene, needs a powerful rendering method to obtain a photorealistic result. The ray tracing algorithm provides an excellent basis for photorealistic rendering as well as advantages over other methods, so it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow the real world to be included in interactive applications in the form of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer.

    A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time light-probe device to capture the incident light and include it in the rendering process, achieving convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real time; two methods, video billboards and live 3D visual hull reconstruction, are discussed. The implementation of live mixed reality systems is based on a number of technologies besides rendering, and a comprehensive understanding of related methods and hardware is necessary. Large parts of this thesis hence deal with the discussion of technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
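
    Differential rendering, as used in the AR scenario above, is commonly formulated as adding to the real photograph the difference between renderings of the local scene with and without the virtual object. The following is a minimal sketch of that common formulation under assumed image inputs, not the thesis implementation.

        import numpy as np

        def differential_composite(photo, with_obj, without_obj, obj_mask):
            # Effect of the virtual object on the modeled local scene
            # (shadows, inter-reflections), added to the real photograph.
            comp = photo + (with_obj - without_obj)
            # Where the object itself covers a pixel, take the rendering.
            comp = np.where(obj_mask[..., None] > 0, with_obj, comp)
            return np.clip(comp, 0.0, 1.0)  # images are floats in [0, 1]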

    Towards achieving convincing live interaction in a mixed reality environment for television studios

    The virtual studio is a form of mixed reality environment for creating television programmes, in which the (real) actor appears to exist within an entirely virtual set. The work presented in this thesis evaluates the routes required to develop a virtual studio that extends current architectures by allowing realistic real-time interaction between the actor and the virtual set. The methodologies and framework presented here are intended to support future work in this domain. Heuristic investigation is offered as a framework to analyse and provide the requirements for developing interaction within a virtual studio: a group of experts participate in case-study scenarios to generate a list of requirements that guide future development of the technology, and the method can be applied cyclically to further refine systems post-development. This leads to development in three key areas. First, a feedback system is presented which tracks actor head motion within the studio and provides dynamic visual feedback relative to the actor's current gaze location. Second, a real-time actor/virtual-set occlusion system is developed that uses skeletal tracking data and depth information to change the relative ordering of virtual set elements dynamically (sketched below). Third, an interaction system is presented that facilitates real-time interaction between an actor and virtual set objects, supporting both single-handed and bimanual interactions. Evaluation of this system highlights some common errors in mixed reality interaction, notably those arising from inaccurate hand placement when actors perform bimanual interactions. A novel two-stage framework is presented that measures both the magnitude of the errors in actor hand placement and the perceived fidelity of the interaction for a third-person viewer. The first stage quantifies the actor's motion errors while completing a series of interaction tasks under varying controls; the second stage uses examples of these errors to measure a third-person viewer's perceptual tolerance of interaction errors in the end broadcast. The results of this two-stage evaluation lead to three methods for mitigating the actor errors, each evaluated for its ability to improve the visual fidelity of the interaction. It was discovered that adapting the size of the virtual object was effective in improving the quality of the interaction, whereas adapting the colour of any exposed background had no apparent effect. Finally, a set of guidelines based on these findings recommends solutions for allowing interaction within live virtual studio environments that can easily be adapted for other mixed reality systems.
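
    At the pixel level, the occlusion step described above reduces to a depth test between the keyed actor layer and the rendered virtual set. The sketch below assumes per-pixel metric depth is available for both layers; it illustrates the principle and is not the author's implementation.

        import numpy as np

        def occlusion_composite(actor_rgb, actor_depth, set_rgb, set_depth):
            # Whichever surface is nearer to the camera wins the pixel.
            # rgb arrays are (H, W, 3); depth arrays are (H, W) in metres.
            actor_in_front = (actor_depth < set_depth)[..., None]
            return np.where(actor_in_front, actor_rgb, set_rgb)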

    Video in the Abyss: In the context of the digital, is analogue video feedback still useful as an approach to making art?

    Video feedback systems have been largely discounted by media artists in favour of digital tools and code-based programming languages, which offer a more robust, data-driven approach to developing generative and interactive moving-image works. Furthermore, art historians have generally failed to document and reflect on the practice of video feedback. However, video feedback systems have many qualities that can enrich current digital culture, whilst digital tools provide an opportunity for artists to revisit analogue feedback from a fresh perspective. This thesis and its accompanying portfolio reappraise the use of feedback systems in media art, and explore their application in combination with digital tools such as projection mapping software. Through practice-based research, analysis of contemporary media art works, and interviews with artists and curators, this thesis identifies and analyses the key technological and experiential properties of video feedback installations from the perspectives of both artist and audience. The works produced proved to be extremely engaging for audiences. Comments from experts within the field suggest that key factors include the mesmerising elemental forms and textures of feedback, and the intuitive nature of the interface. One work (PORTALS) was also shortlisted for the Lumen Prize for Art and Technology. Video feedback works still present unique problems: they are difficult to calibrate, often unpredictable or even unrepeatable. However, this thesis concludes that there are significant benefits in revisiting this 50-year-old video-art technique from a contemporary digital perspective. Digital video tools offer new ways to generate, calibrate, and present video feedback in various contexts. Conversely, the incorporation of optical or analogue feedback into digital systems can offer a simple method of generating complex textures and chaotic behaviour without the need for programming skills, as well as providing an extremely intuitive interface for audience interaction via the video camera. The thesis ends by suggesting that more research needs to be done to examine how feedback installations can be made more robust and scalable across a range of contexts, from white-cube galleries to light festivals.
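
    The loop structure that produces feedback's characteristic forms can be imitated digitally: each frame is a slightly transformed, attenuated copy of the last, recombined with the live input. The sketch below only illustrates that structure (the rotation angle, gain, and seed pattern are arbitrary choices) and is no substitute for the analogue systems discussed in the thesis.

        import numpy as np
        from scipy.ndimage import rotate

        def feedback_step(prev, seed, gain=0.96, angle=2.0):
            # Camera and monitor are never perfectly aligned: rotate the
            # previous frame slightly, attenuate it, and mix the input back in.
            warped = rotate(prev, angle, reshape=False, order=1)
            return np.clip(gain * warped + seed, 0.0, 1.0)

        frame = np.zeros((256, 256))
        seed = np.zeros_like(frame)
        seed[120:136, 120:136] = 0.5   # a small bright input patch
        for _ in range(60):            # spiral structure emerges from the loop
            frame = feedback_step(frame, seed)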

    The Core Skills of VFX Repository


    Interdisciplinarity in the Age of the Triple Helix: a Film Practitioner's Perspective

    This integrative chapter contextualises my research, including articles I have published as well as one of the creative artefacts developed from it, the feature film The Knife That Killed Me. I review my work in light of the ways in which technology, industry methods, and academic practice have evolved, as well as how attitudes to interdisciplinarity have changed, linking these to Etzkowitz and Leydesdorff's 'Triple Helix' model (1995). I explore my own experiences and observations of the opportunities and challenges posed by the intersection of different stakeholder needs and expectations, from both industry and academic perspectives, and argue that my work provides novel examples of the applicability of the 'Triple Helix' to the creative industries. The chapter concludes with a reflection on the evolution and direction of my work, the relevance of the 'Triple Helix' to creative practice, and ways in which this relationship could be investigated further.

    Vector synthesis: a media archaeological investigation into sound-modulated light

    Vector Synthesis is a computational art project inspired by theories of media archaeology, by the history of computer and video art, and by the use of discarded and obsolete technologies such as the Cathode Ray Tube monitor. This text explores the military and techno-scientific legacies at the birth of modern computing, and charts attempts by artists of the subsequent two decades to decouple these tools from their destructive origins. Using this history as a basis, the author then describes a media archaeological, real-time performance system that uses audio synthesis and vector graphics display techniques to investigate direct, synesthetic relationships between sound and image. Key to this system, realized in the Pure Data programming environment, is a didactic, open-source approach which encourages reuse and modification by other artists within the experimental audiovisual arts community.
    Holzer, Derek
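
    The core of such a sound-to-image system is two audio-rate signals driving the horizontal and vertical deflection of a vector display. The project itself is realized as Pure Data patches; the sketch below only restates the idea outside that environment, with arbitrary frequencies, phase, and sample rate.

        import numpy as np

        sr = 48000                                   # audio sample rate
        t = np.arange(sr // 10) / sr                 # 100 ms of signal
        x = np.sin(2 * np.pi * 220 * t)              # left channel -> X deflection
        y = np.sin(2 * np.pi * 330 * t + np.pi / 4)  # right channel -> Y deflection
        points = np.stack([x, y], axis=1)            # point stream for an XY display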