2,140 research outputs found

    Exploring the Potential of 3D Visualization Techniques for Usage in Collaborative Design

    Get PDF
    Best practice for collaborative design demands good interaction between collaborators. The capacity to share common knowledge about the design models at hand is a basic requirement. With advancing technologies, gathering collective knowledge becomes more straightforward, as the dialog between experts can be better supported. This work explores the potential of 3D visualization techniques to become the right support tool for collaborative design, with special attention to their possible use for remote collaboration. The opportunities offered by current state-of-the-art visualization techniques, from stereoscopic vision to holographic displays, are researched, and a classification of the various systems is explored with respect to their tangible use for augmented reality. Appropriate interaction methods can then be selected based on the usage scenario.

    Cosmopolitics and Virtual Environments in Architectural Design Studio Teaching: Collaborative Computer-Aided Strategies and Social and Environmental Equity

    Get PDF
    The necessity to combine sustainable methods in architectural and urban design with democratization calls for a shift from technical to socio-technical perspectives within the field of architecture and urbanism, which in turn requires reshaping pedagogical agendas. At the center of the paper is the conviction that this endeavor of combining social and environmental equity in data-driven societies goes hand in hand with the intention of placing emphasis on critical thinking, self-reflection, social awareness, imagination, and activism in architectural education. To shed light on the role of cosmopolitan citizenship in reshaping architectural education, the paper examines how “cosmopolitics” as an ecology of practices can help us reinvent the relationship between individual and collective subjectivity in architectural education. The paper also examines two issues: firstly, the mutation of the status of the architectural artefact resulting from the fact that its form is generated through the use of digital tools; secondly, the implications of the possibility of real-time data visualisation for the reconceptualization of the notion of spatiality.

    Dance of the bulrushes: building conversations between social creatures

    Get PDF
    The interactive installation is in vogue. Interaction design and physical installations are accepted fixtures of modern life, and with these technology-driven installations beginning to exert influence on modes of mass communication and general expectations for user experiences, it seems appropriate to explore the variety of interactions that exist. This paper surveys a number of successful projects with a critical eye toward assessing the type of communication and/or conversation generated between interactive installations and human participants. Moreover, this exploration seeks to identify whether specific tactics and/or technologies are particularly suited to engendering layers of dialogue or ‘conversations’ within interactive physical computing installations. It is asserted that thoughtful designs incorporating self-organizational abilities can foster rich dialogues in which participants and the installation collaboratively generate value in the interaction. To test this hypothesis, an interactive installation was designed and deployed in locations in and around London. Details of the physical objects and employed technologies are discussed, and results of the installation sessions are shown to corroborate the key tenets of this argument, in addition to highlighting other concerns that are specifically relevant to the broad topic of interactive design.

    Chroma Key without Color Restrictions based on Asynchronous Amplitude Modulation of Background Illumination on Retroreflective Screens

    Full text link
    A simple technique to avoid color limitations in image capture systems based on chroma key video composition using retroreflective screens and light-emitting diode (LED) rings is proposed and demonstrated. The combination of an asynchronous temporal modulation of the background illumination and simple image processing removes the usual restrictions on foreground colors in the scene. The technique removes technical constraints in stage composition, allowing its design to be based purely on artistic grounds. Since it only requires adding a very simple electronic circuit to widely used chroma keying hardware based on retroreflective screens, the technique is easily applicable to TV and filming studios.
    Vidal Rodriguez, B.; Lafuente, J. A. (2016). Chroma Key without Color Restrictions based on Asynchronous Amplitude Modulation of Background Illumination on Retroreflective Screens. Journal of Electronic Imaging, 25(2):023009. doi:10.1117/1.JEI.25.2.023009
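    The core idea — keying on temporal brightness variation of the modulated background rather than on color — could be sketched roughly as follows. This is a minimal illustration under assumed capture conditions (two synchronized frames at different background LED levels, a fixed luminance threshold), not the authors' implementation; all function names and parameters here are hypothetical:

    ```python
    import numpy as np

    def temporal_matte(frame_bright, frame_dim, threshold=30):
        """Estimate a background matte from two frames captured while the
        retroreflective screen's LED illumination is amplitude-modulated.
        Pixels whose luminance changes strongly between the two frames are
        assumed to lie on the modulated background; foreground colors are
        irrelevant, so no chroma restrictions apply."""
        diff = np.abs(frame_bright.astype(np.int16) - frame_dim.astype(np.int16))
        luma_diff = diff.mean(axis=-1)  # average absolute change over channels
        return luma_diff > threshold    # boolean matte: True = background

    def composite(foreground, new_background, matte):
        """Replace matte (background) pixels with the new background plate."""
        out = foreground.copy()
        out[matte] = new_background[matte]
        return out
    ```

    In practice the modulation and camera would run at a multiple of the output frame rate so the foreground illumination stays effectively constant between the two captures.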

    Coded Projection and Illumination for Television Studios

    Get PDF
    We propose the application of temporally and spatially coded projection and illumination in modern television studios. In our vision, this supports ad-hoc re-illumination, automatic keying, unconstrained presentation of moderation information, camera tracking, and scene acquisition. In this paper we show how a new adaptive imperceptible pattern projection that takes parameters of human visual perception into account, linked with real-time difference keying, enables in-shot optical tracking using a novel dynamic multi-resolution marker technique.
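    The principle behind imperceptible pattern projection can be sketched simply: a pattern and its complement are projected in alternating sub-frames, so human observers temporally integrate them to the plain image while a synchronized camera differences them to recover the pattern. The sketch below is an assumed minimal form (binary pattern, fixed embedding amplitude), not the paper's adaptive perception-aware method:

    ```python
    import numpy as np

    def embed_pattern(base_image, pattern, amplitude=8):
        """Split one projector frame into two sub-frames: base + pattern
        and base - pattern. Shown fast enough, observers integrate them
        back to the base image, while a synchronized camera can see each
        sub-frame separately."""
        p = amplitude * (pattern.astype(np.int16) * 2 - 1)  # {0,1} -> {-a,+a}
        f1 = np.clip(base_image.astype(np.int16) + p, 0, 255).astype(np.uint8)
        f2 = np.clip(base_image.astype(np.int16) - p, 0, 255).astype(np.uint8)
        return f1, f2

    def recover_pattern(sub_frame1, sub_frame2):
        """Difference keying: the sign of the sub-frame difference
        reveals the embedded binary pattern."""
        diff = sub_frame1.astype(np.int16) - sub_frame2.astype(np.int16)
        return (diff > 0).astype(np.uint8)
    ```

    The real system adapts the embedding amplitude per region to stay below the visibility threshold of human perception, which this fixed-amplitude sketch omits.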

    Synchronized Illumination Modulation for Digital Video Compositing

    Get PDF
    The exchange of information is one of humanity's basic needs. While wall paintings, handwriting, letterpress printing, and painting once served this purpose, people later began to create image sequences that, as so-called flip books, conveyed the impression of animation. These were soon automated using rotating image discs on which an animation became visible with the help of slit apertures, mirrors, or optics: the so-called phenakistiscopes, zoetropes, or praxinoscopes. With the invention of photography in the second half of the 19th century, the first scientists such as Eadweard Muybridge, Etienne-Jules Marey, and Ottomar Anschütz began to create serial photographs and play them back in rapid succession as film. With the beginning of film production, the first attempts were also made to use this new technology to generate special visual effects and thereby further increase the immersion of moving-image productions. While these effects had to be produced in a rather limited and laborious fashion, with enormous manual effort, during the analog phase of film production up until the 1980s, they gained ever greater importance with the rapidly accelerating development of semiconductor technology and the simplified digital processing it enabled. The enormous possibilities arising from lossless post-processing in combination with photorealistic, three-dimensional renderings have led to nearly all films produced today containing a variety of digital video compositing effects. ...
    Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video composition techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied to a scene recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. A third temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
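    A flash keying pipeline of the kind described might, in its simplest form, demultiplex a double-rate capture into normally lit "beauty" frames and flash-lit "key" frames, then derive a color-independent matte from their difference. This is a hedged sketch under assumed conditions (even/odd frame multiplexing, a flash that brightens only the background, a fixed threshold), not the thesis's actual system:

    ```python
    import numpy as np

    def split_streams(frames):
        """Demultiplex a double-rate capture: assume even-indexed frames
        are beauty frames (normal studio lighting) and odd-indexed frames
        are key frames (synchronized flash lighting the background)."""
        return frames[0::2], frames[1::2]

    def flash_matte(beauty, key, threshold=40):
        """Background pixels brighten strongly in the flash-lit key frame;
        thresholding the mean channel difference yields a matte that does
        not depend on foreground color, avoiding spill and color limits."""
        diff = key.astype(np.int16) - beauty.astype(np.int16)
        return diff.mean(axis=-1) > threshold
    ```

    Only the beauty frames would reach the final composite, so the keying flash never appears on screen — which is the sense in which the technical components "do not influence the final composite".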

    Interactive mixed reality rendering in a distributed ray tracing framework

    Get PDF
    The recent availability of interactive ray tracing opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering — the realm of convincingly combining real and virtual parts into a new composite scene — needs a powerful rendering method to obtain a photorealistic result. The ray tracing algorithm thus provides an excellent basis for photorealistic rendering, as well as advantages over other methods, and it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow for inclusion of the real world into interactive applications in terms of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time lightprobe device to capture the incident light and include it in the rendering process to achieve convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real time. Two methods — video billboards and a live 3D visual hull reconstruction — are discussed.
    The implementation of live mixed reality systems is based on a number of technologies besides rendering, and a comprehensive understanding of the related methods and hardware is necessary. Large parts of this thesis hence deal with the discussion of technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
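    The differential rendering step mentioned in the abstract follows the classic scheme: render the local scene twice, with and without the virtual object, and add the difference (shadows, reflected light) onto the camera image. A minimal image-space sketch, with assumed inputs and a hypothetical function name:

    ```python
    import numpy as np

    def differential_composite(camera, render_full, render_background, obj_mask):
        """Differential rendering composite: inside the virtual object's
        mask, use the full rendering directly; elsewhere add the difference
        between the scene rendered with and without the object (capturing
        cast shadows and indirect light changes) on top of the live
        camera image."""
        cam = camera.astype(np.float32)
        delta = render_full.astype(np.float32) - render_background.astype(np.float32)
        out = np.where(obj_mask[..., None],          # broadcast mask over channels
                       render_full.astype(np.float32),
                       cam + delta)
        return np.clip(out, 0, 255).astype(np.uint8)
    ```

    In the thesis's setting, the incident illumination for both renderings would come from the real-time lightprobe device, so the virtual object's shading and shadows match the live lighting.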
