
    Out of sight, out of mind

    This panel will present the outcomes of a two-week residency in September 2018 by a research team from the University of Brighton School of Art and the University for the Creative Arts on the Mar Menor, a 170 km² saltwater lagoon on the south-east coast of Spain. The team were invited to undertake practice-based research on the changing ecosystem of this unique natural landscape, resulting from damage caused by intensive agriculture, increased tourism and rising sea levels. The project and panel have been developed by a team of three artists, each bringing specific experience and knowledge of 360° video to the research and to creating a unique understanding and manifestation of the changing ecosystem of the Mar Menor: Paul Sermon, who is currently working on collocated telematic experiences in 360° live video environments; Charlotte Gould, whose work develops immersive 360° animated augmented reality; and Jeremiah Ambrose, who is working on gaze-controlled navigation through 360° video narratives. The overarching aim of this project is to create a unique interactive 360° video experience of the Mar Menor that manifests the anthropocene effects on this natural landscape as augmented, surreal and metaphysical interpretations of the artists' experiences during the residency. Through environmental, social, economic and cultural observations and encounters, the team are creating an immersive 360° installation environment that incorporates both video and audio recordings with augmented imaginary and predicted realities, transformed from scientific data into obscure and profound guises.
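    Of the techniques mentioned, gaze-controlled navigation is the most readily sketched: a hotspot in the 360° sphere is selected when the viewer's gaze direction stays within an angular threshold of it for a dwell period. The following minimal sketch shows only that generic dwell-selection pattern; the names, thresholds and logic are hypothetical and are not the artists' implementation.

```python
import numpy as np

DWELL_SECONDS = 1.5          # hypothetical dwell time before a hotspot triggers
ANGLE_THRESHOLD_DEG = 10.0   # hypothetical angular radius of a gaze hotspot

def gaze_hits(gaze_dir, hotspot_dir, threshold_deg=ANGLE_THRESHOLD_DEG):
    """Return True if the gaze ray falls within the hotspot's angular radius."""
    cos_angle = np.dot(gaze_dir, hotspot_dir) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(hotspot_dir))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= threshold_deg

class DwellSelector:
    """Accumulates gaze time on a hotspot and fires a scene transition."""
    def __init__(self):
        self.dwell = 0.0

    def update(self, gaze_dir, hotspot_dir, dt):
        if gaze_hits(gaze_dir, hotspot_dir):
            self.dwell += dt
            if self.dwell >= DWELL_SECONDS:
                self.dwell = 0.0
                return True   # navigate to the linked 360° video clip
        else:
            self.dwell = 0.0  # gaze left the hotspot; reset the timer
        return False
```

    In practice the same per-frame update would run inside whichever 360° video player hosts the narrative.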

    Space shuttle visual simulation system design study

    The current and near-future state of the art in visual simulation equipment technology is related to the requirements of the space shuttle visual system. Image source, image sensing, and displays are analyzed on a subsystem basis, and the principal conclusions are used in the formulation of a recommended baseline visual system. Perceptibility and visibility are also analyzed.

    Video Art: Cultural Transformations

    In the 1960s, there were efforts to move broadcast television in the direction of experimental video art by altering television's conventional format. Fred Barzyk, in his role as a producer and director at WGBH-TV in Boston, was uniquely positioned to act as a link between television and experimental video artists who normally would not have had access to the technology available at a major broadcast facility. As the leading innovator in the beginnings of video art, the Korean American Nam June Paik (1932-2006) deserves special mention. His work bridges the worlds of art, video technology, and television. The video works of Nam June Paik, Amy Greenfield, Peter Campus, Feng Meng Bo, Elizabeth Sussman and other video artists are considered in this essay as key contributions to the development of video art. The selection is based on my experience with the artists cited. Despite video art's growing popularity among contemporary artists in the 1970s and beyond, museums were slow to acknowledge this development. One of the problems was deciding where, among the existing museum collections, to locate video art. In its fifty-some years of history, video art has enjoyed remarkable success in its artistic innovations while undergoing changes in format at virtually the speed of the rapid advances in electronic visual technology. Ironically, the legacy of creative television set in motion by Barzyk and his generation has been largely co-opted by the television broadcasting industry, which mainly serves as a platform for mass media advertising.

    The gesturing screen : art and screen agency within postmedia assemblages

    This thesis investigates screens as key elements in postmedia assemblages, where multiple technical devices and media platforms function relationally to activate new capacities. I characterize the agency of screens as a gesturality that re-arranges and sustains medial relations with and between other components. The gestures of screens reformulate mediality, but also shift experiences and open elements to novel formations and affects. In each re-organisation of the postmedial assemblage, the function and relations of screens are not pre-defined but emerge in process. I develop this dynamic conception of screens and assemblages to account for their diverse manifestations in postmedia. Here I employ an agential realist framework found in the work of Karen Barad and draw on the concept of gestures set out by Giorgio Agamben. This research contributes to a new understanding of postmediality in conjunction with a new conception of the agency of screens. The thesis focuses mainly on digital screen-oriented artworks, repositioning these as heralding or firmly engaging with the postmedial condition. By challenging an understanding of screens that limits them to mere casings for images, this thesis expands the scope and role of screens in postmedia art practices stretching as far back as three decades. It argues that such artworks and practices foreground the gesturality of screens, and it offers in-depth studies of works by Shilpa Gupta, Ulrike Gabriel, Natalie Bookchin, Blast Theory, Ragnar Kjartansson, Sandra Mujinga and Sondra Perry. Such works highlight how screens come to be relationally enacted in postmedia and how that enactment occurs through their performance of medial gestures. I identify two kinds of gestures of postmedia screens that support and connect the technical, aesthetic and in some cases political components of an assemblage. I turn to the multiplicity of frames, both on-screen and distributed across screens, observed by theorists of media such as Lev Manovich and Anne Friedberg. But the postmedial frame is consistently accompanied by what is out-of-frame: scrolling, swiping and 'pinching' continually call on the out-of-frame to be moved on-screen. The out-of-frame is a postmedial screen gesture, then, that maintains an ongoing relation to the 'inside' of the frame, supporting and conditioning it. The multiple temporalities of postmedia assemblages, such as those of images, participants and software, allow an 'out-of-frame' to endure beyond the framed image. The second gesture is observed in the pervasiveness of chroma screens: the blue and green screens used for compositing other images in postproduction. I suggest that this now ubiquitous technique suspends images from screens. Through a relational analysis of colour, and focusing on Perry's work, I draw upon ways in which the blankness of chroma screens can be made to gesture a different enactment of race: 'blackness' as productive difference. Such gesture in postmedia entails the circulation and transference of social and cultural setting. These two gestures of screens highlight multiple dimensions of the relations of postmedial screens beyond that of framed images, offering us ways to be attentive to enactments of screens as they continue to gather relevance in our expanded visual setting. As screens multiply, this research suggests alternate ways of conceiving the aesthetics and experience of screens' persistent medial configurations.

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing has opened the way for new applications and for improving the quality of existing ones. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering, the art of convincingly combining real and virtual parts into a new composite scene, needs a powerful rendering method to obtain photorealistic results. The ray tracing algorithm provides an excellent basis for photorealistic rendering as well as advantages over other methods, so it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow the real world to be included in interactive applications in the form of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time lightprobe device to capture the incident light and include it in the rendering process to achieve convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real time. Two methods are discussed: video billboards and a live 3D visual hull reconstruction. The implementation of live mixed reality systems rests on a number of technologies besides rendering, and a comprehensive understanding of the related methods and hardware is necessary. Large parts of this thesis hence deal with the discussion of technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
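    The differential rendering method referred to above has a compact standard form: the local scene is rendered twice under the captured incident light, once with and once without the virtual object, and the difference is added to the live camera frame, so the virtual object contributes exactly its change in shading and shadows. Below is a minimal numpy sketch of that general (Debevec-style) formulation; it is illustrative only, not the OpenRT implementation, and all names are hypothetical.

```python
import numpy as np

def differential_composite(camera_frame, rendered_with_obj,
                           rendered_without_obj, obj_mask):
    """Differential rendering composite.

    camera_frame:         live video frame (H x W x 3, floats in [0, 1])
    rendered_with_obj:    local scene + virtual object, lit by the captured lightprobe
    rendered_without_obj: local scene alone, lit the same way
    obj_mask:             H x W array, 1 where the virtual object covers the pixel
    """
    # Outside the object, add only the lighting change (e.g. cast shadows)
    # onto the real frame; inside it, take the rendered object directly.
    delta = rendered_with_obj - rendered_without_obj
    composite = camera_frame + delta
    composite = np.where(obj_mask[..., None] > 0, rendered_with_obj, composite)
    return np.clip(composite, 0.0, 1.0)
```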


    Trouble Every Day

    My interests lie in the intersection of the public and private, the corporate and personal, especially with regard to self-representation within cultural power structures. Utilizing video and web technologies, performance, and painting, I create imagined realms of fantasy, desire, obsession, and anxiety. Operating within, but not bound by, feminist discourse, my work explores the vehicles and effects by which both analog and digital technologies influence the relationship between the self and the object of desire (whether physical or virtual, interior or exterior to the body) and have produced both progressive and regressive offspring. By performing the roles of both producer of cultural archetypes and compulsive consumer of signs, my characters embody the representation(s) of their source but, through action and voice, invent a mutant surrogate who dictates its own agency.

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, displayed interactively on a mobile capture and rendering platform. This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework that is the main contribution of this dissertation comprises: 1) a novel programmable camera architecture that provides programmability of low-level features together with a visual programming interface; 2) new algorithms that analyze and decompose the scene photometrically; and 3) a previs interface that leverages the previous two to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene containing multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. We found that, since our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can show higher relighting error, and additional steps are required to mitigate this limitation. Also, scenes that contain lights whose colors are too similar can lead to degenerate cases in terms of relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged as a solution for performing multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and show that ours is superior when at least one of the light colors is known a priori.
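    The exact Symmetric lighting formulation is not reproduced in this abstract, but the relighting claim rests on a property worth making explicit: under the Lambertian assumption, an image is a linear sum of per-light contributions, so once those contributions are separated, relighting reduces to reweighting them with new light colors. The sketch below shows only that generic linear step; variable names are illustrative, and the per-light separation, which is the dissertation's actual contribution, is assumed as given input.

```python
import numpy as np

def relight(per_light_images, new_light_colors):
    """Relight a Lambertian scene from separated per-light contributions.

    per_light_images: list of K arrays (H x W x 3), the scene as lit by each
                      light alone under a white (1, 1, 1) illuminant
    new_light_colors: list of K RGB triples giving the desired light colors
    """
    out = np.zeros_like(per_light_images[0])
    for basis, color in zip(per_light_images, new_light_colors):
        # Lambertian shading is linear in the illuminant, so each light's
        # contribution is simply scaled by its new color and summed.
        out += basis * np.asarray(color)[None, None, :]
    return np.clip(out, 0.0, 1.0)
```

    This linearity also makes plain why lights with very similar colors lead to the degenerate relighting cases noted above: their per-light contributions can no longer be separated reliably.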