VR/Urban: spread.gun - design process and challenges in developing a shared encounter for media façades
Designing novel interaction concepts for urban environments is not only a technical challenge in terms of scale, safety, portability and deployment, but also a challenge of designing for social configurations and spatial settings. To outline what it takes to create a consistent and interactive experience in urban space, we describe the concept and multidisciplinary design process of VR/Urban's media intervention tool called Spread.gun, which was created for the Media Façade Festival 2008 in Berlin. The main design aims were the anticipation of urban space, situational system configuration and embodied interaction. This case study also reflects on the specific technical, organizational and infrastructural challenges encountered when developing media façade installations.
Sight, sound, the chicken and the egg: Audio-visual co-dependency in music
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Amongst the modern-day abundance of audio-visual media, where sounds represent everything from the swooping of virtual cameras through 3D spaces to the pressing of buttons and receiving of emails, and conversely where VJs routinely accompany live musical performance with an increasingly sophisticated language of abstract computer animation, the notion of music as a necessarily exclusively aural medium seems somewhat out of place. Psychological theories relating to the cognition of sound, in particular physical schemata accounting for the ubiquity of vertical-plane pitch metaphors in most musical cultures, provide evidence of a deep-rooted, spatially informed understanding of sound, thus providing common ground for both sound and vision in music. Furthermore, Western Classical composition is rife with examples of visually conceived forms, from Bach's Crab Canon (1747) to Xenakis' architecturally inspired Metastasis (1954). However, in practice the gap between the listener's auditory experience and the composer's visual concept is often insurmountable. Rising to Schaeffer's call for "Primacy to the ear!" (Schaeffer, 1967, pp. 28-30), acousmatic composers have sought to derive music exclusively from experientially verifiable criteria. However, as this approach has pervaded other musical genres, no doubt aided by the technologically and commercially driven domination of the pre-recorded over the live listening experience in the latter half of the twentieth century, it has led to the neglect of visual aspects in the live performance of much art-music. This research aims to begin to redress this balance through the composition of largely computer-realised audio-visual works whose conception arises not from a superimposition of one medium upon another, but through the very relations between the media themselves.
Utilising modern computers' ability to synchronise physical and virtual visual events with synthesised sound in real time not only affords composers an invaluable tool for enhancing listeners' perception of formal structures, but also implies causal relationships between the sonic and the visual which can provide a base of intuitive understanding on which more complex formal ideas can be built. Sponsored by the Brunel University Isambard Scholarship.
Gestural patterns: a new method of printed textile design using motion capture technology
The aim of this research is to develop a new method, the Hybrid Printing System (HPS), to explore digital craft methods for creating surface patterns in printed textile design. This novel method of creating 'handcrafted' prints results from the integration of two technologies: motion capture (MOCAP) and digital textile printing (DTP). The research towards such an innovation required a current, historical, contextual and experimental study of the use of motion capture in Art & Design. The research contextualises the hand and its relationship to digital crafting methods in printed textile design, the digital medium, and the process of audience participation in printed textile design, to create a new conceptual framework balanced between practice and theory. The practical research then develops three new methods of motion capture: motion tracing, motion sensing and motion tracking, which generate gestural motifs and gestural patterns. This thesis and the accompanying set of experimental work demonstrate that HPS culminates in new aesthetics through a new mode of creation in a new medium, which will impact the user, the designer and the product as part of a cyclical process. HPS is an advancement of printed textile design, centred on the active participation of its audience at the generative stage of design. This results in a shifting role for the designer and subverts the current model of printed textile design practice. HPS is a democratic design process in which the participants design for themselves and have their own voice, which induces a sense of community, togetherness and harmony in the creative process.
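The step from captured gesture to repeating print can be pictured with a small sketch. This is not the thesis's HPS pipeline; the function and the grid-repeat layout are illustrative assumptions about how a traced motion path might be tiled into a pattern motif.

```python
# Hypothetical sketch: tiling a captured gesture path into a simple
# grid repeat, as a gestural print motif might be laid out.
# Not the HPS implementation described in the thesis.

def tile_gesture(points, cols, rows, cell_w, cell_h):
    """Repeat a gesture (a list of (x, y) points) over a cols x rows grid.

    Returns one copy of the gesture per grid cell, each translated by
    the cell's offset, i.e. a basic block-repeat pattern.
    """
    pattern = []
    for r in range(rows):
        for c in range(cols):
            offset_x, offset_y = c * cell_w, r * cell_h
            pattern.append([(x + offset_x, y + offset_y) for x, y in points])
    return pattern

# A tiny gesture, e.g. three points traced from MOCAP data
gesture = [(0, 0), (1, 2), (2, 1)]
repeat = tile_gesture(gesture, cols=2, rows=2, cell_w=10, cell_h=10)
# repeat now holds 4 translated copies of the motif, one per cell
```

A real system would of course smooth, scale and stylise the captured path, and might use mirror or half-drop repeats rather than a plain block repeat.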
Montage As A Participatory System: Interactions with the Moving Image
Full version unavailable due to 3rd party copyright restrictions. Recent developments in network culture suggest a weakening of hierarchical narratives of power and representation. Online technologies of distributed authorship appear to nurture a complex, speculative, contradictory and contingent realism. Yet there is a continuing deficit where the moving image is concerned, its very form appearing resistant to the dynamic throughputs and change models of real-time interaction. If the task is not to suspend but to encourage disbelief as a condition in the user, how can this be approached as a design problem? In the attempt to build a series of design projects suggesting open architectures for the moving image, might a variety of (pre-digital) precursors from the worlds of art, architecture and film offer the designer models for inspiration or adaptation? A series of projects has been undertaken. Each investigates the composite moving image, specifically in the context of real-time computation and interaction. This arose from a desire to interrogate the qualia of the moving image within interactive systems, relative to a range of behaviours and/or observer positions, which attempt to situate users as conscious compositors. This is explored in the thesis by reflecting on a series of experimental interfaces designed for real-time composition in performance, exhibition and online contexts.
Interactive Fiction in Cinematic Virtual Reality: Epistemology, Creation and Evaluation
This dissertation presents Interactive Fiction in Cinematic Virtual Reality (IFcVR), an interactive digital narrative (IDN) that brings together cinematic virtual reality (cVR) and the creation of virtual environments through 360° video within an interactive fiction (IF) structure. This work is structured in three components: an epistemological approach to this kind of narrative and media hybrid; the creation process of IFcVR, from development to postproduction; and user evaluation of IFcVR. In order to set the foundations for the creation of interactive VR fiction films, I dissect the IFcVR by investigating the aesthetic, narratological and interactive notions that converge and diverge in it, proposing a medium-conscious narratology for this kind of artefact. This analysis led to the production of a functional IFcVR prototype: "ZENA", the first interactive VR film shot in Genoa. ZENA's creation process is reported, proposing guidelines for interactive and immersive film-makers. In order to evaluate the effectiveness of IFcVR as an entertaining narrative form and a vehicle for diverse types of messages, this study also proposes a methodology to measure User Experience (UX) in IFcVR. The full evaluation protocol gathers both qualitative and quantitative data through ad hoc instruments. The proposed protocol is illustrated through its pilot application on ZENA. Findings show interactors' positive acceptance of IFcVR as an entertaining experience.
Mosaic narrative: a poetics of cinematic new media narrative
This thesis proposes the Poetics of Mosaic Narrative as a tool for theorising the creation and telling of cinematic stories in a digital environment. As such, the Poetics of Mosaic Narrative is designed to assist creators of new media narrative to design dramatically compelling screen-based stories by drawing on established theories of cinema and emerging theories of new media. In doing so it validates the crucial element of cinematic storytelling in the digital medium, which, due to its fragmentary, variable and re-combinatory nature, affords the opportunity for audience interaction.
The Poetics of Mosaic Narrative re-asserts the dramatic and cinematic nature of narrative in new media by drawing upon the dramatic theory of Aristotle's Poetics, the cinematic theories of the 1920s Russian Film Theorists and contemporary Neo-Formalists, the narrative theories of the 1960s French Structuralists, and the scriptwriting theories of contemporary cinema. In particular it focuses on the theory and practice of the prominent new media theorist Lev Manovich as a means of investigating and creating a practical poetics.
The key element of the Poetics of Mosaic Narrative is the expansion of the previously forgotten and undeveloped Russian Formalist concept of cinematurgy, which is vital to the successful development of new media storytelling theory and practice. This concept, as originally proposed but not elaborated by Kazansky, encompasses the notion of the creation of cinematic new media narrative as a mosaic, integrally driven by the narrative systems of plot as well as the cinematic systems of visual style created by the techniques of cinema: montage, cinematography and mise-en-scène.
Storytelling and mobile media: narratives for the mobile phone
The mobile phone epitomises the ability of media convergence to promote a synthesis of multiple digital technologies within the body of one portable device. In order to develop a methodology for the design and production of mobile narratives, it is necessary to examine and identify factors that may influence the creative possibilities for artists working in mobile media. The mobile phone is a ubiquitous portable device capable of generating and displaying narrative content in the form of voice communications, text, images and video. It could be said that hybrid devices such as the mobile phone can create hybrid narratives. It is the aim of this exegesis to outline the theories, concepts and artistic practices that inform the design, production and display of narrative content that utilises the potential of the mobile phone as a tool for storytelling. Over the course of this exegesis I will examine examples of media projects that exploit the creative potential of portable networked media devices. I will also look to contemporary narrative theory, in particular Mieke Bal's theories on reframing and narrative gaps, as a reference point for the design of my mobile phone narratives. A critical reflection on each of the narrative experiments that accompany this exegesis will outline the key concepts and creative strategies employed in their planning and production. This research contributes to the existing body of research in the area of developing narratives for mobile media devices and may act as a guide for future research.
Applications of CSP solving in computer games (camera control)
While camera control systems in commercial 3D games have improved greatly in recent years, they are not as fully developed as other game components such as graphics and physics engines. Bourne and Sattar (2006) proposed a reactive, constraint-based third-person-perspective camera control system. We have extended the capability of their system to handle occlusion while following the main character, and have used camera cuts to find appropriate camera positions in a few difficult situations. We have developed a reactive, constraint-based third-person chase camera control system to follow a character in a 3D environment. The camera follows the character from (near-)optimal positions defined by a camera profile. The desired values of the height and distance constraints of the camera profile are changed appropriately whenever the character enters a semi-enclosed or an enclosed area, and the desired value of the orientation constraint is changed incrementally whenever the optimal camera view is obstructed. Camera cuts are used whenever the main character backs up against a wall or other obstruction, or comes out of a semi-enclosed or an enclosed area. Two auxiliary cameras observing the main camera's position from top and side views have been added. The chase camera control system achieved real-time performance while following the main character in a typical 3D environment, and maintained an optimal view based on a user-specified camera profile.
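The profile-driven behaviour described above can be sketched in a few lines. This is a minimal illustration, not the authors' or Bourne and Sattar's implementation: the class, the constraint names (height, distance, orientation) and the convergence gain are assumptions used to show how a reactive camera might satisfy a profile and cut when occluded.

```python
import math

# Hypothetical sketch of a reactive, profile-driven chase camera in the
# spirit of the system described above. Names and the smoothing gain are
# illustrative assumptions, not the published implementation.

class CameraProfile:
    """Desired-value constraints the camera tries to satisfy."""
    def __init__(self, height, distance, orientation):
        self.height = height            # desired height above the character
        self.distance = distance        # desired horizontal follow distance
        self.orientation = orientation  # desired yaw offset behind the character (radians)

def desired_position(char_pos, char_yaw, profile):
    """Camera position that satisfies the profile's constraints exactly."""
    yaw = char_yaw + profile.orientation
    x = char_pos[0] - profile.distance * math.cos(yaw)
    y = char_pos[1] - profile.distance * math.sin(yaw)
    z = char_pos[2] + profile.height
    return (x, y, z)

def update_camera(cam_pos, char_pos, char_yaw, profile, occluded, gain=0.2):
    """One reactive update step; a camera cut is modelled as a jump."""
    target = desired_position(char_pos, char_yaw, profile)
    if occluded:
        # Camera cut: jump straight to a valid position rather than
        # sliding the camera through intervening geometry.
        return target
    # Otherwise converge smoothly toward the constrained position.
    return tuple(c + gain * (t - c) for c, t in zip(cam_pos, target))
```

A full system would also relax the height and distance constraints in enclosed areas and search over candidate cut positions for an unoccluded view; here occlusion is simply a boolean input.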