
    Mixed Reality Architecture: a dynamic architectural topology

    Architecture can be shown to structure patterns of co-presence and, in turn, to be structured itself by the rules and norms of the society present within it. This two-way relationship exists in a surprisingly stable framework, as fundamental changes to buildings are slow and costly. At the same time, change within organisations is increasingly rapid, and buildings are used to accommodate some of that change. This adaptation can be supported by the use of telecommunication technologies, overcoming the need for co-presence during social interaction. However, this often results in a loss of accountability or ‘civic legibility’, as the link between physical location and social activity is broken. In response to these considerations, Mixed Reality Architecture (MRA) was developed. MRA links multiple physical spaces across a shared 3D virtual world. We report on the design of MRA, including the key concept of the Mixed Reality Architectural Cell, a novel architectural interface between physically remote spaces. An in-depth study lasting one year and involving six office-based MRA Cells used video recordings, analysis of event logs, diaries and an interview survey. This produced a series of ethnographic vignettes describing social interaction within MRA in detail. In this paper we concentrate on the topological properties of MRA. It can be shown that the dynamic topology of MRA and the social interaction taking place within it are fundamentally intertwined. We discuss how topological adjacencies across virtual space change the integration of the architectural spaces that MRA is installed in. We further reflect on how the placement of MRA technology in different parts of an office space (deep or shallow) impacts on the nature of that particular space. Both of the above can be shown to influence movement through the building and the social interaction taking place within it. These findings are directly relevant not only to new buildings that must be designed to accommodate future organisational change but also to existing building stock that might be very hard to adapt. We are currently expanding the system to new sites and are planning changes to the infrastructure of MRA as well as its interactional interface.
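    As a minimal sketch of the topological idea, the spaces of two offices can be modelled as a graph whose integration changes when an MRA Cell opens a virtual adjacency. Everything below (the room names, and mean shortest-path depth standing in for the space-syntax integration measure) is an illustrative assumption, not the paper's implementation.

```python
# Illustrative model: rooms are graph nodes, doors are edges, and an open
# MRA Cell link adds a temporary edge between two remote offices. Mean
# shortest-path depth is a crude stand-in for space-syntax integration
# (lower mean depth = more integrated).
from collections import deque

def mean_depth(adjacency, start):
    """Mean graph distance from `start` to every reachable space (BFS)."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour not in depths:
                depths[neighbour] = depths[node] + 1
                queue.append(neighbour)
    others = [d for node, d in depths.items() if node != start]
    return sum(others) / len(others)

# Two small offices, each a chain of spaces ending in an MRA Cell.
rooms = {
    "entrance_a": {"corridor_a"},
    "corridor_a": {"entrance_a", "cell_a"},
    "cell_a": {"corridor_a"},
    "entrance_b": {"corridor_b"},
    "corridor_b": {"entrance_b", "cell_b"},
    "cell_b": {"corridor_b"},
}

# Opening the virtual adjacency between the two Cells changes the topology.
rooms["cell_a"].add("cell_b")
rooms["cell_b"].add("cell_a")

print(mean_depth(rooms, "cell_a"))      # 1.8 - the Cell is shallow to both offices
print(mean_depth(rooms, "entrance_a"))  # 3.0 - the entrance remains deep
```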

    Examining the role of smart TVs and VR HMDs in synchronous at-a-distance media consumption

    This article examines synchronous at-a-distance media consumption from two perspectives: how it can be facilitated using existing consumer displays (TVs combined with smartphones) and using imminently available consumer displays (virtual reality (VR) HMDs combined with RGBD sensing). First, we discuss results from an initial evaluation of CastAway, a synchronous shared at-a-distance smart TV system. Through week-long in-home deployments with five couples, we gain formative insights into the adoption and usage of at-a-distance media consumption and how couples communicated during it. We then examine, in a laboratory study of 12 pairs, how the imminent availability and potential adoption of consumer VR HMDs could affect preferences for how synchronous at-a-distance media consumption is conducted, by enhancing media immersion and supporting embodied telepresence for communication. Finally, we discuss the implications these studies have for the near future of consumer synchronous at-a-distance media consumption. Combined, these studies begin to explore a design space covering the varying ways in which at-a-distance media consumption can be supported and experienced (through music, TV content, augmenting existing TV content for immersion, and immersive VR content), the factors that might influence usage and adoption, and the implications for supporting communication and telepresence during media consumption.
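    The core technical problem in synchronous at-a-distance playback is keeping two remote players at the same media position. The sketch below is hypothetical (the abstract does not describe CastAway's actual mechanism): each peer periodically shares its playback position; small drift is corrected by nudging the playback rate, large drift by seeking.

```python
# Hypothetical drift-correction policy for synchronised remote playback.
SEEK_THRESHOLD = 2.0    # seconds of drift that forces a hard seek
NUDGE_THRESHOLD = 0.1   # drift below this is ignored

def correction(local_pos, remote_pos):
    """Return ('seek', target) | ('rate', multiplier) | ('none', None)."""
    drift = remote_pos - local_pos
    if abs(drift) >= SEEK_THRESHOLD:
        return ("seek", remote_pos)
    if abs(drift) >= NUDGE_THRESHOLD:
        # speed up slightly if behind the peer, slow down slightly if ahead
        return ("rate", 1.05 if drift > 0 else 0.95)
    return ("none", None)

print(correction(120.0, 120.04))  # ('none', None)
print(correction(120.0, 120.80))  # ('rate', 1.05)
print(correction(120.0, 125.00))  # ('seek', 125.0)
```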

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system comprises a vision-based recognition application, a scripting application that processes the set of gestures, and a navigation and selection application controlled by those gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
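    A minimal sketch of the pipeline the abstract describes, using invented names: a vision recogniser emits gesture labels, a scripting layer maps them to application commands, and audio feedback confirms each action in place of a graphical interface.

```python
# Hypothetical gesture-to-command dispatch with audio feedback.
def play_earcon(name):
    print(f"[audio] {name}")   # stand-in for real audio feedback

def next_item():
    play_earcon("tick")

def select_item():
    play_earcon("chime")

# The scripting layer: gesture labels bound to application commands.
GESTURE_BINDINGS = {
    "swipe_right": next_item,
    "push_forward": select_item,
}

def on_gesture(label):
    """Called by the vision recogniser for each detected gesture."""
    action = GESTURE_BINDINGS.get(label)
    if action:
        action()
    else:
        play_earcon("error")   # audio cue for an unrecognised gesture

on_gesture("swipe_right")   # [audio] tick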

    Wearable learning tools

    In life, people must learn whenever and wherever they experience something new. Until recently, computing technology could not support such a notion: the constraints of size, power and cost kept computers under the classroom table, in the office or in the home. Recent advances in miniaturization have led to a growing field of research in ‘wearable’ computing. This paper looks at how such technologies can enhance computer-mediated communications, with a focus upon collaborative working for learning. An experimental system, MetaPark, is discussed, which explores communications, data retrieval and recording, and navigation techniques within and across real and virtual environments. In order to realize the MetaPark concept, an underlying network architecture is described that supports the required communication model between static and mobile users. This infrastructure, the MUON framework, is offered as a solution providing a seamless service that tracks user location, interfaces to contextual awareness agents, and switches network services transparently.
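    A minimal sketch, loosely inspired by the MUON framework's stated goals and using invented names and parameters, of transparent network service switching: given the bearers currently reachable by the wearer, pick one that keeps the connection seamless as the user moves between environments.

```python
# Invented bearer model; the abstract does not document MUON's real interfaces.
from dataclasses import dataclass

@dataclass
class Bearer:
    name: str
    available: bool
    bandwidth_kbps: int
    cost_per_mb: float

def select_bearer(bearers, min_kbps=64):
    """Pick the cheapest available bearer that meets the bandwidth need."""
    usable = [b for b in bearers if b.available and b.bandwidth_kbps >= min_kbps]
    return min(usable, key=lambda b: b.cost_per_mb) if usable else None

# On campus, WLAN coverage is available and free, so it wins.
on_campus = [Bearer("wlan", True, 11000, 0.0), Bearer("cellular", True, 384, 0.5)]
print(select_bearer(on_campus).name)   # wlan

# Off campus, the service switches transparently to the cellular bearer.
off_campus = [Bearer("wlan", False, 0, 0.0), Bearer("cellular", True, 384, 0.5)]
print(select_bearer(off_campus).name)  # cellular
```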

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves translating a technique from the graphical to the audio domain: exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio progress bar than using a conventional visual one. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
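    The underlying idea, sketched below under simplifying assumptions (constant-power stereo panning stands in for full spatialisation, and the mapping is invented for illustration, not taken from the article), is that progress through a task maps to a position in auditory space, so it can be monitored eyes-free.

```python
# Map task progress to a stereo position: 0% = hard left, 100% = hard right.
import math

def pan_gains(progress):
    """Map progress in [0, 1] to (left, right) constant-power gains."""
    angle = progress * math.pi / 2      # 0 -> hard left, pi/2 -> hard right
    return math.cos(angle), math.sin(angle)

for p in (0.0, 0.5, 1.0):
    left, right = pan_gains(p)
    print(f"{p:.0%}: L={left:.2f} R={right:.2f}")
# 0%:   L=1.00 R=0.00
# 50%:  L=0.71 R=0.71
# 100%: L=0.00 R=1.00
```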

    Levitating Particle Displays with Interactive Voxels

    Levitating objects can be used as the primitives in a new type of display. We present levitating particle displays and show how research into object levitation is enabling a new way of presenting and interacting with information. We identify novel properties of levitating particle displays and give examples of the interaction techniques and applications they allow. We then discuss design challenges for these displays, potential solutions, and promising areas for future research.
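    A minimal sketch, with invented parameters, of the voxel abstraction behind such displays: a levitated particle acts as a physical voxel, so a display driver must map logical voxel coordinates onto target positions in the levitator's working volume.

```python
# Hypothetical mapping from logical voxel coordinates to physical targets.
VOXEL_PITCH_MM = 10.0   # assumed spacing between adjacent voxels

def voxel_to_position(x, y, z, origin=(0.0, 0.0, 50.0)):
    """Convert integer voxel coordinates to a physical target in mm."""
    ox, oy, oz = origin
    return (ox + x * VOXEL_PITCH_MM,
            oy + y * VOXEL_PITCH_MM,
            oz + z * VOXEL_PITCH_MM)

# A levitated particle is driven through voxel targets frame by frame.
frame = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
for voxel in frame:
    print(voxel_to_position(*voxel))
```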