
    Interactive ubiquitous displays based on steerable projection

    The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed to enable the presentation of visual content in the physical environment. In this dissertation, we introduce the concepts of the Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of different interaction methods for manipulating this Display Continuum and the Virtual Displays placed on it.

    Compact and kinetic projected augmented reality interface

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 143-150). For quite some time, researchers and designers in the field of human-computer interaction have strived to better integrate information interfaces into our physical environment. They envisioned a future where computing and interface components would be integrated into the physical environment, creating a seamless experience that uses all our senses. One possible approach to this problem employs projected augmented reality. Such systems project digital information and interfaces onto the physical world and are typically implemented using interactive projector-camera systems. This thesis work is centered on the design and implementation of a new form factor for computing, a system we call LuminAR. LuminAR is a compact and kinetic projected augmented reality interface embodied in familiar everyday objects, namely a light bulb and a task light. It allows users to dynamically augment physical surfaces and objects with superimposed digital information using gestural and multi-touch interfaces. This thesis documents LuminAR's design process, hardware and software implementation, and interaction techniques. The work is motivated through a set of applications that explore scenarios for interactive and kinetic projected augmented reality interfaces. It also opens the door for further explorations of kinetic interaction and promotes the adoption of projected augmented reality as a commonplace user interface modality. This thesis work was partially supported by a research grant from Intel Corporation. By Natan Linder.

    Toolkit support for interactive projected displays

    Interactive projected displays are an emerging class of computer interface with the potential to transform interactions with surfaces in physical environments. They distinguish themselves from other visual output technologies, for instance LCD screens, by overlaying content onto the physical world. They can appear, disappear, and reconfigure themselves to suit a range of application scenarios, physical settings, and user needs. These properties have attracted significant academic research interest, yet the surrounding technical challenges and the lack of application developer tools limit adoption to those with advanced technical skills. These barriers prevent people with different expertise from engaging, iteratively evaluating deployments, and thus building a strong community understanding of the technology in context. We argue that creating and deploying interactive projected displays should take hours, not weeks. This thesis addresses these difficulties through the construction of a toolkit that effectively facilitates user innovation with interactive projected displays. The toolkit’s design is informed by a review of related work and a series of in-depth research probes that study different application scenarios. These findings result in toolkit requirements that are then integrated into a cohesive design and implementation. This implementation is evaluated to determine its strengths, limitations, and effectiveness at facilitating the development of applied interactive projected displays. The toolkit is released to support users in the real world and its adoption studied. The findings describe a range of real application scenarios and case studies, and increase academic understanding of applied interactive projected display toolkits. By significantly lowering the complexity, time, and skills required to develop and deploy interactive projected displays, the toolkit has enabled a diverse community of over 2,000 individual users to apply it to their own projects. Widespread adoption beyond the computer-science academic community will continue to stimulate an exciting new wave of interactive projected display applications that transfer computing functionality into physical spaces.

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets has risen steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities on mobile devices, as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts. We explore three particular areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction, and motion gestures.

    For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override of the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown through decreased task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks.

    We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and to the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated in future mobile devices to expand their input capabilities.

    In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, achieve good recognition rates, and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.

    The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies needed to expand the input capabilities of mobile devices. This allows the implementation of future mobile user interfaces with increased input capabilities, more expressiveness, and improved usability.
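    The $3 Gesture Recognizer mentioned above is a template-based recognizer for accelerometer traces. As a rough illustration of how that family of recognizers works (a minimal sketch; the resampling count, normalization, and scoring below are assumptions, not the published algorithm), a recorded trace can be resampled to a fixed number of points, normalized, and matched against stored templates by mean point-wise distance:

```python
import numpy as np

def resample(trace, n=32):
    """Resample a (T, 3) motion trace to n points spaced evenly along its path."""
    steps = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(steps)])
    t = np.linspace(0.0, d[-1], n)
    return np.column_stack([np.interp(t, d, trace[:, i]) for i in range(3)])

def normalize(pts):
    """Centre the trace on the origin and scale it to a unit bounding box."""
    pts = pts - pts.mean(axis=0)
    scale = np.max(pts.max(axis=0) - pts.min(axis=0))
    return pts / scale if scale > 0 else pts

def recognize(trace, templates):
    """Return the label of the template with the smallest mean point distance."""
    q = normalize(resample(np.asarray(trace, dtype=float)))
    scores = {label: np.linalg.norm(
                  q - normalize(resample(np.asarray(t, dtype=float))), axis=1).mean()
              for label, t in templates.items()}
    return min(scores, key=scores.get)
```

    A template-matching design of this kind is what keeps the training-data requirement low: a handful of recorded traces per gesture class is enough to populate the template dictionary.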

    Full coverage displays for non-immersive applications

    Full Coverage Displays (FCDs), which cover the interior surface of a room with display pixels, can create novel user interfaces that take advantage of natural aspects of human perception and memory which we make use of in our everyday lives. However, past research has generally focused on FCDs for immersive experiences; the required hardware is generally prohibitively expensive for the average potential user, configuration is complicated for developers and end users, and building applications which conform to the room layout is often difficult. The goals of this thesis are: to create an affordable FCD toolkit for non-immersive applications that is easy to use for both developers and end users; to establish efficient pointing techniques in FCD environments; and to explore suitable ways to direct attention to out-of-view targets in FCDs. In this thesis I initially present and evaluate my own "ASPECTA Toolkit", which was designed to meet the above requirements. Users during the main evaluation were generally positive about their experiences, all completing the task in less than three hours. Further evaluation was carried out through interviews with researchers who used ASPECTA in their own work. These revealed similarly positive results, with feedback from users driving improvements to the toolkit. For my exploration into pointing techniques, Mouse and Ray-Cast approaches were chosen as most appropriate for FCDs. An evaluation showed that the Ray-Cast approach was fastest overall, while the mouse-based approach showed a small advantage in the front hemisphere of the room. For attention redirection I implemented and evaluated a set of four visual techniques. The results suggest that techniques which are static and lead all the way to the target may have an advantage, and that the cognitive processing time of a technique is an important consideration. This work was supported by the EPSRC (grant number EP/L505079/1) and SurfNet (NSERC).
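    Ray-cast pointing in a room-sized display reduces, at its core, to intersecting a pointing ray with the room's walls. The sketch below is a minimal illustration under the assumption of an axis-aligned rectangular room and a tracked pointer pose; it is not the pointing implementation from the thesis:

```python
import numpy as np

def raycast_room(origin, direction, room_min, room_max):
    """Return where a pointing ray first hits a wall of an axis-aligned room."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best_t, best_hit = np.inf, None
    for axis in range(3):
        if abs(direction[axis]) < 1e-9:
            continue  # ray parallel to this pair of walls
        for bound in (room_min[axis], room_max[axis]):
            t = (bound - origin[axis]) / direction[axis]
            if t <= 0.0 or t >= best_t:
                continue  # wall is behind the pointer, or farther than a known hit
            hit = origin + t * direction
            if np.all(hit >= np.asarray(room_min) - 1e-6) and \
               np.all(hit <= np.asarray(room_max) + 1e-6):
                best_t, best_hit = t, hit
    return best_hit

# A pointer held at the centre of a 4 m x 3 m x 5 m room, aimed at the front wall:
print(raycast_room([2.0, 1.5, 2.5], [0.0, 0.1, -1.0], [0, 0, 0], [4, 3, 5]))
```

    The returned 3D point can then be mapped into the pixel coordinates of whichever display surface covers that wall.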

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction compared to traditional desktop monitors. The computer mouse remains the most common input tool for users to interact with these larger displays. Much effort has been made to make this interaction more natural and more intuitive for the user. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to the user and allows interaction at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used for depth information recovery. This thesis aims to investigate the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the interaction area available, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with computer displays as easy as pointing to real-world objects are explored. Studies were conducted to investigate the way humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated through various user studies. The results of this thesis suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can assist designers in developing more accurate and natural interactive systems that make use of humans' natural pointing behaviours.
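    One common way to realize a virtual touchscreen with a single camera (a sketch of the general technique, not necessarily the method used in this thesis) is to calibrate a homography between the hand's camera-space positions at the corners of the virtual plane and the display's pixel corners; all coordinates below are illustrative values:

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography mapping src -> dst (each (N, 2), N >= 4) via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # null-space solution = homography entries

def to_display(h_matrix, hand_xy):
    """Map a tracked hand position in camera coordinates to display pixels."""
    p = h_matrix @ np.array([hand_xy[0], hand_xy[1], 1.0])
    return p[:2] / p[2]

# Calibration: the user "touches" the four corners of the virtual touchscreen
# while the camera records the hand position (values here are made up).
corners_cam = np.array([[102, 80], [540, 95], [520, 410], [98, 400]])
corners_disp = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
H = fit_homography(corners_cam, corners_disp)
print(to_display(H, (320, 240)))  # cursor position for one hand observation
```

    Because the virtual plane is defined relative to the user, the calibration (and hence the homography) would be re-estimated whenever the user moves.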

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output, at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what are considered in this work as hybrid interactions. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems that are elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. Such design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces, in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). These approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviors on first contact with the interface) and in users' subjective responses. The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones. The results indicate that, despite the fact that several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    The family of displays that aim to visualize a 3D scene with realistic depth are known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by human vision as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representation by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays.

    The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D displays are explained. Based on these operation principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process of introducing distortions, and it allows one to identify which optical properties of a display are most relevant to the creation of artefacts.

    A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or measured directly from a series of photographs. A comparative study is presented, introducing measurement results on the visual quality and the position of the sweet-spots of eleven 3D displays of different types. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays, and the shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display.

    Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses the Moiré, fixed-pattern-noise, and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework which allows the user to select the so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, which is implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, a combination of precise light redirection and less precise face tracking is used to extend the head parallax. For some user-tracking algorithms, implementation details are given regarding execution on a mobile device or on a desktop computer with a graphics accelerator.
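    The "display as a signal processing channel" view can be illustrated numerically: what an observer sees in one viewing zone of a multiview display is a linear mixture of the intended view and light leaking from neighbouring views. The sketch below is illustrative only; the mixing weights are invented for the example, not measured from any real display:

```python
import numpy as np

n_views, h, w = 5, 480, 640
views = np.random.rand(n_views, h, w)  # stand-in for rendered view images

# crosstalk[i, j] = fraction of view j's light reaching the zone of view i.
# 80% intended view plus 10% leakage from each neighbour (assumed values).
crosstalk = np.eye(n_views) * 0.8
for i in range(n_views - 1):
    crosstalk[i, i + 1] = crosstalk[i + 1, i] = 0.1

# What an observer in each viewing zone actually sees: a mixture of views.
observed = np.tensordot(crosstalk, views, axes=1)  # shape (n_views, h, w)

# Ghosting visibility can be gauged as the energy of the leaked content.
ghost = observed - crosstalk.diagonal()[:, None, None] * views
print(np.abs(ghost).mean())
```

    In this framing, measuring the display amounts to estimating the crosstalk matrix, and artefact mitigation (e.g. anti-aliasing or passband optimization) amounts to pre-filtering the views so that the mixed output remains acceptable.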

    Playful User Interfaces: Interfaces that Invite Social and Physical Interaction


    Human-Computer Interaction

    In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction, the results of research projects and experiments as well as new approaches to designing user interfaces. The book is organized according to the following main topics, in sequential order: new interaction paradigms, multimodality, usability studies of several interaction mechanisms, human factors, universal design, and development methodologies and tools.