
    Cruiser and PhoTable: Exploring Tabletop User Interface Software for Digital Photograph Sharing and Story Capture

    Digital photography has changed not only the nature of photography and the photographic process, but also the manner in which we share photographs and tell stories about them. Some traditional methods, such as the family photo album or passing around piles of recently developed snapshots, are lost to us unless the digital photos are printed. Current, purely digital methods of sharing do not provide the same experience as printed photographs, nor do they provide effective face-to-face social interaction around photographs, as experienced during storytelling. Research has found that people are often dissatisfied with sharing photographs in digital form. The recent emergence of the tabletop interface as a viable multi-user, direct-touch, interactive, large horizontal display has provided hardware with the potential to improve collocated activities such as digital photograph sharing. However, while some software to communicate with various tabletop hardware technologies exists, the software aspects of tabletop user interfaces are still at an early stage and require careful consideration in order to provide an effective, multi-user, immersive interface that arbitrates the social interaction between users without the necessary human-computer interaction interfering with the social dialogue. This thesis presents PhoTable, a social interface that allows people to effectively share, and tell stories about, recently taken, unsorted digital photographs around an interactive tabletop. In addition, the computer-arbitrated digital interaction allows PhoTable to capture the stories told and associate them as audio metadata with the appropriate photographs. By leveraging the tabletop interface and providing highly usable and natural interaction, we can enable users to become immersed in their social interaction, telling stories about their photographs, while the computer interaction occurs as a side-effect of the social interaction. Correlating the computer interaction with the corresponding audio allows PhoTable to annotate an automatically created digital photo album with audible stories, which may then be archived. These stories remain useful for future sharing -- both collocated and remote (e.g. via the Internet) -- and also provide a personal memento both of the event depicted in the photograph (e.g. as a reminder) and of the enjoyable photo-sharing experience at the tabletop. To provide the software necessary to realise an interface such as PhoTable, this thesis explores the development of Cruiser: an efficient, extensible and reusable software framework for developing tabletop applications. Cruiser contributes a set of programming libraries and the necessary application framework to facilitate the rapid and highly flexible development of new tabletop applications. It uses a plugin architecture that encourages code reuse, stability and easy experimentation, and it leverages the dedicated graphics hardware and multi-core processors of modern consumer-level systems to provide a responsive and immersive tabletop user interface that is agnostic to the tabletop hardware and operating platform, using efficient, native, cross-platform code. Cruiser's flexibility has allowed a variety of novel interactive tabletop applications beyond PhoTable to be explored by other researchers using the framework. To evaluate Cruiser and PhoTable, this thesis follows recommended practices for systems evaluation. The design rationale is framed within the scenario and vision above, which we explore further, and the resulting design is critically analysed through user studies, heuristic evaluation and reflection on how it evolved over time. The effectiveness of Cruiser was evaluated in terms of its ability to realise PhoTable, its use by others to explore many new tabletop applications, and an analysis of performance and resource usage. The usability, learnability and effectiveness of PhoTable were assessed at three levels: careful usability evaluations of elements of the interface; informal observations of usability when Cruiser was available to the public in several exhibitions and demonstrations; and a final evaluation of PhoTable in use for storytelling, where storytelling had the side effect of creating a digital photo album consisting of the photographs users interacted with on the table and the associated audio annotations that PhoTable automatically extracted from the interaction. We conclude that our approach to design has resulted in an effective framework for creating new tabletop interfaces. The parallel goal of exploring the potential of tabletop interaction as a new way to share digital photographs was realised in PhoTable, which supports the envisaged goal of an effective interface for telling stories about one's photos. As a serendipitous side-effect, PhoTable was effective in automatically capturing the stories about individual photographs for future reminiscence and sharing. This work provides foundations for future work on new ways to interact at a tabletop and on capturing personal stories around digital photographs for sharing and long-term preservation.
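
    The story-capture mechanism described above hinges on correlating the interaction timeline with the session's audio recording. Below is a minimal sketch of that correlation step, assuming hypothetical event and photo structures; none of these names come from the actual Cruiser/PhoTable codebase.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionEvent:
    photo_id: str   # photo that was in focus (e.g. enlarged) on the table
    start_s: float  # session time when the photo came into focus
    end_s: float    # session time when attention moved elsewhere

@dataclass
class Photo:
    photo_id: str
    # (start_s, end_s) spans into the continuous session audio recording
    audio_clips: list = field(default_factory=list)

def annotate_album(events, photos, min_clip_s=2.0):
    """Attach audio spans to photos; very brief focus periods are skipped."""
    by_id = {p.photo_id: p for p in photos}
    for ev in events:
        if ev.end_s - ev.start_s >= min_clip_s and ev.photo_id in by_id:
            by_id[ev.photo_id].audio_clips.append((ev.start_s, ev.end_s))
    # the resulting "album": photos that accumulated at least one story clip
    return [p for p in photos if p.audio_clips]
```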

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere and have been shown to provide higher productivity gains and user satisfaction than traditional desktop monitors. The computer mouse remains the most common input tool for interacting with these larger displays, and much effort has gone into making this interaction more natural and intuitive. The use of computer vision for this purpose has been well researched, as it gives users freedom and mobility and allows them to interact at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used to recover depth information. This thesis investigates the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the available interaction area, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing at real-world objects are explored. Studies were conducted to investigate how humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalised. A proof-of-concept prototype was built and evaluated through several user studies. The results suggest that it is possible to support natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and the lessons learnt in this research can help designers build more accurate and natural interactive systems that make use of humans' natural pointing behaviours.
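
    One common way to realise a pointing model like the "virtual touchscreen" described above is to cast a ray from the user's eye through the fingertip and intersect it with the display plane. The following is a sketch under that assumption; the geometry and variable names are illustrative, not the thesis' actual model.

```python
import numpy as np

def intersect_display(eye, fingertip, plane_point, plane_normal):
    """Return where the eye->fingertip ray hits the display plane, or None."""
    d = fingertip - eye                            # pointing direction
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:                          # ray parallel to the display
        return None
    t = np.dot(plane_normal, plane_point - eye) / denom
    return eye + t * d if t > 0 else None          # ignore hits behind the user

# Example: display in the z = 0 plane, user standing 2 m in front of it.
eye = np.array([0.0, 1.6, 2.0])
tip = np.array([0.1, 1.5, 1.6])
hit = intersect_display(eye, tip, np.array([0.0, 1.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]))
```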

    Probe-based visual analysis of geospatial simulations

    This work documents the design, development, refinement, and evaluation of probes as an interaction technique for expanding both the usefulness and usability of geospatial visualizations, specifically those of simulations. Existing applications that allow the visualization of, and interaction with, geospatial simulations and their results generally present views of the data that restrict the user to a single perspective. When zoomed out, local trends and anomalies are suppressed and lost; when zoomed in, spatial awareness and comparison between regions become limited. The probe-based interaction model integrates coordinated visualizations within individual probe interfaces, which depict the local data in user-defined regions of interest. It is especially useful for complex simulations or analyses where behavior in particular localities differs from that in other localities and from the system as a whole. The technique has been incorporated into a number of geospatial simulations and visualization tools. In each of these applications, and in general, probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of analysis scopes, and facilitates collaboration among multiple users. The great freedom afforded to users in defining regions of interest can allow modifiable areal unit problems to affect the reliability of analyses without the user's knowledge, leading to misleading results. However, by automatically alerting the user to these potential issues and providing tools to help adjust their selections, these unforeseen problems can be revealed and even corrected.
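
    A probe, at its core, is a user-defined region of interest plus coordinated local views of the data inside it. Here is a toy sketch of that idea over gridded simulation output, with hypothetical field names; note that enlarging the radius can change the summary qualitatively, which is exactly the modifiable-areal-unit sensitivity mentioned above.

```python
import numpy as np

class Probe:
    """A circular region of interest with coordinated local summaries."""
    def __init__(self, cx, cy, radius):
        self.cx, self.cy, self.radius = cx, cy, radius

    def local_values(self, xs, ys, values):
        """Values of cells whose centres fall inside the region of interest."""
        inside = (xs - self.cx) ** 2 + (ys - self.cy) ** 2 <= self.radius ** 2
        return values[inside]

    def summary(self, xs, ys, values):
        """Local statistics; assumes at least one cell lies inside the probe."""
        local = self.local_values(xs, ys, values)
        return {"n": int(local.size),
                "mean": float(local.mean()),
                "max": float(local.max())}
```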

    Calibration Methods for Head-Tracked 3D Displays

    Head-tracked 3D displays can provide a compelling 3D effect, but even small inaccuracies in the calibration of the participant's viewpoint to the display can disrupt the 3D illusion. We propose a novel interactive procedure that lets a participant easily and accurately calibrate a head-tracked display by visually aligning patterns across a multi-screen display. Head-tracker measurements are then calibrated to these known viewpoints. We conducted a user study to evaluate the effectiveness of different visual patterns and display shapes. We found that the easiest shape to align was the spherical display, and that the best calibration pattern was the combination of circles and lines. We also performed a quantitative camera-based calibration of a cubic display and found that visual calibration outperformed manual tuning, generating viewpoint calibrations accurate to within a degree. Our work removes the usual, burdensome step of manual calibration when using head-tracked displays and paves the way for wider adoption of this inexpensive and effective 3D display technology.
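
    The final step described above, calibrating head-tracker measurements to the visually established viewpoints, can be posed as fitting a rigid transform between corresponding point sets. Below is a sketch using the standard Kabsch/least-squares formulation; this is a common approach, not necessarily the paper's exact procedure.

```python
import numpy as np

def fit_rigid_transform(tracked, known):
    """Find R, t minimising ||R @ tracked_i + t - known_i||^2 (Nx3 arrays)."""
    ct, ck = tracked.mean(axis=0), known.mean(axis=0)
    H = (tracked - ct).T @ (known - ck)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = ck - R @ ct
    return R, t                                  # maps tracker -> display coords
```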

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch their surroundings has had a profound impact on daily life. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planes and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented, non-parametric, rear-projection surfaces using an infrared-light-based multi-camera, multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to, and displayed on, a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining whether a touch has occurred, determining where a detected touch occurred, and determining how to respond to it. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges by encoding the coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization that utilize the lookup table architecture. One of them, a bounded plane sweep, can additionally estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreased induced cognitive load, increased system usability, and increased user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
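
    The relational lookup table can be pictured as calibrated correspondences from camera pixels to surface points and on to regions of the virtual content, which then carry the semantic response. The following is a deliberately simplified sketch; all names and values are invented for illustration, and the real tables are dense mappings built by calibration.

```python
# (cam_x, cam_y) -> ((sx, sy, sz) surface point, content region id)
camera_to_surface = {
    (412, 305): ((0.02, 0.11, 0.43), "left_cheek"),
    (413, 305): ((0.02, 0.11, 0.44), "left_cheek"),
}

# region id -> semantic response defined by the virtual content
semantic_response = {"left_cheek": "play_flinch_animation"}

def respond_to_touch(cam_pixel):
    """Resolve a detected touch pixel to a semantic response, if any."""
    entry = camera_to_surface.get(cam_pixel)
    if entry is None:
        return None              # pixel does not map onto the physical surface
    _surface_point, region = entry
    return semantic_response.get(region)
```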

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements are a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rates, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display, and it remains unclear whether their results apply to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure the calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel real-time calibration method that fine-tunes an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented degree of 3D realism and intuitive calibration tools for novice users.
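
    Perspective-corrected rendering for such displays typically rebuilds an off-axis view frustum every frame from the tracked eye position and the calibrated screen corners. Here is a sketch following the standard generalized perspective projection formulation; the variable names are ours, not the testbed's.

```python
import numpy as np

def off_axis_frustum(eye, pa, pb, pc, near, far):
    """Frustum bounds (l, r, b, t) at `near` for one screen, given world-space
    corners pa = lower-left, pb = lower-right, pc = upper-left."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
    vn = np.cross(vr, vu)                      # screen normal, towards viewer
    vn /= np.linalg.norm(vn)
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                        # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return l, r, b, t                          # feed to a glFrustum-style call
```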

    Interactive ubiquitous displays based on steerable projection

    The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed to enable the presentation of visual content directly in the physical environment. In this dissertation, we introduce the concepts of the Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of interaction methods for manipulating this Display Continuum and the Virtual Displays placed on it.
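
    Placing a Virtual Display somewhere on the Display Continuum requires aiming the steerable projector at the target surface point. As a purely illustrative sketch of that basic step (the mount geometry and names are assumptions, not the dissertation's implementation), the target direction can be converted into pan and tilt angles:

```python
import math

def pan_tilt(px, py, pz, tx, ty, tz):
    """Pan/tilt angles (degrees) to aim a mounted unit at a target point."""
    dx, dy, dz = tx - px, ty - py, tz - pz
    pan = math.degrees(math.atan2(dx, dz))                   # about vertical axis
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # elevation
    return pan, tilt

# Example: ceiling unit at (0, 3, 0) aiming at a wall point at (2, 1.5, 4).
print(pan_tilt(0.0, 3.0, 0.0, 2.0, 1.5, 4.0))
```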

    tCAD: a 3D modeling application on a depth enhanced tabletop computer

    Tabletop computers featuring multi-touch input and object tracking are a common platform for research on Tangible User Interfaces (also known as Tangible Interaction). However, such systems are confined to sensing activity on the tabletop surface, disregarding the rich and relatively unexplored interaction canvas above the tabletop. This dissertation contributes tCAD, a 3D modeling tool combining fiducial marker tracking, finger tracking and depth sensing in a single system. It presents the technical details of how these features were integrated, attesting to their viability through the design, development and early evaluation of the tCAD application. A key aspect of this work is a description of the interaction techniques enabled by merging tracked objects with direct user input on and above the table surface.
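
    One simple way to exploit depth sensing above the tabletop, in the spirit of the interaction canvas described above, is to bin fingertip height into interaction layers. The thresholds and names below are illustrative assumptions, not tCAD's actual pipeline.

```python
def interaction_layer(height_mm, touch_max=10.0, hover_max=80.0):
    """Map fingertip height above the table surface to an interaction layer."""
    if height_mm <= touch_max:
        return "touch"   # treat as contact with the surface
    if height_mm <= hover_max:
        return "hover"   # e.g. previews or snapping behaviour
    return "air"         # free 3D manipulation above the table
```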

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With steadily increasing display resolution, more accurate tracking, and falling prices, virtual reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts hinder intuitive interaction. Moreover, the limited functionality of existing software forces users to fall back on conventional PC- or touch-based systems. In addition, collaboration with other users at the same location poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content within virtual worlds, e.g. size, orientation, color, or contrast. Strictly replicating real environments in VR squanders potential and will not make it possible to accommodate users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and the augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world in order to preserve the familiarity and functionality of existing applications in VR. Virtual proxies of physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the user's real surroundings, the relevance of full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. Moreover, personalized spatial or temporal modifications are presented that increase users' comfort, work performance, and social presence. Discrepancies between the virtual worlds that arise from these personal adaptations are compensated for by avatar redirection methods. Finally, some of the methods and findings are integrated into an example application to illustrate their practical applicability. This thesis shows that virtual environments can build on real-world skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of the virtual content and avatars make it possible to overcome limitations of the real world and enhance the experience of VR environments.
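
    The avatar redirection mentioned above compensates for discrepancies between personalized virtual worlds by subtly remapping hand movement. The sketch below uses one common blending formulation, which is an assumption on our part and not necessarily the thesis' exact method.

```python
import numpy as np

def redirected_hand(real_hand, real_target, virtual_target, start_dist=0.5):
    """Blend in an offset so the rendered hand lands on virtual_target exactly
    when the real hand reaches real_target (all positions in metres)."""
    offset = virtual_target - real_target
    dist = np.linalg.norm(real_target - real_hand)
    alpha = np.clip(1.0 - dist / start_dist, 0.0, 1.0)  # 0 far away -> 1 at contact
    return real_hand + alpha * offset
```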