
    Collaborative video searching on a tabletop

    Almost all system and application design for multimedia systems is based around a single user working in isolation to perform some task, yet much of the work for which we use computers is based on working collaboratively with colleagues. Groupware systems do support user collaboration, but typically this support is provided through software and users still physically work independently. Tabletop systems, such as the DiamondTouch from MERL, are interface devices that support direct user collaboration on a tabletop. When a tabletop is used as the interface for a multimedia system, such as a video search system, this kind of direct collaboration raises many questions for system design. In this paper we present a tabletop system for supporting a pair of users in a video search task and evaluate it not only in terms of search performance but also in terms of user–user interaction and how different user personalities within each pair of searchers impact search performance and user interaction. Incorporating the user into the system evaluation as we have done here reveals several interesting results and has important ramifications for the design of a multimedia search system.

    The effects of tool container location on user performance in graphical user interfaces

    A common way of organizing Windows, Icons, Menus, and Pointers (WIMP) interfaces is to group tools into tool containers, providing a single visual representation. Common tool containers include toolbars and menus, as well as more complex tool containers such as Microsoft Office’s Ribbon, Toolglasses, and marking menus. The location of tool containers has been studied extensively in the past using Fitts’s Law, which governs selection time; however, selection time is only one aspect of user performance. In this thesis, I show that tool container location affects other aspects of user performance, specifically attention and awareness. The problem investigated in this thesis is that designers lack an understanding of the effects of tool container location on two important user performance factors: attention and group awareness. My solution is to provide an initial understanding of the effects of tool container location on these factors. In solving this problem, I developed a taxonomy of tool container location and carried out two research studies. The two research studies investigated tool container location in two contexts: single-user performance with desktop interfaces, and group performance in tabletop interfaces. Through the two studies, I was able to show that tool container location does affect attention and group awareness, and to provide new recommendations for interface designers.

    The tool space

    Visions of futuristic desktop computer workspaces have often incorporated large interactive surfaces that either integrate into or replace the prevailing desk setup of displays, keyboard and mouse. Such visions often connote the distinct characteristics of direct touch interaction, e.g. by transforming the desktop into a large touch screen that allows interacting with content using one’s bare hands. However, the role of interactive surfaces for desktop computing need not be restricted to enabling direct interaction. Especially for prolonged interaction times, the separation of visual focus and manual input has proven to be ergonomic and is usually supported by vertical monitors and separate, hence indirect, input devices placed on the horizontal desktop. If we want to maintain this ergonomically matured style of computing with the introduction of interactive desktop displays, the following question arises: how can and should this novel input and output modality affect prevailing interaction techniques? While touch input devices have been used for decades in desktop computing as trackpads or graphics tablets, the dynamic rendering of content and increasing physical dimensions of novel interactive surfaces open up new design opportunities for direct, indirect and hybrid touch input techniques. Informed design decisions require a careful consideration of the relationship between input sensing, visual display and applied interaction styles. Previous work in the context of desktop computing has focused on understanding the dual-surface setup as a holistic unit that supports direct touch input and allows the seamless transfer of objects across horizontal and vertical surfaces. In contrast, this thesis assumes separate spaces for input (horizontal input space) and output (vertical display space) and contributes to the understanding of how interactive surfaces can enrich indirect input for complex tasks, such as 3D modeling or audio editing. The contribution of this thesis is threefold: First, we present a set of case studies on user interface design for dual-surface computer workspaces. These case studies cover several application areas, such as gaming, music production and analysis, or collaborative visual layout, and comprise formative evaluations. On the one hand, these case studies highlight the conflict that arises when the direct touch interaction paradigm is applied to dual-surface workspaces. On the other hand, they indicate how the deliberate avoidance of established input devices (i.e. mouse and keyboard) leads to novel design ideas for indirect touch-based input. Second, we introduce our concept of the tool space as an interaction model for dual-surface workspaces, which is derived from a theoretical argument and the previous case studies. The tool space dynamically renders task-specific input areas that enable spatial command activation and increase input bandwidth by leveraging multi-touch and two-handed input. We further present evaluations of two concept implementations in the domains of 3D modeling and audio editing, which demonstrate the high degrees of control, precision and sense of directness that can be achieved with our tools. Third, we present experimental results that inform the design of the tool space input areas. In particular, we contribute a set of design recommendations regarding the understanding of two-handed indirect multi-touch input and the impact of input area form factors on spatial cognition and navigation performance.
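    As a rough illustration of the interaction model described above, the following Python sketch shows how a tool space might partition the horizontal input surface into task-specific input areas and route touch points to the tool whose area they land in (spatial command activation). All class names, area names and coordinates are hypothetical assumptions for illustration; this is not the thesis's implementation.

```python
# Minimal sketch, not the thesis's implementation: a hypothetical tool space
# that partitions the horizontal input surface into task-specific areas and
# routes touch events landing in each area to an application command.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class InputArea:
    """A rectangular region on the horizontal surface bound to one tool."""
    name: str
    bounds: Tuple[float, float, float, float]  # x, y, width, height
    on_touch: Callable[[float, float], None]   # receives local coordinates


class ToolSpace:
    """Dispatches touch points to whichever task-specific area they hit."""

    def __init__(self) -> None:
        self.areas: List[InputArea] = []

    def add_area(self, area: InputArea) -> None:
        self.areas.append(area)

    def handle_touch(self, x: float, y: float) -> Optional[str]:
        for area in self.areas:
            bx, by, bw, bh = area.bounds
            if bx <= x < bx + bw and by <= y < by + bh:
                area.on_touch(x - bx, y - by)  # spatial command activation
                return area.name
        return None


if __name__ == "__main__":
    space = ToolSpace()
    space.add_area(InputArea("rotate-3d-model", (0, 0, 400, 300),
                             lambda lx, ly: print(f"rotate by drag at {lx},{ly}")))
    space.add_area(InputArea("audio-timeline", (400, 0, 800, 300),
                             lambda lx, ly: print(f"scrub to {lx}")))
    print(space.handle_touch(120, 80))   # -> rotate-3d-model
    print(space.handle_touch(650, 150))  # -> audio-timeline
```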

    Designing for Cross-Device Interactions

    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and availability of devices and data, many tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways: first, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies were conducted that show how cross-device interactions can support curation work as well as augment users’ existing devices for individual and collaborative work. These case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool for interaction analysis of spatial measures and video recordings to facilitate such evaluations of cross-device work. Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for effective evaluation of cross-device systems.
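    As an illustration of the kind of analysis such an evaluation tool might support, the sketch below computes one plausible spatial measure, the pairwise distance between two position-tracked devices over time, and queries it for episodes of close proximity. The log format, function names, and the choice of measure are assumptions made here for illustration; the thesis's actual tool is not reproduced.

```python
# Sketch only, assuming position-tracked devices: one plausible spatial
# measure (pairwise device distance over time) that a cross-device
# evaluation tool could compute and query.

import math
from typing import Dict, List, Tuple

# Hypothetical tracking log: device id -> list of (timestamp_s, x_m, y_m).
TrackLog = Dict[str, List[Tuple[float, float, float]]]


def pairwise_distance(log: TrackLog, dev_a: str, dev_b: str) -> List[Tuple[float, float]]:
    """Return (timestamp, distance in metres) for samples shared by two devices."""
    samples_b = {t: (x, y) for t, x, y in log[dev_b]}
    out = []
    for t, xa, ya in log[dev_a]:
        if t in samples_b:
            xb, yb = samples_b[t]
            out.append((t, math.hypot(xa - xb, ya - yb)))
    return out


def moments_closer_than(distances: List[Tuple[float, float]], threshold_m: float) -> List[float]:
    """Query: timestamps at which the two devices were within a given distance,
    e.g. to spot episodes of tightly coupled collaboration."""
    return [t for t, d in distances if d < threshold_m]


if __name__ == "__main__":
    log: TrackLog = {
        "tablet": [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 1.5, 0.0)],
        "phone": [(0.0, 2.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.6, 0.0)],
    }
    dists = pairwise_distance(log, "tablet", "phone")
    print(dists)                             # distances at t = 0, 1, 2
    print(moments_closer_than(dists, 0.75))  # [1.0, 2.0]
```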

    Citizen engagement through tangible data representation

    We begin with the premise that data literacy is a fundamental facet of citizen education in this information age, and that an engaged citizenry in a democracy requires not only access to data, but also the capacity to manipulate and examine the data from multiple perspectives. The visualization of data elucidates trends and patterns in the phenomena that the data represents, and makes complicated human and natural processes represented by data sets more accessible to understanding. Research indicates that interacting with a visualization amplifies cognition and analysis. A single visualization may show only one facet of the data. To examine the data from multiple perspectives, engaged citizens need to be able to construct their own visualizations from a data set. Many tools for data visualization have responded to this need, allowing non-data experts to manipulate and gain insights into their data, but most of these tools are restricted to the computer screen, keyboard, and mouse. Cognition and analysis may be strengthened even more through embodied interaction with data. We present here the rationale for the design of a tool that allows users to probe a data set through interactions with graspable (tangible) three-dimensional objects, rather than through keyboard and mouse interaction. We argue that the use of tangibles facilitates understanding abstract concepts and supports many concrete learning scenarios. Another advantage of tangibles over screen-based tools is that they foster collaboration, which can promote a productive working and learning environment. We speculate that collaborative data exploration can be a productive educational activity for citizens in their communities and in the classroom, and we suggest our tool as a means to do this.

    Division of labour and sharing of knowledge for synchronous collaborative information retrieval

    Synchronous collaborative information retrieval (SCIR) is concerned with supporting two or more users who search together at the same time in order to satisfy a shared information need. SCIR systems represent a paradigmatic shift in the way we view information retrieval, moving from an individual to a group process; as such, novel IR techniques are needed to support it. In this article we present what we believe are two key concepts for the development of effective SCIR, namely division of labour (DoL) and sharing of knowledge (SoK). Together, these concepts enable coordinated SCIR such that redundancy across group members is reduced while each group member benefits from the discoveries of their collaborators. In this article we outline techniques from state-of-the-art SCIR systems which support these two concepts, primarily through the provision of awareness widgets. We then outline some of our own work on system-mediated techniques for division of labour and sharing of knowledge in SCIR. Finally, we conclude with a discussion of some possible future trends for these two coordination techniques.
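    To make the two coordination concepts concrete, the following minimal Python sketch shows one way division of labour and sharing of knowledge could be operationalised over a shared ranked result list. The function names and the round-robin and relevance-boosting strategies are illustrative assumptions, not the system-mediated techniques described in the article.

```python
# Illustrative sketch only: hypothetical helpers showing one way the two
# coordination concepts could be operationalised for a pair of searchers.

from typing import Dict, List


def divide_labour(ranked_results: List[str], searchers: List[str]) -> Dict[str, List[str]]:
    """Division of labour: deal a shared ranked list round-robin across
    searchers so no two group members inspect the same document."""
    allocation: Dict[str, List[str]] = {s: [] for s in searchers}
    for rank, doc_id in enumerate(ranked_results):
        allocation[searchers[rank % len(searchers)]].append(doc_id)
    return allocation


def share_knowledge(ranked_results: List[str],
                    partner_judgements: Dict[str, bool]) -> List[str]:
    """Sharing of knowledge: promote documents a collaborator has already
    judged relevant (a real system would instead apply relevance feedback
    over document content to re-rank unseen documents)."""
    boosted = [d for d in ranked_results if partner_judgements.get(d)]
    rest = [d for d in ranked_results if not partner_judgements.get(d)]
    return boosted + rest


if __name__ == "__main__":
    results = ["d1", "d2", "d3", "d4", "d5", "d6"]
    print(divide_labour(results, ["alice", "bob"]))
    # {'alice': ['d1', 'd3', 'd5'], 'bob': ['d2', 'd4', 'd6']}
    print(share_knowledge(results, {"d4": True}))
    # ['d4', 'd1', 'd2', 'd3', 'd5', 'd6']
```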

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, in both academic and industrial settings. The rising demand for digital support of human activities has motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces, and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.

    Blended Interaction Spaces for Distributed Team Collaboration

    Designing to Support Workspace Awareness in Remote Collaboration using 2D Interactive Surfaces

    Increasing distribution of the global workforce is leading to collaborative work among remote coworkers. The emergence of such remote collaboration is essentially supported by technological advancements in screen-based devices ranging from tablets and laptops to large displays. However, these devices, especially personal and mobile computers, still suffer from limitations caused by their form factors that hinder support for workspace awareness through non-verbal communication such as bodily gestures or gaze. This thesis thus aims to design novel interfaces and interaction techniques to improve remote coworkers’ workspace awareness through such non-verbal cues using 2D interactive surfaces. The thesis starts off by exploring how visual cues support workspace awareness in facilitated brainstorming within hybrid teams of co-located and remote coworkers. Based on insights from this exploration, the thesis introduces three interfaces for mobile devices that help users maintain and convey workspace awareness with their coworkers. The first interface is a virtual environment that allows a remote person to effectively maintain awareness of co-located collaborators’ activities while interacting with the shared workspace. To help a person better express hand gestures in remote collaboration using a mobile device, the second interface presents a lightweight add-on for capturing hand images on and above the device’s screen and overlaying them on collaborators’ devices to improve their workspace awareness. The third interface strategically leverages the entire screen space of a conventional laptop to better convey a remote person’s gaze to co-located collaborators. Building on top of these three interfaces, the thesis envisions an interface that supports a person using a mobile device in effectively collaborating with remote coworkers working with a large display. Together, these interfaces demonstrate the possibilities of innovating on commodity devices to offer richer non-verbal communication and better support workspace awareness in remote collaboration.

    The role of personal and shared displays in scripted collaborative learning

    Over the last decades, collaborative learning has gained immensely in importance and popularity due to its high potential. Unfortunately, learners rarely engage in effective learning activities unless they are provided with instructional support. In order to maximize learning outcomes, it is therefore advisable to structure collaborative learning sessions. One way of doing this is to use collaboration scripts, which define a sequence of activities to be carried out by the learners. The field of computer-supported collaborative learning (CSCL) has produced a variety of collaboration scripts that have proved to have positive effects on learning outcomes. These scripts provide detailed descriptions of successful learning scenarios and are therefore used as the foundation for this thesis. In many cases computers are used to support collaborative learning, and traditional personal computers are often chosen for this purpose. However, during the last decades new technologies have emerged which seem better suited for co-located collaboration than personal computers. Large interactive displays, for example, allow a number of people to work simultaneously on the same surface while being highly aware of the co-learners' actions. There are also multi-display environments that provide several workspaces, some of which may be shared while others are personal. However, there is a lack of knowledge regarding the influence of different display types on group processes. For instance, it remains unclear in which cases shareable user interfaces should replace traditional single-user devices and when both personal and shared workspaces should be provided. This dissertation therefore explores the role of personal and shared workspaces in various collaborative learning situations. The research questions concern the choice of technological devices, the seating arrangement, and how user interfaces can be designed to guide learners. To investigate these questions, a two-fold approach was chosen. First, a framework was developed that supports the implementation of scripted collaborative learning applications. Second, different prototypes were implemented to explore the research questions, each based on at least one collaboration script. The result is a set of studies that contribute to answering the above-mentioned research questions. With regard to the choice of display environment, the studies revealed several reasons for integrating personal devices such as laptops. Pure tabletop applications with around-the-table seating arrangements, whose benefits for collaboration are widely discussed in the relevant literature, showed severe drawbacks for text-based learning activities. The combination of laptops and an interactive wall display, on the other hand, turned out to be a suitable display environment for collaborative learning in several cases. In addition, the thesis presents several ways of designing the user interface so that it guides learners through collaboration scripts.
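    To illustrate what a collaboration script might look like as a data structure inside such a framework, the sketch below models a script as an ordered sequence of phases, each with a set of learner roles and an assigned workspace (personal laptop or shared wall display). The class and field names, and the example script, are hypothetical assumptions and do not reproduce the framework developed in the thesis.

```python
# Illustrative sketch under stated assumptions: a collaboration script as an
# ordered list of phases, each naming the learners involved and the workspace
# (personal laptop or shared wall display) on which the activity runs.

from dataclasses import dataclass
from typing import List


@dataclass
class Phase:
    activity: str
    roles: List[str]        # which learners take part in this phase
    workspace: str          # "personal" (e.g. laptop) or "shared" (e.g. wall display)
    duration_min: int


@dataclass
class CollaborationScript:
    name: str
    phases: List[Phase]

    def run(self) -> None:
        """Walk the learners through the scripted sequence of activities."""
        for i, phase in enumerate(self.phases, start=1):
            print(f"Phase {i}: {phase.activity} "
                  f"({', '.join(phase.roles)}; {phase.workspace} workspace, "
                  f"{phase.duration_min} min)")


# Hypothetical example of a simple read-annotate-discuss style script.
script = CollaborationScript(
    name="read-annotate-discuss",
    phases=[
        Phase("Read the text individually", ["all learners"], "personal", 10),
        Phase("Annotate key arguments", ["all learners"], "personal", 10),
        Phase("Compare annotations and agree on a summary", ["all learners"], "shared", 15),
    ],
)

if __name__ == "__main__":
    script.run()
```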