    Direct and Indirect Multi-Touch Interaction on a Wall Display

    Multi-touch wall displays allow users to take advantage of co-located (direct) interaction on very large surfaces. However, interacting with content beyond arm's reach requires body movements, introducing fatigue and impacting performance. Interacting with distant content using a pointer can alleviate these problems, but introduces legibility issues and loses the benefits of multi-touch interaction. We introduce WallPad, a widget designed to quickly access remote content on wall displays while addressing legibility issues and supporting direct multi-touch interaction. After briefly describing how we supported multi-touch interaction on a wall display, we present the WallPad widget and explain how it supports direct, indirect, and de-localized direct interaction.
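    The abstract gives no implementation details; as a minimal sketch of the indirect-mapping idea behind a WallPad-style portal widget (all names and coordinates below are hypothetical, not the paper's implementation), touches on a nearby pad rectangle can be linearly re-mapped to a distant region of the wall:

```typescript
// Hypothetical sketch of a WallPad-style portal: touches on a local pad
// rectangle are re-targeted to a remote region of the wall, so distant
// content can be manipulated with direct multi-touch gestures.

interface Rect { x: number; y: number; width: number; height: number; }
interface Point { x: number; y: number; }

class PortalWidget {
  // `pad` is where the user touches; `remote` is the distant region it mirrors.
  constructor(private pad: Rect, private remote: Rect) {}

  // Is this touch on the pad at all?
  contains(p: Point): boolean {
    return p.x >= this.pad.x && p.x <= this.pad.x + this.pad.width &&
           p.y >= this.pad.y && p.y <= this.pad.y + this.pad.height;
  }

  // Map a touch on the pad to the corresponding point in the remote region.
  toRemote(touch: Point): Point {
    const u = (touch.x - this.pad.x) / this.pad.width;
    const v = (touch.y - this.pad.y) / this.pad.height;
    return {
      x: this.remote.x + u * this.remote.width,
      y: this.remote.y + v * this.remote.height,
    };
  }
}

// Usage: a pad within arm's reach mirroring a region far away on the wall.
const pad = new PortalWidget(
  { x: 100, y: 1200, width: 400, height: 300 },
  { x: 8000, y: 200, width: 1600, height: 1200 },
);
console.log(pad.toRemote({ x: 300, y: 1350 })); // -> { x: 8800, y: 800 }
```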

    Designing interaction for a multi-touch wall

    As large-scale display and multi-touch technologies become more affordable, the market has seen the development of multi-touch walls. This new medium offers a unique mix of information density, direct interactivity, and collaboration support, and these new features have radical effects on interaction design. Here we explore some research issues together with proposed solutions and design suggestions, based on our own approach to three areas of interaction design: multi-touch input, user interface, and co-located collaboration.

    Personalized Interaction with High-Resolution Wall Displays

    An increasing openness toward more diverse interaction modalities, as well as falling hardware prices, has made very large interactive vertical displays feasible, and consequently, applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently usable for multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will remain essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login, and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assume a single user per screen. When multiple people use one screen, this is not a feasible solution and we must find alternatives. 
Therefore, this thesis addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as input modality) and further away (using mobile devices). Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking using RGB+depth cameras, and leveraging ultrasound positioning of the users' mobile devices. Building on this, techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose using an arm-worn device to facilitate access to private data and personalized interface elements. Additionally, the work contributes insights on the practical implications of personalized interaction at wall displays: we present a qualitative study that analyses interaction using the multi-user cooperative game Miners as application case, finding awareness and occlusion issues. The final contribution is a corresponding analysis toolkit, GIAnT, that visualizes users' movements, touch interactions, and gaze points when interacting with wall displays, and thus allows fine-grained investigation of the interactions.
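    The thesis describes camera- and ultrasound-based identification at system level; purely as a hypothetical illustration of per-interaction identification, the sketch below attributes each touch on the wall to the nearest tracked person instead of relying on a login (names and thresholds are assumptions, not the thesis's code):

```typescript
// Hypothetical sketch of per-touch user identification: each touch on the
// wall is attributed to whichever tracked person stands closest to it,
// replacing a global login with per-interaction identity.

interface TrackedUser { id: string; x: number; }   // position along the wall (m)
interface WallTouch { x: number; y: number; }      // touch location on the wall (m)

function attributeTouch(touch: WallTouch, users: TrackedUser[],
                        maxDistance = 1.0): TrackedUser | null {
  let best: TrackedUser | null = null;
  let bestDist = Infinity;
  for (const user of users) {
    const dist = Math.abs(user.x - touch.x);
    if (dist < bestDist) { bestDist = dist; best = user; }
  }
  // Reject ambiguous touches that are too far from any tracked person.
  return bestDist <= maxDistance ? best : null;
}

// Usage: two users tracked (e.g., by a depth camera or ultrasound badges).
const users = [{ id: "alice", x: 1.2 }, { id: "bob", x: 4.5 }];
console.log(attributeTouch({ x: 1.4, y: 1.1 }, users)?.id); // "alice"
```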

    Novel Interaction Techniques for Collaborating on Wall-Sized Displays

    Performing and collaborating on information-intensive tasks, like reviewing and analyzing multiple charts, is an essential but currently difficult activity in desktop environments. The problem is the low resolution of the display, which forces users to visualize only a few pieces of information concurrently and to switch focus very frequently. To facilitate productivity and collaborative decision-making, teams of users are increasingly adopting wall-sized interactive displays. Yet, to harness the full potential of these devices, it is critical to understand how to best support inter-member cognition and navigation in such large information spaces. For navigating information, the wall display's overwhelming size (often 18 x 6 feet) makes existing desktop-driven interaction and organization techniques (like "point-and-click" and "taskbar") extremely inefficient. Also, with time, users get exhausted walking to reach different elements spread over the wall display. Moreover, being aware of the collaborative events happening around the display, while working on it, often exceeds users' cognitive capacity. To address these limitations, we are investigating four novel interaction techniques for wall-display user experiences. "Timeline" allows browsing large collections of elements over time, during or after collaborative work; "Cabinet" supports temporary storage and effortless retrieval of displayed elements; "Magnet" enables users to virtually reach remote objects on the wall display; "In-focus" allows facilitated and non-intrusive awareness of members' interaction. We plan to prototype and evaluate these techniques using off-the-shelf input modalities such as multi-touch gestures and mid-air gestures, as well as software and wall-sized displays made available by the University Information Technology Services (UITS) at IUPUI. In our evaluation with users, we hypothesize that, with respect to desktop interaction techniques, the proposed techniques will increase efficiency in navigation and information-organization tasks and reduce perceived cognitive load, while at the same time engendering better collaboration and decision-making.
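    These four techniques are proposals rather than published implementations; as a purely hypothetical illustration of the kind of mechanism a "Magnet"-style technique implies, remote objects can be interpolated toward the user's position instead of requiring the user to walk across the wall:

```typescript
// Hypothetical illustration of a "Magnet"-style technique: instead of
// walking along a wall-sized display, the user pulls remote objects
// toward their current position.

interface Item { id: string; x: number; y: number; }

// Move each item a fraction `strength` of the way toward the user;
// strength = 1 would place items directly at the user's position.
function magnetPull(items: Item[], userX: number, userY: number,
                    strength = 0.8): Item[] {
  return items.map((item) => ({
    id: item.id,
    x: item.x + (userX - item.x) * strength,
    y: item.y + (userY - item.y) * strength,
  }));
}

// Usage: charts scattered across the wall gather near the user (meters).
const charts = [{ id: "q1-sales", x: 5.2, y: 1.5 }, { id: "q2-sales", x: 0.3, y: 0.9 }];
console.log(magnetPull(charts, 2.0, 1.2));
```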

    Establishing the design knowledge for emerging interaction platforms

    While awaiting a variety of innovative interactive products and services to appear in the market in the near future, such as interactive tabletops, interactive TVs, public multi-touch walls, and other embedded appliances, this paper calls for preparing for the arrival of such interactive platforms based on their interactivity. We advocate studying, understanding, and establishing the foundations for the interaction characteristics, affordances, and design implications of these platforms, which we know will soon emerge and penetrate our everyday lives. We review some of the archetypal interaction platform categories of the future and highlight the current status of the accumulated design knowledge base, along with its current rate of growth, for each of these. We use example designs to illustrate design issues and considerations, based on the authors' 12-year experience pioneering novel applications in various forms and styles.

    RealTimeChess: Lessons from a Participatory Design Process for a Collaborative Multi-Touch, Multi-User Game

    We report on a long-term participatory design process during which we designed and improved RealTimeChess, a collaborative but competitive game that is played using touch input by multiple people on a tabletop display. During the design process we integrated concurrent input from all players and pace control, allowing us to steer the interaction along a continuum between high-paced simultaneous and low-paced turn-based gameplay. In addition, we integrated tutorials for teaching interaction techniques, mechanisms to control territoriality, remote interaction, and alert feedback. Integrating these mechanisms during the participatory design process allowed us to examine their effects in detail, revealing, for instance, effects of the competitive setting on the perception of awareness as well as territoriality. More generally, the resulting application provided us with a testbed to study interaction on shared tabletop surfaces and yielded insights important for other time-critical or attention-demanding applications.
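    The paper does not publish code; the following hypothetical sketch shows one simple way such a pace-control continuum could be realized, using a per-player cooldown whose duration steers gameplay between fully simultaneous (cooldown 0) and turn-based-like (long cooldown) pacing:

```typescript
// Hypothetical sketch of pace control: each player must wait a cooldown
// after every move. A cooldown of 0 yields real-time simultaneous play;
// a long cooldown approximates slow, turn-based pacing.

class PaceController {
  private lastMove = new Map<string, number>();

  // cooldownMs steers the continuum between real-time and turn-based.
  constructor(private cooldownMs: number) {}

  canMove(playerId: string, now: number = Date.now()): boolean {
    const last = this.lastMove.get(playerId) ?? -Infinity;
    return now - last >= this.cooldownMs;
  }

  recordMove(playerId: string, now: number = Date.now()): void {
    this.lastMove.set(playerId, now);
  }
}

// Usage: with a 3-second cooldown, players act concurrently but not frantically.
const pace = new PaceController(3000);
if (pace.canMove("player1")) pace.recordMove("player1");
```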

    The effects of room design on computer-supported collaborative learning in a multi-touch classroom.

    While research indicates that technology can be useful for supporting learning and collaboration, there is still relatively little uptake or widespread implementation of these technologies in classrooms. In this paper, we explore one aspect of the development of a multi-touch classroom, looking at two different designs of the classroom environment to explore how classroom layout may influence group interaction and learning. Three classes of students working in groups of four were taught in a traditional forward-facing room condition, while three classes worked in a centered room condition. Our results indicate that while the outcomes on tasks were similar across conditions, groups engaged in more talk (but not more off-task talk) in the centered room layout than in the traditional forward-facing room. These results suggest that the use of technology in the classroom may be influenced by the location of the technology, both in terms of learning outcomes and the interaction behaviors of students. The findings highlight the importance of considering the learning environment when designing technology to support learning, and of ensuring that the integration of technology into formal learning environments is done with attention to how the technology may disrupt, or contribute to, classroom interaction practices.

    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often-ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

    ImageSpirit: Verbal Guided Image Parsing

    Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this paper we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that could be used to interact with a new generation of devices (e.g., smartphones, Google Glass, living-room devices). We demonstrate our system on a large number of real-world images of varying complexity. To help understand the trade-offs compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study. Project page: http://mmcheng.net/imagespirit
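    The system's actual inference is joint per-pixel estimation, as described above; purely as a hypothetical illustration of the verbal front end, nouns can be mapped to object labels and adjectives to attribute labels before being used as constraints for re-running the pixel-wise labeling (the vocabulary and function names below are assumptions, not the paper's code):

```typescript
// Hypothetical sketch of verbal refinement: nouns map to object labels
// and adjectives to attribute labels. The parsed constraints would then
// steer the joint per-pixel object/attribute inference.

const OBJECTS = new Set(["chair", "table", "wall", "floor", "monitor"]);
const ATTRIBUTES = new Set(["red", "wooden", "glossy", "textured", "metal"]);

interface Refinement { object?: string; attributes: string[]; }

// Parse e.g. "refine the red wooden chair" into label constraints.
function parseCommand(command: string): Refinement {
  const words = command.toLowerCase().split(/\s+/);
  const result: Refinement = { attributes: [] };
  for (const w of words) {
    if (OBJECTS.has(w)) result.object = w;                 // noun -> object label
    else if (ATTRIBUTES.has(w)) result.attributes.push(w); // adjective -> attribute label
  }
  return result;
}

console.log(parseCommand("refine the red wooden chair"));
// -> { attributes: [ 'red', 'wooden' ], object: 'chair' }
```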