
    Hand Occlusion on a Multi-Touch Tabletop

    We examine the shape of hand and forearm occlusion on a multi-touch table for different touch contact types and tasks. Individuals have characteristic occlusion shapes, but with commonalities across tasks, postures, and handedness. Based on this, we create templates for designers to justify occlusion-related decisions, and we propose geometric models capturing the shape of occlusion. A model based on diffused illumination captures performed well when augmented with a forearm rectangle, as did a modified circle-and-rectangle model with ellipse "fingers" that is suitable when only X-Y contact positions are available. Finally, we describe the corpus of detailed multi-touch input data we generated, which is available to the community.
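The circle-and-rectangle occlusion model described above can be sketched roughly as follows. All geometry and parameter values here are illustrative guesses for a right-handed touch, not the fitted model from the paper:

```python
import math

def occluded(px, py, tx, ty, angle_deg=60.0,
             palm_r=40.0, forearm_w=60.0, forearm_len=400.0):
    """Return True if screen point (px, py) falls inside a simple
    circle-plus-rectangle occlusion region for a right-hand touch
    at (tx, ty). Sizes are in pixels and are rough placeholders."""
    # Palm: a circle offset from the contact point along the forearm axis.
    a = math.radians(angle_deg)          # forearm angle from vertical
    ox = tx + palm_r * math.sin(a)
    oy = ty + palm_r * math.cos(a)
    if math.hypot(px - ox, py - oy) <= palm_r:
        return True
    # Forearm: a rectangle extending from the palm toward the user.
    # Project the query point onto the forearm axis.
    dx, dy = px - ox, py - oy
    along = dx * math.sin(a) + dy * math.cos(a)
    across = dx * math.cos(a) - dy * math.sin(a)
    return 0.0 <= along <= forearm_len and abs(across) <= forearm_w / 2.0
```

A fuller model, as the abstract notes, would add ellipse "fingers" around the contact point and fit the offsets and angles per user.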

    Physical Interaction Concepts for Knowledge Work Practices

    The majority of workplaces in developed countries concern knowledge work. Accordingly, the IT industry and research have made great efforts for many years to support knowledge workers -- and indeed, computer-based information workplaces have come of age. Nevertheless, knowledge work in the physical world still has quite a number of unique advantages, and the integration of physical and digital knowledge work leaves a lot to be desired. The present thesis aims at reducing these deficiencies; to that end, it leverages recent technology trends, in particular interactive tabletops and resizable hand-held displays. We start from the observation that knowledge workers develop highly efficient practices, skills, and dexterity in working with physical objects in the real world, whether content-unrelated (coffee mugs, stationery, etc.) or content-related (books, notepads, etc.). Among the latter, paper-based objects -- the notorious analog information bearers -- represent by far the most relevant (super-)category. We discern two kinds of practices: collective practices concern the arrangement of objects with respect to other objects and the desk, while specific practices operate on individual objects and usually alter them. The former are mainly employed for effective management of the physical desktop workspace -- e.g., everyday objects are frequently moved on tables to optimize the desk as a workplace -- or for effective organization of paper-based documents on the desktop -- e.g., stacking, fanning out, sorting, etc. The latter concern the specific manipulation of physical objects related to the task at hand, i.e. knowledge work. Widespread assimilated practices concern not only writing on, annotating, or spatially arranging paper documents but also sophisticated manipulations -- such as flipping, folding, bending, etc.
Compared to the wealth of such well-established practices in the real world, those for digital knowledge work are bound by the indirection imposed by mouse and keyboard input, where the mouse provided such a great advancement that researchers were seduced into calling its use "direct manipulation". In this light, the goal of this thesis can be rephrased as exploring novel interaction concepts for knowledge workers that i) exploit the flexible and direct manipulation potential of physical objects (as present in the real world) for more intuitive and expressive interaction with digital content, and ii) improve the integration of the physical and digital knowledge workplace. To this end, two directions of research are pursued. Firstly, the thesis investigates the collective practices executed on the desks of knowledge workers, discerning content-related objects (more precisely, paper-based documents) and content-unrelated objects -- this part is coined table-centric approaches and leverages the technology of interactive tabletops. Secondly, the thesis looks at specific practices executed on paper, concentrating on knowledge-related tasks due to the specific role of paper -- this part is coined paper-centric approaches and leverages the affordances of paper-like displays, more precisely of resizable, i.e. rollable and foldable, displays. The table-centric approach leads to the challenge of blending interactive tabletop technology with the established use of physical desktop workspaces. We first conduct an exploratory user study to investigate behavioral and usage patterns of interaction with both physical and digital documents on tabletop surfaces while performing tasks such as grouping and browsing. Based on the results of the study, we contribute two sets of interaction and visualization concepts -- coined PaperTop and ObjecTop -- that concern specific paper-based practices and collective practices, respectively.
Their efficiency and effectiveness are evaluated in a series of user studies. As mentioned, the paper-centric perspective leverages recent ultra-thin resizable display technology. We again contribute two sets of novel interaction concepts -- coined FoldMe and Xpaaand -- that respond to the design spaces of dual-sided foldable and of rollout displays, respectively. In their design, we leverage the physical act of resizing not "just" for adjusting the screen real estate but also for interactively performing operations. Initial user studies show a great potential for interaction with digital content, i.e. for knowledge work.

    Improving Multi-Touch Interactions Using Hands as Landmarks

    Efficient command selection is just as important for multi-touch devices as it is for traditional interfaces that follow the Windows-Icons-Menus-Pointers (WIMP) model, but rapid selection in touch interfaces can be difficult because these systems often lack the mechanisms that have been used for expert shortcuts in desktop systems (such as keyboard shortcuts). Although interaction techniques based on spatial memory can improve the situation by allowing fast revisitation from memory, the lack of landmarks often makes it hard to remember command locations in a large set. One potential landmark that could be used in touch interfaces, however, is people’s hands and fingers: these provide an external reference frame that is well known and always present when interacting with a touch display. To explore the use of hands as landmarks for improving command selection, we designed hand-centric techniques called HandMark menus. We implemented HandMark menus for two platforms – one version that allows bimanual operation for digital tables and another that uses single-handed serial operation for handheld tablets; in addition, we developed variants for both platforms that support different numbers of commands. We tested the new techniques against standard selection methods including tabbed menus and popup toolbars. The results of the studies show that HandMark menus perform well (in several cases significantly faster than standard methods), and that they support the development of spatial memory. Overall, this thesis demonstrates that people’s intimate knowledge of their hands can be the basis for fast interaction techniques that improve performance and usability of multi-touch systems.
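The core idea of anchoring commands to the hand can be illustrated with a minimal sketch: commands are laid out relative to tracked finger positions, and a selection touch picks the nearest slot. This is a simplified illustration, not the actual HandMark layout or implementation:

```python
def handmark_slots(finger_pts, commands):
    """Place commands at the midpoints of the gaps between adjacent
    fingers of a tracked hand. finger_pts: five (x, y) tuples ordered
    thumb-to-pinky; at most len(finger_pts) - 1 commands are placed."""
    slots = [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
             for (x1, y1), (x2, y2) in zip(finger_pts, finger_pts[1:])]
    return dict(zip(slots, commands))

def select(touch, slot_map):
    """Return the command whose slot is nearest to the selection touch."""
    tx, ty = touch
    return min(slot_map.items(),
               key=lambda kv: (kv[0][0] - tx) ** 2 + (kv[0][1] - ty) ** 2)[1]
```

Because the slots move with the hand, the user can rely on proprioception and spatial memory rather than visually searching a menu.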

    TangiWheel: A widget for manipulating collections on tabletop displays supporting hybrid input modality

    In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing either multi-touch or tangible interaction, or even a hybrid scheme to better suit user choice and convenience. Different TangiWheel aspects and features are compared with other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with a large resemblance between touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require previous basic manipulation tasks, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex additional insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interactions.

    Improving the effectiveness of interactive data analytics with phone-tablet combinations

    Smartphones and tablet computers are ubiquitous in daily life. Many people carry a smartphone and a tablet computer with them simultaneously. The multiplicity of differently sized devices reflects the conflict between a maximal interaction space and minimal bulkiness of the devices. In this dissertation we extend the interaction space of mobile devices by adding mutual spatial awareness to ordinary devices. By combining multiple mobile devices and using relative device placement as an additional input source, we designed a mobile tabletop system for ad-hoc collaboration. With this setting we aimed to emulate the concept of the so-called interactive tablecloth, which envisages that every tabletop surface will become an interactive surface. To evaluate the concept we designed and implemented a working prototype, called MochaTop. To provide the mutual spatial awareness we placed the mobile devices on an interactive table; in the future, we believe the interactive table can be replaced by technology integrated into the mobile devices themselves. In this study we used one Android smartphone and one Android tablet as mobile devices. To track the position of the devices we used a Microsoft Surface 2 (SUR40). The system is designed for exploring multimedia information and visual data representations by manipulating the position of two mobile devices on a horizontal surface. We present possible use cases and environments. In a second step we discuss multiple low-fidelity prototypes; the results are integrated into the development of MochaTop. The application MochaTop is designed as an example for exploring digital information. To avoid influencing the participants too much through the content, we chose a common topic to present in MochaTop: coffee production and trade. We present the implementation of MochaTop and the conducted user study with 23 participants.
Overall, we were able to spark the participants' interest in future systems and show that the system supports knowledge transfer. Furthermore, we identified design challenges for the future development of mobile tabletops; these challenges mostly concern input feedback, interaction zones, and three-dimensional input.

    Interactive Visualization Lenses: Natural Magic Lens Interaction for Graph Visualization

    Information visualization is an important research field concerned with making sense of, and inferring knowledge from, data collections. Graph visualizations are specific techniques for data representation relevant in diverse application domains, among them biology, software engineering, and business finance. These data visualizations benefit from the display space provided by novel interactive large display environments. However, these environments also pose new challenges and impose new requirements: interaction beyond the desktop and a corresponding redesign of analysis tools. This thesis focuses on interactive magic lenses, specialized locally applied tools that temporarily manipulate the visualization. These may include magnification of focus regions but also more graph-specific functions such as pulling in neighboring nodes or locally reducing edge clutter. Up to now, these lenses have mostly been used as single-user, single-purpose tools operated by mouse and keyboard. This dissertation presents the extension of magic lenses both in terms of function and of interaction for large vertical displays. In particular, this thesis contributes several natural interaction designs with magic lenses for the exploration of graph data in node-link visualizations using diverse interaction modalities. This development incorporates flexible switching between lens functions, adjustment of individual lens properties and function parameters, and the combination of lenses. It proposes interaction techniques for fluent multi-touch manipulation of lenses, for controlling lenses using mobile devices in front of large displays, and a novel concept of body-controlled magic lenses. Functional extensions in addition to these interaction techniques turn the lenses into user-configurable, personal territories that support alternative interaction styles.
To create the foundation for this extension, the dissertation incorporates a comprehensive design space of magic lenses, their functions, parameters, and interactions. Additionally, it provides a discussion of increased embodiment in tool and controller design, contributing insights into user position and movement in front of large vertical displays as a result of empirical investigations and evaluations.

    Adapting Multi-touch Systems to Capitalise on Different Display Shapes

    The use of multi-touch interaction has become more widespread. With this increase in use, the change in input technique has prompted developers to reconsider other elements of typical computer design, such as the shape of the display. There is an emerging need for software to be capable of functioning correctly with different display shapes. This research asked: ‘What must be considered when designing multi-touch software for use on different shaped displays?’ The results of two structured literature surveys highlighted the lack of support for multi-touch software that utilises more than one display shape. From a prototype system, observations on the issues of using different display shapes were made. An evaluation framework to judge potential solutions to these issues in multi-touch software was produced and employed. Solutions highlighted as suitable were implemented into existing multi-touch software. A structured evaluation was then used to determine the success of the design and implementation of the solutions. The hypothesis of the evaluation stated that the implemented solutions would allow the applications to be used with a range of different display shapes without leaving visual content items unfit for purpose. The majority of the results conformed to this hypothesis, despite minor deviations from the designs of the solutions being discovered in the implementation. This work highlights how developers, when producing multi-touch software intended for more than one display shape, must consider the issue of visual content items being occluded. Developers must produce, or identify, solutions to this issue that conform to the criteria outlined in this research. This research shows that it is possible to make multi-touch software display-shape independent.
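One concrete form of the display-shape problem is checking whether a rectangular content item still lies entirely within a non-rectangular display boundary. The following is a minimal sketch for a circular display; the thesis's actual evaluation criteria are broader than this single test:

```python
def fits_circular_display(item_rect, cx, cy, radius):
    """Return True if a rectangular content item lies fully inside a
    circular display, i.e. all four corners are within the radius.
    item_rect is (x, y, w, h) in display coordinates."""
    x, y, w, h = item_rect
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return all((px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2
               for px, py in corners)
```

Software targeting arbitrary display shapes would run a test like this (generalised to the display's actual boundary) whenever items are placed or moved, and reposition or rescale items that fail it.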

    Gaze-touch: combining gaze with multi-touch for interaction on the same surface

    Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of "gaze selects, touch manipulates". Gaze is used to select a target, and is coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
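The "gaze selects, touch manipulates" principle can be sketched as a small event dispatcher: the target under the current gaze point receives touch gestures performed anywhere on the surface. This is an illustrative sketch, not the paper's implementation; the target model and gesture set are placeholders:

```python
class GazeTouch:
    """Minimal dispatcher: gaze picks the target, touch manipulates it."""

    def __init__(self, targets):
        self.targets = targets          # name -> (x, y, w, h) bounds
        self.gaze = (0, 0)

    def on_gaze(self, x, y):
        """Update the current gaze point (from an eye tracker)."""
        self.gaze = (x, y)

    def _gazed_target(self):
        gx, gy = self.gaze
        for name, (x, y, w, h) in self.targets.items():
            if x <= gx <= x + w and y <= gy <= y + h:
                return name
        return None

    def on_touch_drag(self, dx, dy):
        """Apply a drag gesture to the gazed-at target, wherever on
        the surface the touch itself happens; returns the target hit."""
        name = self._gazed_target()
        if name is not None:
            x, y, w, h = self.targets[name]
            self.targets[name] = (x + dx, y + dy, w, h)
        return name
```

The key property the sketch captures is the decoupling: the touch coordinates never matter for target selection, only the gesture's relative motion does, which is what gives gaze-touch its whole-surface reachability.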