
    GART: The Gesture and Activity Recognition Toolkit

    Presented at the 12th International Conference on Human-Computer Interaction, Beijing, China, July 2007. The original publication is available at www.springerlink.com. The Gesture and Activity Recognition Toolkit (GART) is a user interface toolkit designed to enable the development of gesture-based applications. GART provides an abstraction to machine learning algorithms suitable for modeling and recognizing different types of gestures. The toolkit also provides support for data collection and the training process. In this paper, we present GART and its machine learning abstractions. Furthermore, we detail the components of the toolkit and present two example gesture recognition applications.
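As a rough illustration of the kind of abstraction such a toolkit offers, the sketch below implements a minimal template-based gesture recognizer. The class and method names are hypothetical and do not reflect GART's actual API, and the dynamic-time-warping matcher merely stands in for whatever learning algorithm the toolkit wraps.

```python
from math import hypot

def dtw(a, b):
    """Dynamic time warping distance between two 2-D point sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = hypot(a[i - 1][0] - b[j - 1][0], a[i - 1][1] - b[j - 1][1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

class GestureRecognizer:
    """Hypothetical toolkit facade: train with labelled examples, then classify."""
    def __init__(self):
        self.templates = []  # (label, point sequence) pairs

    def train(self, label, sequence):
        self.templates.append((label, sequence))

    def recognize(self, sequence):
        # nearest template under DTW distance
        return min(self.templates, key=lambda t: dtw(t[1], sequence))[0]

r = GestureRecognizer()
r.train("swipe_right", [(0, 0), (1, 0), (2, 0), (3, 0)])
r.train("swipe_up",    [(0, 0), (0, 1), (0, 2), (0, 3)])
print(r.recognize([(0, 0), (1.1, 0.1), (2.0, 0.0), (2.9, -0.1)]))  # swipe_right
```

The point of such an abstraction is that an application developer supplies labelled examples and consumes labels, never touching the underlying recognition algorithm.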

    Feet movement in desktop 3D interaction

    In this paper we present exploratory work on the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect are now able to track users' motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have been explored extensively for this purpose, little work has looked at how the same can be accomplished with the feet. We describe the interaction space of foot movements in a seated position and propose applications for such techniques in three-dimensional navigation, selection, manipulation, and system control tasks in a 3D modelling context. We explore these applications in a user study and discuss the advantages and disadvantages of this modality for 3D UIs.

    Interactive Visualization Lenses: Natural Magic Lens Interaction for Graph Visualization

    Information visualization is an important research field concerned with making sense of and inferring knowledge from data collections. Graph visualizations are specific techniques for data representation relevant in diverse application domains, among them biology, software engineering, and business finance. These data visualizations benefit from the display space provided by novel interactive large display environments. However, these environments also pose new challenges and impose new requirements regarding interaction beyond the desktop and a corresponding redesign of analysis tools. This thesis focuses on interactive magic lenses: specialized, locally applied tools that temporarily manipulate the visualization. These may include magnification of focus regions but also more graph-specific functions such as pulling in neighboring nodes or locally reducing edge clutter. Up to now, these lenses have mostly been used as single-user, single-purpose tools operated by mouse and keyboard. This dissertation presents the extension of magic lenses both in terms of function and of interaction for large vertical displays. In particular, this thesis contributes several natural interaction designs with magic lenses for the exploration of graph data in node-link visualizations using diverse interaction modalities. This development incorporates flexible switching between lens functions, adjustment of individual lens properties and function parameters, and the combination of lenses. It proposes interaction techniques for fluent multi-touch manipulation of lenses, for controlling lenses using mobile devices in front of large displays, and a novel concept of body-controlled magic lenses. Functional extensions in addition to these interaction techniques turn the lenses into user-configurable, personal territories that support alternative interaction styles.
    To create the foundation for this extension, the dissertation incorporates a comprehensive design space of magic lenses, their functions, parameters, and interactions. Additionally, it provides a discussion of increased embodiment in tool and controller design, contributing insights into user position and movement in front of large vertical displays as a result of empirical investigations and evaluations.
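As a minimal illustration of how a magic lens locally and temporarily manipulates a visualization, the sketch below magnifies node positions inside a circular lens region. The function, its linear falloff, and its clamping rule are illustrative assumptions, not techniques taken from the dissertation.

```python
from math import hypot

def apply_lens(nodes, center, radius, magnification=2.0):
    """Push nodes inside a circular lens outward from its centre.
    Full magnification applies at the centre; the effect falls off
    linearly to zero at the lens border, and displaced nodes are
    clamped so they never leave the lens region."""
    cx, cy = center
    out = []
    for (x, y) in nodes:
        dx, dy = x - cx, y - cy
        d = hypot(dx, dy)
        if d >= radius or d == 0:
            out.append((x, y))  # outside the lens (or exactly at centre): unchanged
            continue
        scale = 1.0 + (magnification - 1.0) * (1.0 - d / radius)
        new_d = min(d * scale, radius)
        out.append((cx + dx / d * new_d, cy + dy / d * new_d))
    return out

# node at distance 1 is pushed out; node at distance 5 is untouched
print(apply_lens([(1, 0), (5, 0)], (0, 0), 4.0))  # -> [(1.75, 0.0), (5, 0)]
```

Because the original positions are not overwritten, removing the lens restores the visualization, matching the "temporary manipulation" property described above.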

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what this work terms hybrid interactions. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. Such design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). These approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviors on first contact with the interface) and in users' subjective responses.
    The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones. The results indicate that, although several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    The feet in human–computer interaction: a survey of foot-based interaction

    Foot-operated computer interfaces have been studied since the inception of human–computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is increasing interest in exploring this alternative input modality, but no comprehensive overview of its research landscape exists. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in terms of how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects it. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
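Raskin's definition above can be sketched in a few lines: the same touch input is interpreted differently depending on the current mode, and the mode switch is the transition between those interpretations. The mode names and actions below are illustrative, not drawn from the thesis.

```python
class Canvas:
    """Toy interface with three modes; one touch event, three interpretations."""
    def __init__(self):
        self.mode = "draw"
        self.log = []

    def set_mode(self, mode):
        # the mode switch itself -- its cost is what the thesis measures
        self.mode = mode

    def touch(self, x, y):
        # identical input, mode-dependent result (Raskin's definition)
        if self.mode == "draw":
            self.log.append(f"line to ({x}, {y})")
        elif self.mode == "pan":
            self.log.append(f"pan canvas by ({x}, {y})")
        elif self.mode == "select":
            self.log.append(f"select shape at ({x}, {y})")

c = Canvas()
c.touch(3, 4)          # interpreted as drawing
c.set_mode("pan")
c.touch(3, 4)          # same input, different action
print(c.log)           # ['line to (3, 4)', 'pan canvas by (3, 4)']
```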

    BendableSound: An Elastic Multisensory Surface Using Touch-Based Interactions to Assist Children with Severe Autism During Music Therapy

    Neurological Music Therapy uses live music to improve the sensorimotor regulation of children with severe autism. However, these children often lack musical training, and their impairments limit their interactions with musical instruments. In this paper, we present our co-design work that led to the BendableSound prototype: an elastic multisensory surface encouraging users to practice coordination movements when touching a fabric to play sounds. We present the results of a formative study conducted with 18 teachers showing BendableSound was perceived as “usable” and “attractive”. Then, we present a deployment study with 24 children with severe autism showing BendableSound is “easy to use” and may have therapeutic benefits regarding attention and motor development. We propose a set of design insights that could guide the design of natural user interfaces, particularly elastic multisensory surfaces. We close with a discussion and directions for future work.

    Personalized Interaction with High-Resolution Wall Displays

    An increasing openness towards more diverse interaction modalities, as well as falling hardware prices, has made very large interactive vertical displays feasible, and consequently, applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently suited to multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will still be essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login, and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assumes a single user per screen. In the case of multiple people using one screen, this is not a feasible solution and we must find alternatives. Therefore, this thesis addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as input modality) and further away (using mobile devices).
    Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking using RGB+depth cameras and leveraging ultrasound positioning of the users' mobile devices. Building on this, techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose using an arm-worn device to facilitate access to private data and personalized interface elements. Additionally, the work contributes insights on the practical implications of personalized interaction at wall displays: we present a qualitative study that analyses interaction using the multi-user cooperative game Miners as application case, finding awareness and occlusion issues. The final contribution is the corresponding analysis toolkit GIAnT, which visualizes users' movements, touch interactions, and gaze points when interacting with wall displays and thus allows fine-grained investigation of the interactions.
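As a simplified stand-in for such per-interaction identification, the sketch below attributes each touch on the wall to the nearest tracked user. The distance threshold, coordinate layout, and function name are illustrative assumptions, not the thesis' implementation.

```python
from math import hypot

def attribute_touch(touch, users, max_dist=1.5):
    """Attribute a touch point to the tracked user standing closest to it
    (positions in metres along the wall plane); return None when nobody
    is near enough. Camera or ultrasound tracking would supply `users`."""
    best, best_d = None, max_dist
    for name, pos in users.items():
        d = hypot(touch[0] - pos[0], touch[1] - pos[1])
        if d < best_d:
            best, best_d = name, d
    return best

users = {"alice": (0.5, 1.6), "bob": (3.2, 1.5)}
print(attribute_touch((0.7, 1.4), users))   # alice
print(attribute_touch((3.0, 1.2), users))   # bob
print(attribute_touch((10.0, 1.5), users))  # None: nobody nearby
```

Once every touch carries a user identity, personal data and settings can be scoped to that identity without a global login.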

    Adapting Multi-touch Systems to Capitalise on Different Display Shapes

    The use of multi-touch interaction has become more widespread. With this increase in use, the change in input technique has prompted developers to reconsider other elements of typical computer design, such as the shape of the display. There is an emerging need for software to be capable of functioning correctly with different display shapes. This research asked: ‘What must be considered when designing multi-touch software for use on different shaped displays?’ The results of two structured literature surveys highlighted the lack of support for multi-touch software to utilise more than one display shape. From a prototype system, observations on the issues of using different display shapes were made. An evaluation framework to judge potential solutions to these issues in multi-touch software was produced and employed. Solutions highlighted as suitable were implemented into existing multi-touch software. A structured evaluation was then used to determine the success of the design and implementation of the solutions. The hypothesis of the evaluation stated that the implemented solutions would allow the applications to be used with a range of different display shapes in such a way that did not leave visual content items unfit for purpose. The majority of the results conformed to this hypothesis, despite minor deviations from the designs of the solutions being discovered in the implementation. This work highlights how developers, when producing multi-touch software intended for more than one display shape, must consider the issue of visual content items being occluded. Developers must produce, or identify, solutions to resolve this issue which conform to the criteria outlined in this research. This research shows that it is possible for multi-touch software to be made display-shape independent.
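The occlusion issue above can be stated geometrically: a visual content item stays fit for purpose only while it lies entirely within the display shape. The sketch below is an illustrative check using a standard ray-casting point-in-polygon test; the display shape, widget layout, and fitness criterion are assumptions, not taken from the research.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test (polygon as a vertex list)."""
    x, y = point
    n = len(polygon)
    hit = False
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # count crossings of a horizontal ray extending to the right
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

def item_fits(item_corners, display_polygon):
    """An item 'fits' when all of its corners lie inside the display shape."""
    return all(inside(c, display_polygon) for c in item_corners)

# triangular display, apex at the top
triangle = [(0, 0), (10, 0), (5, 10)]
widget_ok  = [(4, 1), (6, 1), (6, 2), (4, 2)]
widget_cut = [(0, 8), (2, 8), (2, 9), (0, 9)]   # pokes outside near the apex
print(item_fits(widget_ok, triangle))   # True
print(item_fits(widget_cut, triangle))  # False
```

A layout manager for shape-independent software could run such a test on each item and reposition or resize the ones that fail it.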

    Designing for Shareable Interfaces in the Wild

    Despite excitement about the potential of interactive tabletops to support collaborative work, there have been few empirical demonstrations of their effectiveness (Marshall et al., 2011). In particular, while lab-based studies have explored the effects of individual design features, there has been a dearth of studies evaluating the success of systems in the wild. For this technology to be of value, designers and systems builders require a better understanding of how to develop and evaluate tabletop applications to be deployed in real-world settings. This dissertation reports on two systems designed through a process that incorporated ethnography-style observations, iterative design, and in-the-wild evaluation. The first study focused on collaborative learning in a medical setting. To address the fact that visitors to a hospital emergency ward were leaving with an incomplete understanding of their diagnosis and treatment, a system was prototyped in a working Emergency Room (ER) with doctors and patients. The system was found to be helpful, but adoption issues hampered its impact. The second study focused on a planning application for visitors to a tourist information centre. Issues and opportunities for a successful, contextually fitted system were addressed, and it was found to be effective in supporting group planning activities by novice users, in particular by facilitating users’ first experiences, providing effective signage, and offering assistance to guide the user through the application. This dissertation contributes to the understanding of multi-user systems through a literature review of tabletop systems, collaborative tasks, design frameworks, and evaluation of prototypes. Some support was found for the claim that tabletops are a useful technology for collaboration, and several issues were discussed.
    Contributions to understanding in this field are delivered through design guidelines, heuristics, frameworks, and recommendations, in addition to the two case studies, to help guide future tabletop system creators.