45 research outputs found
Accessible On-Body Interaction for People With Visual Impairments
While mobile devices offer people with disabilities new opportunities to gain independence in everyday activities, modern touchscreen-based interfaces can present accessibility challenges for low-vision and blind users. Even with state-of-the-art screen readers, it can be difficult or time-consuming to select specific items without visual feedback. The smooth surface of the touchscreen provides little tactile feedback compared to physical button-based phones. Furthermore, in a mobile context, hand-held devices present additional accessibility issues when both of the user's hands are not available for interaction (e.g., one hand may be holding a cane or a dog leash).
To improve mobile accessibility for people with visual impairments, I investigate on-body interaction, which employs the user's own skin surface as the input space. On-body interaction may offer an alternative or complementary means of mobile interaction for people with visual impairments by enabling non-visual interaction with extra tactile and proprioceptive feedback compared to a touchscreen. In addition, on-body input may free users' hands and offer efficient interaction, as it can eliminate the need to pull out or hold the device.
Despite this potential, little work has investigated the accessibility of on-body interaction for people with visual impairments. Thus, I begin by identifying user needs and preferences for accessible on-body interaction. From there, I evaluate user performance in target acquisition and shape drawing tasks on the hand compared to on a touchscreen. Building on these studies, I focus on the design, implementation, and evaluation of an accessible on-body interaction system for visually impaired users.
The contributions of this dissertation are: (1) identification of perceived advantages and limitations of on-body input compared to a touchscreen phone, (2) empirical evidence of the performance benefits of on-body input over touchscreen input in terms of speed and accuracy, (3) implementation and evaluation of an on-body gesture recognizer using finger- and wrist-mounted sensors, and (4) design implications for accessible non-visual on-body interaction for people with visual impairments.
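To illustrate what a gesture recognizer driven by finger- and wrist-mounted sensors might look like in outline, the sketch below classifies windows of IMU samples with a nearest-centroid rule over simple statistical features. This is a minimal illustration under assumed names (`extract_features`, `CentroidGestureRecognizer`), not the dissertation's actual recognizer.

```python
import numpy as np

def extract_features(window):
    # Reduce a (samples x axes) sensor window to a small feature
    # vector: per-axis mean and standard deviation.
    window = np.asarray(window, dtype=float)
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

class CentroidGestureRecognizer:
    """Nearest-centroid classifier over features of labeled windows
    (an illustrative stand-in for a wearable gesture recognizer)."""

    def __init__(self):
        self.centroids = {}  # gesture label -> mean feature vector

    def fit(self, labeled_windows):
        # labeled_windows: iterable of (label, window) pairs
        feats = {}
        for label, window in labeled_windows:
            feats.setdefault(label, []).append(extract_features(window))
        self.centroids = {lbl: np.mean(fs, axis=0)
                          for lbl, fs in feats.items()}

    def predict(self, window):
        # Return the label whose centroid is closest in feature space.
        f = extract_features(window)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(f - self.centroids[lbl]))
```

In practice such a pipeline would be trained per user on a handful of example windows per gesture, which suits the small, personalized gesture sets discussed above.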
An investigation of mid-air gesture interaction for older adults
Older adults (60+) face a natural and gradual decline in cognitive, sensory, and motor functions, which is often the reason for the difficulties older users come up against when interacting with computers. For that reason, the investigation and design of age-inclusive input methods for computer interaction is much needed and increasingly relevant given an ageing population. Advances in motion-sensing technologies and mid-air gesture interaction have reinvented how individuals can interact with computer interfaces, and this input modality is often deemed more "natural" and "intuitive" than purely traditional input devices such as the mouse. Although explored in gaming and entertainment, the suitability of mid-air gesture interaction for older users in particular is still little known. The purpose of this research is to investigate the potential of mid-air gesture interaction to facilitate computer use for older users, and to address the challenges that older adults may face when interacting with gestures in mid-air. This doctoral research is presented as a collection of papers that, together, develop the topic of ageing and computer interaction through mid-air gestures. The starting point for this research was to establish how older users differ from younger users and to focus on the challenges faced by older adults when using mid-air gesture interaction. Once these challenges were identified, this work aimed to explore a series of usability challenges and opportunities to further develop age-inclusive interfaces based on mid-air gesture interaction. Through a series of empirical studies, this research provides recommendations for designing mid-air gesture interaction that better takes into consideration the needs and skills of the older population, and aims to contribute to the advance of age-friendly interfaces.
Making the Unconscious Unconscious: Reclaiming Microinteractions for People with Motor Disabilities
Numerous diseases and injuries can limit a person's ability to perform everyday tasks -- things like getting dressed, bathing, and eating. Anything that requires physical activity can be affected; even simple things like turning on the lights can become difficult or impossible. Until recently, the only way for a person with severe motor disabilities to perform any of these tasks was through a human caregiver. Assistive technology and automation have begun to take over some of these functions, but still impose many constraints, both in the tasks which can be performed, and in the operator interfaces for these tasks, which can impose significant overhead on even the simplest of interactions. The problems are particularly acute when considering microinteractions - short interactions with a device or control which, for normally-abled persons, frequently require little or no thought on the part of the person performing the task.
The goal of this dissertation is to improve quality of life for people with severe motor disabilities by using robot assistants and assistive technology to expand the set of tasks they can perform for themselves, focusing on normally unconscious tasks which are currently decidedly conscious when using existing interfaces to assistive technology. Doing so requires ensuring that the interfaces to these tasks are appropriate, intuitive, and efficient for the particular task. With these improvements, we have endeavored to bring the effort required for common microinteractions from being on the same level as any other task back to being almost unconscious to perform. To do this, we have characterized resource deficiencies caused by disability so that we can make up for them in the design of interfaces and automation technology, specifically by leveraging environmental context and by using the environment itself as a canvas for interfaces when appropriate. These techniques are wrapped up in a case study of microinteraction-optimized interfaces designed for a person with ALS in his home, using data collected over the course of several months.
Peripheral interaction
In our everyday life we carry out a multitude of activities in parallel without focusing our attention explicitly on them. We drink a cup of tea while reading a book, we signal to a passing colleague with a hand gesture that we are concentrating right now and that they should wait a moment, or we walk a few steps backwards while taking photos. Many of these interactions - like drinking, sending signals via gestures, or walking - are rather complex by themselves. By means of learning and training, however, these interactions become part of our routines and habits and therefore consume little or no attentional resources. In contrast, when interacting with digital devices, we are often asked for our full attention. To carry out even small and marginal tasks, we are regularly forced to switch windows and perform precise interactions (e.g., pointing with the mouse); these systems thereby trigger context and focus switches, disrupting our main focus and task. Peripheral interaction aims at making use of human capabilities and senses like divided attention, spatial memory, and proprioception to support interaction with digital devices in the periphery of attention, quasi-parallel to another primary task.
In this thesis we investigate peripheral interaction in the context of a standard desktop computer environment. We explore three interaction styles for peripheral interaction: graspable interaction, touch input, and freehand gestures. StaTube investigates graspable interaction in the domain of instant messaging, while the Appointment Projection uses simple wiping gestures to access information about upcoming appointments. These two explorations focus on one interaction style each and offer first insights into the general benefits of peripheral interaction. Following this, we carried out two studies comparing all three interaction styles (graspable, touch, freehand) for audio player control and for dealing with notifications. We found that all three interaction styles are generally fit for peripheral interaction but come with different advantages and disadvantages. The last set of explorative studies deals with the ability to recall spatial locations in 2D as well as 3D. The Unadorned Desk makes use of the physical space around the desktop computer and thereby offers an extended interaction space to store and retrieve virtual items such as commands, applications, or tools. Finally, evaluation of peripheral interaction is not straightforward, as the systems are designed to blend into the environment and not draw attention to themselves. We propose an additional evaluation method for the lab to complement the current evaluation practice in the field.
The main contributions of this thesis are (1) an exhaustive classification and a more detailed look at manual peripheral interaction for tangible, touch, and freehand interaction. Based on these explorations with all three interaction styles, we offer (2) implications in terms of the overall benefits of peripheral interaction, learnability and habituation, visual and mental attention, feedback, and handedness for future peripheral interaction design. Finally, derived from a diverse set of user studies, we assess (3) evaluation strategies enriching the design process for peripheral interaction.
A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems
This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of humans' innate abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing interaction with computers beyond the screen, back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims at supporting future work in this field. The Tangible Gesture Interaction Framework provides support on three levels. First, it helps in reflecting, from a theoretical point of view, on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold, and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice between four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each approach. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three different application domains, i.e., interacting with the In-Vehicle Infotainment System (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home.
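The physical taxonomy's three components can be pictured as a simple data structure. The sketch below is an illustrative model with hypothetical names (`TangibleGesture`, `components`), not the framework's own implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TangibleGesture:
    """A tangible gesture described by the taxonomy's three physical
    components; field names here are illustrative."""
    move: bool   # the object (or the hand on it) is moved
    hold: bool   # the object is held during the gesture
    touch: bool  # the object's surface is touched

    def components(self):
        # List the active components of this gesture.
        return [name for name, on in (("move", self.move),
                                      ("hold", self.hold),
                                      ("touch", self.touch)) if on]

# e.g. a tap on a held steering wheel: held and touched, not moved
steering_tap = TangibleGesture(move=False, hold=True, touch=True)
```

Combining the three booleans with additional attributes (which object, which hand part, motion trajectory) would yield the richer gesture descriptions the taxonomy supports.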
For the first application domain, four different systems that use gestures on the steering wheel as a means of interacting with the IVIS have been designed, developed, and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication has been conceived and developed. A second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand in the context of the smart home has been investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.
Design and recognition of microgestures for always-available input
Gestural user interfaces for computing devices most commonly require the user to have at least one hand free to interact with the device, for example, moving a mouse, touching a screen, or performing mid-air gestures. Consequently, users find it difficult to operate computing devices while holding or manipulating everyday objects. This prevents users from interacting with the digital world during a significant portion of their everyday activities, such as using tools in the kitchen or workshop, carrying items, or working out with sports equipment. This thesis pushes the boundaries towards the bigger goal of enabling always-available input. Microgestures have been recognized for their potential to facilitate direct and subtle interactions. However, it remains an open question how to interact with computing devices using gestures when both of the user's hands are occupied holding everyday objects. We take a holistic approach and focus on three core contributions: i) To understand end-users' preferences, we present an empirical analysis of users' choice of microgestures when holding objects of diverse geometries. Instead of designing a gesture set for a specific object or geometry, and in order to identify gestures that generalize, this thesis leverages the taxonomy of grasp types established in prior research. ii) We tackle the critical problem of avoiding false activation by introducing a novel gestural input concept that leverages a single-finger movement which stands out from everyday finger motions during holding and manipulating objects. Through a data-driven approach, we also systematically validate the concept's robustness across different everyday actions. iii) While full sensor coverage on the user's hand would allow detailed hand-object interaction, minimal instrumentation is desirable for real-world use. This thesis addresses the problem of identifying sparse sensor layouts.
We present the first rapid computational method, along with a GUI-based design tool that enables iterative design based on the designer's high-level requirements. Furthermore, we demonstrate that minimal form-factor devices, like smart rings, can be used to effectively detect microgestures in hands-free and busy scenarios. Overall, the presented findings will serve as both conceptual and technical foundations for enabling interaction with computing devices wherever and whenever users need them.
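One common way to frame sparse sensor layout selection is as a greedy subset search over candidate sensor positions, adding at each step the position that most improves a quality score (e.g., gesture-recognition accuracy of the resulting layout). The sketch below illustrates that framing only; it is not the thesis's actual method, and `score` is a hypothetical callback supplied by the caller:

```python
def select_layout(candidates, score, budget):
    """Greedily pick up to `budget` sensor positions, each step adding
    the candidate that most improves the layout's score.

    candidates: list of hashable sensor-position identifiers
    score: callable mapping a list of positions to a number
    """
    layout = []
    remaining = list(candidates)
    while remaining and len(layout) < budget:
        # Candidate whose addition yields the best-scoring layout.
        best = max(remaining, key=lambda c: score(layout + [c]))
        if score(layout + [best]) <= score(layout):
            break  # no remaining candidate improves the layout
        layout.append(best)
        remaining.remove(best)
    return layout
```

A designer-facing tool could expose `budget` and the scoring criterion as the high-level requirements, re-running the search interactively as they change.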
Hatter: Empowering Buskers through a Social App
The gradual decline of cash and the proliferation of digital payments have created a radical shift, promising new levels of convenience for consumers today. This may adversely impact the earning potential of artists and performers in the busking community. Since the busking community primarily relies on hard cash and spare change from their patrons, I will argue that predictions of a cashless society pose great challenges for buskers. This thesis investigates how mobile technology might address this phenomenon and augment methods of making monetary and non-monetary contributions to buskers. Through ethnographic research, a literature review, and usability testing, the gathered insights and results projected a foreseeable need for buskers and patrons to realize such exchanges via a mobile application called "Hatter." Hatter enables patrons to continue to contribute to buskers, who in turn are empowered to receive social and financial capital, even in a cashless society.
The Stretchy Strap: supporting encumbered interaction with guitars
Guitarists struggle to play their instruments while simultaneously using additional computing devices (i.e., encumbered interaction). We explored designs with guitarists through co-design and somaesthetic design workshops, learning that they (unsurprisingly) preferred to focus on playing their guitars and keeping their instruments' material integrity intact. Subsequently, we devised an interactive guitar strap controller, which guitarists found promising for tackling encumbered interaction during instrumental transcription, learning, and practice. Our design process highlights three strategies: considering postural interaction, applying somaesthetic design to interactive music technology development, and augmenting guitar accessories.