16 research outputs found

    DESIGNING AND IMPLEMENTING ACCESSIBLE WEARABLE INTERACTIONS FOR PEOPLE WITH MOTOR IMPAIRMENTS

    Get PDF
    Emerging wearable technologies like fitness bands, smartwatches, and head-mounted displays (HMDs) are entering the mainstream market. Unlike smartphones and tablets, these wearables, worn on the body or clothing, are always available and have the potential to provide quick access to information [7]. For instance, HMDs can provide relatively hands-free interaction compared to smartphones, and smartwatches and activity trackers can continuously collect health and fitness-related information about their wearer. However, there are over 20 million people in the U.S. with upper-body motor impairments [133] who may be unable to benefit from these wearables. For example, the small interaction spaces of smartwatches may present accessibility challenges. Yet, few studies have explored the potential impacts of these wearables, evaluated their accessibility, or investigated ways to design accessible wearable interactions for people with motor impairments. To inform the design of future wearable technologies, my dissertation investigates three threads of research: (1) assessing the accessibility of wearable technologies like HMDs, smartwatches, and fitness trackers; (2) understanding the potential impacts of sharing automatically tracked fitness-related information for people with mobility impairments; and (3) implementing and evaluating accessible interactions for HMDs and smartwatches. As part of my first research thread, I conducted two formative studies investigating the accessibility of HMDs and fitness trackers and found that people with motor impairments experienced accessibility challenges such as problematic form factors, irrelevant data tracking, and difficulty with existing input techniques. For my second research thread, I investigated the potential impacts of sharing automatically tracked data from fitness trackers with peers with similar impairments and with therapists, and presented design opportunities for tools that support sharing. Towards my third research thread, I addressed the HMD accessibility issues identified earlier by building custom wearable touchpads to control a commercial HMD. Next, I explored the touchscreen and non-touchscreen areas (bezel, wristband, and the user's body) of smartwatches for accessible interaction. Lastly, I built bezel input and compared it with touchscreen input for accessible smartwatch interaction (see the sketch below). The techniques implemented and evaluated in this dissertation will enable more equitable and independent use of wearable technologies for people with motor impairments.
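    As a concrete illustration of the bezel-input idea in the third research thread, the following is a minimal, hypothetical sketch (not the dissertation's actual implementation) of how raw touch coordinates on a square smartwatch might be partitioned into the central touchscreen and eight bezel segments; all dimensions, segment names, and thresholds are assumptions.

```python
# A minimal illustrative sketch: map raw touch coordinates on a square
# smartwatch into either the central touchscreen or one of eight bezel
# segments, which could then trigger commands. All values are hypothetical.

import math

SCREEN_SIZE = 320          # hypothetical display resolution in pixels
BEZEL_WIDTH = 40           # hypothetical bezel band width in pixels

def classify_touch(x: float, y: float) -> str:
    """Return 'screen' or a bezel segment name for a touch at (x, y)."""
    inner = BEZEL_WIDTH
    outer = SCREEN_SIZE - BEZEL_WIDTH
    if inner <= x <= outer and inner <= y <= outer:
        return "screen"
    # The angle from the watch centre decides which 45-degree bezel slice was hit
    # (screen coordinates: y grows downward, so 90 degrees points south).
    cx = cy = SCREEN_SIZE / 2
    angle = math.degrees(math.atan2(y - cy, x - cx)) % 360
    segment = int(((angle + 22.5) % 360) // 45)
    names = ["E", "SE", "S", "SW", "W", "NW", "N", "NE"]
    return f"bezel-{names[segment]}"

print(classify_touch(160, 160))  # -> 'screen'
print(classify_touch(315, 160))  # -> 'bezel-E'
```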

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Get PDF
    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. Emerging Head-Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because their specialized form factors, shaped to fit body parts, inevitably limit the space available for input and output. To complement the constrained interaction experience, many wearable devices still rely on other large-form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input within the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large-form-factor devices (e.g., touchscreens or physical keyboards) thanks to their fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or a hand-tracking system) that detect finger motions, which enables novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint-angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfortable range of static and dynamic angled touch input on smartwatch touchscreens. The results showed specific comfort ranges across fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen with a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed from contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of smartwatch touchscreens. The results led to finger identification systems that distinguish two or three fingers (a toy sketch of this idea follows below). Two virtual keyboards with 12 and 16 keys demonstrated the feasibility of touch-based finger identification, which increases the expressiveness of touch input techniques. In addition, this thesis supports the general claim across a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated in-air finger stroking during unconstrained in-air typing for HMDs. An observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examined viable locations for in-air thumb touch input on virtual targets above the palm. It confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations using the built-in hand-tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with its contributions, design considerations, and the scope of future research, for future researchers and developers implementing robust finger-based interaction systems on various types of wearable devices.
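    The finger-identification idea above can be illustrated with a toy classifier. The following sketch is a hypothetical stand-in for the thesis's systems: it distinguishes fingers from per-touch features such as contact area and contact-ellipse aspect ratio, which many capacitive touchscreens report. All feature values, labels, and the nearest-centroid approach are assumptions for illustration.

```python
# Hypothetical touch-based finger identification: classify a touch as 'thumb'
# or 'index' from (contact_area_mm2, ellipse_aspect_ratio). Toy data only.

import numpy as np

# Toy training data: rows of (contact_area_mm2, ellipse_aspect_ratio).
train_X = np.array([[55.0, 1.6], [60.0, 1.7], [58.0, 1.5],   # thumb touches
                    [30.0, 1.1], [28.0, 1.2], [32.0, 1.0]])  # index touches
train_y = np.array(["thumb", "thumb", "thumb", "index", "index", "index"])

def identify_finger(features: np.ndarray) -> str:
    """Nearest-centroid classification of a single touch event."""
    labels = np.unique(train_y)
    centroids = np.array([train_X[train_y == l].mean(axis=0) for l in labels])
    distances = np.linalg.norm(centroids - features, axis=1)
    return str(labels[np.argmin(distances)])

# A large, elongated contact is classified as the thumb.
print(identify_finger(np.array([52.0, 1.5])))  # -> 'thumb'
```

    A real system would use richer features and a trained classifier, but the principle is the same: per-finger differences in touch profiles carry enough signal to route the same touch location to different commands.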

    Extending mobile touchscreen interaction

    Get PDF
    Touchscreens have become the de facto interface for mobile devices and are penetrating beyond their core application domain of smartphones. This work presents a design space for extending touchscreen interaction, to which new solutions may be mapped. Specific touchscreen enhancements in the domains of manual input, visual output, and haptic feedback are explored, and quantitative and experiential findings are reported. Particular areas covered are unintentional interaction, screen locking, stereoscopic displays, and picoprojection. In addition, the novel interaction approaches of finger identification and onscreen physical guides are explored. The use of touchscreens in car dashboards and smart handbags is evaluated in domain-specific use cases. This work draws together solutions from the broad area of mobile touchscreen interaction. Fruitful directions for future research are identified, and information is provided for future researchers addressing those topics.

    HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments

    Get PDF
    Many activities of daily living, such as getting dressed, preparing food, wayfinding, or shopping, rely heavily on visual information, and the inability to access that information can negatively impact the quality of life of people with vision impairments. While numerous researchers have explored solutions for assisting with visual tasks that can be performed at a distance, such as identifying landmarks for navigation or recognizing people and objects, few have attempted to provide access to nearby visual information through touch. Touch is a highly attuned means of acquiring tactile and spatial information, especially for people with vision impairments. By supporting touch-based access to information, we may help users better understand how a surface appears (e.g., document layout, clothing patterns), thereby improving their quality of life. To address this gap in research, this dissertation explores methods to augment a visually impaired user's sense of touch with interactive, real-time computer vision to access information about the physical world. These explorations span three application areas: reading and exploring printed documents, controlling mobile devices, and identifying colors and visual textures. At the core of each application is a system called HandSight that uses wearable cameras and other sensors to detect touch events and identify surface content beneath the user's finger (a simplified sketch of this loop follows below). To create HandSight, we designed and implemented the physical hardware, developed signal processing and computer vision algorithms, and designed real-time feedback that enables users to interpret visual or digital content. We involved visually impaired users throughout the design and development process, conducting several user studies to assess usability and robustness and to improve our prototype designs. The contributions of this dissertation include: (i) developing and iteratively refining HandSight, a novel wearable system that assists visually impaired users in their daily lives; (ii) evaluating HandSight across a diverse set of tasks, and identifying the tradeoffs of a finger-worn approach in terms of physical design, algorithmic complexity and robustness, and usability; and (iii) identifying broader design implications for future wearable systems and for the fields of accessibility, computer vision, augmented and virtual reality, and human-computer interaction.
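    To make the sense-then-feedback loop concrete, here is a highly simplified, hypothetical sketch of a HandSight-style pipeline: grab frames from a finger-worn camera, detect a touch-down event, run OCR on the image under the fingertip, and speak the result. The actual system uses custom hardware, signal processing, and computer vision algorithms; the off-the-shelf libraries, the toy touch heuristic, and the helper names below are assumptions for illustration only.

```python
# Hypothetical HandSight-style loop (illustration, not the real system).

import cv2          # OpenCV for camera capture and image handling
import pytesseract  # off-the-shelf OCR standing in for the real pipeline

def touched(frame) -> bool:
    """Placeholder touch detector; the real system fuses camera and sensor data."""
    return frame.mean() > 100  # toy brightness heuristic, not the actual method

def speak(text: str) -> None:
    print(f"[audio feedback] {text}")  # stand-in for a text-to-speech engine

camera = cv2.VideoCapture(0)  # finger-worn camera, assumed at device index 0
while True:
    ok, frame = camera.read()
    if not ok:
        break
    if touched(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        words = pytesseract.image_to_string(gray).strip()
        if words:
            speak(words)  # read aloud the text beneath the finger
```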

    Integrating passive ubiquitous surfaces into human-computer interaction

    Get PDF
    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface (a toy sketch of this fusion follows below). In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether, and which, interaction to perform.
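    The acoustic-plus-tactile fusion can be sketched as simple feature extraction and concatenation. The window lengths, band counts, and statistics below are invented for illustration; the dissertation's actual features and classifiers may differ.

```python
# Hypothetical feature fusion for surface interaction: combine acoustic
# features (microphone) and tactile features (accelerometer) from a tap on an
# everyday surface, producing one vector for a downstream classifier.

import numpy as np

def acoustic_features(audio: np.ndarray) -> np.ndarray:
    """Coarse spectral summary of a short audio window around the tap."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, 8)          # 8 coarse frequency bands
    return np.array([band.mean() for band in bands])

def tactile_features(accel: np.ndarray) -> np.ndarray:
    """Simple time-domain statistics of the acceleration magnitude."""
    return np.array([accel.max(), accel.std(), np.abs(np.diff(accel)).mean()])

def fused_features(audio: np.ndarray, accel: np.ndarray) -> np.ndarray:
    # Concatenating both modalities lets one classifier use evidence from each,
    # e.g., the sound of wood vs. glass plus a hard vs. soft impact profile.
    return np.concatenate([acoustic_features(audio), tactile_features(accel)])

# In a full system, vectors like this would train a classifier (kNN, SVM, ...)
# to label the event, the surface, or even the user.
rng = np.random.default_rng(0)
print(fused_features(rng.normal(size=1024), rng.normal(size=128)).shape)  # (11,)
```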

    From wearable towards epidermal computing: soft wearable devices for rich interaction on the skin

    Get PDF
    Human skin provides a large, always-available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than a human hair; they deform elastically when the body moves and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed to advance this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated with these techniques can measure ECG (electrocardiogram), EMG (electromyogram), and EDA (electrodermal activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates optimized device designs providing an optimal trade-off between physiological signal acquisition capability and device size (a toy sketch of this trade-off follows below). The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the optimizer's predictions and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges.
Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.
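    The signal-quality-versus-size trade-off can be illustrated with a toy objective. The quality model, weight, and search range below are invented; the thesis's method optimizes over anatomical models with a graphical designer-in-the-loop tool rather than a one-dimensional grid search.

```python
# Toy sketch of the design-optimization idea: search candidate device sizes
# for the best trade-off between predicted signal quality and footprint.

import numpy as np

def predicted_signal_quality(size_cm2: float) -> float:
    """Hypothetical model: quality saturates as electrode area grows."""
    return 1.0 - np.exp(-size_cm2 / 10.0)

def objective(size_cm2: float, size_weight: float = 0.03) -> float:
    # Larger devices capture better signals but are less comfortable to wear;
    # the weight encodes the designer's preference between the two goals.
    return predicted_signal_quality(size_cm2) - size_weight * size_cm2

candidates = np.linspace(1.0, 30.0, 300)  # candidate areas in cm^2
best = candidates[np.argmax([objective(s) for s in candidates])]
print(f"best trade-off at ~{best:.1f} cm^2")  # ~12 cm^2 under this toy model
```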

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    Get PDF
    How can we determine highly effective and intuitive gesture sets for interactive systems, tailored to end users' preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, the functions to control in an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis (see the sketch of a common agreement measure below). However, information about the specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it challenging for researchers and practitioners to harness this body of knowledge effectively. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users' gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture-sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
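    The agreement analysis mentioned above is commonly quantified with the agreement rate AR(r) of Vatavu and Wobbrock (CHI 2015): for a referent r with proposal set P partitioned into groups Pi of identical gestures, AR(r) = Σ |Pi|(|Pi|−1) / (|P|(|P|−1)). A minimal sketch, assuming proposals have already been grouped into equivalence classes of identical gestures:

```python
# Agreement-rate computation as used in gesture elicitation studies
# (AR measure of Vatavu & Wobbrock, CHI 2015).

from collections import Counter

def agreement_rate(proposals: list[str]) -> float:
    """AR(r) = sum over groups Pi of |Pi|*(|Pi|-1) / (|P|*(|P|-1))."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# 20 participants propose gestures for the hypothetical referent 'volume up':
proposals = ["swipe-up"] * 12 + ["rotate-cw"] * 5 + ["tap"] * 3
print(f"AR = {agreement_rate(proposals):.3f}")  # (132 + 20 + 6) / 380 ≈ 0.416
```

    An AR of 0.416 would indicate high agreement; values above roughly 0.3 are conventionally read as high consensus on a gesture for that referent.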