26 research outputs found

    Blending the Material and Digital World for Hybrid Interfaces

    The development of digital technologies in the 21st century is progressing continuously, and new device classes such as tablets, smartphones, and smartwatches are finding their way into our everyday lives. However, this development also poses problems, as the prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities, and therefore demand their users' full attention. Compared to traditional tools and analog interfaces, the human ability to experience and manipulate material in its natural environment and context remains unexploited. To combine the best of both worlds, a key question is how the material and digital worlds can be blended in a meaningful way to design and realize novel hybrid interfaces. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. This doctoral thesis therefore rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. The development of such hybrid interfaces raises overarching design questions: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods to explore different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory, and iterative development process using digital fabrication methods and novel materials. As its main contribution, it presents four research projects that apply and discuss different visual and interactive augmentation principles in real-world applications. The applications range from digitally enhanced paper and interactive cords to visual watch-strap extensions and novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, the thesis's extensive engineering of versatile research platforms is accompanied by overarching conceptual work, user evaluations, technical experiments, and literature reviews.

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by taking on roles in both smartphone use and health management. Emerging Head-Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face interaction challenges: their specialized form factors, shaped to fit body parts, inevitably limit the space available for input and output. To complement the constrained interaction experience, many wearable devices still rely on other large-form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger input techniques are prevalent on many large-form-factor devices (e.g., touchscreens or physical keyboards) because of their fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand-tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and sensitive perception of joint angles, can widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. The thesis demonstrates this claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfort range of static and dynamic angle-based touch input on the touchscreen of smartwatches. The results showed specific comfort ranges across fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger-region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
    In the second scenario, the thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems that distinguish two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which increases the expressiveness of touch input techniques. In addition, the thesis supports the general claim in a further range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to practical guidelines for developing robust in-air typing systems based on finger stroking. Lastly, the thesis examined the viable locations of in-air thumb touch input on virtual targets above the palm. It confirmed that fast and accurate sequential thumb touches can be achieved at a total of eight key locations using the built-in hand-tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. The thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim that understanding how users move their fingers during input enables more expressive interaction techniques for resource-limited wearable devices. It concludes with contributions, design considerations, and the scope of future research, helping future researchers and developers implement robust finger-based interaction systems on various types of wearable devices.
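    The finger-region and finger-identification systems above rest on the observation that different fingers and finger regions leave measurably different contact footprints on a small touchscreen. The following is a minimal, hypothetical sketch of that idea, not the thesis's actual implementation: it assumes the touch stack reports a contact ellipse (major and minor axis), uses an aspect-ratio threshold to separate the flat from the side of the finger, and identifies fingers with a nearest-centroid rule over per-user calibration touches. All feature names, thresholds, and class names are illustrative assumptions.

```python
# Hypothetical sketch: classifying finger region (flat vs. side) and finger
# identity from touchscreen contact features. Thresholds, feature names, and
# the nearest-centroid rule are illustrative assumptions, not the systems
# evaluated in the thesis.
from __future__ import annotations
from dataclasses import dataclass
from math import dist

@dataclass
class Touch:
    x: float            # contact centroid on the screen (mm)
    y: float
    major_axis: float   # contact ellipse axes (mm), as many touch stacks report
    minor_axis: float

def finger_region(t: Touch, aspect_threshold: float = 1.6) -> str:
    """The side of a finger tends to leave an elongated contact ellipse,
    while the flat (pad) leaves a rounder one."""
    aspect = t.major_axis / max(t.minor_axis, 1e-6)
    return "side" if aspect >= aspect_threshold else "flat"

class NearestCentroidFingerId:
    """Identify which finger is touching, given per-finger calibration touches."""

    def __init__(self) -> None:
        self.centroids: dict[str, tuple[float, float]] = {}

    def calibrate(self, finger: str, touches: list[Touch]) -> None:
        sizes = [(t.major_axis, t.minor_axis) for t in touches]
        self.centroids[finger] = (
            sum(a for a, _ in sizes) / len(sizes),
            sum(b for _, b in sizes) / len(sizes),
        )

    def identify(self, t: Touch) -> str:
        feature = (t.major_axis, t.minor_axis)
        return min(self.centroids, key=lambda f: dist(self.centroids[f], feature))

# Example: calibrate with a few touches per finger, then identify a new touch.
fid = NearestCentroidFingerId()
fid.calibrate("index", [Touch(10, 10, 7.0, 6.5), Touch(12, 11, 7.2, 6.8)])
fid.calibrate("thumb", [Touch(10, 10, 10.5, 9.0), Touch(9, 12, 11.0, 9.5)])
print(finger_region(Touch(10, 10, 9.0, 4.5)))   # -> "side"
print(fid.identify(Touch(11, 10, 10.2, 9.1)))   # -> "thumb"
```

    In practice a richer classifier and a per-user calibration phase would likely be needed, but the sketch shows how little sensing beyond a standard capacitive touchscreen such an approach requires.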

    LOW-RESOLUTION CUSTOMIZABLE UBIQUITOUS DISPLAYS

    In a conventional display, pixels are constrained within the rectangular or circular boundaries of the device. This thesis explores moving pixels from a screen into the surrounding environment to form ubiquitous displays. The surrounding environment can include a human, walls, ceilings, and floors. To achieve this goal, we explore the idea of customizable displays: displays that can be customized in terms of shape, size, resolution, and location to fit into the existing infrastructure. These displays require pixels that can easily combine to create different display layouts and provide installation flexibility. To build highly customizable displays, we need to design pixels with a higher level of independence in their operation. This thesis presents display designs that use pixels whose operational independence ranges from low to high. Firstly, we explore integrating pixels into clothing using battery-powered, tethered LEDs that shine information through pockets. Secondly, to enable integrating pixels into architectural surroundings, we explore battery-powered, untethered pixels that allow building displays of different shapes and sizes on a desired surface; such a display can show images and animations on the custom configuration. Thirdly, we explore the design of a solar-powered independent pixel that can be integrated into walls or construction materials to form a display, eliminating the need to recharge the pixels explicitly. Lastly, we explore the design of a mechanical pixel element that can be embedded into construction material to form display panels; the information on these displays is updated manually when a user brushes over the pixels. Our work takes a step toward designing pixels with higher operational independence, envisioning a future of displays anywhere and everywhere.
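    A central idea in these customizable displays is that each physical pixel knows only where it sits on the installed surface, and a frame is produced by sampling a source image at that position. The snippet below is a minimal sketch of this mapping under assumed names (Pixel, render_frame) and an assumed nearest-neighbour sampling strategy; it is not the rendering pipeline used in the thesis.

```python
# Hypothetical sketch: driving a freely arranged set of pixels as one
# low-resolution display. Each pixel stores only its physical position; a
# frame is built by nearest-neighbour sampling of a source image at the
# pixel's normalized position. Class and function names are assumptions.
from dataclasses import dataclass

@dataclass
class Pixel:
    pid: int     # address of the pixel node (e.g., on a wireless bus)
    x: float     # physical position on the surface, in metres
    y: float

def render_frame(pixels, image, surface_w, surface_h):
    """Map an image (a 2-D list of (r, g, b) rows) onto the custom layout."""
    rows, cols = len(image), len(image[0])
    frame = {}
    for p in pixels:
        u = min(int(p.x / surface_w * cols), cols - 1)   # column in the image
        v = min(int(p.y / surface_h * rows), rows - 1)   # row in the image
        frame[p.pid] = image[v][u]
    return frame  # {pixel id: (r, g, b)}, ready to be sent to each node

# Example: three pixels arranged in an L-shape on a 1 m x 1 m wall section,
# showing a 2x2 red/blue test pattern.
layout = [Pixel(0, 0.1, 0.1), Pixel(1, 0.1, 0.9), Pixel(2, 0.9, 0.9)]
red, blue = (255, 0, 0), (0, 0, 255)
pattern = [[red, blue], [blue, red]]
print(render_frame(layout, pattern, 1.0, 1.0))   # {0: red, 1: blue, 2: red}
```

    An animation would then simply be a sequence of such frames, and the same layout description could be reused regardless of the shape the installed pixels form.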

    Interacting with Smart Environments: Users, Interfaces, and Devices

    A Smart Environment is an environment enriched with disappearing devices that act together to form an “intelligent entity”. In such environments, computing power pervades the space where the user lives, so it becomes particularly important to investigate the user's perspective in interacting with their surroundings. Interaction, in fact, occurs when a human performs some kind of activity using any computing technology; here, the computing technology has an intelligence of its own and can potentially be everywhere. There is no well-defined interaction situation or context, and interaction can happen casually or accidentally. The objective of this dissertation is to improve the interaction between two such complex and different entities: the human and the Smart Environment. To reach this goal, the thesis presents four different and innovative approaches that address some of the identified key challenges. These approaches are then validated with four corresponding software solutions, integrated into a Smart Environment, that I developed and tested with end users. Taken together, the proposed solutions enable better interaction between diverse users and their intelligent environments, provide a solid set of requirements, and can serve as a baseline for further investigation of this emerging topic.

    Extending mobile touchscreen interaction

    Touchscreens have become the de facto interface for mobile devices and are spreading beyond their core application domain of smartphones. This work presents a design space for extending touchscreen interaction, to which new solutions can be mapped. Specific touchscreen enhancements in the domains of manual input, visual output, and haptic feedback are explored, and quantitative and experiential findings are reported. Particular areas covered are unintentional interaction, screen locking, stereoscopic displays, and pico-projection. In addition, the novel interaction approaches of finger identification and on-screen physical guides are explored. The use of touchscreens in car dashboards and smart handbags is evaluated through domain-specific use cases. This work draws together solutions from the broad area of mobile touchscreen interaction, identifies fruitful directions for future research, and provides information for future researchers addressing those topics.
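    One of the covered areas, unintentional interaction, comes down to deciding whether a given contact was a deliberate fingertip or an accidental palm, grip, or pocket touch. The sketch below illustrates one plausible heuristic filter based on contact size, duration, and bezel proximity; all thresholds, field names, and the screen width are assumptions for illustration, not the techniques evaluated in this work.

```python
# Hypothetical sketch: filtering unintentional touches on a mobile touchscreen
# by rejecting contacts that look like a palm or a grip rather than a
# deliberate fingertip. Thresholds and field names are illustrative
# assumptions only.
from dataclasses import dataclass

@dataclass
class Contact:
    x_mm: float          # horizontal position on the screen
    y_mm: float          # vertical position on the screen
    size_mm: float       # approximate contact diameter
    duration_ms: float   # how long the contact lasted

def is_intentional(c: Contact, screen_w_mm: float = 65.0,
                   edge_mm: float = 4.0) -> bool:
    if c.size_mm > 12.0:                # very large contact: likely a palm
        return False
    if c.duration_ms < 30.0:            # extremely brief: likely noise or a brush
        return False
    near_edge = c.x_mm < edge_mm or c.x_mm > screen_w_mm - edge_mm
    if near_edge and c.size_mm > 8.0:   # wide contact hugging the bezel: a grip
        return False
    return True

# Example: a centred tap passes, a wide edge contact and a palm are rejected.
touches = [Contact(30, 60, 6, 90), Contact(2, 80, 10, 400), Contact(33, 10, 15, 250)]
print([is_intentional(t) for t in touches])   # -> [True, False, False]
```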

    Computational Intelligence and Human- Computer Interaction: Modern Methods and Applications

    The present book contains all of the articles that were accepted and published in the Special Issue of MDPI's journal Mathematics titled "Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications". This Special Issue covered a wide range of topics connected to the theory and application of different computational intelligence techniques in the domain of human–computer interaction, such as automatic speech recognition, speech processing and analysis, virtual reality, emotion-aware applications, digital storytelling, natural language processing, smart cars and devices, and online learning. We hope that this book will be interesting and useful for those working in various areas of artificial intelligence, human–computer interaction, and software engineering, as well as for those who are interested in how these domains are connected in real-life situations.