410 research outputs found

    Exploring Hand-Based Haptic Interfaces for Mobile Interaction Design

    Visual attention is crucial in mobile environments, not only for staying aware of dynamic situations, but also for safety reasons. However, current mobile interaction design forces the user to focus on the visual interface of the handheld device, limiting the user's ability to process visual information from the environment. A common response to this issue is to encode information with on-device vibrotactile feedback, but vibration is transitory and often difficult to perceive while mobile. Another approach is to make visual interfaces even more dominant with smart glasses, which enable head-up interaction on a see-through display; yet their input methods raise many concerns regarding social acceptability, preventing wide adoption. There is a need for feasible mobile interaction techniques that maintain the user's situational awareness, and this thesis argues that solutions can be derived through the exploration of hand-based haptic interfaces. The objective of this research is to provide multimodal interaction that lets users work with information while maintaining proper attention to the environment in mobile scenarios. Three research areas were identified. The first is developing expressive haptic stimuli, investigating how static haptic stimuli can be designed. The second is designing mobile spatial interaction with the user's surroundings as content, representing situations in which visual attention to the environment is most needed. The third is interacting with the always-on visual interface of smart glasses, the seemingly ideal solution for mobile applications. The three areas span an axis of increasing visual demand on the interface, from non-visual to always-on visual interfaces. Interactive prototypes were constructed and deployed in studies for each research area, including two shape-changing mechanisms feasible for augmenting mobile devices and a spatial-sensing haptic glove featuring mid-air hand-gestural interaction with haptic support. The findings across the three research areas highlight the immediate benefits of incorporating hand-based haptic interfaces into applications. First, shape-changing interfaces can provide static and continuous haptic stimuli for mobile communication. Second, enabling direct interaction with real-world landmarks through a haptic glove, while leaving visual attention on the surroundings, can result in a more immersive experience. Last, users of smart glasses can benefit from the unobtrusive hand-gestural interaction enabled by the glove's isolated tracking technique. Overall, this work calls for mobile interaction design to consider haptic stimuli beyond on-device vibration and mobile hardware beyond the handheld form factor. It also invites designers to consider, from an interaction design perspective, how to manage the competition for cognitive resources among concurrent tasks.
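    The first finding above hinges on shape change being static and continuous where vibration is transitory. As a minimal illustration of that idea (not the thesis's actual mechanism), a notification's priority could be held as a sustained extension that the user can re-read by touch at any time; set_extension_mm below is a hypothetical actuator stub:

```python
# Illustrative sketch only: maps notification priority to a sustained
# shape-change level, in contrast to transient vibration. The actuator
# interface (set_extension_mm) is a hypothetical placeholder, not an
# API from the thesis prototypes.

PRIORITY_LEVELS = {"low": 1, "medium": 2, "high": 3}
MAX_EXTENSION_MM = 6.0  # assumed travel of a small shape-changing element

def set_extension_mm(mm: float) -> None:
    """Placeholder for driving the actuator; here we just log."""
    print(f"actuator held at {mm:.1f} mm")

def notify(priority: str) -> None:
    # A static stimulus: the element extends and *stays* extended,
    # so the user can re-read the state by touch at any time.
    level = PRIORITY_LEVELS[priority]
    set_extension_mm(MAX_EXTENSION_MM * level / len(PRIORITY_LEVELS))

notify("medium")  # holds at 4.0 mm until cleared
```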

    Instrumentation, Data, And Algorithms For Visually Understanding Haptic Surface Properties

    Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and the development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute of tapping and dragging per end-effector at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. We then trained an algorithm to perform the same task, as well as to infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between predicted and actual regression outputs of approximately ρ = 0.3 to ρ = 0.5 on previously unseen surfaces.
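    The hybrid vision-to-haptics estimator can be pictured as a deep network supplying visual features to a support vector machine regressor, scored by correlation on held-out surfaces. The sketch below follows that shape under stated assumptions: random arrays stand in for the deep features and haptic ratings, and the thesis's real architecture, targets, and hyperparameters are not reproduced:

```python
# Minimal sketch of the hybrid idea: a pretrained network supplies visual
# features, and an SVM regressor maps them to haptic property scores.
# Feature extraction is replaced with random stand-in data here; the real
# pipeline, features, and hyperparameters in the thesis differ.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(357, 128))                       # stand-in deep visual features
y = X[:, 0] * 0.5 + rng.normal(scale=1.0, size=357)   # stand-in "roughness" ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)

rho, _ = spearmanr(y_te, model.predict(X_te))
print(f"rank correlation on held-out surfaces: rho = {rho:.2f}")
```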

    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.
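    As a rough sketch of how haptic pedestrian guidance of this kind can work (the thesis prototypes use their own feedback designs; vibrate_left and vibrate_right here are hypothetical actuator names), one can compute the bearing to the goal and cue a direction only when the user's heading error leaves a dead zone, preserving autonomy while reassuring the user about the goal:

```python
# Illustrative only: one simple way to encode "which way to the goal" as
# eyes-off haptic feedback. Not the thesis's actual encoding.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def haptic_cue(heading_deg, goal_bearing_deg, dead_zone=15):
    # Signed heading error in (-180, 180]; silence inside the dead zone
    # leaves the user free to explore while still anchoring the goal.
    err = (goal_bearing_deg - heading_deg + 180) % 360 - 180
    if abs(err) <= dead_zone:
        return "none"
    return "vibrate_right" if err > 0 else "vibrate_left"

goal = bearing_deg(60.170, 24.940, 60.172, 24.946)   # hypothetical waypoints
print(haptic_cue(heading_deg=350.0, goal_bearing_deg=goal))
```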

    Symbiotic human-robot collaborative assembly


    SLAM for Visually Impaired People: A Survey

    In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.
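    For readers new to the area, the core loop that makes SLAM useful for navigation aids is the alternation of prediction from odometry with correction from observed landmarks. The toy below shows that predict/correct intuition in one dimension with a scalar Kalman filter; it is illustrative only and not any system surveyed in the paper:

```python
# Toy illustration of the predict/correct cycle at the heart of SLAM-style
# localization: odometry drifts, landmark observations pull the estimate
# back. Real SLAM jointly estimates the map too; this is the 1-D intuition.
import numpy as np

x_est, P = 0.0, 0.01       # position estimate and its variance
Q, R = 0.05, 0.02          # odometry noise, observation noise (assumed)
landmark_at = 5.0          # known landmark position for this toy example

rng = np.random.default_rng(1)
true_x = 0.0
for step in range(10):
    u = 0.5                                   # commanded forward motion
    true_x += u + rng.normal(scale=np.sqrt(Q))
    x_est, P = x_est + u, P + Q               # predict from odometry

    z = (landmark_at - true_x) + rng.normal(scale=np.sqrt(R))
    K = P / (P + R)                           # Kalman gain
    x_est += K * ((landmark_at - z) - x_est)  # correct with observation
    P *= (1 - K)

print(f"true {true_x:.2f} vs estimated {x_est:.2f}")
```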

    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational games designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching that this audience may not yet be familiar with.
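    One concrete behavioural technique of the kind the paper draws on is the variable-ratio reinforcement schedule, in which rewards arrive after an unpredictable number of correct responses; it produces a high, steady response rate and is already common in game reward design. A small sketch of the mechanism (not the paper's specific guidelines):

```python
# Sketch of a variable-ratio reinforcement schedule, an operant-conditioning
# technique from the behavioural literature that games commonly exploit.
# This illustrates the general mechanism, not the paper's specific guidelines.
import random

class VariableRatioReward:
    """Reward after a randomized number of correct responses (mean = ratio)."""

    def __init__(self, ratio=4, seed=None):
        self.ratio = ratio
        self.rng = random.Random(seed)
        self._next_threshold()

    def _next_threshold(self):
        # Uniform on [1, 2*ratio - 1], so the expected count equals `ratio`.
        self.remaining = self.rng.randint(1, 2 * self.ratio - 1)

    def record_correct_response(self) -> bool:
        self.remaining -= 1
        if self.remaining <= 0:
            self._next_threshold()
            return True   # deliver reward (points, animation, item, ...)
        return False

schedule = VariableRatioReward(ratio=4, seed=42)
rewards = [schedule.record_correct_response() for _ in range(20)]
print(f"{sum(rewards)} rewards over 20 correct answers")
```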

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent devices able to present synthesized stimuli to any of the human senses, substantially blurring the boundary between the real and virtual worlds. To build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that presenting in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually invisible to the human eye: by amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality so that a user can input text on a physical keyboard while immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input, and our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality, and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcase its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.
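    A taxonomy grounded in the human sensory system and articulation capabilities suggests positioning each experience by the senses it stimulates and the input channels it reads. The sketch below is a hypothetical encoding of that idea; the thesis's actual axes and terminology are not reproduced here:

```python
# Hypothetical encoding of the idea behind the taxonomy: classify a mixed
# reality experience by which human senses it stimulates and which
# articulation channels it reads. The thesis's actual axes and terms differ.
from dataclasses import dataclass, field
from enum import Enum, auto

class Sense(Enum):
    SIGHT = auto()
    HEARING = auto()
    TOUCH = auto()
    SMELL = auto()
    TASTE = auto()

class Articulation(Enum):
    HANDS = auto()
    VOICE = auto()
    GAZE = auto()
    BODY = auto()

@dataclass
class MixedRealityExperience:
    name: str
    output_senses: set[Sense] = field(default_factory=set)
    input_channels: set[Articulation] = field(default_factory=set)

# One of the research probes, roughly placed in this (assumed) space:
drone_haptics = MixedRealityExperience(
    "quadcopter tactile feedback in VR",
    output_senses={Sense.SIGHT, Sense.TOUCH},
    input_channels={Articulation.HANDS, Articulation.BODY},
)
print(drone_haptics)
```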

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them either to seek the assistance of a sighted individual or to work around the challenge by way of an inefficient adaptation (e.g. following the walls in a room in order to reach a door rather than walking in a straight line to the door). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human vision system. In order to overcome these spatial perception related challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects which address varying spatial perception problems for blind users. First we consider the development of non-visual natural user interfaces for interacting with large displays. This work explores the haptic interaction space in order to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next we evaluate the use of wearable technology in aiding navigation of blind individuals through large open spaces lacking the tactile landmarks used during traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface to minimize veering of the user while crossing a large open space. Together, these projects represent an exploration into the use of modern technology in augmenting the spatial perception capabilities of blind users.
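    As a minimal illustration of an aural anti-veering interface of the kind described (the project's actual sonification design differs), lateral deviation from the intended straight path can be mapped to stereo panning, cueing the ear opposite the veer to steer the user back:

```python
# Illustrative sketch: map deviation from the intended straight-line path
# to a stereo audio cue, one plausible way to realize an unobtrusive aural
# "keep straight" interface. Not the project's actual sonification.

def pan_for_deviation(deviation_deg: float, max_deg: float = 30.0) -> float:
    """Return stereo pan in [-1, 1]: negative = cue in the left ear."""
    # Steer the user back: if they veer right (positive), cue the left ear.
    clamped = max(-max_deg, min(max_deg, deviation_deg))
    return -clamped / max_deg

for dev in (-20.0, 0.0, 12.0):
    pan = pan_for_deviation(dev)
    side = "left" if pan < 0 else "right" if pan > 0 else "center"
    print(f"veering {dev:+.0f} deg -> pan {pan:+.2f} ({side})")
```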