102 research outputs found

    Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery

    Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, the modalities that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay, and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability, and/or limited expressiveness. This is largely due to the lack of a versatile, comprehensive design theory, specifically one that addresses the design of touch-based building blocks for expandable, efficient, rich, and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework inspired by natural spoken language, called Somatic ABC's, is proposed for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation, and evaluation theories were applied to create communication languages for two distinct application areas: audio-described movies and motor learning. These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development, and evaluation of rich somatic languages with distinct and natural communication units.
    Dissertation/Thesis, Ph.D. Computer Science, 201
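    As a rough, hypothetical illustration of the kind of building-block composition the framework describes (the class names, body sites, and timing values below are assumptions made for this listing, not taken from the thesis), a tactile "word" could be assembled from primitive stimulus units defined by location, duration, and intensity:

        # Hypothetical sketch of composing tactile "phonemes" into a "word",
        # loosely following the spoken-language analogy of Somatic ABC's.
        # Names, units, and parameter ranges are illustrative assumptions.
        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class TactilePhoneme:
            location: str      # body site of the actuator, e.g. "wrist_left"
            duration_ms: int   # stimulus duration in milliseconds
            intensity: float   # normalized drive level, 0.0 to 1.0

        def make_word(phonemes: List[TactilePhoneme], gap_ms: int = 100) -> List[Dict]:
            """Serialize a phoneme sequence into timed actuator commands."""
            schedule, t = [], 0
            for p in phonemes:
                schedule.append({"t_ms": t, "site": p.location,
                                 "dur_ms": p.duration_ms, "level": p.intensity})
                t += p.duration_ms + gap_ms   # inter-phoneme gap aids segmentation
            return schedule

        # Example "word": a short, strong pulse on the wrist, then a long, soft one on the forearm.
        print(make_word([TactilePhoneme("wrist_left", 80, 0.9),
                         TactilePhoneme("forearm_left", 300, 0.4)]))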

    An enactive approach to perceptual augmentation in mobility

    Event predictions are an important constituent of situation awareness, which is a key objective for many applications in human-machine interaction, particularly in driver assistance. This work focuses on facilitating event predictions in dynamic environments. Its primary contributions are 1) the theoretical development of an approach for enabling people to expand their sampling and understanding of spatiotemporal information, 2) the introduction of exemplary systems that are guided by this approach, 3) the empirical investigation of the effects that functional prototypes of these systems have on human behavior and safety in a range of simulated road traffic scenarios, and 4) a connection of the investigated approach to work on cooperative human-machine systems. More specific contents of this work are summarized as follows: The first part introduces several challenges for the formation of situation awareness as a requirement for safe traffic participation. It reviews existing work on these challenges in the domain of driver assistance, resulting in an identification of the need to better inform drivers about dynamically changing aspects of a scene, including event probabilities and spatial and temporal distances, as well as a suggestion to expand the scope of assistance systems to start informing drivers about relevant scene elements at an early stage. Novel forms of assistance can be guided by different fundamental approaches that target either replacement, distribution, or augmentation of driver competencies. A subsequent differentiation of these approaches concludes that an augmentation-guided paradigm, characterized by an integration of machine capabilities into human feedback loops, can be advantageous for tasks that rely on active user engagement, the preservation of awareness and competence, and the minimization of complexity in human-machine interaction. Consequently, findings and theories about human sensorimotor processes are connected to develop an enactive approach that is consistent with an augmentation perspective on human-machine interaction. The approach is characterized by enabling drivers to exercise new sensorimotor processes through which safety-relevant spatiotemporal information may be sampled. In the second part of this work, a concept and functional prototype for augmenting the perception of traffic dynamics is introduced as a first example of applying principles of this enactive approach. As a loose expression of functional biomimicry, the prototype utilizes a tactile interface that communicates temporal distances to potential hazards continuously through stimulus intensity. In a driving simulator study, participants quickly gained an intuitive understanding of the assistance without instructions and demonstrated higher driving safety in safety-critical highway scenarios. But this study also raised new questions, such as whether benefits are due to a continuous time-intensity encoding and whether utility generalizes to intersection scenarios or highway driving with low-criticality events. Effects of an expanded assistance prototype with lane-independent risk assessment and an option for binary signaling were thus investigated in a separate driving simulator study. Subjective responses confirmed quick signal understanding and a perception of spatial and temporal stimulus characteristics. Surprisingly, even for a binary assistance variant with a constant intensity level, participants reported perceiving a danger-dependent variation in stimulus intensity.
They further felt supported by the system in the driving task, especially in difficult situations. But in contrast to the first study, this support was not expressed by changes in driving safety, suggesting that the perceptual demands of the low-criticality scenarios could be satisfied by existing driver capabilities. But what happens if such basic capabilities are impaired, e.g., due to poor visibility conditions or other situations that introduce perceptual uncertainty? In a third driving simulator study, the driver assistance was employed specifically in such ambiguous situations and produced substantial safety advantages over unassisted driving. Additionally, an assistance variant that adds an encoding of spatial uncertainty was investigated in these scenarios. Participants had no difficulty understanding and utilizing this added signal dimension to improve safety. Although inherently less informative than spatially precise signals, uncertainty-encoding signals were rated as equally useful and satisfying. This appreciation for transparency of variable assistance reliability is a promising indicator for the feasibility of adaptive trust calibration in human-machine interaction and marks one step towards a closer integration of driver and vehicle capabilities. A complementary step on the driver side would be to increase transparency about the driver's mental states and thus allow for mutual adaptation. The final part of this work discusses how such prerequisites of cooperation may be achieved by monitoring mental state correlates observable in human behavior, especially in eye movements. Furthermore, the outlook for an addition of cooperative features also raises new questions about the bounds of identity as well as practical consequences of human-machine systems in which co-adapting agents may exercise sensorimotor processes through one another.
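    A minimal sketch of the continuous time-intensity encoding used by the prototype described above (the maximum look-ahead time and the linear ramp are assumptions for illustration, not the study's calibrated values): vibration intensity rises as the time to a potential hazard shrinks, and the binary variant collapses this to a constant warning level.

        # Illustrative time-to-hazard -> vibrotactile intensity mapping.
        # The 4 s look-ahead and the linear ramp are assumed values.
        def tactile_intensity(time_to_hazard_s: float,
                              t_max_s: float = 4.0,
                              binary: bool = False) -> float:
            """Return a normalized drive level in [0, 1] for one tactor."""
            if time_to_hazard_s >= t_max_s:
                return 0.0               # hazard too far away: no stimulus
            if binary:
                return 1.0               # binary variant: constant-intensity warning
            # Continuous variant: intensity grows linearly as the hazard approaches.
            return 1.0 - time_to_hazard_s / t_max_s

        for ttc in (5.0, 3.0, 1.5, 0.5):
            print(f"time to hazard {ttc:.1f} s -> intensity {tactile_intensity(ttc):.2f}")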

    Tactile Displays for Pedestrian Navigation

    Existing pedestrian navigation systems are mainly visual-based, sometimes with the addition of audio guidance. However, previous research has reported that visual-based navigation systems require a high level of cognitive effort, contributing to errors and delays. Furthermore, in many situations a person's visual and auditory channels may be compromised due to environmental factors or may be occupied by other important tasks. Some research has suggested that the tactile sense can effectively be used in interfaces to support navigation tasks. However, many fundamental design and usability issues with pedestrian tactile navigation displays are yet to be investigated. This dissertation investigates human-computer interaction aspects associated with the design of tactile pedestrian navigation systems. More specifically, it addresses the following questions: What may be appropriate forms of wearable devices? What types of spatial information should such systems provide to pedestrians? How do people use spatial information for different navigation purposes? How can we effectively represent such information via tactile stimuli? And how do tactile navigation systems perform? A series of empirical studies was carried out to (1) investigate the effects of tactile signal properties and manipulation on the human perception of spatial data, (2) identify effective forms of wearable displays for navigation tasks, and (3) explore a number of potential tactile representation techniques for spatial data, specifically representing directions and landmarks. Questionnaires and interviews were used to gather information on the use of landmarks amongst people navigating urban environments for different purposes. Analysis of the results of these studies provided implications for the design of tactile pedestrian navigation systems, which we incorporated into a prototype. Finally, field trials were carried out to evaluate the design and address usability issues and performance-related benefits and challenges. The thesis develops an understanding of how to represent spatial information via the tactile channel and provides suggestions for the design and implementation of tactile pedestrian navigation systems. In addition, the thesis classifies the use of various types of landmarks for different navigation purposes. These contributions are developed throughout the thesis, building upon an integrated series of empirical studies.
    EThOS - Electronic Theses Online Service (GB, United Kingdom)
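    One common way to realise such a directional tactile display, sketched here purely as an assumption rather than the thesis's own design, is a belt of vibration motors in which the tactor closest to the target bearing is driven:

        # Hypothetical sketch: map a walking direction to one of N tactors on a waist belt.
        # The 8-tactor layout and nearest-sector activation rule are illustrative assumptions.
        def select_tactor(target_bearing_deg: float,
                          heading_deg: float,
                          n_tactors: int = 8) -> int:
            """Return the index of the tactor pointing toward the target,
            relative to the wearer's heading (index 0 = front, counting clockwise)."""
            relative = (target_bearing_deg - heading_deg) % 360.0
            sector = 360.0 / n_tactors
            return int((relative + sector / 2) // sector) % n_tactors

        # Example: the target lies 95 degrees to the wearer's right -> a right-side tactor.
        print(select_tactor(target_bearing_deg=140.0, heading_deg=45.0))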

    The temporal pattern of impulses in primary afferents analogously encodes touch and hearing information

    An open question in neuroscience is how temporal relations between individual impulses in primary afferents contribute to conveying sensory information. We investigated this question in touch and hearing, while looking for any shared coding scheme. In both systems, we artificially induced temporally diverse afferent impulse trains and probed the evoked perceptions in human subjects using psychophysical techniques. First, we investigated whether the temporal structure of a fixed number of impulses conveys information about the magnitude of tactile intensity. We found that clustering the impulses into periodic bursts elicited graded increases of intensity as a function of burst impulse count, even though fewer afferents were recruited throughout the longer bursts. The interval between successive bursts of peripheral neural activity (the burst-gap) has been demonstrated in our lab to be the most prominent temporal feature for coding skin vibration frequency, as opposed to either spike rate or periodicity. Second, given the similarities between the tactile and auditory systems, we explored the auditory system for an equivalent neural coding strategy. Using brief acoustic pulses, we showed that the burst-gap is a temporal code for pitch perception shared between the modalities. Following this evidence of parallels in temporal frequency processing, we next assessed the perceptual frequency equivalence between the two modalities using auditory and tactile pulse stimuli of simple and complex temporal features in cross-sensory frequency discrimination experiments. Identical temporal stimulation patterns in tactile and auditory afferents produced equivalent perceived frequencies, suggesting an analogous temporal frequency computation mechanism. The new insights into encoding tactile intensity through clustering of fixed-charge electric pulses into bursts suggest a novel approach to conveying varying contact forces to neural interface users, requiring no modulation of either stimulation current or base pulse frequency. Increasing control of the temporal patterning of pulses in cochlear implant users might improve pitch perception and speech comprehension. The perceptual correspondence between touch and hearing not only suggests the possibility of establishing cross-modal comparison standards for robust psychophysical investigations, but also supports the plausibility of cross-sensory substitution devices.
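    To make the two temporal parameters concrete, the sketch below generates pulse onset times for periodic bursts; the number of pulses per burst is the intensity-related parameter and the burst-gap is the frequency-related parameter. All timing values are illustrative, not the experimental settings.

        # Illustrative burst-patterned pulse train, with timing values assumed for the example.
        def burst_train(pulses_per_burst: int,
                        intra_pulse_gap_ms: float,
                        burst_gap_ms: float,
                        n_bursts: int) -> list:
            """Return pulse onset times (ms) for a train of periodic bursts."""
            times, t = [], 0.0
            for _ in range(n_bursts):
                for i in range(pulses_per_burst):
                    times.append(t + i * intra_pulse_gap_ms)
                # Burst-gap: interval from the last pulse of this burst to the next burst.
                t = times[-1] + burst_gap_ms
            return times

        # Example: 3-pulse bursts, 5 ms between pulses within a burst, 40 ms burst-gap.
        print(burst_train(pulses_per_burst=3, intra_pulse_gap_ms=5, burst_gap_ms=40, n_bursts=2))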

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society.
Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and decision options. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards or dependence on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. Therefore, this thesis aims to research which VR interaction forms are reasonable (i.e., offer a reasonable dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, there is a lack of empirical evidence. Additionally, this thesis provides a survey of the different interactions. Having considered human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and the user-centered development and evaluation are used as classifiers. The following chapters are thus justified and motivated by explorative approaches, i.e., at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion). These chapters report empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities as well as challenges for further improvement are derived. On the basis of this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
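    As a simplified illustration of the 'small scale' interaction described above (the spherical test object, distance threshold, and intensity ramp are assumptions, not the thesis's implementation), a data-glove fingertip actuator could be driven according to the tracked fingertip's distance from a virtual object's surface:

        # Hypothetical sketch: proximity-based haptic feedback for a tracked fingertip
        # exploring a virtual object, as one might drive a data glove's actuator.
        import math

        def fingertip_feedback(finger_pos, sphere_center, sphere_radius,
                               contact_band_m: float = 0.02) -> float:
            """Return a 0-1 actuator level: 0 far from the surface, 1 at or inside it."""
            d = math.dist(finger_pos, sphere_center) - sphere_radius  # signed surface distance
            if d <= 0.0:
                return 1.0                        # fingertip touches or penetrates the object
            if d >= contact_band_m:
                return 0.0                        # outside the feedback band: no vibration
            return 1.0 - d / contact_band_m       # ramp up as the surface is approached

        # Example: fingertip 5 mm above a 10 cm sphere centered at the origin.
        print(fingertip_feedback((0.0, 0.0, 0.105), (0.0, 0.0, 0.0), sphere_radius=0.1))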

    ‘Subtle’ Technology: Design for Facilitating Face-to-Face Interaction for Socially Anxious People

    PhD thesis. Shy people have a desire for social interaction but fear being scrutinised and rejected. This conflict results in attention deficits during face-to-face situations. It can cause the social atmosphere to become ‘frozen’ and shy persons to appear reticent. Many of them avoid such challenges, taking up the ‘electronic extroversion’ route and experiencing real-world social isolation. This research is aimed at improving the social skills and experience of shy people. It establishes conceptual frameworks and guidelines for designing computer-mediated tools to amplify shy users’ social cognition while extending conversational resources. Drawing on the theories of Social Objects, ‘natural’ HCI and unobtrusive Ubiquitous Computing, it proposes the Icebreaker Cognitive-Behavioural Model for applying user psychology to the systems’ features and functioning behaviour. Two initial design approaches were developed in the form of wearable computers and evaluated in separate user-centred studies. One emphasised the users’ privacy concerns through the direct but covert display of the Vibrosign Armband. The other focused on low-attention demand and low-key interaction preferences, rendered through the peripheral but overt visual display of the Icebreaker T-shirt, triggered by the users’ handshake and disguised in the system’s subtle operation. Quantitative feedback from vibrotactile experts indicated that the armband was effective in signalling various types of abstract information; however, it added to the mental load and required a disproportionate amount of training time. In contrast, qualitative feedback from shy users revealed unexpected benefits of making the information display public on the shirt front. It encouraged immediate and fluid interaction by providing a mutual ‘ticket to talk’ and an interpretative gap in the users’ relationship, although the rapid prototype compromised the technology’s subtle characteristics and impeded the users’ social experience. An iterative design extended the Icebreaker approach through systematic refinement and resulted in the Subtle Design Principle, implemented in the Icebreaker Jacket. Its subtle interaction and display modalities were compared to those of a focal-demand social aid using a mixed-method evaluation. Inferential analysis indicated that the subtle technology engaged better with users’ social aspirations and facilitated a higher degree of unobtrusive experience. Through the Icebreaker model and the Subtle Design Principle, together with the exploratory research framework and study outcomes, this thesis demonstrates the advantages of using subtle technology to help shy users cope with the challenges of face-to-face interaction and improve their social experience.
    Funded by RCUK under the Digital Economy Doctoral Training scheme, through the MAT programme, EPSRC Doctoral Training Centre EP/G03723X/1.
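    A very rough sketch of the handshake-triggered behaviour described above; the accelerometer threshold, peak count, and displayed prompt are purely illustrative assumptions, not the Icebreaker systems' actual logic:

        # Hypothetical sketch: detect a handshake-like oscillation from wrist accelerometer
        # samples and only then reveal a shared conversation topic on the wearable display.
        def handshake_detected(accel_g, threshold_g: float = 1.5, min_peaks: int = 4) -> bool:
            """Count vertical-axis samples exceeding the threshold within the window."""
            peaks = sum(1 for a in accel_g if abs(a) > threshold_g)
            return peaks >= min_peaks

        def icebreaker_prompt(accel_g, shared_topics):
            if handshake_detected(accel_g):
                return f"Ask me about: {shared_topics[0]}"   # subtle, peripheral prompt
            return ""                                        # otherwise stay silent

        print(icebreaker_prompt([0.1, 1.8, -1.7, 1.9, -1.6, 0.2], ["hiking", "sci-fi films"]))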

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They are organized into topical sections as follows: haptic science, haptic technology, and haptic applications.

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective for appreciating visual artwork. In addition, the technique of expressing a color code by integrating patterns, temperatures, scents, music, and vibrations is explored, and future research topics are presented. A holistic experience using multi-sensory interaction is provided to convey the meaning and contents of a work to people with visual impairment through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them to appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. The development of these new aids ultimately expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art and, through continuous efforts to enhance accessibility, breaks down the boundaries between the disabled and the non-disabled in the field of culture and the arts. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind’s eye.
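    As an illustration of the kind of color-to-multi-sensory mapping the book explores (the specific hue bands, temperatures, and frequencies below are assumptions, not the book's actual color code):

        # Hypothetical sketch: translate a color hue into non-visual channels
        # (vibration, temperature, musical pitch). All mappings are illustrative assumptions.
        def color_to_sensory(hue_deg: float) -> dict:
            """Map a hue (0-360 degrees) to example vibration, thermal, and audio parameters."""
            hue_deg %= 360.0
            if hue_deg < 60 or hue_deg >= 300:    # reds: warm, fast pulses, high pitch
                return {"vibration_hz": 250, "temperature_c": 36.0, "tone_hz": 880}
            if hue_deg < 180:                     # greens and yellows: neutral, medium values
                return {"vibration_hz": 150, "temperature_c": 30.0, "tone_hz": 440}
            return {"vibration_hz": 60, "temperature_c": 24.0, "tone_hz": 220}  # blues: cool, slow, low

        print(color_to_sensory(10.0))    # a red hue
        print(color_to_sensory(220.0))   # a blue hue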

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized into topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation

    [no abstract]
