Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer
An important challenge affecting ultrasonic mid-air haptics, in contrast to physical touch, is that we lose certain exploratory procedures such as contour following. This makes perceiving geometric properties and identifying shapes more difficult. Meanwhile, the growing interest in mid-air haptics and its application to various new areas requires an improved understanding of how we perceive specific haptic stimuli, such as icons and control dials, in mid-air. We address this challenge by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or a dynamic condition, using ultrasonic mid-air haptics. In the static condition, the shapes are presented as a full outline in mid-air, while in the dynamic condition a tactile pointer is moved around the perimeter of the shapes. We measure participants' accuracy and confidence in identifying shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios such as in-car interactions or assistive technology in education.
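The dynamic condition described above amounts to time-parameterizing the shape's perimeter, drawing each edge as a separate stroke and dwelling briefly at the corners. The following sketch illustrates that idea; the function name and parameter values are illustrative assumptions, not the authors' implementation.

```python
import math

def polygon_stroke_path(vertices, speed, pause, dt):
    """Sample a pointer trajectory that traces a closed polygon one
    edge at a time, holding the pointer still at each corner.

    vertices: list of (x, y) corner points, in drawing order
    speed:    pointer speed along an edge (units per second)
    pause:    dwell time at each corner (seconds)
    dt:       sampling interval (seconds)
    Returns a list of (x, y) samples, one per time step.
    """
    samples = []
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        # Dwell at the corner before starting the next stroke.
        samples += [(x0, y0)] * int(pause / dt)
        length = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, int(length / (speed * dt)))
        for k in range(steps):
            t = k / steps
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return samples

# A unit square traced at 0.1 units/s with 100 ms corner pauses,
# sampled at 100 Hz:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
path = polygon_stroke_path(square, speed=0.1, pause=0.1, dt=0.01)
```

A circle would be sampled the same way but without dwell points, which is consistent with the paper's finding that the corner pauses matter specifically for polygons.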
Touching is believing: creating illusions and feeling of embodiment with mid-air haptic technology
Over the last two decades, the sense of touch has received new attention from the scientific community. Several haptic devices have been developed to address the complexity of the sense of touch, the latest addition being mid-air (contactless) haptic technology. A promising line of previous research has suggested an easier way to tackle the complexity of designing convincing tactile sensations: exploiting tactile illusions. Tactile illusions rely on perceptual shortcuts based on the psychophysics of the tactile receptors.
Currently, studies exploring the perceptual space of mid-air haptics and its applicability to tactile illusions are still limited in number. This thesis aims to contribute to the field of Human-Computer Interaction (HCI) by investigating the perceptual design space of ultrasonic mid-air haptics technology.
Specifically, in a first set of three studies, we investigate the absolute thresholds (the minimal amount of a stimulus property that a user can detect) for control points (CP) at different frequencies on the hand and arm (Study 1). We then investigate the sampling rate needed to drive the device in an optimal fashion and its relationship with shape size (Study 2). Next, we apply a new technique to increase users' performance in a shape discrimination task (Study 3).
In Study 4, we begin exploring a tactile illusion of movement using contact touch; later, we apply a similar procedure to investigate the feasibility of creating a tactile illusion of movement between the two non-interconnected hands using mid-air touch (Study 5).
Finally, in Study 6, we probe the boundaries of our sense of embodiment in VR by recreating a virtual hand illusion (VHI) while providing an illusion of raindrops through mid-air haptics.
The contribution of this work is therefore threefold: a) we add new knowledge on the psychophysical space of mid-air haptics, b) we test the potential to create realistic tactile sensations by exploiting tactile illusions with mid-air haptic technology, and c) we demonstrate how tactile illusions mediated by mid-air haptics can convey a sense of embodiment in VR environments.
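Absolute thresholds of the kind measured in Study 1 are commonly estimated with adaptive staircase procedures. The sketch below shows a standard 1-up/1-down staircase on a simulated observer; it is a generic psychophysics illustration, not the thesis's actual protocol, and the observer model and parameter values are assumptions.

```python
def staircase_threshold(respond, start, step, reversals_needed=8):
    """Simple 1-up/1-down adaptive staircase: raise the stimulus level
    after a miss, lower it after a detection, and estimate the absolute
    threshold as the mean level at the reversal points.

    respond(level) -> True if the (simulated or real) observer detects
    the stimulus at this level.
    """
    level = start
    last_correct = None
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        correct = respond(level)
        # A reversal occurs whenever the response changes direction.
        if last_correct is not None and correct != last_correct:
            reversal_levels.append(level)
        last_correct = correct
        level += -step if correct else step
    return sum(reversal_levels) / len(reversal_levels)

# Idealized observer with a true threshold of 5.0 (detects anything above it):
estimate = staircase_threshold(lambda lvl: lvl > 5.0, start=10.0, step=1.0)
```

With a real participant, `respond` would present an ultrasonic stimulus at the given intensity and record a yes/no detection response; the 1-up/1-down rule converges on the 50% detection point.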
Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility
Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and decision options. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards and without depending on the limited availability of sighted assistants. However, virtual objects and environments have no physicality.
Therefore, this thesis aims to research which VR interaction forms are reasonable (i.e., offer a reasonable dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, empirical evidence is lacking. Additionally, this thesis provides a survey of the different interactions.
Having considered human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and the application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and user-centered development and evaluation are used as classifiers. The following chapters are thus justified and motivated by explorative approaches, i.e., 'small scale' (using so-called data gloves) and 'large scale' (using avatar-controlled VR locomotion).
The following chapters conduct empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From these, device-independent technological possibilities, as well as challenges for further improvements, are derived. On the basis of this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
A Formal Approach to Computer Aided 2D Graphical Design for Blind People
The need for computer-aided drawing systems for blind people (CADB) has long been recognised, and interest in them has grown within assistive technology research. The representation of pictorial data by blind and visually impaired (BVI) people has recently gathered momentum in research and development; however, a survey of the published literature on CADB reveals that only marginal research has focused on a formal approach to on-screen spatial orientation and to the creation and reuse of graphic artefacts. To realise the full potential of CADB, such systems should possess usability, spatial navigation, and shape creation features, without which blind users' drawing activities are unlikely to succeed. As a result, usable, effective, and self-reliant CADB are now emerging from new assistive technology (AT) research.
This thesis contributes a novel, abstract, formal approach that enables BVI users to navigate on screen and to create computer graphics/diagrams using 2D shapes and user-defined images. Moreover, the research addresses the specific issues involved with user language by formulating specific rules that make BVI users' interaction with the drawing effective and easier. The formal approach proposed here is descriptive, and it is specified at a level of abstraction above the concrete level of system technologies. The proposed approach is unique in its problem modelling and its synthesis of abstract computer-based graphics/drawings using a formal set of user interaction commands. This technology has been applied to enable blind users to independently construct drawings that satisfy their specific needs, without recourse to a specific technology and without the intervention of support workers. The specification aims to be the foundation for a system scope, investigation guidelines, and user-initiated, command-driven interaction. Such an approach allows system designers and developers to proceed with greater conceptual clarity than is possible with current technologies built on concrete, system-driven prototypes.
Beyond the scope of the research itself, the proposed model has been verified by various types of blind users, who independently constructed drawings to satisfy their specific needs without the intervention of support workers. The effectiveness and usability of the proposed approach have been compared against conventional, non-command-driven drawing systems by different types of blind users. The results confirm that the abstract formal approach proposed here, using command-driven means in the context of CADB, enables greater comprehension by BVI users. The innovation can be used for both educational and training purposes. The research thereby sustains the claim that the abstract formal approach allows for greater comprehension of the command-driven means in the context of CADB, and shows how the specification aids the design of such a system.
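The core idea of command-driven CADB is that a blind user dictates position and shapes through a small textual command language rather than pointing on screen. The following sketch illustrates such an interpreter; the command names, syntax, and shape records are hypothetical illustrations, not the thesis's actual specification.

```python
def run_commands(script):
    """Interpret a tiny, hypothetical command language in the spirit of
    command-driven CADB systems.  Supported commands:
        MOVE x y   -- set the current cursor position
        CIRCLE r   -- draw a circle of radius r at the cursor
        RECT w h   -- draw a rectangle of size w x h at the cursor
    Returns the list of created shape records.
    """
    cursor = (0.0, 0.0)
    shapes = []
    for line in script.strip().splitlines():
        parts = line.split()
        cmd, args = parts[0].upper(), [float(a) for a in parts[1:]]
        if cmd == "MOVE":
            cursor = (args[0], args[1])
        elif cmd == "CIRCLE":
            shapes.append({"kind": "circle", "at": cursor, "r": args[0]})
        elif cmd == "RECT":
            shapes.append({"kind": "rect", "at": cursor, "w": args[0], "h": args[1]})
        else:
            raise ValueError(f"unknown command: {cmd}")
    return shapes

shapes = run_commands("""
MOVE 10 10
CIRCLE 5
MOVE 30 10
RECT 8 4
""")
```

Because every action is an explicit, speakable command, the same script can be read back through a screen reader, which is one reason command-driven interaction can aid comprehension for BVI users.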
Multimodal Collaborative Drawing System for Blind Users
Pictures and graphical data are common communication media for conveying information and knowledge. However, these media might exclude large user groups, for instance visually impaired people, if they are offered in visual form only. Textual descriptions as well as tactile graphics may offer access to graphical information, but they have to be adapted to the special needs of visually impaired and blind readers. The translation from visual into tactile graphics is usually carried out by sighted graphic authors, some of whom have little experience in creating proper tactile graphics. Applying only recommendations and best practices for preparing tactile graphics does not seem sufficient to provide intelligible, high-quality tactile materials. Including a visually impaired person in the process of creating a tactile graphic should prevent such quality and intelligibility issues.
Large dynamic tactile displays offer non-visual access to graphics; even dynamic changes can be conveyed. As part of this thesis, a collaborative drawing workstation was developed. This workstation utilizes a tactile display as well as auditory output to actively involve a blind person as a proofreader in the drawing process. The evaluation demonstrates that inexperienced sighted graphic authors, in particular, can benefit from the knowledge of a blind person who is accustomed to handling tactile media. Furthermore, inexperienced visually impaired people may be trained in reading tactile graphics with the help of the collaborative drawing workstation.
In addition to exploring and manipulating existing graphics, the accessible drawing workstation offers four different modalities for creating tactile shapes: text-based shape-palette menus, gestural drawing, freehand drawing using a wireless stylus, and scanning object silhouettes with a ToF camera. The evaluation confirms that even untrained blind users can create good-quality drawings with the accessible drawing workstation. However, users seem to prefer robust, reliable drawing modalities, such as text menus, over modalities that require a certain level of skill and practice or an additional, technically elaborate setup.