
    Handicapping currency design: counterfeit deterrence and visual accessibility in the United States and abroad

    Despite the increasing use of electronic payments, currency retains an important role in the payments system of every country. Two aspects of currency usage drive currency design worldwide: deterring counterfeiting and making paper currency accessible to the visually impaired. Further, among the world's currencies, only U.S. banknotes are widely owned and used in transactions outside their country of issue (although the euro also has some external circulation). In this article, we compare and contrast major currencies and their design features. We conclude that the designs of the two most widely used currencies in the world, the U.S. dollar and the euro, have successfully deterred counterfeiting; data on other currencies are not public. We also conclude that, among the world's major currencies, U.S. banknotes have the fewest features to assist the visually impaired.
    Keywords: Paper money design; Coinage; Counterfeits and counterfeiting

    Currency design in the United States and abroad: counterfeit deterrence and visual accessibility

    Despite the increasing use of electronic payments, currency retains an important role in the payment system of every country. In this article, the authors compare and contrast trade-offs among currency design features, including those primarily intended to deter counterfeiting and those intended to improve usability by the visually impaired. The authors conclude that periodic changes in the design of currency are an important aspect of counterfeit deterrence and that currency designers worldwide generally have been successful in efforts to deter counterfeiting. At the same time, currency designers have sought to be sensitive to the needs of the visually impaired. Although trade-offs among goals sometimes have forced compromises, new technologies promise banknotes that are both more difficult to counterfeit and more accessible to the visually impaired. Among the world's currencies, U.S. banknotes are the notes most widely used outside their country of issue and thus require special consideration.
    Keywords: Paper money design - United States; Money

    Designing a New Tactile Display Technology and its Disability Interactions

    People with visual impairments have a strong desire for a refreshable tactile interface that can provide immediate access to a full page of Braille and tactile graphics. Regrettably, existing devices come at considerable expense and remain out of reach for many. The exorbitant costs associated with current tactile displays stem from their intricate design and the multitude of components needed for their construction. This underscores the pressing need for technological innovation that can make tactile displays more accessible and available to individuals with visual impairments. This thesis delves into the development of a novel tactile display technology known as Tacilia. The technology's necessity and requirements are informed by in-depth qualitative engagements with students who have visual impairments, alongside a systematic analysis of the architectures underpinning existing tactile display technologies. The evolution of Tacilia unfolds through iterative cycles of conceptualisation, prototyping, and evaluation. With Tacilia, three distinct products and interactive experiences are explored, empowering individuals to manually draw tactile graphics, generate digitally designed media through printing, and display these creations on a dynamic pin-array display. This innovation underscores Tacilia's capability to streamline the creation of refreshable tactile displays, rendering them more fitting, usable, and economically viable for people with visual impairments.
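To make concrete what a pin-array display like the one described must render, the sketch below converts text to 6-dot Braille cells using the standard Unicode Braille Patterns encoding. The mapping table and function names are illustrative only; they are not part of Tacilia.

```python
# Hypothetical sketch: converting text to 6-dot Braille pin patterns for a
# refreshable pin-array display. Dot numbering follows the standard Braille
# cell (dots 1-2-3 in the left column top to bottom, 4-5-6 in the right).
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5},
}

def cell_to_unicode(dots):
    """Map a set of raised dots to the Unicode Braille Patterns block.

    For dots 1-6, dot n sets bit n-1 of the offset from U+2800.
    """
    offset = 0
    for d in dots:
        offset |= 1 << (d - 1)
    return chr(0x2800 + offset)

def text_to_braille(text):
    """Render a lowercase string (a-j only in this sketch) as Braille cells."""
    return "".join(cell_to_unicode(BRAILLE_DOTS[ch]) for ch in text)
```

On a physical device, the same dot sets would drive pin actuators instead of producing Unicode characters; the encoding step is identical.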

    Fine-grained Haptics: Sensing and Actuating Haptic Primary Colours (force, vibration, and temperature)

    This thesis discusses the development of a multimodal, fine-grained visual-haptic system for teleoperation and robotic applications. The system comprises two complementary components: an input device known as the HaptiTemp sensor (combining “Haptics” and “Temperature”), a novel thermosensitive GelSight-like sensor, and an output device, an untethered multimodal fine-grained haptic glove. The HaptiTemp sensor is a visuotactile sensor that can sense the haptic primary colours: force, vibration, and temperature. It has novel switchable UV markers that can be made visible using UV LEDs. The switchable markers are a key novelty of the HaptiTemp because they allow tactile information to be analysed from gel deformation without impairing the ability to classify or recognise images; they resolve the trade-off between marker density and capturing high-resolution images with a single sensor. The HaptiTemp sensor can measure vibrations by counting the number of blobs or pulses detected per unit time using a blob detection algorithm. For the first time, temperature detection was incorporated into a GelSight-like sensor, making the HaptiTemp a haptic primary colours sensor. The HaptiTemp sensor also performs rapid temperature sensing, with a 643 ms response time over the 31°C to 50°C temperature range. This fast temperature response is comparable to the withdrawal reflex response in humans; to the authors' knowledge, it is the first sensor reported in the robotics community that can trigger a sensory impulse mimicking a human reflex. The HaptiTemp sensor can also perform simultaneous temperature sensing and image classification using a machine vision camera, the OpenMV Cam H7 Plus, a capability that has not previously been reported or demonstrated by any tactile sensor.
The HaptiTemp sensor can be used in teleoperation because it can transmit tactile analysis and image classification results over wireless communication. In tactile sensing, tactile pattern recognition, and rapid temperature response, the HaptiTemp sensor is the closest artificial analogue to human skin. To feel what the HaptiTemp sensor is touching from a distance, a corresponding output device, an untethered multimodal haptic hand wearable, was developed to actuate the haptic primary colours sensed by the HaptiTemp. The wearable communicates wirelessly and provides fine-grained cutaneous feedback for feeling the edges or surfaces of the tactile images captured by the HaptiTemp sensor. It has gradient kinesthetic force feedback that can restrict finger movements based on the force estimated by the HaptiTemp sensor: a retractable string from an ID badge holder, equipped with mini-servos that control the stiffness of the wire, is attached to each fingertip. Vibrations detected by the HaptiTemp sensor can be actuated by the tapping motion of the tactile pins or by a buzzing mini-vibration motor. A small annular Peltier device, or ThermoElectric Generator (TEG), combined with a mini-vibration motor forms thermo-vibro feedback in the palm area, activated by a ‘hot’ or ‘cold’ signal from the HaptiTemp sensor. The haptic primary colours can also be embedded in a VR environment actuated by the multimodal hand wearable. A VR application was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel the contours of virtual objects. Collision detection scripts activate the corresponding actuator in the wearable whenever the tactile matrix simulator or hand avatar in VR collides with a virtual object, and the TEG warms or cools depending on the virtual object touched.
Tests were conducted to explore virtual objects in 2D and 3D environments using Leap Motion control and a VR headset (Oculus Quest 2). Moreover, fine-grained cutaneous feedback was developed to feel the edges or surfaces of a tactile image, such as those captured by the HaptiTemp sensor, or to actuate tactile patterns on 2D or 3D virtual objects. The prototype resembles an exoskeleton glove with 16 tactile actuators (tactors) on each fingertip, 80 tactile pins in total, made from commercially available P20 Braille cells. Each tactor can be controlled individually, enabling the user to feel the edges or surfaces of images such as the high-resolution tactile images captured by the HaptiTemp sensor, and the hand wearable can enhance immersion in a virtual reality environment. The tactors can be actuated in a tapping manner, creating a form of vibration feedback distinct from the buzzing of a mini-vibration motor, and the tactile pin height can be varied to create a gradient of pressure on the fingertip. Finally, the integration of the high-resolution HaptiTemp sensor and the untethered multimodal, fine-grained haptic hand wearable is presented, forming a visuotactile system for sensing and actuating haptic primary colours. Force, vibration, and temperature sensing tests, with corresponding actuation tests, demonstrated a unified visual-haptic system. Beyond sensing and actuating haptic primary colours, touching the edges or surfaces of the tactile images captured by the HaptiTemp sensor was carried out using the fine-grained cutaneous feedback of the haptic hand wearable.
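The blob-counting vibration estimate described in the abstract can be sketched in a few lines. The binary-grid representation, connected-component blob definition, and function names below are illustrative assumptions, not the thesis implementation:

```python
# Illustrative sketch (not the thesis code): estimating vibration frequency by
# counting marker blobs per binarized tactile frame, then dividing the total
# pulse count by the elapsed capture time. A blob is a 4-connected component
# of "on" pixels, found here by an explicit-stack flood fill.
def count_blobs(grid):
    """Count 4-connected components of 1s in a 2D list of 0/1 values."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

def vibration_hz(pulse_counts, duration_s):
    """Pulses per second across a sequence of per-frame blob counts."""
    return sum(pulse_counts) / duration_s
```

A production pipeline would binarize camera frames and use an optimized detector (e.g. an OpenCV blob detector) rather than pure Python, but the counting logic is the same.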

    Technical Document Accessibility

    Electrical and Electronic Engineering

    A Formal Approach to Computer Aided 2D Graphical Design for Blind People

    Computer-aided drawing systems for blind people (CADB) have long been recognised and have attracted increasing interest within the assistive technology research area. The representation of pictorial data by blind and visually impaired (BVI) people has recently gathered momentum in research and development; however, a survey of the published literature on CADB reveals that only marginal research has focused on a formal approach to on-screen spatial orientation and to the creation and reuse of graphical artefacts. To realise the full potential of CADB, such systems must offer usability, spatial navigation, and shape creation features, without which blind users' drawing activities are unlikely to succeed. As a result, the need for usable, effective, and self-reliant CADB has arisen within assistive technology (AT) research. This thesis contributes a novel, abstract, formal approach that enables BVI users to navigate on screen and create computer graphics and diagrams using 2D shapes and user-defined images. Moreover, the research addresses the specific issues of user language by formulating rules that make BVI users' interaction with the drawing effective and easier. The formal approach proposed here is descriptive and is specified at a level of abstraction above the concrete level of system technologies. The approach is unique in its problem modelling and in the synthesis of abstract computer-based graphics and drawings through a formal set of user interaction commands. It has been applied to enable blind users to independently construct drawings that satisfy their specific needs, without recourse to a specific technology and without the intervention of support workers. The specification aims to be the foundation for a system's scope, investigation guidelines, and user-initiated command-driven interaction.
Such an approach allows system designers and developers to proceed with greater conceptual clarity than is possible with current technologies built on concrete, system-driven prototypes. Beyond the core research, the proposed model has been verified by various types of blind users, who independently constructed drawings satisfying their specific needs without the intervention of support workers. The effectiveness and usability of the proposed approach were compared against conventional, non-command-driven drawing systems by different types of blind users. The results confirm that the abstract formal approach proposed here, using command-driven means in the context of CADB, enables greater comprehension by BVI users, and they show how the specification aids the design of such a system. The innovation can be used for both educational and training purposes.
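The user-initiated, command-driven interaction the abstract describes can be illustrated with a tiny interpreter that turns textual commands into shape records. The command grammar, names, and data layout below are hypothetical, not the thesis's actual specification:

```python
# Hypothetical sketch of command-driven 2D drawing: a blind user types a
# command such as "circle 40 30 10", and the system creates a shape object
# that can later be queried, edited, or rendered tactilely.
def parse_command(line):
    """Parse e.g. 'circle x y r' or 'rect x y w h' into a shape dict."""
    parts = line.split()
    name, args = parts[0], [int(p) for p in parts[1:]]
    if name == "circle" and len(args) == 3:
        x, y, r = args
        return {"shape": "circle", "x": x, "y": y, "r": r}
    if name == "rect" and len(args) == 4:
        x, y, w, h = args
        return {"shape": "rect", "x": x, "y": y, "w": w, "h": h}
    raise ValueError(f"unknown or malformed command: {line!r}")

class Canvas:
    """Accumulates shapes so the drawing can be re-explored or reused."""
    def __init__(self):
        self.shapes = []

    def execute(self, line):
        self.shapes.append(parse_command(line))
        return self.shapes[-1]
```

The point of the formal approach is that such commands are specified abstractly, so the same command set could drive a screen reader, a tactile display, or an audio renderer.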

    From wearable towards epidermal computing : soft wearable devices for rich interaction on the skin

    Human skin provides a large, always available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than human hair; they elastically deform when the body is moving, and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed for advancing this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes.
Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (electrocardiogram), EMG (electromyogram), and EDA (electrodermal activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates optimized device designs providing an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges. Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (do-it-yourself), rapid fabrication, and computational design techniques.
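The signal-to-noise ratio reported in the sensor evaluations can be illustrated with a common formulation from touch-sensor work: the mean touch-induced signal change divided by the standard deviation of the no-touch baseline, expressed in decibels. The thesis may define SNR differently; this is a generic sketch, not its evaluation code:

```python
import math
import statistics

# Illustrative SNR computation for a capacitive-style touch sensor:
# signal = mean reading during touch minus the baseline mean,
# noise  = population standard deviation of the no-touch baseline,
# SNR(dB) = 20 * log10(signal / noise).
def snr_db(touch_samples, baseline_samples):
    baseline_mean = statistics.mean(baseline_samples)
    noise_std = statistics.pstdev(baseline_samples)
    signal = statistics.mean(touch_samples) - baseline_mean
    return 20 * math.log10(signal / noise_std)
```

A touch response ten times the baseline noise therefore corresponds to 20 dB; evaluations under different grounding conditions would repeat this measurement per condition.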

    Tabletop tangible maps and diagrams for visually impaired users

    Despite their omnipresence and essential role in our everyday lives, online and printed graphical representations are inaccessible to visually impaired people because they cannot be explored using the sense of touch.
The gap between sighted and visually impaired people's access to graphical representations is constantly growing due to the increasing development and availability of online and dynamic representations that not only give sighted people the opportunity to access large amounts of data, but also to interact with them using advanced functionalities such as panning, zooming and filtering. In contrast, the techniques currently used to make maps and diagrams accessible to visually impaired people require the intervention of tactile graphics specialists and result in non-interactive tactile representations. However, based on recent advances in the automatic production of content, we can expect in the coming years a growth in the availability of adapted content, which must go hand-in-hand with the development of affordable and usable devices. In particular, these devices should make full use of visually impaired users' perceptual capacities and support the display of interactive and updatable representations. A number of research prototypes have already been developed. Some rely on digital representation only, and although they have the great advantage of being instantly updatable, they provide very limited tactile feedback, which makes their exploration cognitively demanding and imposes heavy restrictions on content. On the other hand, most prototypes that rely on digital and physical representations allow for a two-handed exploration that is both natural and efficient at retrieving and encoding spatial information, but they are physically limited by the use of a tactile overlay, making them impossible to update. Other alternatives are either extremely expensive (e.g. braille tablets) or offer a slow and limited way to update the representation (e.g. maps that are 3D-printed based on users' inputs). 
In this thesis, we propose to bridge the gap between these two approaches by investigating how to develop physical interactive maps and diagrams that support two-handed exploration while being updatable and affordable. To do so, we build on previous research on Tangible User Interfaces (TUIs), and particularly on (actuated) tabletop TUIs, two fields of research that have received surprisingly little attention concerning visually impaired users. Based on the design, implementation and evaluation of three tabletop TUIs (the Tangible Reels, the Tangible Box and BotMap), we propose innovative non-visual interaction techniques and technical solutions that will hopefully serve as a basis for the design of future TUIs for visually impaired users, and encourage their development and use. We investigate how tangible maps and diagrams can support various tasks, ranging from the (re)construction of diagrams to the exploration of maps by panning and zooming. From a theoretical perspective, we contribute to research on accessible graphical representations by highlighting how research on maps can feed research on diagrams and vice versa. We also propose a classification and comparison of existing prototypes to deliver a structured overview of current research.
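The pan-and-zoom exploration mentioned above requires bookkeeping between world (map) coordinates and table coordinates. The sketch below shows that viewport arithmetic in a minimal form; the class and method names are illustrative, not BotMap's API:

```python
# Minimal viewport sketch for a tabletop tangible map: a pan offset and a
# zoom factor convert between map ("world") coordinates and positions on
# the table surface where tangible objects sit.
class Viewport:
    def __init__(self, origin_x=0.0, origin_y=0.0, zoom=1.0):
        self.origin_x, self.origin_y, self.zoom = origin_x, origin_y, zoom

    def world_to_table(self, wx, wy):
        return ((wx - self.origin_x) * self.zoom, (wy - self.origin_y) * self.zoom)

    def table_to_world(self, tx, ty):
        return (tx / self.zoom + self.origin_x, ty / self.zoom + self.origin_y)

    def pan(self, dx, dy):
        self.origin_x += dx
        self.origin_y += dy

    def zoom_by(self, factor, anchor_wx, anchor_wy):
        # Keep the anchor point at the same table position while zooming,
        # so a landmark under the user's finger does not jump away.
        tx, ty = self.world_to_table(anchor_wx, anchor_wy)
        self.zoom *= factor
        self.origin_x = anchor_wx - tx / self.zoom
        self.origin_y = anchor_wy - ty / self.zoom
```

In an actuated TUI, each recomputed table position would be sent to the corresponding tangible object (or robot) so the physical layout tracks the digital viewport.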

    Tactile interaction on two-dimensional Braille displays

    Blind people normally use screen readers as well as single-line refreshable Braille displays to access graphical user interfaces (GUIs). These technologies allow for non-visual perception of textual content, but not for effective work with visual illustrations.
Novel two-dimensional tactile pin-matrix devices are an appropriate solution for interactively accessing tactile graphics, and can thus enrich the interaction possibilities of blind users working with graphical applications. For instance, such devices enable the exploration of spatial arrangements and the combined output of Braille, graphics and semi-graphical elements. To make the large amount of simultaneously presented information perceivable and efficiently usable for blind users, adequate preparation of content as well as adapted navigation and orientation mechanisms must be provided. In this thesis the BrailleDis devices of Metec AG, which have a tactile output area of 120 × 60 pins, were used. The goal was to investigate to what extent large pin-matrix devices enable blind people to use graphical user interfaces effectively and efficiently. Access to the GUI itself, reading text, and working with graphics are the main application areas of such devices. A consistent organization of content facilitates operation on a two-dimensional pin-matrix device; therefore, a novel tactile windowing system was implemented which divides the output area into multiple disjoint regions and supports diverse tactile information visualizations. Moreover, a taxonomy was developed to systematize the design and evaluation of tactile user interfaces. Apart from interaction itself, which can be described by input and output as well as the user's hand movements, the taxonomy includes user intention in terms of interactive task primitives and the technical specifications of the device. Based on the taxonomy, relevant aspects of tactile interaction were identified and examined in multiple user studies with a total of 46 blind and visually impaired participants. The studies considered: 1. the effectiveness of diverse tactile view types (output), 2. user input and exploration, and 3. the efficiency of specific interaction techniques. As a result, practical recommendations for implementing user interfaces on two-dimensional pin-matrix devices were given, covering ergonomic issues of physical devices, design considerations for textual and graphical content, and orientation aids. In summary, the user studies showed that two-dimensional pin-matrix devices give blind people effective and efficient access to graphical user interfaces. Diverse tactile information visualizations can support users in fulfilling various tasks. In general, two-dimensional interaction requires users to extend their conventional exploration and input strategies, and providing novel interaction techniques to support orientation can increase efficiency further.
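The tactile windowing idea, dividing a 120 × 60 pin matrix into disjoint regions that are rendered independently, can be sketched as a small data structure. Region names, the overlap rule, and the API below are hypothetical, not the thesis's implementation:

```python
# Illustrative sketch of a tactile windowing system: rectangular regions
# claim non-overlapping areas of the pin matrix, and pins are addressed
# relative to their region so each window stays independent.
WIDTH, HEIGHT = 120, 60  # pin-matrix dimensions of the BrailleDis devices

class Region:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def overlaps(self, other):
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

class PinMatrix:
    def __init__(self):
        self.pins = [[0] * WIDTH for _ in range(HEIGHT)]
        self.regions = {}

    def add_region(self, name, region):
        # Windows must be disjoint so tactile content never collides.
        if any(region.overlaps(r) for r in self.regions.values()):
            raise ValueError(f"region {name!r} overlaps an existing region")
        self.regions[name] = region

    def set_pin(self, name, rx, ry, raised):
        # Coordinates are relative to the region's top-left corner.
        r = self.regions[name]
        if not (0 <= rx < r.w and 0 <= ry < r.h):
            raise IndexError("pin outside region")
        self.pins[r.y + ry][r.x + rx] = int(raised)
```

A real windowing system would add per-region view types (Braille text, graphics, semi-graphical widgets); the disjointness invariant is what keeps them from interfering.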
