4 research outputs found

    Design of Web-based Tools to Study Blind People's Touch-Based Interaction with Smartphones

    Nowadays touchscreen smartphones are the most common kind of mobile device. However, gesture-based interaction is a difficult task for most visually impaired people, and even more so for blind people. This difficulty is compounded by the lack of standard gestures and the differences between the main screen reader platforms available on the market. Therefore, our goal is to investigate the differences and preferences in touch gesture performance on smartphones among visually impaired people. During our study, we implemented a web-based wireless system to facilitate the capture of participants' gestures. In this paper we present an overview of both the study and the system used.
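
    A web-based capture system like the one described can be sketched as a minimal server-side event log: participants' browsers would send one timestamped JSON record per touch sample. This is an illustrative sketch only; the field names and `record_event` helper are assumptions, not the authors' actual system.

```python
import json

# Fields a hypothetical capture endpoint might require per touch sample:
# participant ID, gesture label, screen coordinates, and a timestamp.
REQUIRED_FIELDS = {"participant", "gesture", "x", "y", "t"}

def record_event(log, payload):
    """Validate one touch sample (JSON string) and append it to the log."""
    event = json.loads(payload)
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    log.append(event)
    return event

# One sample as a participant's phone might report it.
log = []
record_event(log, json.dumps(
    {"participant": "P01", "gesture": "swipe-right", "x": 120, "y": 480, "t": 0.016}))
```

    Validating each sample on arrival keeps the gesture log analyzable later, even when samples arrive wirelessly from many devices at once.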

    Understanding Users' Perceived Difficulty of Multi-Touch Gesture Articulation

    We show that users are consistent in their assessments of the articulation difficulty of multi-touch gestures, even under the many degrees of freedom afforded by multi-touch input, such as (1) varying numbers of fingers touching the surface, (2) varying numbers of strokes that structure the gesture shape, and (3) single-handed and bimanual input. To understand more about perceived difficulty, we characterize gesture articulations captured under these conditions with geometric and kinematic descriptors computed on a dataset of 7,200 samples of 30 distinct gesture types collected from 18 participants. We correlate the values of the objective descriptors with users' subjective assessments of articulation difficulty and report path length, production time, and gesture size as the highest correlators (max Pearson's r=.95). We also report new findings about multi-touch gesture input, e.g., gestures produced with more fingers are larger in size and take more time to produce than single-touch gestures; bimanual articulations are not only faster than single-handed input, but they are also longer in path length, present more strokes, and result in gesture shapes that are deformed horizontally by 35% on average. We use our findings to outline 14 guidelines to assist multi-touch gesture set design and recognizer development, and to inform gesture-to-function mappings through the prism of the user-perceived difficulty of gesture articulation.
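
    The three descriptors this abstract reports as the strongest correlates of perceived difficulty are simple to compute from a sequence of touch points. The sketch below shows one plausible formulation (path length as summed point-to-point distance, gesture size as the bounding-box diagonal) together with a plain-Python Pearson correlation; the toy data is an assumption for demonstration, not the paper's dataset.

```python
import math

def path_length(points):
    """Total distance traveled along the gesture trajectory."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def production_time(timestamps):
    """Elapsed time from first to last touch sample."""
    return timestamps[-1] - timestamps[0]

def gesture_size(points):
    """Diagonal of the gesture's bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return math.hypot(max(xs) - min(xs), max(ys) - min(ys))

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: per-gesture mean path lengths vs. mean difficulty ratings.
lengths = [120.0, 240.0, 310.0, 520.0]
ratings = [1.5, 2.0, 2.8, 4.1]
r = pearson_r(lengths, ratings)
```

    Correlating such descriptors against subjective ratings is what lets a designer rank candidate gestures by expected articulation difficulty before running a full study.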

    Enhanced Multi-Touch Gestures for Complex Tasks

    Recent technological advances have resulted in a major shift, from high-performance notebook and desktop computers -- devices that rely on keyboard and mouse for input -- towards smaller, personal devices like smartphones, tablets and smartwatches, which rely primarily on touch input. Users of these devices typically have a relatively high level of skill in using multi-touch gestures to interact with them, but the multi-touch gesture sets that are supported are often restricted to a small subset of one- and two-finger gestures, such as tap, double tap, drag, flick, pinch and spread. This is not due to technical limitations, since modern multi-touch smartphones and tablets are capable of accepting at least ten simultaneous points of contact. Likewise, human movement models suggest that humans are capable of richer and more expressive forms of interaction that utilize multiple fingers. This suggests a gap between the technical capabilities of multi-touch devices, the physical capabilities of end-users, and the gesture sets that have been implemented for these devices. Our work explores ways in which we can enrich multi-touch interaction on these devices by expanding these common gesture sets. Simple gestures are fine for simple use cases, but if we want to support a wide range of sophisticated behaviours -- the types of interactions required by expert users -- we need equally sophisticated capabilities from our devices. In this thesis, we refer to these more sophisticated, complex interactions as `enhanced gestures' to distinguish them from common but simple gestures, and to suggest the types of expert scenarios that we are targeting in their design. We do not necessarily need to replace current, familiar gestures, but it makes sense to consider augmenting them as multi-touch becomes more prevalent and is applied to more sophisticated problems. This research explores issues of approachability and user acceptance around gesture sets.
Using pinch-to-zoom as an example, we establish design guidelines for enhanced gestures, and systematically design, implement and evaluate two different types of expert gestures, illustrative of the type of functionality that we might build into future systems.
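
    The core arithmetic behind pinch-to-zoom, the baseline gesture this thesis builds on, fits in a few lines: the zoom factor is the ratio of the current finger spread to the spread when the gesture began. The function names below are illustrative, not taken from the thesis.

```python
import math

def spread(p1, p2):
    """Distance between the two touch points of a pinch."""
    return math.dist(p1, p2)

def pinch_scale(start, current):
    """Zoom factor for one pinch/spread update; >1 zooms in, <1 zooms out."""
    return spread(*current) / spread(*start)

# Fingers started 100 px apart and moved to 150 px apart: zoom in by 1.5x.
zoom = pinch_scale(((0, 0), (100, 0)), ((0, 0), (150, 0)))
```

    Enhanced gestures in the thesis's sense would layer additional state on top of such a recognizer (more fingers, more strokes) rather than replace this familiar mapping.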

    Multimodal Collaborative Drawing System for Blind Users (original title: Multimodales kollaboratives Zeichensystem für blinde Benutzer)

    Pictures and graphical data are common communication media for conveying information and knowledge. However, these media may exclude large user groups, for instance visually impaired people, if they are offered in visual form only. Textual descriptions as well as tactile graphics can offer access to graphical information, but they have to be adapted to the special needs of visually impaired and blind readers. The translation from visual into tactile graphics is usually carried out by sighted graphic authors, some of whom have little experience in creating proper tactile graphics. Applying only recommendations and best practices for preparing tactile graphics does not seem sufficient to provide intelligible, high-quality tactile materials. Including a visually impaired person in the process of creating a tactile graphic should prevent such quality and intelligibility issues. Large dynamic tactile displays offer non-visual access to graphics; even dynamic changes can be conveyed. As part of this thesis, a collaborative drawing workstation was developed. This workstation utilizes a tactile display as well as auditory output to actively involve a blind person as a lector in the drawing process.
    The evaluation demonstrates that inexperienced sighted graphic authors, in particular, can benefit from the knowledge of a blind person who is accustomed to handling tactile media. Furthermore, inexperienced visually impaired people may be trained in reading tactile graphics with the help of the collaborative drawing workstation. In addition to exploring and manipulating existing graphics, the accessible drawing workstation offers four different modalities for creating tactile shapes: text-based shape-palette menus, gestural drawing, freehand drawing using a wireless stylus, and scanning object silhouettes with a time-of-flight (ToF) camera. The evaluation confirms that even untrained blind users can create good-quality drawings using the accessible drawing workstation. However, users seem to prefer robust, reliable drawing modalities, such as text menus, over modalities that require a certain level of skill or additional technical effort.