
    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves translating techniques from the graphical to the audio domain for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
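    The abstract does not specify how task progress is mapped into the audio space. A minimal sketch of one plausible mapping, assuming a simple constant-power stereo pan that sweeps the sound source from left (0%) to right (100%) — the function name and the sweep itself are our assumptions, not the paper's design:

```python
import math

def pan_gains(progress: float) -> tuple[float, float]:
    """Constant-power stereo panning: map task progress in [0, 1] to an
    apparent source position sweeping from full left (0%) to full right
    (100%), so a listener can monitor completion without looking."""
    progress = min(max(progress, 0.0), 1.0)
    angle = progress * (math.pi / 2)          # 0 rad = left, pi/2 rad = right
    return math.cos(angle), math.sin(angle)   # (left_gain, right_gain)

# A half-finished task sounds centred: both channels at ~0.707,
# and left^2 + right^2 == 1 at every position (constant loudness).
left, right = pan_gains(0.5)
```

    The constant-power law keeps perceived loudness steady as the source moves, so only position, not volume, carries the progress information.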

    User interface design for mobile-based sexual health interventions for young people: Design recommendations from a qualitative study on an online Chlamydia clinical care pathway

    Background: The increasing pervasiveness of mobile technologies has created the potential to transform healthcare by facilitating clinical management using software applications. These technologies may provide valuable tools in sexual health care and potentially overcome existing practical and cultural barriers to routine testing for sexually transmitted infections. In order to inform the design of a mobile health application for STIs that supports self-testing and self-management by linking diagnosis with online care pathways, we aimed to identify the dimensions and range of preferences for user interface design features among young people. Methods: Nine focus group discussions were conducted (n=49) with two age-stratified samples (16 to 18 and 19 to 24 year olds) of young people from Further Education colleges and Higher Education establishments. Discussions explored young people's views with regard to: the software interface; the presentation of information; and the ordering of interaction steps. Discussions were audio recorded and transcribed verbatim. Interview transcripts were analysed using thematic analysis. Results: Four over-arching themes emerged: privacy and security; credibility; user journey support; and the task-technology-context fit. From these themes, 20 user interface design recommendations for mobile health applications are proposed. For participants, although privacy was a major concern, security was not perceived as a major potential barrier as participants were generally unaware of potential security threats and inherently trusted new technology. Customisation also emerged as a key design preference to increase attractiveness and acceptability. Conclusions: Considerable effort should be focused on designing healthcare applications from the patient's perspective to maximise acceptability. 
The design recommendations proposed in this paper provide a valuable point of reference for the health design community to inform development of mobile-based health interventions for the diagnosis and treatment of a number of other conditions for this target group, while stimulating conversation across multidisciplinary communities.

    Classifying public display systems: an input/output channel perspective

    Public display screens are relatively recent additions to our world, and while they may be as simple as a large screen with minimal input/output features, more recent developments have introduced much richer interaction possibilities supporting a variety of interaction styles. In this paper we propose a framework for classifying public display systems with a view to better understanding how they differ in terms of their interaction channels and how future installations are likely to evolve. This framework is explored through 15 existing public display systems which use mobile phones for interaction in the display space.

    A wearable multimodal interface for exploring urban points of interest

    Locating points of interest (POIs) in cities is typically facilitated by visual aids such as paper maps, brochures, and mobile applications. However, these techniques require visual attention, which ideally should be on the surroundings. Non-visual techniques for navigating towards specific POIs typically lack support for free exploration of the city or more detailed guidance. To overcome these issues, we propose a multimodal, wearable system for alerting the user of nearby recommended POIs. The system, built around a tactile glove, provides audio-tactile cues when a new POI is in the vicinity, and more detailed information and guidance if the user expresses interest in this POI. We evaluated the system in a field study, comparing it to a visual baseline application. The encouraging results show that the glove-based system helps keep the attention on the surroundings and that its performance is on the same level as that of the baseline.
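    The proximity check behind "a new POI is in the vicinity" can be sketched as a simple great-circle geofence. A minimal illustration, assuming a fixed alert radius; the function names, the POI record shape, and the radius are our illustrative choices, not the system's published design:

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_pois(user: tuple[float, float], pois: list[dict], radius_m: float = 100.0) -> list[dict]:
    """Return recommended POIs inside the alert radius; each hit would
    trigger an audio-tactile cue on the glove (hardware call not shown)."""
    return [p for p in pois
            if haversine_m(user[0], user[1], p["lat"], p["lon"]) <= radius_m]
```

    In practice the radius would likely be tuned per POI density and walking speed, but the filtering step itself stays this simple.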

    Integrating Haptic Feedback into Mobile Location Based Services

    Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled device such as a mobile phone. Historically, human-computer interaction has been visual - text and images on the screen. Haptic feedback can be an important additional method, especially in Mobile Location Based Services such as knowledge discovery, pedestrian navigation and notification systems. A knowledge discovery system called the Haptic GeoWand is a low interaction system that allows users to query geo-tagged data around them by using a point-and-scan technique with their mobile device. Haptic Pedestrian is a navigation system for walkers. Four prototypes have been developed, classified according to the user’s guidance requirements, the user type (based on spatial skills), and overall system complexity. Haptic Transit is a notification system that provides spatial information to the users of public transport. In all these systems, haptic feedback is used to convey information about location, orientation, density and distance by use of the vibration alarm with varying frequencies and patterns to help understand the physical environment. Trials elicited positive responses from users, who saw benefit in being provided with a “heads up” approach to mobile navigation. Results from a memory recall test show that the users of haptic feedback for navigation had better memory recall of the region traversed than the users of landmark images. Haptics integrated into a multi-modal navigation system provides more usable, less distracting and more effective interaction than conventional systems. Enhancements to the current work could include integration of contextual information, detailed large-scale user trials and the exploration of using haptics within confined indoor spaces.
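    One common way to encode distance with "varying frequencies and patterns", as the abstract describes, is to shorten the pause between fixed-length vibration pulses as the target gets closer. A minimal sketch under that assumption — the thresholds, pulse length, and pattern layout are illustrative, not the prototypes' actual parameters:

```python
def pulse_interval_ms(distance_m: float, near: float = 10.0, far: float = 200.0) -> int:
    """Map distance to the gap between pulses: closer target -> faster pulsing.
    Distances are clamped to [near, far] so the cue saturates at the extremes."""
    d = min(max(distance_m, near), far)
    frac = (d - near) / (far - near)   # 0.0 when at 'near', 1.0 when at 'far'
    return int(200 + frac * 1300)      # gap ranges from 200 ms to 1500 ms

def vibration_pattern(distance_m: float, pulses: int = 3) -> list[int]:
    """Build an alternating [vibrate_ms, pause_ms, vibrate_ms, ...] pattern,
    the layout used by e.g. Android's vibrator API (illustrative only)."""
    gap = pulse_interval_ms(distance_m)
    pattern: list[int] = []
    for _ in range(pulses):
        pattern += [100, gap]          # 100 ms pulse, then the distance-coded gap
    return pattern[:-1]                # drop the trailing pause
```

    Because the pulse length is constant, the user learns to read distance purely from rhythm, leaving other vibration parameters free to encode orientation or density.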

    Embedding mobile learning into everyday life settings

    The increasing ubiquity of smartphones has changed the way we interact with information and acquire new knowledge. The prevalence of personal mobile devices in our everyday lives creates new opportunities for learning that exceed the narrow boundaries of a school’s classroom and provide the foundations for lifelong learning. Learning can now happen whenever and wherever we are; whether on the sofa at home, on the bus during our commute, or on a break at work. However, the flexibility offered by mobile learning also creates its challenges. Being able to learn anytime and anywhere does not necessarily result in learning uptake. Without the school environment’s controlled schedule and teacher guidance, the learners must actively initiate learning activities, keep up repetition schedules, and cope with learning in interruption-prone everyday environments. Both interruptions and infrequent repetition can harm the learning process and long-term memory retention. We argue that current mobile learning applications insufficiently support users in coping with these challenges. In this thesis, we explore how we can utilize the ubiquity of mobile devices to ensure frequent engagement with the content, focusing primarily on language learning and supporting users in dealing with learning breaks and interruptions. Following a user-centered design approach, we first analyzed mobile learning behavior in everyday settings. Based on our findings, we proposed concepts and designs, developed research prototypes, and evaluated them in laboratory and field evaluations with a specific focus on user experience. To better understand users’ learning behavior with mobile devices, we first characterized their interaction with mobile learning apps through a detailed survey and a diary study. Both methods confirmed the enormous diversity in usage situations and preferences. 
We observed that learning often happens unplanned, infrequently, in the company of friends or family, or while simultaneously performing secondary tasks such as watching TV or eating. The studies further uncovered a significant prevalence of interruptions in everyday settings that affected users’ learning behavior, often leading to suspension and termination of the learning activities. We derived design implications to support learning in diverse situations, particularly aimed at mitigating the adverse effects of multitasking and interruptions. The proposed strategies should help designers and developers create mobile learning applications that adapt to the opportunities and challenges of learning in everyday mobile settings. We explored four main challenges, emphasizing that (1) we need to consider that Learning in Everyday Settings is Diverse and Interruption-prone, (2) learning performance is affected by Irregular and Infrequent Practice Behavior, (3) we need to move From Static to Personalized Learning, and (4) Interruptions and Long Learning Breaks can Negatively Affect Performance. To tackle these challenges, we propose to embed learning into everyday smartphone interactions, which could foster frequent engagement with – and implicitly personalize – learning content (according to users’ interests and skills). Further, we investigate how memory cues could be applied to support task resumption after interruptions in mobile learning. To confirm that our idea of embedding learning into everyday interactions can increase exposure, we developed an application integrating learning tasks into the smartphone authentication process. Since unlocking the smartphone is a frequently performed action without any other purpose, our subjects appreciated the idea of utilizing this process to perform quick and simple learning interactions. 
Evidence from a comparative user study showed that embedding learning tasks into the unlocking mechanism led to significantly more interactions with the learning content without impairing the learning quality. We further explored a method for embedding language comprehension assessment into users’ digital reading and listening activities. By applying physiological measurements as implicit input, we reliably detected unknown words during laboratory evaluations. Identifying such knowledge gaps could be used for the provision of in-situ support and to inform the generation of personalized language learning content tailored to users’ interests and proficiency levels. To investigate memory cueing as a concept to support task resumption after interruptions, we complemented a theoretical literature analysis of existing applications with two research probes implementing and evaluating promising design concepts. We showed that displaying memory cues when the user resumes the learning activity after an interruption improves their subjective user experience. A subsequent study presented an outlook on the generalizability of memory cues beyond the narrow use case of language learning. We observed that the helpfulness of memory cues for reflecting on prior learning is highly dependent on the design of the cues, particularly the granularity of the presented information. We consider interactive cues for specific memory reactivation (e.g., through multiple-choice questions) a promising scaffolding concept for connecting individual micro-learning sessions when learning in everyday settings. The tools and applications described in this thesis are a starting point for designing applications that support learning in everyday settings. We broaden the understanding of learning behavior and highlight the impact of interruptions in our busy everyday lives. 
While this thesis focuses mainly on language learning, the concepts and methods have the potential to be generalized to other domains, such as STEM learning. We reflect on the limitations of the presented concepts and outline future research perspectives that utilize the ubiquity of mobile devices to design mobile learning interactions for everyday settings.

    DOLPHIN: the design and initial evaluation of multimodal focus and context

    In this paper we describe a new focus and context visualisation technique called multimodal focus and context. This technique uses a hybrid visual and spatialised audio display space to overcome the limited visual displays of mobile devices. We demonstrate this technique by applying it to maps of theme parks. We present the results of an experiment comparing multimodal focus and context to a purely visual display technique. The results showed that neither system was significantly better than the other. We believe that this is due to issues involving the perception of multiple structured audio sources.