3,144 research outputs found

    Wearable learning tools

    In life, people must learn whenever and wherever they experience something new. Until recently, computing technology could not support such a notion: the constraints of size, power and cost kept computers under the classroom table, in the office or in the home. Recent advances in miniaturization have led to a growing field of research in ‘wearable’ computing. This paper looks at how such technologies can enhance computer-mediated communications, with a focus on collaborative working for learning. An experimental system, MetaPark, is discussed, which explores communications, data retrieval and recording, and navigation techniques within and across real and virtual environments. To realize the MetaPark concept, an underlying network architecture is described that supports the required communication model between static and mobile users. This infrastructure, the MUON framework, is offered as a solution providing a seamless service that tracks user location, interfaces to contextual-awareness agents, and provides transparent network service switching.
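
    The abstract describes the MUON framework only at a high level. As a loose, hypothetical sketch of the core idea of transparent service switching driven by tracked user location (the class, zone and service names below are illustrative assumptions, not taken from the paper):

```python
# Hypothetical sketch of location-driven, transparent service switching in the
# spirit of the MUON idea; all names and zone/service mappings are illustrative.
from dataclasses import dataclass


@dataclass
class Location:
    x: float
    y: float
    zone: str  # e.g. "park", "classroom", "virtual"


class ServiceSwitcher:
    """Tracks a user's location and picks a network service for their zone."""

    ZONE_SERVICES = {
        "park": "wireless-lan",
        "classroom": "wired-lan",
        "virtual": "overlay-network",
    }

    def __init__(self):
        self.current_service = None

    def update_location(self, loc: Location) -> str:
        """Switch services transparently when the user changes zone."""
        service = self.ZONE_SERVICES.get(loc.zone, "default-bearer")
        if service != self.current_service:
            # A real infrastructure would perform a handover here.
            self.current_service = service
        return self.current_service


switcher = ServiceSwitcher()
print(switcher.update_location(Location(3.0, 4.0, "park")))       # wireless-lan
print(switcher.update_location(Location(0.0, 0.0, "classroom")))  # wired-lan
```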

    Classifying public display systems: an input/output channel perspective

    Public display screens are relatively recent additions to our world, and while they may be as simple as a large screen with minimal input/output features, more recent developments have introduced much richer interaction possibilities supporting a variety of interaction styles. In this paper we propose a framework for classifying public display systems with a view to better understanding how they differ in terms of their interaction channels and how future installations are likely to evolve. This framework is explored through 15 existing public display systems that use mobile phones for interaction in the display space.
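
    As a hypothetical illustration of the kind of input/output-channel classification the abstract refers to (the channel names, categories, and the example system below are assumptions for illustration, not the paper's actual taxonomy):

```python
# Illustrative sketch only: grouping public display systems by their
# interaction channels; categories are assumptions, not the paper's taxonomy.
from dataclasses import dataclass, field


@dataclass
class DisplaySystem:
    name: str
    input_channels: set = field(default_factory=set)   # e.g. {"mobile phone", "touch"}
    output_channels: set = field(default_factory=set)  # e.g. {"large screen", "audio"}


def classify(system: DisplaySystem) -> str:
    """Coarse grouping by the richness and type of the input channels."""
    if not system.input_channels:
        return "broadcast-only"
    if "mobile phone" in system.input_channels:
        return "phone-mediated"
    return "direct-interaction"


kiosk = DisplaySystem("CityBoard", {"mobile phone"}, {"large screen"})
print(classify(kiosk))  # phone-mediated
```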

    DOLPHIN: the design and initial evaluation of multimodal focus and context

    In this paper we describe a new focus and context visualisation technique called multimodal focus and context. This technique uses a hybrid visual and spatialised audio display space to overcome the limited visual displays of mobile devices. We demonstrate this technique by applying it to maps of theme parks. We present the results of an experiment comparing multimodal focus and context to a purely visual display technique. The results showed that neither system was significantly better than the other. We believe that this is due to issues involving the perception of multiple structured audio sources.
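
    A minimal sketch of the general idea behind such a hybrid display space, assuming a circular visual focus region (this is not the DOLPHIN implementation; the function and parameter names are invented for illustration):

```python
# Illustrative sketch (not the DOLPHIN system): items inside the visual focus
# region are drawn on screen; items outside it are handed to a spatialised
# audio renderer at the corresponding bearing and distance.
import math


def render_item(item_x, item_y, focus_x, focus_y, focus_radius):
    """Decide the modality for one map item relative to the focus region."""
    dx, dy = item_x - focus_x, item_y - focus_y
    distance = math.hypot(dx, dy)
    if distance <= focus_radius:
        return ("visual", None)
    # Outside the focus: represent the item as a spatialised audio source.
    bearing_deg = math.degrees(math.atan2(dy, dx))
    return ("audio", {"bearing_deg": bearing_deg, "distance": distance})


print(render_item(10, 0, 0, 0, 5))  # ('audio', {'bearing_deg': 0.0, 'distance': 10.0})
print(render_item(2, 1, 0, 0, 5))   # ('visual', None)
```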

    The potential of physical motion cues: changing people’s perception of robots’ performance

    Autonomous robotic systems can automatically perform actions on behalf of users in the domestic environment to help people in their daily activities. Such systems aim to reduce users' cognitive and physical workload and improve wellbeing. While the benefits of these systems are clear, recent studies suggest that users may misconstrue how well such systems perform their tasks. We see an opportunity in designing interaction techniques that improve how users perceive the performance of such systems. We report two lab studies (N=16 each) designed to investigate whether showing the physical motion of an autonomous system as it completes its task, that is, revealing the system's process through movement intrinsic to the task, affects how users perceive its performance. To ensure our studies are ecologically valid and to motivate participants to provide thoughtful responses, we adopted consensus-oriented financial incentives. Our results suggest that physical presence does yield higher performance ratings.

    Embedding mobile learning into everyday life settings

    The increasing ubiquity of smartphones has changed the way we interact with information and acquire new knowledge. The prevalence of personal mobile devices in our everyday lives creates new opportunities for learning that exceed the narrow boundaries of a school’s classroom and provide the foundations for lifelong learning. Learning can now happen whenever and wherever we are: on the sofa at home, on the bus during our commute, or on a break at work. However, the flexibility offered by mobile learning also creates challenges. Being able to learn anytime and anywhere does not necessarily result in learning uptake. Without the controlled schedule and teacher guidance of the school environment, learners must actively initiate learning activities, keep up repetition schedules, and cope with learning in interruption-prone everyday environments. Both interruptions and infrequent repetition can harm the learning process and long-term memory retention. We argue that current mobile learning applications insufficiently support users in coping with these challenges.
    In this thesis, we explore how we can utilize the ubiquity of mobile devices to ensure frequent engagement with the content, focusing primarily on language learning and on supporting users in dealing with learning breaks and interruptions. Following a user-centered design approach, we first analyzed mobile learning behavior in everyday settings. Based on our findings, we proposed concepts and designs, developed research prototypes, and evaluated them in laboratory and field evaluations with a specific focus on user experience.
    To better understand users’ learning behavior with mobile devices, we first characterized their interaction with mobile learning apps through a detailed survey and a diary study. Both methods confirmed the enormous diversity in usage situations and preferences. We observed that learning often happens unplanned, infrequently, in the company of friends or family, or while simultaneously performing secondary tasks such as watching TV or eating. The studies further uncovered a significant prevalence of interruptions in everyday settings that affected users’ learning behavior, often leading to suspension and termination of the learning activities. We derived design implications to support learning in diverse situations, particularly aimed at mitigating the adverse effects of multitasking and interruptions. The proposed strategies should help designers and developers create mobile learning applications that adapt to the opportunities and challenges of learning in everyday mobile settings. We explored four main challenges, emphasizing that (1) we need to consider that Learning in Everyday Settings is Diverse and Interruption-prone, (2) learning performance is affected by Irregular and Infrequent Practice Behavior, (3) we need to move From Static to Personalized Learning, and (4) Interruptions and Long Learning Breaks can Negatively Affect Performance.
    To tackle these challenges, we propose to embed learning into everyday smartphone interactions, which could foster frequent engagement with learning content and implicitly personalize it according to users’ interests and skills. Further, we investigate how memory cues could be applied to support task resumption after interruptions in mobile learning. To confirm that embedding learning into everyday interactions can increase exposure, we developed an application integrating learning tasks into the smartphone authentication process. Since unlocking the smartphone is a frequently performed action that serves no other purpose, our subjects appreciated the idea of utilizing this process for quick and simple learning interactions. Evidence from a comparative user study showed that embedding learning tasks into the unlocking mechanism led to significantly more interactions with the learning content without impairing learning quality. We further explored a method for embedding language comprehension assessment into users’ digital reading and listening activities. By applying physiological measurements as implicit input, we reliably detected unknown words during laboratory evaluations. Identifying such knowledge gaps could be used to provide in-situ support and to inform the generation of personalized language learning content tailored to users’ interests and proficiency levels.
    To investigate memory cueing as a concept to support task resumption after interruptions, we complemented a theoretical literature analysis of existing applications with two research probes implementing and evaluating promising design concepts. We showed that displaying memory cues when the user resumes a learning activity after an interruption improves the subjective user experience. A subsequent study provided an outlook on the generalizability of memory cues beyond the narrow use case of language learning. We observed that the helpfulness of memory cues for reflecting on prior learning depends heavily on the design of the cues, particularly the granularity of the presented information. We consider interactive cues for specific memory reactivation (e.g., through multiple-choice questions) a promising scaffolding concept for connecting individual micro-learning sessions when learning in everyday settings.
    The tools and applications described in this thesis are a starting point for designing applications that support learning in everyday settings. We broaden the understanding of learning behavior and highlight the impact of interruptions in our busy everyday lives. While this thesis focuses mainly on language learning, the concepts and methods have the potential to be generalized to other domains, such as STEM learning. We reflect on the limitations of the presented concepts and outline future research perspectives that utilize the ubiquity of mobile devices to design mobile learning interactions for everyday settings.
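
    To make the idea of embedding a micro-learning task into the unlock step concrete, here is a purely conceptual sketch in Python (the thesis prototype is a smartphone application; the vocabulary items, function names, and the decision not to block unlocking on a wrong answer are assumptions made for illustration):

```python
# Conceptual sketch only: gate a pseudo "unlock" step behind one quick
# vocabulary prompt, in the spirit of embedding learning into authentication.
# Not the thesis prototype; items and policy choices are illustrative.
import random

VOCAB = {"Haus": "house", "Apfel": "apple", "Buch": "book"}  # sample items


def unlock_with_learning_task(ask):
    """Show one vocabulary item at unlock time.

    The device unlocks regardless of correctness: the aim is frequent,
    low-effort exposure to the content, not blocking the user.
    """
    word, translation = random.choice(list(VOCAB.items()))
    answer = ask(f"Translate '{word}': ")
    return {"unlocked": True, "word": word,
            "correct": answer.strip().lower() == translation}


# Scripted answer instead of real user input, for demonstration:
print(unlock_with_learning_task(lambda prompt: "house"))
```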

    Mobile Location Based Services: Non-visual Feedback Using Haptics

    Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled device such as a mobile phone. Historically, human-computer interaction has been visual: text and images on a screen. In this paper, we discuss our Haptic Interaction Model, which describes the integration of haptic feedback into Mobile Location Based Services such as knowledge discovery, pedestrian navigation and notification systems. A knowledge discovery system called the Haptic GeoWand is a low-interaction system that allows users to query geo-tagged data around them by using a point-and-scan technique with their mobile device. Haptic Pedestrian is a navigation system for walkers; four prototypes have been developed, classified according to the user's guidance requirements, the user type (based on spatial skills), and overall system complexity. Haptic Alert is a notification system that provides spatial information to the users of public transport. In all these systems, haptic feedback is used to convey information about location, orientation, density and distance by using the vibration alarm with varying frequencies and patterns to help users understand the physical environment. User trials have elicited positive responses from users. Haptics integrated into a multi-modal navigation system and other mobile location based services provides more usable, less distracting and more effective interaction than conventional systems.
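
    As a hedged illustration of how spatial information might be encoded in vibration patterns of varying frequency and duration (the thresholds and the pulse/gap scheme below are assumptions; they are not the encodings used by Haptic GeoWand, Haptic Pedestrian, or Haptic Alert):

```python
# Hypothetical distance/orientation-to-vibration mapping; thresholds and the
# pulse/gap scheme are illustrative, not the paper's actual encoding.
def vibration_pattern(distance_m: float, on_bearing: bool) -> list:
    """Return a flat list of alternating on/off durations in milliseconds.

    Closer targets produce shorter gaps (faster pulsing); pointing roughly at
    the target is signalled with longer pulses.
    """
    gap = max(100, min(1000, int(distance_m * 10)))  # nearer -> shorter gaps
    pulse = 300 if on_bearing else 100               # on-course -> longer pulses
    return [pulse, gap] * 3


print(vibration_pattern(25.0, on_bearing=True))    # [300, 250, 300, 250, 300, 250]
print(vibration_pattern(800.0, on_bearing=False))  # [100, 1000, 100, 1000, 100, 1000]
```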
