Education in the Wild: Contextual and Location-Based Mobile Learning in Action. A Report from the STELLAR Alpine Rendez-Vous Workshop Series
Introduction to location-based mobile learning
[About the book]
The report follows on from a two-day workshop funded by the STELLAR Network of Excellence as part of its 2009 Alpine Rendez-Vous workshop series; it is edited by Elizabeth Brown, with a foreword by Mike Sharples. Contributors have provided examples of innovative and exciting research projects and practical applications for mobile learning in location-sensitive settings, including the sharing of good practice and the key findings that have resulted from this work. There is also a debate about whether location-based and contextual learning results in shallower learning strategies, and a section detailing the future challenges for location-based learning.
Augmenting the field experience: a student-led comparison of techniques and technologies
In this study we report on our experiences of creating and running a student field-trip exercise that allowed students to compare a range of approaches to the design of technologies for augmenting landscape scenes. The main study site is around Keswick in the English Lake District, Cumbria, UK, an attractive upland environment popular with tourists and walkers. The aim of the exercise was for the students to assess the effectiveness of various forms of geographic information in augmenting real landscape scenes, as mediated through a range of techniques and technologies. These techniques were: computer-generated acetate overlays showing annotated wireframe views from certain key points; a custom-designed application running on a PDA; a mediascape running on the mScape software on a GPS-enabled mobile phone; Google Earth on a tablet PC; and a head-mounted in-field Virtual Reality system. Each group of students had all five techniques available to them and was tasked with comparing them in the context of creating a visitor guide to the area centred on the field centre. Here we summarise their findings and reflect upon some of the broader research questions emerging from the project.
Design Patterns for Situated Visualization in Augmented Reality
Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns that summarize common approaches to visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines that explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.
Comment: To appear in IEEE VIS 202
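The core idea the abstract describes, rendering a visualization in spatial proximity to its physical referent, can be illustrated with a minimal placement sketch. This is not the paper's method or API; the function name, the fixed vertical offset, and the billboard-toward-viewer policy are illustrative assumptions only:

```python
import math

def place_label(referent, camera, offset_up=0.15):
    """Sketch of one proximity-placement idea (hypothetical, y-up world
    coordinates): lift a label a fixed offset above its physical referent
    and compute a unit vector from the label toward the viewer, so the
    label can be billboarded to stay readable."""
    rx, ry, rz = referent
    cx, cy, cz = camera
    # place the label just above the referent so it does not occlude it
    lx, ly, lz = rx, ry + offset_up, rz
    # direction from the label to the camera, normalized for billboarding
    fx, fy, fz = cx - lx, cy - ly, cz - lz
    n = math.sqrt(fx * fx + fy * fy + fz * fz)
    return (lx, ly, lz), (fx / n, fy / n, fz / n)
```

For example, a referent at the world origin viewed from one metre along +z yields a label at (0, 0.15, 0) facing back toward the camera; a real situated-visualization system would add occlusion handling and view management on top of such a placement rule.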
Motion-based Interaction for Head-Mounted Displays
Recent advances in affordable sensing technologies have enabled motion-based interaction (MbI) for head-mounted displays (HMDs). Unlike traditional input devices such as the mouse and keyboard, which often offer comparatively limited interaction possibilities (e.g., single-touch interaction), MbI does not have these constraints and is more natural because it reflects more closely how people do things in real life. However, several issues exist in MbI for HMDs due to the technical limitations of sensing and tracking devices, the higher degrees of freedom afforded to users, and limited research in the area owing to the rapid advancement of HMDs and tracking technologies. This thesis first outlines four core challenges in the design space of MbI for HMDs: (1) boundary awareness for hand-based interaction, (2) an efficient hands-free head-based interface for HMDs, (3) efficient and feasible full-body interaction for general tasks with HMDs, and (4) accessible full-body interaction for applications in HMDs. The thesis then presents an investigation of these challenges. The first challenge is addressed by providing visual feedback during interaction tailored to such technologies. The second challenge is addressed by using a circular layout with a go-and-hit selection style for head-based interaction, with text entry as the scenario. In addition, this thesis explores further interaction mechanisms that leverage the affordances of these techniques and, in doing so, proposes directional full-body motions as an approach to performing general tasks with HMDs, addressing the third challenge. The last challenge is addressed by (1) exploring the differences between performing full-body interaction for HMDs and for common displays (e.g., a TV) and (2) providing a set of design guidelines specific to current and future HMDs.
The results of this thesis show that: (1) visual methods for boundary awareness can help with mid-air hand-based interaction in HMDs; (2) head-based interaction and interfaces that take advantage of MbI, such as a circular interface, can provide a very efficient, low-error hands-free input method for HMDs; (3) directional full-body interaction can be a feasible and efficient approach for general tasks involving HMDs; and (4) full-body interaction for applications in HMDs should be designed differently than for traditional displays. In addition to these results, this thesis provides a set of design recommendations and takeaway messages for MbI for HMDs.
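The "circular layout with go-and-hit selection" the abstract mentions can be sketched as a simple geometric mapping: the head pointer rests in a central dead zone, and a selection fires when it moves out and hits one of the sectors arranged around the circle. The function below is a hedged illustration under these assumptions, not the thesis's actual implementation; the dead-zone radius and sector layout are hypothetical parameters:

```python
import math

def select_sector(dx, dy, n_items, dead_zone=0.2):
    """Map a head-pointer displacement (dx, dy) from the circle's centre
    to the index of one of n_items sectors in a circular layout.
    Returns None while the pointer stays inside the central dead zone,
    so head jitter does not trigger a selection; moving outward and
    'hitting' a sector selects it (a go-and-hit style)."""
    if math.hypot(dx, dy) < dead_zone:
        return None
    # angle in [0, 2*pi), counter-clockwise from the +x axis
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / n_items
    # shift by half a sector so item 0 is centred on the +x axis
    return int(((angle + sector / 2) % (2 * math.pi)) // sector)

# With 8 items: pointing right selects item 0, pointing up selects item 2.
print(select_sector(1.0, 0.0, 8))   # 0
print(select_sector(0.0, 1.0, 8))   # 2
print(select_sector(0.01, 0.01, 8)) # None (inside the dead zone)
```

In a text-entry scenario, each sector would hold a character or character group; the dead zone gives the user a neutral resting pose between selections.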
Text Entry Performance and Situation Awareness of a Joint Optical See-Through Head-Mounted Display and Smartphone System
Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts, but attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicates that, as of today, the challenges in such a joint interactive system outweigh the potential benefits.
Comment: To appear in IEEE Transactions on Visualization and Computer Graphics
On page(s): 1-17. Print ISSN: 1077-2626. Online ISSN: 1077-262