1,760 research outputs found
Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information
Entertainment, education and training are changing because of multi-party interaction technology. In the past we have seen the introduction of embodied agents and robots that take the role of a museum guide, a news presenter, a teacher, a receptionist, or someone who is trying to sell you insurance, houses or tickets. In all these cases the embodied agent needs to explain and describe. In this paper we contribute the design of a 3D virtual presenter that uses different output channels to present and explain. Speech and animation (posture, pointing and involuntary movements) are among these channels. The behavior is scripted and synchronized with the display of a 2D presentation with associated text and regions that can be pointed at (sheets, drawings, and paintings). In this paper the emphasis is on the interaction between the 3D presenter and the 2D presentation
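The scripted, multi-channel behavior this abstract describes can be pictured as a timeline of cues, each targeting one output channel (speech, posture, pointing) and optionally a named region of the 2D sheet. A minimal sketch, with all identifiers and the cue schema hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Cue:
    t: float              # offset (seconds) from the start of the sheet's display
    channel: str          # e.g. "speech", "point", "posture"
    payload: str          # utterance text, region name, or pose id

@dataclass
class SheetScript:
    sheet_id: str
    regions: dict         # region name -> (x, y) coordinate on the 2D sheet
    cues: list = field(default_factory=list)

    def schedule(self):
        """Return cues in playback order so the channels stay synchronized."""
        return sorted(self.cues, key=lambda c: c.t)

script = SheetScript(
    sheet_id="sheet-1",
    regions={"title": (0.5, 0.9), "figure": (0.3, 0.4)},
    cues=[
        Cue(2.0, "point", "figure"),
        Cue(0.0, "speech", "This sheet shows the architecture."),
        Cue(2.0, "posture", "lean-toward-screen"),
    ],
)
order = [c.channel for c in script.schedule()]
```

Because the sort is stable, cues sharing a timestamp (here, pointing while shifting posture) keep their scripted order, which is one simple way to keep channels aligned.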
Location-based technologies for learning
Emerging technologies for learning report - article exploring location-based technologies and their potential for education
Investigating affordances of virtual worlds for real world B2C e-commerce
Virtual worlds are three-dimensional (3D) online persistent multi-user environments where users interact through avatars. The literature suggests that virtual worlds can facilitate real world business-to-consumer (B2C) e-commerce. However, few real world businesses have adopted virtual worlds for B2C e-commerce. In this paper, we present results from interviews with consumers in a virtual world to investigate how virtual worlds can support B2C e-commerce. A thematic analysis of the data was conducted to uncover affordances and constraints of virtual worlds for B2C e-commerce. Two affordances (habitability and appearance of realness) and one constraint (demand for specialised skill) were uncovered. The implications of this research for designers are (1) to provide options to consumers that enable them to manage their online reputation, (2) to focus on managing consumers' expectations and (3) to facilitate learning between consumers
An Introduction to 3D User Interface Design
3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article
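One of the generic 3D tasks the survey names is selection. A common technique for it is ray-casting: shoot a ray from the user's hand or viewpoint and select the nearest intersected object. A minimal sketch, using spheres as stand-in bounding volumes (the scene and object names are hypothetical, not from the paper):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest intersection, or None on a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else None

def pick(origin, direction, objects):
    """Select the closest object hit by the pointing ray."""
    best = None
    for name, center, radius in objects:
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None

objects = [("near_cube", (0, 0, 5), 1.0), ("far_cube", (0, 0, 10), 1.0)]
selected = pick((0, 0, 0), (0, 0, 1), objects)
```

Picking the smallest positive `t` is what makes occlusion behave as users expect: the frontmost object along the ray wins.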
Human motion retrieval based on freehand sketch
In this paper, we present an integrated framework of human motion retrieval based on freehand sketch. With some simple rules, the user can acquire a desired motion by sketching several key postures. To retrieve efficiently and accurately by sketch, the 3D postures are projected onto several 2D planes. The limb direction feature is proposed to represent the input sketch and the projected postures. Furthermore, a novel index structure based on the k-d tree is constructed to index the motions in the database, which speeds up the retrieval process. With our posture-by-posture retrieval algorithm, a continuous motion can be obtained directly or generated by using a pre-computed graph structure. Moreover, our system provides an intuitive user interface. The experimental results demonstrate the effectiveness of our method. © 2014 John Wiley & Sons, Ltd
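The indexing step can be illustrated with a toy k-d tree over fixed-length feature vectors. This is a generic sketch, assuming each posture has already been reduced to a small limb-direction vector tagged with a motion label; the feature values and labels below are made up:

```python
import math

def build_kdtree(points, depth=0):
    """Build a k-d tree over (vector, label) pairs, splitting axes cyclically."""
    if not points:
        return None
    axis = depth % len(points[0][0])
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Return the (vector, label) pair whose vector is closest to the query."""
    if node is None:
        return best
    vec, _ = node["point"]
    if best is None or math.dist(vec, query) < math.dist(best[0], query):
        best = node["point"]
    diff = query[node["axis"]] - vec[node["axis"]]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = nearest(node[near], query, best)
    # Only search the far subtree if the splitting plane is closer than the
    # current best match (the standard k-d pruning rule).
    if abs(diff) < math.dist(best[0], query):
        best = nearest(node[far], query, best)
    return best

postures = [((0.0, 1.0), "walk"), ((1.0, 0.0), "wave"), ((0.9, 0.1), "crouch")]
tree = build_kdtree(postures)
match = nearest(tree, (0.92, 0.08))
```

The pruning test is what gives the speed-up the abstract claims over a linear scan: whole subtrees are skipped once the splitting plane lies farther away than the best candidate found so far.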
Playing for Data: Ground Truth from Computer Games
Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set.
Comment: Accepted to the 14th European Conference on Computer Vision (ECCV 2016)
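The propagation idea in this abstract can be illustrated with a toy grouping step: patches that share the same rendering-resource signature (the kind of association recoverable from game-to-GPU communication) inherit a semantic label once any one of them is annotated. A simplified sketch; the signature fields and patch identifiers are hypothetical stand-ins, not the paper's actual data format:

```python
from collections import defaultdict

def propagate_labels(patches, annotations):
    """Spread sparse manual labels to all patches sharing a resource signature.

    patches: dict patch_id -> (mesh_id, texture_id, shader_id)
    annotations: dict patch_id -> semantic label (a few manual labels)
    Returns dict patch_id -> label, or None if a patch's group is unlabeled.
    """
    groups = defaultdict(list)
    for pid, signature in patches.items():
        groups[signature].append(pid)
    # Map each signature to the label of any annotated member
    group_label = {}
    for pid, label in annotations.items():
        group_label[patches[pid]] = label
    return {pid: group_label.get(sig) for pid, sig in patches.items()}

patches = {
    "img1:p0": ("car_mesh", "car_tex", "opaque"),
    "img1:p1": ("road_mesh", "asphalt", "opaque"),
    "img2:p7": ("car_mesh", "car_tex", "opaque"),  # same car, later frame
}
labels = propagate_labels(patches, {"img1:p0": "car"})
```

Labeling one patch of the car labels every patch with the same signature, including patches in other frames, which is why a small amount of manual work can yield dense annotations across many images.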
Automating content generation for large-scale virtual learning environments using semantic web services
The integration of semantic web services with three-dimensional virtual worlds offers many potential avenues for the creation of dynamic, content-rich environments which can be used to entertain, educate, and inform. One such avenue is the fusion of the large volumes of data from Wiki-based sources with virtual representations of historic locations, using semantics to filter and present data to users in effective and personalisable ways. This paper explores the potential for such integration, addressing challenges ranging from accurately transposing virtual world locales to semantically-linked real world data, to integrating diverse ranges of semantic information sources in a user-centric and seamless fashion. A demonstrated proof-of-concept, using the Rome Reborn model, a detailed 3D representation of Ancient Rome within the Aurelian Walls, shows several advantages that can be gained through the use of existing Wiki and semantic web services to rapidly and automatically annotate content, as well as demonstrating the increasing need for Wiki content to be represented in a semantically-rich form. Such an approach has applications in a range of different contexts, including education, training, and cultural heritage
Transforming pre-service teacher curriculum: observation through a TPACK lens
This paper will discuss an international online collaborative learning experience through the lens of the Technological Pedagogical Content Knowledge (TPACK) framework. The teacher knowledge required to effectively provide transformative learning experiences for 21st century learners in a digital world is complex, situated and changing. The discussion looks beyond the opportunity for knowledge development of content, pedagogy and technology as components of TPACK towards the interaction between those three components. Implications for practice are also discussed. In today's technology-infused classrooms it falls within the realms of teacher educators, practising teachers and pre-service teachers to explore and address effective practices using technology to enhance learning
Teaching and learning in virtual worlds: is it worth the effort?
Educators have been quick to spot the enormous potential afforded by virtual worlds for situated and authentic learning, practising tasks with potentially serious consequences in the real world and for bringing geographically dispersed faculty and students together in the same space (Gee, 2007; Johnson and Levine, 2008). Though this potential has largely been realised, it generally isn't without cost in terms of lack of institutional buy-in, steep learning curves for all participants, and lack of a sound theoretical framework to support learning activities (Campbell, 2009; Cheal, 2007; Kluge & Riley, 2008). This symposium will explore the affordances and issues associated with teaching and learning in virtual worlds, all the time considering the question: is it worth the effort?
Educational inclusion and new technologies
The development of new technologies creates affordances with the potential to remove barriers to learning faced by young people. New technologies have therefore been seen as both a panacea for problems in developing inclusive education, and as a way of allowing a diverse range of learners to access and engage with the curriculum in its broadest sense. This chapter critically considers these views by drawing on a range of selected research. This research uses different methodologies and educational contexts to sample different levels of use, and different aspects of new technology. The case studies included here illustrate particular issues in developing and using technology. The case studies cover: using Tablet PCs in schools; developing educational robotics as an inclusive curriculum activity; developing pedagogic practice with morphing software and interactive software designed for dyslexic learners; and Schome Park, an interactive virtual environment.
The chapter considers how technology is used in these cases and the degree to which it has supported educational inclusion. This offers an insight into innovative educational practice and research and supports an analysis of the factors which influence the impact of potentially inclusive technology
- …