20,262 research outputs found

    Using motivation derived from computer gaming in the context of computer based instruction

    Get PDF
    This paper was originally presented at the IEEE Technically Sponsored SAI Computing Conference 2016, London, 13-15 July 2016. Abstract: This paper explores how to exploit game-based motivation as a way to promote engagement in computer-based instruction, and in particular in online learning interaction. The paper explores the human psychology of gaming and how it can be applied to learning, the computer mechanics of media presentation with their affordances and possibilities, and the emerging interaction of playing games and how this can itself provide a pedagogical scaffold for learning. In doing so, the paper focuses on four aspects of game-based motivation and how it may be used: (i) the game player's perception; (ii) the game designer's model of how to motivate; (iii) team aspects and social interaction as a motivating factor; and (iv) psychological models of motivation. This includes the increasingly social nature of computer interaction. The paper concludes with a manifesto for exploiting game-based motivation in learning.

    Assessing mobile mixed reality affordances as a comparative visualization pedagogy for design communication

    Get PDF
    Spatial visualisation skills and interpretation are critical in the design professions but are difficult for novice designers. There is growing evidence that mixed reality visualisation improves learner outcomes, but often these studies are focused on a single media representation rather than on a comparison between media and the underpinning learning outcomes. Results from recent studies highlight the use of comparative visualisation pedagogy in design through learner reflective blogs and pilot studies with experts, but these studies are limited by expense and by designs already familiar to the learner. With increasing interest in mobile pedagogy, more assessment is required to understand learner interpretation of comparative mobile mixed reality pedagogy. The aim of this study is to do this by evaluating insights from a first-year architectural design classroom through studying the impact and use of a range of mobile comparative visualisation technologies. Using a design-based research methodology and a usability framework for assessing comparative visualisation, this paper studies the complexities of spatial design in the built environment. Outcomes from the study highlight the strengths of the approach, but also the improvements required in the delivery of the visualisations to address the visibility issues and visual errors caused by limited mobile processing power.

    Exploring the Design Space of Immersive Urban Analytics

    Full text link
    Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of combination methods for 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing works, possible future research opportunities are explored and discussed.
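
    As a toy illustration of the kind of guideline the abstract describes, the Python sketch below encodes one possible heuristic for picking between linked, embedded, and mixed views from simple geometric cues. The class, the overlap measure, and the threshold are hypothetical assumptions for illustration, not the paper's actual model.

        # Illustrative sketch only: a toy heuristic for choosing between the
        # linked/embedded/mixed view types named in the abstract. The notion
        # of "overlap" and the 0.5 threshold are assumptions, not the
        # authors' guideline.

        from dataclasses import dataclass

        @dataclass
        class ViewLayout:
            overlap: float            # fraction of screen shared by the 2D and 3D views (0..1)
            needs_registration: bool  # must 2D marks stay spatially registered to 3D geometry?

        def choose_view_type(layout: ViewLayout) -> str:
            """Pick a combination method from the visual geometry of the two views."""
            if layout.needs_registration:
                return "embedded view"   # 2D marks drawn directly in the 3D scene
            if layout.overlap > 0.5:
                return "mixed view"      # 2D and 3D share one viewport
            return "linked view"         # separate, coordinated viewports

        print(choose_view_type(ViewLayout(overlap=0.1, needs_registration=False)))
        # -> linked view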

    Modelling virtual urban environments

    Get PDF
    In this paper, we explore the way in which virtual reality (VR) systems are being broadened to encompass a wide array of virtual worlds, many of which have immediate applicability to understanding urban issues through geocomputation. We sketch distinctions between immersive, semi-immersive and remote environments in which single and multiple users interact in a variety of ways. We show how such environments might be modelled in terms of ways of navigating within them, processes of decision-making which link users to one another, analytic functions that users have to make sense of the environment, and functions through which users can manipulate, change, or design their world. We illustrate these ideas using four exemplars that we have under construction: a multi-user internet GIS for London with extensive links to 3-d, video, text and related media; an exploration of optimal retail location using a semi-immersive visualisation in which experts can explore such problems; a virtual urban world in which remote users as avatars can manipulate urban designs; and an approach to simulating such virtual worlds through morphological modelling based on the digital record of the entire decision-making process through which such worlds are built.
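
    The four capabilities the abstract lists (navigation, decision links between users, analytic functions, and world manipulation) suggest a natural interface shape. The Python sketch below is purely illustrative; every class and method name is a hypothetical assumption rather than the authors' design.

        # A minimal sketch of the four functional dimensions described in the
        # abstract. All names here are hypothetical illustrations.

        from abc import ABC, abstractmethod

        class VirtualUrbanEnvironment(ABC):
            @abstractmethod
            def navigate(self, user_id: str, target: tuple) -> None:
                """Move a user's viewpoint through the model (x, y, z target)."""

            @abstractmethod
            def link_decision(self, from_user: str, to_user: str, proposal: str) -> None:
                """Record a decision that connects one user's action to another's."""

            @abstractmethod
            def analyse(self, query: str) -> dict:
                """Run an analytic function that helps users make sense of the scene."""

            @abstractmethod
            def manipulate(self, object_id: str, change: dict) -> None:
                """Change, or redesign, part of the world."""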

    Beyond cute: exploring user types and design opportunities of virtual reality pet games

    Get PDF
    Virtual pet games, such as handheld games like Tamagotchi or video games like Petz, provide players with artificial pet companions or entertaining pet-raising simulations. Prior research has found that virtual pets have the potential to promote learning, collaboration, and empathy among users. While virtual reality (VR) has become an increasingly popular game medium, little is known about users' expectations regarding game avatars, gameplay, and environments for VR-enabled pet games. We surveyed 780 respondents in an online survey and interviewed 30 participants to understand users' motivation, preferences, and game behavior in pet games played on various media, and their expectations for VR pet games. Based on our findings, we generated three user types that reflect users' preferences and gameplay styles in VR pet games. We use these types to highlight key design opportunities and recommendations for VR pet games.

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    Full text link
    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and allows users to feel by employing passive haptics: when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR, floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR. Video: http://living.media.mit.edu/projects/metaspace-ii
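
    A minimal sketch of the passive-haptics correspondence described above, with all names hypothetical: a virtual object is given exactly the pose its physical twin has in the room scan, so touching one necessarily means touching the other.

        # Hedged sketch of the passive-haptics idea: a virtual object is
        # registered to the pose of its physical counterpart taken from a 3D
        # scan of the room. Function and field names are assumptions, not
        # the MS2 API.

        import numpy as np

        def register_virtual_object(scan_position: np.ndarray,
                                    scan_rotation: np.ndarray) -> dict:
            """Place a virtual object exactly where its scanned physical twin sits.

            scan_position: (3,) world position from the room scan, in metres.
            scan_rotation: (3, 3) rotation matrix from the scan.
            """
            return {
                "position": scan_position.copy(),  # identical layout -> passive haptics
                "rotation": scan_rotation.copy(),
            }

        # A user reaching for the virtual table at (1.2, 0.0, 0.7) also touches
        # the physical table, because both occupy the same coordinates.
        table = register_virtual_object(np.array([1.2, 0.0, 0.7]), np.eye(3))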