    Supporting Memorization and Problem Solving with Spatial Information Presentations in Virtual Environments

    While it has been suggested that immersive virtual environments could provide benefits for educational applications, few studies have formally evaluated how the enhanced perceptual displays of such systems might improve learning. Using simplified memorization and problem-solving tasks as representative approximations of more advanced types of learning, we are investigating the effects of providing supplemental spatial information on the performance of learning-based activities within virtual environments. We performed two experiments to investigate whether users can take advantage of a spatial information presentation to improve performance on cognitive processing activities. In both experiments, information was presented either directly in front of the participant or wrapped around the participant along the walls of a surround display. In our first experiment, we found that the spatial presentation led to better performance on a memorization and recall task. To investigate whether the advantages of spatial information presentation extend beyond memorization to higher-level cognitive activities, our second experiment employed a puzzle-like task that required critical thinking using the presented information. The results indicate that no performance improvements or mental workload reductions were gained from the spatial presentation method compared to a non-spatial layout for our problem-solving task. The results of these two experiments suggest that supplemental spatial information can support performance improvements for cognitive processing and learning-based activities, but its effectiveness is dependent on the nature of the task and a meaningful use of space.

    A Multimedia Approach to Game-Based Training: Exploring the Effects of the Modality and Temporal Contiguity Principles on Learning in a Virtual Environment

    There is an increasing interest in using video games as a means to deliver training to individuals learning new skills or tasks. However, current research lacks a clear method of developing effective instructional material when these games are used as training tools and explaining how gameplay may affect learning. The literature contains multiple approaches to training and game-based training (GBT) but generally lacks a foundational-level and theoretically relevant approach to how people learn specifically from video games and how to design instructional guidance within these gaming environments. This study investigated instructional delivery within GBT. Video games are a form of multimedia, consisting of both imagery and sounds. The Cognitive Theory of Multimedia Learning (CTML; Mayer 2005) explicitly describes how people learn from multimedia information, consisting of a combination of narration (words) and animation (pictures). This study empirically examined the effects of the modality and temporal contiguity principles on learning in a game-based virtual environment. Based on these principles, it was hypothesized that receiving either voice or embedded training would result in better performance on learning measures. Additionally, receiving a combination of voice and embedded training would lead to better performance on learning measures than all other instructional conditions. A total of 128 participants received training on the role and procedures related to the combat lifesaver: a non-medical soldier who receives additional training on combat-relevant lifesaving medical procedures. Training sessions involved an instructional presentation manipulated along the modality (voice or text) and temporal contiguity (embedded in the game or presented before gameplay) principles. Instructional delivery was manipulated in a 2x2 between-subjects design with four instructional conditions: Upfront-Voice, Upfront-Text, Embedded-Voice, and Embedded-Text.
Results indicated that: (1) upfront instruction led to significantly better retention performance than embedded instruction regardless of delivery modality; (2) receiving voice-based instruction led to better transfer performance than text-based instruction regardless of presentation timing; (3) no differences in performance were observed on the simple application test between any instructional conditions; and (4) a significant interaction of modality-by-temporal contiguity was obtained. Simple effects analysis indicated differing effects along modality within the embedded instruction group, with voice recipients performing better than text (p = .012). Individual group comparisons revealed that the upfront-voice group performed better on retention than both embedded groups (p = .006), the embedded-voice group performed better on transfer than the upfront-text group (p = .002), and the embedded-voice group performed better on the complex application test than the embedded-text group (p = .012). Findings indicated partial support for the application of the modality and temporal contiguity principles of CTML in interactive GBT. Combining gameplay (i.e., practice) with instructional presentation both helps and hinders working memory's ability to process information. Findings also explain how expanding CTML into game-based training may fundamentally change how a person processes information as a function of the specific type of knowledge being taught. Results will drive future systematic research to test and determine the most effective means of designing instruction for interactive GBT. Further theoretical and practical implications will be discussed.

    Hierarchical Event Descriptors (HED): Semi-Structured Tagging for Real-World Events in Large-Scale EEG.

    Real-world brain imaging by EEG requires accurate annotation of complex subject-environment interactions in event-rich tasks and paradigms. This paper describes the evolution of the Hierarchical Event Descriptor (HED) system for systematically describing both laboratory and real-world events. HED version 2, first described here, provides the semantic capability of describing a variety of subject and environmental states. HED descriptions can include stimulus presentation events on screen or in virtual worlds, experimental or spontaneous events occurring in the real-world environment, and events experienced via one or multiple sensory modalities. Furthermore, HED 2 can distinguish between the mere presence of an object and its actual (or putative) perception by a subject. Although the HED framework has implicit ontological and linked data representations, the user interface for HED annotation is more intuitive than traditional ontological annotation. We believe that hiding the formal representations allows for a more user-friendly interface, making consistent, detailed tagging of experimental and real-world events possible for research users. HED is extensible while retaining the advantages of having an enforced common core vocabulary. We have developed a collection of tools to support HED tag assignment and validation; these are available at hedtags.org. A plug-in for EEGLAB (sccn.ucsd.edu/eeglab), CTAGGER, is also available to speed the process of tagging existing studies.
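As a rough illustration of the kind of semi-structured tagging the abstract describes, the sketch below parses and checks a HED-style annotation: a comma-separated list of slash-delimited hierarchical tags validated against a controlled vocabulary. The vocabulary and tag strings here are hypothetical stand-ins, not the actual HED 2 schema; the real schema and validators are available at hedtags.org.

```python
# Toy parser/validator for HED-style annotations (illustrative only).
# TOY_VOCAB is a hypothetical vocabulary, not the real HED schema.
TOY_VOCAB = {
    "Event/Category/Experimental stimulus",
    "Sensory presentation/Visual",
    "Item/Object/Vehicle/Car",
}

def parse_tags(hed_string: str) -> list:
    """Split a comma-separated annotation into individual hierarchical tags."""
    return [tag.strip() for tag in hed_string.split(",") if tag.strip()]

def invalid_tags(tags: list, vocab: set) -> list:
    """Return the tags that are not part of the (toy) vocabulary."""
    return [tag for tag in tags if tag not in vocab]

annotation = "Event/Category/Experimental stimulus, Sensory presentation/Visual"
tags = parse_tags(annotation)
assert invalid_tags(tags, TOY_VOCAB) == []  # all tags recognized
```

An enforced common-core vocabulary of this kind is what lets independently tagged studies be pooled and queried consistently, which is the point of the HED effort.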

    Quantifying Cognitive Efficiency of Display in Human-Machine Systems

    As a side effect of fast-growing information technology, information overload has become prevalent in the operation of many human-machine systems. Overwhelming information can degrade operational performance because it imposes a large mental workload on human operators. One way to address this issue is to improve the cognitive efficiency of displays. A cognitively efficient display should be more informative while demanding fewer mental resources, so that an operator can process more displayed information using their limited working memory and achieve better performance. In order to quantitatively evaluate this display property, a Cognitive Efficiency (CE) metric is formulated as the ratio of the measures of two dimensions: display informativeness and required mental resources (each dimension can be affected by display, human, and contextual factors). The first segment of the dissertation discusses the available measurement techniques to construct the CE metric and initially validates the CE metric with basic discrete displays. The second segment demonstrates that displays showing higher cognitive efficiency improve multitask performance. This part also identifies the version of the CE metric that is the most predictive of multitask performance. The last segment of the dissertation applies the CE metric in driving scenarios to evaluate novel speedometer displays; however, it finds that the most efficient display may not better enhance concurrent tracking performance in driving. Although the findings of the dissertation show several limitations, they provide valuable insight into the complicated relationship among display, human cognition, and multitask performance in human-machine systems.
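The CE metric described above is simply a ratio, which can be sketched directly; the function name and the numeric scores below are illustrative assumptions (the dissertation's actual measurement techniques for informativeness and mental resources are not specified in this abstract).

```python
def cognitive_efficiency(informativeness: float, mental_resources: float) -> float:
    """Cognitive Efficiency (CE) as the ratio the abstract describes:
    display informativeness divided by required mental resources."""
    if mental_resources <= 0:
        raise ValueError("mental resources must be positive")
    return informativeness / mental_resources

# Hypothetical scores for two candidate displays: display A conveys
# more information per unit of mental workload, so its CE is higher.
ce_a = cognitive_efficiency(informativeness=8.0, mental_resources=2.0)  # 4.0
ce_b = cognitive_efficiency(informativeness=9.0, mental_resources=4.5)  # 2.0
assert ce_a > ce_b
```

Note that a display can be more informative in absolute terms (as display B is here) yet less cognitively efficient, which is exactly the trade-off the metric is meant to expose.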

    Stars in their eyes: What eye-tracking reveals about multimedia perceptual quality

    Perceptual multimedia quality is of paramount importance to the continued take-up and proliferation of multimedia applications: users will not use and pay for applications if they are perceived to be of low quality. Whilst traditionally distributed multimedia quality has been characterised by Quality of Service (QoS) parameters, these neglect the user perspective of the issue of quality. In order to redress this shortcoming, we characterise the user multimedia perspective using the Quality of Perception (QoP) metric, which encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also his/her ability to analyse, synthesise and assimilate the informational content of multimedia. In recognition of the fact that monitoring eye movements offers insights into visual perception, as well as the associated attention mechanisms and cognitive processes, this paper reports on the results of a study investigating the impact of differing multimedia presentation frame rates on user QoP and eye path data. Our results show that provision of higher frame rates, usually assumed to provide better multimedia presentation quality, does not significantly impact upon the median coordinate value of eye path data. Moreover, higher frame rates do not significantly increase the level of participant information assimilation, although they do significantly improve overall user enjoyment and quality perception of the multimedia content being shown.

    Pervasive and standalone computing: The perceptual effects of variable multimedia quality.

    The introduction of multimedia on pervasive and mobile communication devices raises a number of perceptual quality issues; however, limited work has been done examining the three-way interaction between use of equipment, quality of perception and quality of service. Our work measures levels of informational transfer (objective) and user satisfaction (subjective) when users are presented with multimedia video clips at three different frame rates, using four different display devices, simulating variation in participant mobility. Our results show that variation in frame rate does not impact a user's level of information assimilation, but does impact a user's perception of multimedia video 'quality'. Additionally, increased visual immersion can be used to increase transfer of video information, but can negatively affect the user's perception of 'quality'. Finally, we illustrate the significant effect of clip content on the transfer of video, audio and textual information, placing into doubt the use of purely objective quality definitions when considering multimedia presentations.

    A Cross-Case Analysis of Gender Issues in Desktop Virtual Reality Learning Environments

    This study examined gender-related issues in using new desktop virtual reality (VR) technology as a learning tool in career and technical education (CTE). Using relevant literature, theory, and cross-case analysis of data and findings, the study compared and analyzed the outcomes of two recent studies conducted by a research team at Oklahoma State University that addressed gender issues in VR-based training. This cross-case analysis synthesized the results of these two studies to draw conclusions and implications for CTE educators that may assist in developing or implementing successful virtual learning environments for occupational training. The cross-study findings suggested that males and females may be differently affected by VR and that females may be less comfortable, confident, and capable in virtual learning environments, particularly when the environments are highly technical and visually complex. The findings indicate caution in the use of VR in mixed-gender CTE programs, particularly in programs that are heavily female-gendered.

    III: Small: Information Integration and Human Interaction for Indoor and Outdoor Spaces

    The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The user should feel the transition between inside and outside to be seamless, in terms of the navigational support provided. The approach consists of integration of indoors and outdoors on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the affordances that the space provides. For example, an outside pedestrian walkway affords the same function as an inside corridor. Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of our framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments. These experiments also enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework. Findings from the human studies will help validate the efficacy of our formal framework for supporting human spatial learning and navigation in such integrated environments. Results will be distributed using the project Web site (www.spatial.maine.edu/IOspace) and will be incorporated into graduate-level courses on human interaction with mobile devices, shared with public school teachers participating in the University of Maine's NSF-funded RET (Research Experiences for Teachers). The research teams are working with two companies and one research center on technology transfer for building indoor-outdoor navigation tools with a wide range of applications, including those for persons with disabilities.