
    Problematizing Second Language (L2) Learning Using Emerging VR Systems

    There is little doubt that nothing compares to being immersed in the country of the language you are trying to learn. Students who wish to learn English as a Second Language (ESL) not only enjoy the sensory and affective experience of intercultural learning contexts; they often gain emotional and intellectual maturity while living abroad. For many international students, however, the reality of travelling abroad to learn English is a difficult transition, especially at pre-sessional or beginner/foundation levels, in terms of language acquisition, expense, and feelings of isolation, while some also struggle with the pressure to maintain scholarships. As it stands, existing English language centres work hard to advance students to higher levels of language competency, offering further language courses that help them progress to undergraduate studies. As part of such programmes, colleges often plan visits to historical and cultural sites to encourage non-formal learning. These trips, however, often impart historical information that is beyond students' immediate language levels, an oversight that fails to optimise the experience pedagogically in developing the competencies outlined by the Common European Framework of Reference for Languages (CEFR). While not intending to replace present ESL courses, we propose that VR systems can successfully complement internationalisation programmes in Ireland. The emergence of commercially available VR head-mounted displays offers opportunities for immersive ESL virtual environments. VR technology can enable spaces for creative learning structures during foundation/beginner courses by delivering VR-based learning through virtual visits to Irish sites from students' home-based colleges.
This will tailor courses to students' actual levels before they progress, at higher levels, to class-based environments in their respective host English-speaking countries. While in Ireland, it is envisaged that the VR supports will facilitate visits to on-site locations followed by virtual site equivalents, to maximise language learning in structured, innovative ways. VR can also enable online colleges without a physical campus to offer students a diversity of online courses while giving them the option to stay at home to suit their personal circumstances. A collaborative project between researchers at Limerick Institute of Technology and Hibernia College Dublin aims to capture the structural and acoustic data of various historical buildings and iconic landmarks in Ireland. The acquisition of structural features will involve a 3D laser scanner and a record of construction materials. The acquisition of acoustic data will involve measuring the impulse response of each space using a dodecahedron speaker with reference and binaural microphones. Using this data, digital equivalents incorporating spatial attributes of both auditory and visual modalities will be rendered for the Oculus Rift VR headset and standard headphones. These renders will position both the ESL learner and the English language lecturer at virtual Irish historical sites, allowing immersive learning to find full expression in realising the digital campus.
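A note on the acoustic pipeline this abstract describes: once a room's impulse response has been measured, placing a sound source inside the virtual equivalent of that space amounts to convolving a dry (anechoic) recording with the measured response. A minimal sketch, assuming NumPy and a toy exponential-decay impulse response (none of this is drawn from the project itself):

```python
import numpy as np

def auralize(dry_signal: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a dry recording with a measured room impulse response
    to simulate how the source would sound in that space, then
    normalize the peak to avoid clipping."""
    wet = np.convolve(dry_signal, impulse_response)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a single click passed through a decaying impulse response.
ir = 0.5 ** np.arange(5)           # hypothetical exponential decay
dry = np.zeros(8)
dry[0] = 1.0                       # unit impulse ("click")
wet = auralize(dry, ir)
print(wet[:5])                     # the wet signal follows the IR shape
```

In a real binaural render, one convolution per ear would be run with the impulse responses captured by the binaural microphones, and the result played over the headphones mentioned above.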

    Playful interaction for learning collaboratively and individually

    Playful interactions facilitate the development of engaging applications for different purposes. This aspect is very important for serious games, especially games for children. Another aspect to consider is interaction among children, which can be a great reinforcement in learning environments. Children enjoy playing, and they like playing with other children; this relationship can encourage their motivation and improve their learning outcomes. In this paper, a playful interaction system for learning about a period of history is presented. Interaction with the system was achieved using natural gestures, and the visualization was autostereoscopic. A study was carried out to determine whether learning outcomes were greater when children played collaboratively or individually. Forty-six children from 7 to 10 years old participated in the study. The analysis of the pre-tests and post-tests indicates that the children increased their knowledge about historical periods after playing in both modes. Therefore, the game can be used as an effective transmitter of knowledge both collaboratively and individually. When the post-knowledge scores for the two modes were compared, statistically significant differences were found in favor of the collaborative mode; the collaborative mode therefore facilitates learning to a greater extent than the individual mode. The remaining questions indicated that the children had a lot of fun while playing the game, found it easy to play, would recommend it to their friends, and scored it a mean of 9.57 out of 10.
Finally, we believe that the combination of playful interaction and autostereoscopy is an option that should be exploited not only for the development of computer-supported learning systems, but also for the development of systems for different purposes.

This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02-01). Martín San José, JF.; Juan Lizandra, MC.; Torres, E.; Vicent López, MJ. (2014). Playful interaction for learning collaboratively and individually. Journal of Ambient Intelligence and Smart Environments. 6(3):295-311. https://doi.org/10.3233/AIS-140257
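The between-mode comparison reported above rests on a non-parametric rank test of the two groups' post-knowledge scores. As an illustration only, with invented scores standing in for the study's data, a Mann-Whitney U test with a normal approximation can be sketched in plain Python:

```python
import math

def mannwhitney_u(a, b):
    """Mann-Whitney U test using midranks for ties and a normal
    approximation for the two-sided p-value (no tie correction
    of the variance). Returns (U for group a, p-value)."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                 # assign midranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        mid = (i + j) / 2 + 1                # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = mid
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])                     # rank sum of group a
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

# Hypothetical post-test scores, for illustration only.
collaborative = [9, 8, 9, 10, 8, 9, 9, 10]
individual    = [7, 8, 7, 8, 6, 7, 8, 7]
u, p = mannwhitney_u(collaborative, individual)
print(f"U = {u}, p = {p:.4f}")
```

With these toy scores the test rejects the null hypothesis at the 5% level, mirroring (but in no way reproducing) the direction of the study's finding.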

    An Experimental Mixed Methods Pilot Study for U.S. Army Infantry Soldiers - Higher Levels of Combined Immersion and Embodiment in Simulation-Based Training Capabilities Show Positive Effects on Emotional Impact and Relationships to Learning Outcomes

    This pilot study examines the impact of combined immersion and embodiment on learning and emotional outcomes. The results are intended to help U.S. Army senior leaders decide whether dismounted infantry Soldiers would benefit from a more immersive simulation-based training capability. The experiment's between-subject design included a sample of 15 participants randomly assigned to one of three system configurations representing different levels of combined immersion and embodiment. The control group used a typical desktop; the two experimental groups used a typical configuration of a Virtual Reality (VR) headset and a novel configuration of VR supported by an omnidirectional treadmill (ODT) for full-body exploration and interaction. Unlike similar studies, this pilot study allows for an analysis of the Infinadeck ODT's impact on learning outcomes and of the value of pairing tasks by type with various levels of immersion. Each condition accessed the same realistically modeled geospatial virtual environment (VE), the UCF Virtual Arboretum, and completed the same pre- and post-VE-interaction measurement instruments. These tests included complicated and complex information: declarative information involved listing plants and communities native to central Florida (complicated tasks), while the situational awareness measurement required participants to draw a sketch map (complex task). The Kruskal-Wallis non-parametric statistical test showed no difference between conditions on learning outcomes. The non-parametric Spearman correlation test showed many significant relationships between system configuration and emotional outcomes. Graphical representations of the data, combined with quantitative, qualitative, and correlational data, suggest a larger sample size is required to achieve the power needed to answer this research question.
This study found a strong trend indicating that learning outcomes are affected by task type, and that significant correlations between emotions important for learning outcomes increased with combined immersion and embodiment.
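The Kruskal-Wallis test used in this abstract compares rank sums of a score across the three system configurations. A minimal sketch with invented scores standing in for the study's data (midranks for ties, no tie correction of H):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic computed from midranks."""
    all_vals = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(all_vals)
    ranks = [0.0] * n
    i = 0
    while i < n:                             # midranks for tied values
        j = i
        while j + 1 < n and all_vals[j + 1][0] == all_vals[i][0]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = mid
        i = j + 1
    rank_sums = [0.0] * len(groups)
    for (v, gi), r in zip(all_vals, ranks):
        rank_sums[gi] += r
    return (12 / (n * (n + 1))
            * sum(rs ** 2 / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n + 1))

# Hypothetical learning scores per condition, for illustration only.
desktop = [4, 5, 6, 5, 4]
vr      = [5, 6, 5, 6, 5]
vr_odt  = [6, 5, 6, 7, 5]
h = kruskal_wallis_h(desktop, vr, vr_odt)
# Chi-square critical value for df = 2, alpha = 0.05 is 5.991.
print(f"H = {h:.3f}; significant" if h >= 5.991 else f"H = {h:.3f}; no difference")
```

With these toy scores H falls below the critical value, so no difference between conditions would be declared, echoing the null result reported for learning outcomes.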

    MIFTel: a multimodal interactive framework based on temporal logic rules

    Human-computer and multimodal interaction are increasingly used in everyday life. Machines are able to gather more from the surrounding world, assisting humans in different application areas. In this context, the correct processing and management of the signals provided by the environment is determinant for structuring the data. Different sources and acquisition times can be exploited to improve recognition results. On the basis of these assumptions, we propose a multimodal system that exploits Allen's temporal logic combined with a prediction method. The main objective is to correlate the user's events with the system's reactions. After post-processing incoming data from different signal sources (RGB images, depth maps, sounds, proximity sensors, etc.), the system manages the correlations between recognition/detection results and events in real time to create an interactive environment for the user. To increase recognition reliability, a predictive model is also associated with the proposed method. The modularity of the system grants fully dynamic development and upgrades with custom modules. Finally, a comparison with other similar systems is shown, underlining the high flexibility and robustness of the proposed event management method.
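Allen's temporal logic, on which the framework's rules are built, classifies how two time intervals relate to each other via thirteen basic relations (before, meets, overlaps, during, equals, and so on). A minimal sketch of such a classifier over (start, end) pairs; the function and example events are illustrative, not MIFTel's actual API:

```python
def allen_relation(a, b):
    """Classify the Allen temporal relation between intervals
    a = (a_start, a_end) and b = (b_start, b_end).
    Covers all 13 basic relations."""
    (as_, ae), (bs, be) = a, b
    if ae < bs:
        return "before"
    if be < as_:
        return "after"
    if ae == bs:
        return "meets"
    if be == as_:
        return "met-by"
    if as_ == bs and ae == be:
        return "equals"
    if as_ == bs:
        return "starts" if ae < be else "started-by"
    if ae == be:
        return "finishes" if as_ > bs else "finished-by"
    if bs < as_ and ae < be:
        return "during"
    if as_ < bs and be < ae:
        return "contains"
    return "overlaps" if as_ < bs else "overlapped-by"

# e.g., a detected gesture occurring entirely within a spoken command,
# and a sensor reading that ends exactly when another begins:
print(allen_relation((2, 5), (1, 6)))  # during
print(allen_relation((0, 3), (3, 7)))  # meets
```

A rule engine of the kind described above could then fire a system reaction whenever, say, a gesture event is "during" a speech event.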

    Seeing the Unseen: Interactive Narrative as a Tool for Understanding Invisible Disabilities

    Understanding invisible disabilities has become a more consistent conversation in recent years. While this conversation has included developing better medical treatment and legislative progress on accessibility requirements, there remains progress to be made in comprehending how these invisible disabilities impact a person's daily life. This dissertation research focuses on how informal learning tools such as interactive narratives could be developed to provide insight into the ways an invisible disability influences a person's day that otherwise go unnoticed by the mainstream population. To provide this insight, an advanced prototype of an interactive story titled Under the Rock was developed, based upon the researcher's experiences growing up and living with advanced hearing loss. The narrative used for Under the Rock was drafted using an autoethnographic methodology to capture key moments from the researcher's memories and translate them into a game narrative. Following a choose-your-own-adventure story format, users are asked to guide the main character through a series of situations that are directly influenced by her hearing loss and to learn about the impact of each choice. To test the efficacy of Under the Rock, a survey study was conducted to determine what users most commonly took away from the game experience. This survey study found that a majority of users began with an expected baseline of minimal knowledge about hearing loss, and after completing the study many reported a deeper understanding of hearing loss within the context of everyday life. While there is a great deal of work to build upon within this research, this advanced prototype of Under the Rock and the accompanying survey study show promising initial results in developing effective informal learning resources for educating mainstream populations about invisible disabilities and their impact on daily life.