329 research outputs found

    The Effectiveness of Monitor-Based Augmented Reality Paradigms for Learning Space-Related Technical Tasks

    Today there are many types of media that can help individuals learn and excel in the ongoing effort to acquire knowledge for a specific trait or function in a workplace, laboratory, or learning facility. Technology has advanced in the fields of transportation, information gathering, and education, and better recall of information is in demand in a wide variety of areas. Augmented reality (AR) is a technology that may help meet this demand. AR is a hybrid of reality and virtual reality (VR) that uses the three-dimensional location viewed through a video or optical see-through medium to capture an object's coordinates and superimpose virtual images, objects, or text on the scene (Azuma, 1997). The purpose of this research is to investigate four different modes of presentation and their effect on learning and recall of information using monitor-based augmented reality. The four modes of presentation are the Select, Observe, Interact, and Print modes, each with different attributes that may affect learning and recall. The Select mode allows movement of the work piece in front of the tracking camera. The Observe mode presents information using a pre-recorded video scene with no interaction with the work piece. The Interact mode lets the user view a pre-recorded video scene and point and click on components of the work piece with a computer mouse on the monitor. The Print mode consists of printed material for each work piece component. It was hypothesized that the Select mode would provide the user with the richest presentation of information, its information-access capabilities helping to decrease work time, reduce the likelihood of error during usage, enhance the user's motivation for learning tasks, and increase concurrent learning and performance through recall and retention.
It was predicted that the Select mode would result in trainees who recalled the greatest amount of information even after extended periods of time had elapsed. This hypothesis was not supported: no significant differences were found between the four groups.

    360º hypervideo

    Master's thesis in Informatics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2011. In this dissertation we describe an approach to the design and development of an immersive, interactive interface for viewing and navigating 360º hypervideos over the Web. This type of hypervideo lets users pan around an axis to view the video content from different angles and access related content efficiently through hyperlinks. Challenges in presenting this type of hypervideo include: giving users a suitable interface for exploring 360º content on a regular screen, where the video must change perspective so that users feel they are looking around, and providing navigation affordances that make the hypervideo structure easy to understand even when hyperlinks lie outside the field of view. Devices for capturing 360º video, and ways of publishing it on the Web, are increasingly common and affordable to the general public. In this context it is worth exploring navigation techniques for viewing and interacting with 360º hypervideos. Traditionally, the viewer of a video is limited to the region the camera was pointing at during capture, so the resulting video has lateral boundaries. With 360º video recording these boundaries disappear, opening new directions to explore. A 360º hypervideo player lets users pan around to view the rest of the content and easily access the information provided by hyperlinks. Video is a very rich medium that presents a huge amount of information changing over time. A 360º video presents even more information at once and adds challenges, since not everything is within our field of view.
Nevertheless, it offers the user a new, potentially immersive viewing experience. We explore navigation techniques that help users easily understand and navigate a 360º hypervideo space and provide a viewing experience on another level, through an immersive hypermedia space. Hyperlinks take the user to related hypermedia content such as text, images, videos, or other Web pages. After finishing playback or viewing of the related content, the user can return to the previous position in the video. Using summarization techniques, we can also give users a summary of the whole video content so they can view and understand it more efficiently and flexibly, without watching the entire video in sequence. Video has proved to be one of the most efficient forms of communication, presenting a large and varied range of information in a short time. 360º videos can provide even more information and can be mapped onto cylindrical or spherical projections. The cylindrical projection was invented in 1796 by the painter Robert Barker of Edinburgh, who patented it. Video on the Web has essentially consisted of embedding it in pages, where it is viewed linearly, with interaction generally limited to play, pause, fast forward, and rewind. In recent years the most promising advances towards interactive video appear to come through hypervideo, which provides a true integration of video into hypermedia spaces, where content can be structured and navigated through hyperlinks defined in space and time and through flexible interactive navigation mechanisms. Extending the hypervideo concept to 360º raises new challenges, mainly because much of the content lies outside the field of view.
A 360º hypervideo player must give users appropriate mechanisms to perceive the hypervideo structure, navigate the 360º hypervideo space efficiently, and ideally deliver an immersive experience. Navigating a 360º hypervideo space requires new navigation mechanisms. We present the main mechanisms designed for viewing this type of hypervideo, and solutions for the classic hypermedia challenges of disorientation and cognitive overload, now in the 360º context. We focus mainly on the navigation mechanisms that help the user stay oriented in the 360º space. We developed a drag-based interface for navigating the 360º video: the user drags the cursor left or right to move the field of view, and can keep moving in one direction to go all the way around without any limit. Perceiving the current location and viewing angle becomes a problem due to the lack of lateral boundaries; during our tests, many users felt lost in the 360º space, unsure which angles they were viewing. In hypervideo, perceiving hyperlinks is more challenging than in traditional hypermedia because links can have a duration, can coexist in time and space, and the video changes over time, so special mechanisms are needed to make them perceptible to users. In 360º hypervideo much of the content is invisible to the user because it lies outside the field of view, so new approaches and mechanisms are needed to indicate the existence of hyperlinks.
We created the Hotspot Availability and Location Indicators to let users know the existence and location of each hyperlink. The vertical position of the hotspot availability indicators, placed in the lateral margins of the video, shows the vertical position of each hyperlink. The size of the indicator encodes the distance of the hotspot from the current viewing angle: the closer the hotspot, the larger the indicator. The indicators are semi-transparent and placed in the lateral margins to minimize their impact on the video content. The Mini Map also provides information about the existence and location of hotspots, which should carry some information about the destination content so the user can form an expectation of what will be shown after following the hyperlink. A comic-style speech-balloon text box can accommodate several pieces of relevant information. When users select a hotspot, they may be redirected to a predefined time in the video or to a page with additional information, or the selection may be memorized by the system and its content shown only when the user wishes, depending on the type of application. For example, if the video supports learning (e-learning), it may make more sense to open the hyperlink content immediately, since users are used to seeing that kind of information step by step. If the video is for entertainment, users probably do not like being interrupted by new content, and may prefer the hyperlink to be memorized and accessed later, whenever they want. In addition to the video's title and description, the Image Map mode provides a global view of the video content.
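The Hotspot Availability and Location Indicators described above place each indicator at the hyperlink's vertical position in the video's lateral margins, and grow it as the hotspot approaches the current viewing angle. A minimal sketch of that geometry; the function names, the linear size falloff, and the pixel bounds are illustrative assumptions, not the thesis's actual implementation:

```python
import math

def angular_distance(hotspot_deg, view_deg):
    """Smallest angle between a hotspot and the current viewing direction,
    wrapping around the 360-degree space."""
    d = abs(hotspot_deg - view_deg) % 360.0
    return min(d, 360.0 - d)

def indicator(hotspot_deg, hotspot_y, view_deg, fov_deg=90.0,
              min_px=8, max_px=32):
    """Return (side, y, size_px) for an off-screen hotspot indicator.

    side    : 'left' or 'right' margin, whichever is the shorter way round
    y       : vertical position, copied from the hotspot (0..1)
    size_px : larger when the hotspot is closer to the viewport
    """
    d = angular_distance(hotspot_deg, view_deg)
    # signed difference decides which lateral margin the indicator sits in
    signed = ((hotspot_deg - view_deg + 180.0) % 360.0) - 180.0
    side = 'right' if signed > 0 else 'left'
    # linear falloff: full size near the viewport edge, minimum at 180 degrees
    half_fov = fov_deg / 2.0
    t = max(0.0, min(1.0, (180.0 - d) / (180.0 - half_fov)))
    size_px = round(min_px + t * (max_px - min_px))
    return side, hotspot_y, size_px
```

A player would recompute the indicators whenever the viewing angle changes; the falloff curve and pixel bounds are tuning choices.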
The thumbnails correspond to scenes of the video and are represented through a cylindrical projection, so that all content over time can be viewed. The Image Map also shows, in a synchronized way, which scene is current, and lets the user navigate to other scenes. The whole preview area is clickable and determines the coordinates of the preview the user selected. A more condensed version shows only the central portion of each scene: it can present more scenes simultaneously, but limits visualization and the flexibility to navigate directly to the desired angle. We also added functionality to the timeline, or Progress Bar. Beyond the traditional Play, Pause, and Video Time controls, we extended the bar to accommodate some characteristics of a Web page. Since the player is designed to run on the Web, we must account for the time needed to load the video: the bytes-loaded bar shows the loading progress and prevents the user from accessing parts that have not yet loaded. The hyperspace is navigated in spatio-temporal contexts that the history records; the Memory Bar tells the user which parts of the video have already been viewed. The Toggle Full Screen button switches between full and standard screen modes; full screen takes the user beyond the limits of the browser and maximizes the video to the screen size. It is a further step towards an immersive viewing mode, for example a 360º projection inside a Cave, which we are considering exploring in future work. In this dissertation, we present an approach for viewing and interacting with 360º videos.
Navigating a 360º video space is a new experience for most people, and there are as yet no consistent intuitions about how this kind of navigation should behave. Users are likely to experience the problem hypertext initially faced, where the user felt lost in hyperspace; the 360º Hypervideo Player therefore has to be as clear and effective as possible so that users can interact with it easily. Usability testing was based on the USE questionnaire and user interviews, to determine usability and experience from users' comments, suggestions, and concerns about the functionalities, access mechanisms, and information representations provided. The test results and comments gave us more information about the player's usability and identified possible improvements. In short, users' comments were very positive and useful and will help us continue the research on 360º hypervideo. Future work consists of further usability tests and the development of different versions of the 360º Hypervideo Player, with revised and extended navigation mechanisms based on the evaluation results. The 360º Hypervideo Player should not be just a Web application; it should integrate with multimedia kiosks or other immersive installations, which will probably require new functionalities and navigation types adapted to different contexts. The 360º Hypervideo Player presented here uses a Web browser and a mouse for presentation and interaction. With the growth of 3D video, multi-touch, and eye-tracking technologies, new forms of viewing and interacting with the 360º space may emerge.
These new forms bring new challenges, but also added potential for new experiences to explore. In traditional video, the user is locked to the angle where the camera was pointing during the capture of the video. With 360º video recording, these boundaries no longer exist, and 360º video capturing devices are becoming more common and affordable to the general public. Hypervideo stretches boundaries even further, allowing users to explore the video and navigate to related information. Extending the hypervideo concept to 360º video, which we call 360º hypervideo, raises new challenges. Challenges in presenting this type of hypervideo include: providing users with an appropriate interface capable of exploring 360º contents, where the video should change perspective so that users actually get the feeling of looking around; and providing the appropriate affordances to understand the hypervideo structure and navigate it effectively in a 360º hypervideo space, even when link opportunities arise outside the current viewport. In this thesis, we describe an approach to the design and development of an immersive and interactive interface for the visualization and navigation of 360º hypervideos. This interface allows users to pan around to view the contents from different angles and effectively access related information through hyperlinks. A user study was then conducted to evaluate the 360º Hypervideo Player's user interface and functionalities. Collecting specific and global comments, concerns, and suggestions about the functionalities and access mechanisms gave us more awareness of the player's usability and identified directions for improvement. Finally, we draw some conclusions and open perspectives for future work.

    KILO HŌKŪ: A VIRTUAL REALITY SIMULATION FOR NON-INSTRUMENT HAWAIIAN NAVIGATION

    M.S.

    The Future Perspectives of Immersive Learning in Maritime Education and Training

    The lack of human resources in the maritime labour market provokes rapid promotion of maritime professionals, which in turn reduces the time available to acquire the required skills. More than 40% of all cases of human error on vessels were caused by an insufficient level of training, practical skills, and education of human resources. This paper presents an overview of research evaluating a virtual reality (VR) practical training course, whose main aim is to evaluate the effectiveness of implementing immersive learning in maritime education and training and to establish VR metrics. The research takes a metric-based view of VR experiments. A VR training case called 'Wall wash test procedure on chemical tanker' was developed as an enhanced synthetic virtual reality environment for performing the tasks and tests used in the evaluation. A pedagogical experiment was conducted while training 115 navigator cadets at the National University Odessa Maritime Academy (NU OMA). Its main goal was to establish the dynamics of change in indicators of education quality achieved through the use of VR; this dynamic was tracked using statistical methods. Quantitative and qualitative analyses of the VR experiment confirmed support for students' cognitive effort and improved memorization. Using VR in the training of navigator cadets significantly increases the overall performance of their learning process. The effect of the user's presence in the virtual space, and the effects of depersonalization and modification of the user's self-awareness in virtual reality, give unambiguously positive results. Thanks to specialized VR models, navigator cadets can increase the quality of mastering new knowledge by almost 26%. Such improvement in professional training makes it possible to raise the general level of safety when conducting specialized vessel technological operations.
The obtained research results are very important for improving overall safety on marine vessels.

    Eliciting Music Performance Anxiety of Vocal and Piano Students Through the Use of Virtual Reality

    Despite the growth of virtual reality technologies, there is a lack of understanding of how to implement these technologies in the collegiate classroom. This case study provides mixed-method insight into a virtual reality (VR) asset deployed in a music performance environment. The study examined the effectiveness of a virtual reality environment as measured by physiological response and user feedback. Ten voice and four piano college students participated. Each participant performed musical works in an authentic practice room and in a virtual concert hall presented via a VR headset. Data were collected across four criteria: participants' heart rates were recorded before and after the performances; a State-Trait Anxiety Inventory test was administered before and after the performances; each performance was recorded and then blindly evaluated by two licensed music adjudicators; and after the performances, participants completed a self-evaluation. Results indicated that the virtual concert hall sessions produced changes in some physiological, performance, and anxiety measures compared with the authentic practice room. No statistical difference in heart rate was recorded for vocalists between the two environments. This project serves as a proof of concept that VR technologies can effectively elicit change in music performance anxiety, and could encourage further research on mitigating music performance anxiety through virtual environment exposure.

    The Role of Augmented Reality and Virtual Reality in Digital Learning: Comparing Matching Task Performance

    University of Minnesota M.S. thesis. 2018. Major: Computer Science. Advisor: Peter Willemsen. 1 computer file (PDF); 69 pages. This paper explores the potential uses of Augmented Reality and Virtual Reality in education and learning. It is important to understand whether people learn differently in distinct environments such as physical, digital, Augmented Reality, and Virtual Reality. Since much learning occurs in the physical and digital realms, it is important to understand the role that Virtual Reality and Augmented Reality can play in education and learning. Exploring and quantifying learning in these different environments is challenging, so our research started with basic memorization. To begin to understand this relation, we conducted a simple matching task in these environments, collecting accuracy and completion time data. Results suggest that there is no significant difference in matching performance between environments. Results also showed that there may be differences in the user interfaces the environments provide, since some environments allowed users to complete the task faster than others. Additionally, we explored annotation and collaboration in Augmented and Virtual Environments. This study presents an initial exploration of user matching performance, collaboration, and annotation in these different media environments.

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has seen a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel to the evolution of VR headsets there has been that of 360° cameras, which are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations, which we call photo-based VR. This methodology combines traditional model-based rendering with high-quality omnidirectional texture mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training.
The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and the associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, in support of which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object sizes, which we call true-dimensional visualization. The presented work contributes to the unexplored fields of photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications: five conference papers in Springer and IEEE symposium proceedings [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6], have been published.
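Photo-based VR rests on mapping each view direction through the headset to a pixel in an omnidirectional photo. A minimal sketch of the standard equirectangular lookup (the -z-forward, y-up camera convention and function name are assumptions for illustration; the thesis does not specify its conventions):

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit view direction to (u, v) texture coordinates in an
    equirectangular 360-degree photo.  u in [0, 1) spans a full yaw turn,
    v in [0, 1] spans pitch from straight up to straight down."""
    yaw = math.atan2(x, -z)                    # 0 when looking 'forward'
    pitch = math.asin(max(-1.0, min(1.0, y)))  # clamp guards rounding error
    u = 0.5 + yaw / (2.0 * math.pi)
    v = 0.5 - pitch / math.pi
    return u, v
```

Looking straight ahead samples the image centre; a texture sampler would scale (u, v) by the image's width and height.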

    Evaluating humanoid embodied conversational agents in mobile guide applications

    Evolution in the area of mobile computing has been phenomenal in the last few years. The exploding increase in hardware power has enabled multimodal mobile interfaces to be developed. These interfaces differ from the traditional graphical user interface (GUI) in that they enable more "natural" communication with mobile devices, through the use of multiple communication channels (e.g., multi-touch, speech recognition, etc.). As a result, a new generation of applications has emerged that provide human-like assistance in the user interface (e.g., the Siri conversational assistant (Siri Inc., visited 2010)). These conversational agents are currently designed to automate a number of tedious mobile tasks (e.g., to call a taxi), but the possible applications are endless. A domain of particular interest is Cultural Heritage, where conversational agents can act as personalized tour guides in, for example, archaeological attractions. Visitors to historical places have a diverse range of information needs. For example, casual visitors have different information needs from those with a deeper interest in an attraction (e.g., holiday learners versus students). A personalized conversational agent can access a cultural heritage database and effectively translate data into a natural language form that is adapted to the visitor's personal needs and interests. The present research aims to investigate the information needs of a specific type of visitor, those for whom retention of cultural content is important (e.g., students of history, cultural experts, history hobbyists, educators, etc.). Embodying a conversational agent enables the agent to use additional modalities to communicate this content to the user (e.g., through facial expressions, deictic gestures, etc.).
Simulating the social norms that guide real-world human-to-human interaction (e.g., adapting the story based on the reactions of the users) should, at least theoretically, optimize the cognitive accessibility of the content. Although a number of projects have attempted to build embodied conversational agents (ECAs) for cultural heritage, little is known about their impact on users' perceived cognitive accessibility of the cultural heritage content and the usability of the interfaces they support. In particular, there is general disagreement on the advantages of multimodal ECAs over non-anthropomorphised interfaces in terms of users' task performance and satisfaction. Further, little is known about which features influence which aspects of the cognitive accessibility of the content and/or the usability of the interface. To address these questions I studied user experiences with ECA interfaces in six user studies across three countries (Greece, UK and USA). To support these studies, I introduced: a) a conceptual framework based on well-established theoretical models of human cognition and on previous frameworks from the literature, offering a holistic view of the design space of ECA systems; and b) a research technique for evaluating the cognitive accessibility of ECA-based information presentation systems that combines data from eye tracking and facial expression recognition. In addition, I designed a toolkit, for which I partially developed the natural language processing component, to facilitate rapid development of mobile guide applications using ECAs. Results from these studies provide evidence that an ECA capable of displaying some of the communication strategies found in real-world human guidance scenarios (e.g., non-verbal behaviours accompanying linguistic information) is not, by itself, affecting or effective in enhancing the user's ability to retain cultural content.
The findings from the first two studies suggest that an ECA has no negative or positive impact on users experiencing content that is similar (but not the same) across different locations (see experiment one, in Chapter 7) or content of variable difficulty (see experiment two, in Chapter 7). However, my results also suggest that improving the degree of content personalization and the quality of the modalities used by the ECA can result in human-ECA interactions that are both effective and affecting. Effectiveness is the degree to which an ECA facilitates a user in accomplishing the navigation and information tasks. Similarly, affecting is the degree to which the ECA changes the quality of the user's experience while accomplishing those tasks. By adhering to the above rules, I gradually improved my designs and built ECAs that are affecting. In particular, I found that an ECA can affect the quality of the user's navigation experience (see experiment three, in Chapter 7), as well as how a user experiences narrations of cultural value (see experiment five, in Chapter 8). In terms of navigation, I found sound evidence that the strongest impact of the ECA's nonverbal behaviours is on users' ability to correctly disambiguate the navigation instructions provided by a tour guide system. However, my ECAs failed to become effective and to elicit enhanced navigation or retention performance. Given the positive impact of ECAs on the disambiguation of navigation instructions, the lack of ECA effectiveness in navigation could be attributed to the simulated mobile conditions. In a real outdoor environment, where users would have to actually walk around the castle, an ECA could have elicited better navigation performance than a system without it.
With regards to retention performance, my results suggest that a designer should not solely consider the impact of an ECA, but also the style and effectiveness of the question-answering (Q&A) with the ECA, and the type of user interacting with the ECA (see experiments four and six, in Chapter 8). I found that there is a correlation between how many questions participants asked per location during a tour and the information they retained after completing the tour. When participants were requested to ask the systems a specific number of questions per location, they retained more information than when they were allowed to ask questions freely. However, the constrained style of interaction decreased their overall satisfaction with the systems. Therefore, when enhanced retention performance is needed, a designer should consider strategies that direct users to ask a specific number of questions per location during a tour. On the other hand, when maintaining positive levels of user experience is the desired outcome of an interaction, users should be allowed to ask questions freely. The effectiveness of the Q&A session is then important to the success or failure of the user's interaction with the ECA. In a natural-language question-answering system, the system often fails to understand the user's question and, by default, asks the user to rephrase it. A problem arises when the system fails to understand a question repeatedly. I found that repeated requests to rephrase the same question annoy participants and affect their retention performance. Therefore, to ensure effective human-ECA Q&A, rephrase-request messages should be designed to help users figure out how to ask questions the system can understand, avoiding improper responses. Moreover, I found strong evidence that an ECA may be effective for some types of users but not for others.
I found that an ECA with an attention-grabbing mechanism (see experiment six, in Chapter 8) had opposite effects on the retention performance of male and female participants: it enhanced the retention performance of the male participants, while it degraded that of the female participants. Finally, a series of tentative recommendations for the design of both affecting and effective ECAs in mobile guide applications is derived from the work undertaken. These are aimed at ECA researchers and mobile guide designers.

    CuriosityXR: Contextualizing Learning through Immersive Mixed Reality Experiences Beyond the Classroom

    Get PDF
    The focus of education is shifting towards a learner-centered approach that highlights the importance of engagement, interaction, and personalization in learning. This thesis explores new technologies to facilitate immersive, self-directed, curiosity-driven learning experiences aimed at addressing these key factors. I explore the use of Mixed Reality (MR) to build a context-aware system that can support learners' curiosity and improve knowledge recall. I design and build "CuriosityXR," an application for MR headsets, using a research-through-design methodology. CuriosityXR is also a platform that enables educators to create contextual, multi-modal, interactive mini-lessons, which learners can engage with alongside other AI-assisted learning content. To evaluate my design, I conduct a user study followed by interviews. The participants' responses show higher levels of engagement, curiosity to learn more, and better visual retention of the learning content. I hope this work will inspire others in the MR community and advance the use of MR and AI hybrid designs for the future of curiosity-driven education.

    Proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013)

    Get PDF
    This book contains the proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013), which was held on 16-17 September 2013 in Paphos (Cyprus) in conjunction with the EC-TEL conference. The workshop, and hence the proceedings, are divided into two parts: on Day 1 the EuroPLOT project and its results are introduced, with papers about the specific case studies and their evaluation. On Day 2, peer-reviewed papers are presented which address specific topics and issues going beyond the EuroPLOT scope. This workshop is one of the deliverables (D 2.6) of the EuroPLOT project, which was funded from November 2010 to October 2013 by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission through the Lifelong Learning Programme (LLL) under grant #511633. The purpose of this project was to develop and evaluate Persuasive Learning Objects and Technologies (PLOTS), based on ideas of BJ Fogg. The purpose of this workshop is to summarize the findings obtained during this project and disseminate them to an interested audience. Furthermore, it shall foster discussions about the future of persuasive technology and design in the context of learning, education and teaching. The international community working in this area of research is relatively small. Nevertheless, we received a number of high-quality submissions, which went through a peer-review process before being selected for presentation and publication. We hope that the information found in this book is useful to the reader and that more interest in this novel approach of persuasive design for teaching, education and learning is stimulated. We are very grateful to the organisers of EC-TEL 2013 for allowing us to host IWEPLET 2013 within their organisational facilities, which helped us a lot in preparing this event.
I am also very grateful to everyone in the EuroPLOT team for collaborating so effectively over these three years to create excellent outputs, and for being such a nice group with a very positive spirit beyond work as well. Finally, I would like to thank the EACEA for providing the financial resources for the EuroPLOT project and for being very helpful when needed. This funding made it possible to organise the IWEPLET workshop without charging participants a fee.