5,118 research outputs found

    [DC] self-adaptive technologies for immersive trainings

    Online learning is the preferred option for professional training, e.g. Industry 4.0 or e-health, because it is more cost-efficient than the on-site organisation of realistic training sessions. However, current online learning technologies are limited in terms of the personalisation, interactivity and immersiveness required by applications such as surgery and pilot training. Virtual Reality (VR) technologies have the potential to overcome these limitations. However, due to its early stage of research, VR requires significant improvements to fully unlock this potential. The focus of this PhD is to tackle research challenges that enable VR for online training along three dimensions: (1) dynamic adaptation of the training content for personalised training, by incorporating prior knowledge and context data into self-learning algorithms; (2) mapping of sensor data onto what happens in the VR environment, by focusing on motion prediction techniques that use the users' past movements; and (3) investigating immersive environments with intuitive interactions, by gaining a better understanding of human motion in order to improve interaction. The designed improvements will be characterised through a prototype VR training platform for multiple use cases. This work will advance the state of the art not only in VR training, but also in online e-learning applications in general.
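The second dimension above relies on motion prediction from the users' past movements. As a rough illustration of the simplest such predictor, the sketch below extrapolates the next head or controller position with a constant-velocity model; the function name, sampling rate and horizon are placeholder assumptions, not the method proposed in the thesis.

```python
import numpy as np

def predict_position(samples, timestamps, horizon):
    """Extrapolate the next head/controller position from recent samples.

    samples    : (N, 3) array of past positions, oldest first
    timestamps : (N,) array of sample times in seconds
    horizon    : prediction horizon in seconds

    Constant-velocity model: estimate velocity from the last two samples
    and extrapolate linearly. Real systems would use filtering or learned
    models, as hinted at in the abstract.
    """
    samples = np.asarray(samples, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    if len(samples) < 2:
        return samples[-1]                      # nothing to extrapolate from
    dt = timestamps[-1] - timestamps[-2]
    velocity = (samples[-1] - samples[-2]) / max(dt, 1e-6)
    return samples[-1] + velocity * horizon

# Example: three past positions sampled at roughly 90 Hz, predict 50 ms ahead.
past = [[0.0, 1.6, 0.0], [0.01, 1.6, 0.0], [0.02, 1.6, 0.01]]
times = [0.000, 0.011, 0.022]
print(predict_position(past, times, horizon=0.05))
```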

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    The usage of fully immersive head-mounted displays in social everyday contexts

    Technology often evolves from decades of research in university and industrial laboratories and changes people's lives when it becomes available to the masses. In the interaction between technology and consumer, designs established in the laboratory environment must be adapted to the needs of everyday life. This thesis deals with the challenges arising from the development of fully immersive head-mounted displays (HMDs) in laboratories towards their application in everyday contexts. Research on virtual reality (VR) technologies spans more than 50 years and covers a wide range of topics, e.g., technology, system design, user interfaces, user experience and human perception. Other disciplines, such as psychology or the teleoperation of robots, are examples of users of VR technology. The work in these examples was mainly carried out in laboratories or highly specialized environments, with the main goal of building systems that are ideal for a single user conducting a particular task in VR. The newly emerging environments for the use of HMDs range from private homes to offices to convention halls. Even in public spaces such as public transport, cafés or parks, immersive experiences are possible. However, current VR systems are not yet designed for these environments. Previous work on problems in everyday environments deals with challenges such as preventing the user from colliding with physical objects, but current research does not take into account the new social context these environments create for an HMD user. Several people with different roles surround the user in these contexts. In contrast to laboratory scenarios, a non-HMD user, for example, neither shares the task with the HMD user nor is aware of their state in VR. This thesis addresses the challenges introduced by this social context. For this purpose, I offer solutions to overcome the visual separation of the HMD user, and I suggest methods for investigating and evaluating the use of HMDs that are suitable for everyday contexts. First, we present concepts and insights to overcome the challenges arising from an HMD covering the user's face. In the private context, e.g., living rooms, one of the main challenges is that an HMD user needs to take off the HMD to be able to communicate with others. Reasons for taking off the HMD are the visual exclusion of the surrounding world and the HMD covering the user's face, which hinders communication. Additionally, non-HMD users do not know about the virtual world the HMD user is acting in. Previous work suggests visualizing the bystanding non-HMD user or their actions in VR to address such challenges. The biggest advantage of a fully immersive experience, however, is the complete separation from the physical surroundings, with the ultimate goal of being at another place. Therefore, I argue against integrating non-HMD users directly into VR. Instead, I introduce the approach of a shared surface that provides a common basis for information and interaction between a non-HMD and an HMD user. Such a surface can be realized with a smartphone. The same information is presented to the HMD user in VR and to the non-HMD user on the shared surface at the same physical position, enabling joint interaction on the surface. By examining four feedback modalities, we provide design guidelines for touch interaction that support the design of interaction with such a shared surface by an HMD user.
Further, we explore the possibility of informing the non-HMD user about the HMD user's state during a mixed-presence collaboration, e.g., when the HMD user is inattentive to the real world. For this purpose, I use a frontal display attached to the HMD. In particular, we explore the challenges of disturbed socialness and reduced collaboration quality by presenting the user's state on the front-facing display. In summary, our concepts and studies explore the application of a shared surface to overcome challenges in co-located mixed-presence collaboration. Second, we look at the challenges, not yet considered, of using HMDs in public environments. The use of HMDs in these environments is becoming a reality due to the current development of HMDs that contain all necessary hardware in one portable device. Related work, in particular the work on public displays, already addresses the interaction with technology in public environments. The form factor of the HMD, the need to put an HMD on the head, and especially the visual and mental exclusion of the HMD user are new and not yet understood challenges in these environments. We propose a problem space for semi-public (e.g., conference rooms) and public environments (e.g., market places). With an explorative field study, we gain insight into the effects of the visual and physical separation of an HMD user from surrounding non-HMD users. Further, we present a method that helps to design and evaluate the unsupervised usage of HMDs in public environments, the audience funnel flow model for HMDs. Third, we look into methods that are suitable for monitoring and evaluating HMD-based experiences in everyday contexts. One core measure is the experience of being present in the virtual world, i.e., the feeling of "being there". Consumer-grade HMDs are already able to create highly immersive experiences, leading to a strong presence experience in VR. Hence, we argue it is important to find and understand the remaining disturbances during the experience. Existing methods from the laboratory context are either not precise enough to find these disturbances, e.g., questionnaires, or cause high effort in their application and evaluation, e.g., physiological measures. In a literature review, we show that current research relies heavily on questionnaire-based approaches. I improve current qualitative approaches (interviews, questionnaires) to make the temporal variation of a VR experience assessable. I propose a drawing method that recognizes breaks in the presence experience; it also helps the user reflect on an HMD-based experience and supports the communication between an interviewer and the HMD user. In the same paper, we propose a descriptive model that allows the objective description of the temporal variations of a presence experience from beginning to end. Further, I present and explore the concept of using electroencephalography to detect an HMD user's visual stress objectively. Objective detection supports the usage of HMDs in private and industrial contexts, as it ensures the health of the user. With my work, I would like to draw attention to the new challenges of using virtual reality technologies in everyday life. I hope that my concepts, methods and evaluation tools will serve research and development on the usage of HMDs.
In particular, I would like to promote the use in the everyday social context and thereby create an enriching experience for all.
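As a rough illustration of the shared-surface idea described in this entry (the same content shown on a tracked smartphone and on a panel at the same pose in VR), the sketch below keeps one piece of shared state that both renderers read and that either user can manipulate; the class, field names and toggle behaviour are hypothetical, not the thesis's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SharedSurface:
    """Shared state for a surface visible both on a physical smartphone and
    as a panel rendered at the same pose inside VR.

    pose : (position xyz, quaternion xyzw) of the phone in room coordinates,
           obtained from whatever tracking system is available.
    items: UI elements drawn identically by the phone app and the VR client.
    """
    pose: tuple
    items: dict = field(default_factory=dict)

    def apply_touch(self, item_id, source):
        # A touch from either the HMD user (controller ray / virtual hand)
        # or the non-HMD user (physical touch on the phone) toggles the item.
        self.items[item_id] = not self.items.get(item_id, False)
        return {"item": item_id, "state": self.items[item_id], "source": source}

# Both renderers read the same state (or a replicated copy), so the HMD user
# in VR and the non-HMD user at the phone see the same information at the
# same physical position and can interact with it jointly.
surface = SharedSurface(pose=((0.3, 1.0, 0.2), (0, 0, 0, 1)))
print(surface.apply_touch("photo_42", source="non_hmd_user"))
print(surface.apply_touch("photo_42", source="hmd_user"))
```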

    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    EXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for the analysis of immersive VR narrative experiences; (2) Cue Control, a tool for the creation of spatial audio soundtracks for 360° video that also enables the collection and analysis of captured metrics emerging from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that control parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users, whereas a partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV is statistically significant in mitigating VIMS while maintaining desired levels of presence. Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
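The VIMS mitigation pipeline described above lets modules such as optical flow drive visual modifications like a restricted FoV. The sketch below shows one plausible mapping from a per-frame optical-flow magnitude to a FoV value; the thresholds and the linear ramp are assumptions for illustration, not the parameters used in the thesis.

```python
def restricted_fov_degrees(flow_magnitude, fov_max=100.0, fov_min=60.0,
                           flow_low=0.5, flow_high=5.0):
    """Map an optical-flow magnitude estimate (e.g. mean visual motion in
    degrees per second) to a restricted field of view.

    Low flow  -> full FoV (little vection, little VIMS risk).
    High flow -> narrow FoV (strong vection, restrict the periphery).
    The thresholds here are placeholders, not values from the thesis.
    """
    if flow_magnitude <= flow_low:
        return fov_max
    if flow_magnitude >= flow_high:
        return fov_min
    t = (flow_magnitude - flow_low) / (flow_high - flow_low)
    return fov_max + t * (fov_min - fov_max)      # linear interpolation

# A frame loop would feed per-frame flow estimates (from an optical-flow or
# visual-SLAM module) into this function and animate a vignette accordingly.
for flow in [0.2, 1.0, 3.0, 6.0]:
    print(flow, "->", round(restricted_fov_degrees(flow), 1), "deg")
```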

    A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays

    We identify usability challenges facing consumers adopting Virtual Reality (VR) head-mounted displays (HMDs) in a survey of 108 VR HMD users. Users reported significant issues in interacting with, and being aware of, their real-world context when using an HMD. Building upon existing work on blending real and virtual environments, we performed three design studies to address these usability concerns. In a typing study, we show that augmenting VR with a view of reality significantly corrected the performance impairment of typing in VR. We then investigated how much reality should be incorporated and when, so as to preserve users' sense of presence in VR. For interaction with objects and peripherals, we found that selectively presenting reality as users engaged with it was optimal in terms of performance and users' sense of presence. Finally, we investigated how this selective, engagement-dependent approach could be applied in social environments, to support the user's awareness of the proximity and presence of others.
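One way to read "selectively presenting reality as users engaged with it" is an engagement-dependent blend of a passthrough cut-out around a physical object. The sketch below drives such a blend from the distance between the user's hand and the object; the radii and the linear ramp are assumptions, not the blending rule evaluated in the paper.

```python
import math

def reality_blend_alpha(hand_pos, object_pos, engage_radius=0.35, full_radius=0.12):
    """Opacity of a passthrough cut-out around a physical object
    (e.g. a keyboard), driven by how close the user's hand is to it.

    Far away  -> alpha 0 (object hidden, presence preserved).
    Reaching  -> alpha ramps up.
    Touching  -> alpha 1 (object fully visible).
    Radii in metres are illustrative, not values from the paper.
    """
    d = math.dist(hand_pos, object_pos)
    if d >= engage_radius:
        return 0.0
    if d <= full_radius:
        return 1.0
    return (engage_radius - d) / (engage_radius - full_radius)

# Example: a hand hovering part-way towards a keyboard on the desk.
print(reality_blend_alpha((0.0, 0.9, 0.4), (0.0, 0.75, 0.15)))
```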

    Effects of Character Guide in Immersive Virtual Reality Stories

    Bringing cinematic experiences from traditional film screens into Virtual Reality (VR) has become an increasingly popular form of entertainment in recent years. VR provides viewers with an unprecedented film experience that allows them to freely explore the environment and even interact with virtual props and characters. For the audience, this kind of experience raises their sense of presence in a different world and may even stimulate full immersion in story scenarios. However, unlike traditional film-making, where the audience passively follows the director's storytelling decisions, the greater freedom in VR might cause viewers to get lost halfway through a series of events that build up a story. Striking a balance between user interaction and narrative progression is therefore a big challenge for filmmakers. To help organize the research space, we present a media review and a resulting framework that characterizes the primary differences among variations of film, media, games, and VR storytelling. This evaluation in particular provided us with knowledge closely associated with story-progression strategies and gaze-redirection methods for interactive content in the commercial domain. Following the existing VR storytelling framework, we then approached the problem of guiding the audience through the major events of a story by introducing a virtual character as a travel companion who helps direct the viewer's focus to the target scenes. The presented research explored a new technique that overlays a separate virtual character on top of an existing 360-degree video such that the added character reacts to head-tracking data to indicate to the viewer the core focal content of the story. The motivation behind this research is to assist directors in using a virtual guiding character to increase the effectiveness of VR storytelling, ensuring that viewers fully understand the story by following a sequence of events, and possibly realize a rich literary experience. To assess the effectiveness of this technique, we performed a controlled experiment applying the method in three immersive narrative experiences, each with a control condition that was free from guidance. The experiment compared three variations of the character guide: 1) no guide; 2) a guide with an art style similar to the style of the video design; and 3) a character guide with a dissimilar style. All participants viewed the narrative experiences to test whether a similar art style led to better gaze behavior, i.e., a higher likelihood of the gaze falling on the intended focus regions of the 360-degree Virtual Environment (VE). By the end of the experiment, we concluded that adding a virtual character that was independent from the narrative had limited effects on users' gaze performance when watching an interactive story in VR. Furthermore, the implemented character's art style made very little difference to users' gaze performance and their level of viewing satisfaction. The primary reason could be limitations of the implementation design. Besides this, the guiding body language designed for an animal character caused some confusion for several participants viewing the stories.
    In the end, the character-guide approach still provides insights for future directors and designers into how to draw viewers' attention to a target point within a narrative VE, including what can work well and what should be avoided.
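The guide character described above reacts to head-tracking data to point the viewer at the focal content. A minimal sketch of that control logic: compute the signed angular offset between the viewer's head yaw and the target region and pick an animation cue. The dead zone and cue names are illustrative assumptions, not the study's implementation.

```python
import math

def guide_cue(head_yaw_deg, target_yaw_deg, deadzone_deg=15.0):
    """Decide how a guiding character should react, given the viewer's
    current head yaw and the yaw of the story's focal region in a
    360-degree video (both in degrees, 0 = initial forward direction).

    Returns 'hold' when the viewer is already looking at the target,
    otherwise the direction the guide should lead towards.
    Thresholds and behaviours are illustrative, not the study's design.
    """
    # Signed shortest angular difference, wrapped into (-180, 180].
    diff = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= deadzone_deg:
        return "hold"
    return "lead_right" if diff > 0 else "lead_left"

# Each frame, head-tracking data drives the guide's animation state.
for head in [0.0, 40.0, 200.0]:
    print(head, "->", guide_cue(head, target_yaw_deg=90.0))
```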

    Classic Driver VR

    A VR car-driving simulator for evaluating the user experience of new drivers while helping them to learn driving rules and regulations. Classic Driver VR helps new drivers to learn driving rules and regulations using various audio and visual feedback, and helps them become acquainted with the risks and mistakes associated with real-life driving. In addition, users play the game in an immersive environment using a Virtual Reality system. This project attempts to fulfill two important goals. The major goal is to evaluate whether the user can learn the driving rules and regulations of the road. The game allows users to take a road test, which determines the type of mistakes the user makes and whether they passed or failed. I conducted A/B testing and had the testers participate in user interviews and a user survey. The testing procedure allowed me to analyze the effectiveness of learning driving rules from the simulator compared to learning them from the RMV (Registry of Motor Vehicles) manual. Secondly, the user experience was evaluated through user interviews and user surveys, which helped me understand the strengths and drawbacks of the game. This feedback is taken into consideration for future improvements. All these factors were considered to make the game enjoyable and useful for skill training.

    User Experience in Virtual Reality, conducting an evaluation on multiple characteristics of a Virtual Reality Experience

    Virtual Reality applications are today numerous and cover a wide range of interests and tastes. As the popularity of Virtual Reality increases, developers in industry are trying to create engrossing and exciting experiences that captivate the interest of users. User Experience, a term used in the fields of Human-Computer Interaction and Interaction Design, describes multiple characteristics of the experience of a person interacting with a product or a system. Evaluating User Experience can provide valuable insight to developers and researchers into the thoughts and impressions of end users in relation to a system. However, little information exists regarding how to conduct User Experience evaluations in the context of Virtual Reality. Consequently, due to the numerous parameters that influence User Experience in Virtual Reality, conducting and organizing evaluations can be overwhelming and challenging. The author of this thesis investigated how to conduct a User Experience evaluation on multiple aspects of a Virtual Reality headset by identifying characteristics of the experience and the methods that can be used to measure and evaluate them. The data collected was both qualitative and quantitative, to cover a wide range of characteristics of the experience. Furthermore, the author applied usability testing, the think-aloud protocol, questionnaires and a semi-structured interview as methods to observe user behavior and collect information regarding the aspects of the Virtual Reality headset. The testing session described in this study included 14 participants. Data from this study showed that the combination of chosen methods was able to provide adequate information regarding the experience of the users despite encountered difficulties. Additionally, this thesis showcases which methods were used to evaluate specific aspects of the experience, and reports the performance of each method as findings of the study.

    Spatial guiding through haptic cues in omnidirectional video

    Omnidirectional video's extensive amount of visual information challenges users to find and stay focused on the essential parts of the video. I examined how user experience was affected when haptic cues on the head are used to guide the viewer's gaze towards the essential parts of an omnidirectional video. User experiences with different omnidirectional video types combined with haptic guiding were compared and analyzed. The other part of the research aimed to find out how the haptic and auditory modalities and their combination affected the user experience. The participants used an Oculus Rift headset to watch omnidirectional video material, and two actuators were placed on their forehead to indicate whether the essential part was located to the left or to the right. The results of the questionnaires and the comments showed that haptic guiding was useful and effective, though it was not experienced as a necessary feature during easy-to-follow and slow-paced videos. The combination of haptic guiding and audio was rated the most positive use of modalities. This feature has a lot of potential to enhance the user experience of omnidirectional videos. Further studies on long-term usage of the feature are required to eliminate the novelty effect and gain a more accurate understanding of the users' needs.
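With two actuators on the forehead, the guiding signal reduces to a left/right choice plus an intensity. The sketch below derives both from the angular offset between the current head yaw and the essential part of the video; the dead zone and the linear intensity ramp are assumptions for illustration, not the parameters used in the study.

```python
def haptic_cue(head_yaw_deg, target_yaw_deg, deadzone_deg=10.0):
    """Pick which forehead actuator to drive (left or right) and how
    strongly, so that vibration points the viewer towards the essential
    part of an omnidirectional video.

    Returns (actuator, intensity) with intensity in [0, 1]. The
    two-actuator forehead layout follows the abstract; the dead zone
    and the linear intensity ramp are illustrative assumptions.
    """
    # Signed shortest angular difference, wrapped into (-180, 180].
    diff = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= deadzone_deg:
        return None, 0.0                     # already looking at the target
    intensity = min(abs(diff) / 180.0, 1.0)  # farther off-target -> stronger cue
    return ("right" if diff > 0 else "left"), intensity

print(haptic_cue(0.0, 120.0))    # ('right', ~0.67)
print(haptic_cue(0.0, -45.0))    # ('left', 0.25)
```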

    Exploring the impact of 360° movie cuts in users' attention

    Virtual Reality (VR) has grown since the first devices for personal use became available on the market. However, the production of cinematographic content in this new medium is still in an early, exploratory phase. The main reason is that cinematographic language in VR is still under development, and we still need to learn how to tell stories effectively. A key element in traditional film editing is the use of different cutting techniques in order to transition seamlessly from one sequence to another. A fundamental aspect of these techniques is the placement of and control over the camera. However, VR content creators do not have full control of the camera; instead, users in VR can freely explore the 360° of the scene around them, which potentially leads to very different experiences. While this is desirable in certain applications such as VR games, it may hinder the experience in narrative VR. In this work, we perform a systematic analysis of users' viewing behavior across cut boundaries while watching professionally edited, narrative 360° videos. We extend previous metrics for quantifying user behavior in order to support more complex and realistic footage, and we introduce two new metrics that allow us to measure users' exploration in a variety of complex scenarios. From this analysis, (i) we confirm that previous insights derived for simple content hold for professionally edited content, and (ii) we derive new insights that could potentially influence VR content creation, informing creators about the impact of different cuts on the audience's behavior.
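The analysis above quantifies viewing behavior around cut boundaries. As a hedged illustration of the kind of quantity such metrics capture (not one of the paper's actual metrics), the sketch below computes the circular spread of head yaw in a short window after a cut, where a larger value indicates more exploration of the 360° scene.

```python
import numpy as np

def exploration_after_cut(yaw_deg, times_s, cut_time_s, window_s=4.0):
    """Illustrative exploration measure: the circular standard deviation of
    head yaw within a window after a cut. Higher values mean the viewer
    scanned more of the scene; lower values mean their gaze settled quickly.
    This is a sketch of the kind of quantity such analyses compute, not a
    metric taken from the paper.
    """
    yaw = np.radians(np.asarray(yaw_deg, dtype=float))
    t = np.asarray(times_s, dtype=float)
    mask = (t >= cut_time_s) & (t < cut_time_s + window_s)
    if not mask.any():
        return float("nan")
    # Circular statistics: mean resultant length R, then circular std dev.
    R = np.hypot(np.mean(np.cos(yaw[mask])), np.mean(np.sin(yaw[mask])))
    return float(np.degrees(np.sqrt(-2.0 * np.log(max(R, 1e-12)))))

# Example: yaw samples at 10 Hz around a cut at t = 2.0 s; the viewer holds
# still before the cut and scans back and forth afterwards.
times = np.arange(0.0, 6.0, 0.1)
yaws = np.where(times < 2.0, 0.0, 60.0 * np.sin(times))
print(round(exploration_after_cut(yaws, times, cut_time_s=2.0), 1))
```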