QUALINET white paper on definitions of Immersive Media Experience (IMEx)
With the coming of age of virtual/augmented reality and interactive media,
numerous definitions, frameworks, and models of immersion have emerged across
different fields ranging from computer graphics to literary works. Immersion is
oftentimes used interchangeably with presence as both concepts are closely
related. However, there are noticeable interdisciplinary differences in
definitions, scope, and constituents that must be addressed before a coherent
understanding of the concepts can be achieved. Such consensus is vital for
charting the direction of future immersive media experiences (IMEx) and all
related matters.
The aim of this white paper is to provide a survey of definitions of
immersion and presence which leads to a definition of immersive media
experience (IMEx). The Quality of Experience (QoE) for immersive media is
described by establishing a relationship between the concepts of QoE and IMEx
followed by application areas of immersive media experience. Influencing
factors on immersive media experience are elaborated, as is its assessment.
Finally, standardization activities related to IMEx are highlighted, and the
white paper concludes with an outlook on future developments.
Educational practices and strategies with immersive learning environments: mapping of reviews for using the metaverse
The educational metaverse promises fulfilling ambitions of immersive learning, leveraging technology-based presence alongside narrative and/or challenge-based deep mental absorption. Most reviews of immersive learning research were outcomes-focused; few considered the educational practices and strategies. These are necessary to provide theoretical and pedagogical frameworks to situate outcomes within a context where technology is in concert with educational approaches. We sought a broader perspective of the practices and strategies used in immersive learning environments, and conducted a mapping survey of reviews, identifying 47 studies. Extracted accounts of educational practices and strategies under thematic analysis yielded 45 strategies and 21 practices, visualized as a network clustered by conceptual proximity. The resulting clusters “Active context”, “Collaboration”, “Engagement and Scaffolding”, “Presence”, and “Real and virtual multimedia learning” expose the richness of practices and strategies within the field. The visualization maps the field, supporting decision-making when combining practices and strategies for using the metaverse in education, highlights which practices and strategies are supported by the literature, and exposes the presence and absence of diversity within clusters.
Quality of experience in telemeetings and videoconferencing: a comprehensive survey
Telemeetings such as audiovisual conferences or virtual meetings play an increasingly important role in our professional and private lives. For that reason, system developers and service providers will strive for an optimal experience for the user, while at the same time optimizing technical and financial resources. This leads to the discipline of Quality of Experience (QoE), an active field originating from the telecommunication and multimedia engineering domains, that strives for understanding, measuring, and designing the quality of experience with multimedia technology. This paper provides the reader with an entry point to the large and still growing field of QoE of telemeetings, by taking a holistic perspective, considering both technical and non-technical aspects, and by focusing on current and near-future services. Addressing both researchers and practitioners, the paper first provides a comprehensive survey of factors and processes that contribute to the QoE of telemeetings, followed by an overview of relevant state-of-the-art methods for QoE assessment. To embed this knowledge into recent technology developments, the paper continues with an overview of current trends, focusing on the field of eXtended Reality (XR) applications for communication purposes. Given the complexity of telemeeting QoE and the current trends, new challenges for a QoE assessment of telemeetings are identified. To overcome these challenges, the paper presents a novel Profile Template for characterizing telemeetings from the holistic perspective endorsed in this paper.
Streaming and User Behaviour in Omnidirectional Videos
Omnidirectional videos (ODVs) have gone beyond the passive paradigm of traditional video,
offering higher degrees of immersion and interaction. The revolutionary novelty of this
technology is the possibility for users to interact with the surrounding environment and to
feel a sense of engagement and presence in a virtual space. Users are clearly the main
driving force of immersive applications, and consequently the services need to be properly
tailored to them. In this context, this chapter highlights the importance of the new role
of users in ODV streaming applications, and thus the need for understanding their behaviour
while navigating within ODVs. A comprehensive overview of the research efforts aimed at
advancing ODV streaming systems is also presented. In particular, the state-of-the-art
solutions under examination in this chapter are distinguished in terms of system-centric
and user-centric streaming approaches: the former is a fairly straightforward extension of
well-established solutions for the 2D video pipeline, while the latter benefits from
understanding users’ behaviour and enables more personalised ODV streaming.
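User-centric ODV streaming is commonly realized as viewport-adaptive tile streaming: the sphere is split into tiles, and tiles near the user's current viewing direction are fetched at higher quality. The sketch below illustrates the core tile-selection step only; the 8×4 tiling grid, the viewport radius, and the two quality levels are illustrative assumptions, not values from the chapter.

```python
import math

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle (radians) between two viewing directions."""
    # Convert yaw/pitch to unit vectors and take the arccos of the dot product.
    v1 = (math.cos(pitch1) * math.cos(yaw1),
          math.cos(pitch1) * math.sin(yaw1),
          math.sin(pitch1))
    v2 = (math.cos(pitch2) * math.cos(yaw2),
          math.cos(pitch2) * math.sin(yaw2),
          math.sin(pitch2))
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    return math.acos(dot)

def select_tile_qualities(view_yaw, view_pitch,
                          n_yaw=8, n_pitch=4, radius=math.radians(55)):
    """Assign 'high' quality to tiles whose centre lies within the viewport
    radius of the current viewing direction, 'low' otherwise."""
    qualities = {}
    for i in range(n_yaw):
        for j in range(n_pitch):
            tile_yaw = -math.pi + (i + 0.5) * (2 * math.pi / n_yaw)
            tile_pitch = -math.pi / 2 + (j + 0.5) * (math.pi / n_pitch)
            d = angular_distance(view_yaw, view_pitch, tile_yaw, tile_pitch)
            qualities[(i, j)] = "high" if d <= radius else "low"
    return qualities
```

In a real system, a head-motion predictor would supply the viewing direction a segment-duration ahead of playback, so that the right tiles are already buffered when the user turns.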
Real-time affect detection in virtual reality: a technique based on a three-dimensional model of affect and EEG signals
This manuscript explores the development of a technique for detecting the affective states of Virtual Reality (VR) users in real-time. The technique was tested with data from an experiment where 18 participants observed 16 videos with emotional content inside a VR home theater, while their electroencephalography (EEG) signals were recorded. Participants evaluated their affective response toward the videos in terms of a three-dimensional model of affect. Two variants of the technique were analyzed; they differed in the method used for feature selection. In the first variant, features extracted from the EEG signals were selected using Linear Mixed-Effects (LME) models. In the second variant, features were selected using Recursive Feature Elimination with Cross-Validation (RFECV). Random forest was used in both variants to build the classification models. Accuracy, precision, recall, and F1 scores were obtained by cross-validation. An ANOVA was conducted to compare the accuracy of the models built in each variant. The results indicate that the feature selection method does not have a significant effect on the accuracy of the classification models. Therefore, both variants (LME and RFECV) seem equally reliable for detecting affective states of VR users. The mean accuracy of the classification models was between 87% and 93%.
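The second variant above (RFECV feature selection followed by a random-forest classifier, evaluated by cross-validation) can be sketched with scikit-learn. The synthetic feature matrix below stands in for the EEG-derived features, and the dimensions and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in for EEG-derived features: 120 trials x 20 features,
# with the first 3 features carrying the class signal.
X = rng.normal(size=(120, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

# Recursive Feature Elimination with Cross-Validation; the random
# forest supplies the feature importances that drive the elimination.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFECV(rf, step=1, cv=5).fit(X, y)
X_sel = selector.transform(X)

# Accuracy of the final classifier on the selected features, by cross-validation.
acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X_sel, y, cv=5, scoring="accuracy").mean()
print(f"selected {selector.n_features_} features, accuracy {acc:.2f}")
```

The LME-based first variant would replace only the selection step; the random-forest model and cross-validated scoring stay the same, which is what makes the two variants directly comparable by ANOVA.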
Hands-off Interactive Storytelling in Cinematic Virtual Reality
This is a research-by-creative-practice project that aims to explore a form of hands-off interactivity in cinematic virtual reality (CVR). The proposed model for interactive storytelling is based more on intuitive reactions than on conscious decision-making, enhancing diegetic and, thus, narrative immersion. The initial hypothesis states that hands-off interactivity can allow a user to experience a diegesis without being “pulled back” from the immersion, an interruption of the story produced by the consciousness of explicit interaction and extra-diegetic interfaces. To achieve this, the project uses immersion, spatial storytelling, and dramatically-motivated soundscapes to facilitate and encourage navigation through simultaneous acoustic and dramatic spaces in one immersive environment. Using this setup, the interactive storytelling takes place as users are presented with two simultaneous storylines with their respective protagonists, which happen to be interdependent, influence each other, and are part of one integral story. Users can then freely navigate and alternate between the two storylines, influenced by strategically designed visual and acoustic diegetic stimuli, and thus play an active role in making sense of the narration. This way, users generate inputs with organic movements around the fixed axis in which CVR users are placed.
This research is strongly focused on creative practice, the generation of creative outputs, and the analysis of the procedures and production workflows, in order to understand what the creative and technical challenges of the proposed type of interactive storytelling are. The project also takes an interdisciplinary approach that, while centred on a filmmaker’s perspective, critically integrates concepts and techniques from other relevant disciplines to address the expressive challenges posed by CVR as an experimental medium.
Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video
EXtended Reality systems have resurged as a medium for work and entertainment. While
360° video has been characterized as less immersive than computer-generated VR, its
realism, ease of use, and affordability mean it is in widespread commercial use. Based
on the prevalence and potential of the 360° video format, this research is focused on
improving and augmenting the user experience of watching 360° video. By leveraging
knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI),
this research addresses two issues affecting user experience in 360° video: Attention
Guidance and Visually Induced Motion Sickness (VIMS).
This research work relies on the construction of multiple artifacts to answer the
defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative
experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360°
video that also enables the collection and analysis of metrics captured from the user
experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including
optical flow and visual SLAM, among others) that control parameters for visual
modifications such as a restricted Field of View (FoV). These artifacts are accompanied
by evaluation studies targeting the defined research questions. Through Cue Control, this
research shows that non-diegetic music can be spatialized to act as orientation for
users; a partial spatialization of music was deemed ineffective when used for
orientation. Additionally, our results also demonstrate that diegetic sounds are used for
notification rather than orientation. Through the VIMS mitigation pipeline, this research
shows that a dynamic restricted FoV is statistically significant in mitigating VIMS,
while maintaining desired levels of Presence. Both Cue Control and the VIMS mitigation
pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact
is the product of design knowledge and gave direction to the research. The research
presented in this thesis is of interest to practitioners and researchers working on
360° video and helps delineate future directions in making 360° video a rich design
space for interaction and narrative.
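The dynamic restricted-FoV idea behind the VIMS mitigation pipeline can be sketched as a simple mapping from estimated visual motion to vignette size: the stronger the apparent motion (e.g. from optical flow), the narrower the rendered FoV. The thresholds and FoV bounds below are illustrative assumptions, not the thesis's calibrated values.

```python
def restricted_fov(flow_magnitude, fov_max=100.0, fov_min=60.0,
                   flow_low=0.5, flow_high=5.0):
    """Map an optical-flow magnitude (e.g. mean pixels/frame) to a field of
    view in degrees.

    Low motion keeps the full FoV; high motion clamps to the restricted FoV;
    in between, the FoV ramps down linearly. All thresholds are illustrative.
    """
    if flow_magnitude <= flow_low:
        return fov_max
    if flow_magnitude >= flow_high:
        return fov_min
    t = (flow_magnitude - flow_low) / (flow_high - flow_low)
    return fov_max - t * (fov_max - fov_min)
```

Per frame, a motion estimate (optical flow, or camera velocity from visual SLAM) would feed this function, and the returned FoV would drive a soft vignette; smoothing the output over a short window avoids visible flicker when motion estimates are noisy.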