Evaluation of animation and lip-sync of avatars, and user interaction in immersive virtual reality learning environments
Virtual Reality (VR) has been showing potential in
new and diverse areas, notably in education. However, there is a
lack of studies in the Foreign Language Teaching and Learning
field, particularly in listening comprehension. Therefore, this
study investigated the effects of avatar animations, lip
synchronization, and user interaction, features deemed relevant
in this broader area. A sociodemographic questionnaire, a quick
15-minute CEFR (Common European Framework of Reference
for Languages) English test, and a questionnaire were used to evaluate
the participants’ Presence, Quality of Experience, Cybersickness
and Knowledge Retention. Results show that, overall, avatars
with realistic animations and movements and with lip
synchronization have a positive influence on the users' sense
of presence and knowledge retention, and make for a more
enjoyable overall quality of experience. The same can be said
for the use of object interaction and navigation in the culturally
representative environment, which had an overall positive impact.

This work is co-financed by the ERDF – European Regional
Development Fund through the Operational Programme for
Competitiveness and Internationalisation - COMPETE 2020
under the PORTUGAL 2020 Partnership Agreement, and
through the Portuguese National Innovation Agency (ANI) as a
part of project “SMARTCUT - Diagnóstico e Manutenção Remota
e Simuladores para Formação de operação e manutenção
de Máquinas Florestais: POCI-01-0247-FEDER-048183”.
Adaptation and validation of the ITC - Sense of Presence Inventory for the Portuguese language
This investigation concerns the translation and validation of the ITC - Sense of Presence Inventory (ITC-SOPI) for
the Portuguese-speaking population (in Europe), estimating the validity of the content and concepts and the
maintenance of an equivalent semantics. It also sought to verify its psychometric properties, namely its factorial
validity and internal consistency. The sample consisted of 459 individuals, 274 males and 185 females. The
reliability of the subscales ranged between 0.67 and 0.89. Confirmatory factor analysis revealed a theoretical model
of 35 items, distributed across four factors. After allowing some of the residual errors between items to covary, the following fit
indices were calculated: χ²/df = 2.301; goodness-of-fit index = 0.860; comparative fit
index = 0.889; root mean square error of approximation = 0.053; Akaike's information criterion = 1420. Based
on the observed results and the robustness of the sample size used, the obtained theoretical model shows that the
ITC-SOPI is recommended to measure presence in virtual reality research projects with samples of Portuguese
language speakers.

This work is financed by the ERDF – European Regional Development
Fund through the Operational Programme for Competitiveness and
Internationalisation - COMPETE 2020 Programme and by National
Funds through the Portuguese funding agency, FCT - Fundação para a
Ciência e a Tecnologia within project POCI-01-0145-FEDER-028618
entitled PERFECT - Perceptual Equivalence in virtual Reality For
authEntiC Training. All the works were conducted at INESC TEC’s
MASSIVE Virtual Reality Laboratory.
Correction to: Collaborative immersive authoring tool for real-time creation of multisensory VR experiences
In the original publication, Figs. 1 and 2 were interchanged, and the citation of Fig. 1 in the
third paragraph of Section 2.2, Authoring tools for multisensory VR experiences,
should be removed.
The citation of Fig. 2 in Section 3.1, System architecture, should be changed to Fig. 1, and
the citation of Fig. 1 in the same section should be changed to Fig. 2. The acknowledgement
was also missing in the original publication.
The corrected figures and acknowledgement are presented in this erratum.

This work was also partially supported by the project “DOUROTUR, Turismo e Inovação
Tecnológica no Douro/NORTE-01-0145-FEDER-000014” financed by the North Portugal Regional Operational
Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European
Regional Development Fund (ERDF). All the works were conducted at INESC TEC’s MASSIVE VR
Laboratory.
Multisensory Augmented Reality in Cultural Heritage: Impact of Different Stimuli on Presence, Enjoyment, Knowledge and Value of the Experience
Little is known about the impact of the addition of each stimulus in multisensory augmented
reality experiences in cultural heritage contexts. This paper investigates the impact of different sensory
conditions on a user’s sense of presence, enjoyment, knowledge about the cultural site, and value of the
experience. Five different multisensory conditions, namely, Visual, Visual + Audio, Visual + Smell, and
Visual + Audio + Smell conditions, and regular visit referred to as None condition, were evaluated by a
total of 60 random visitors distributed across the specified conditions. According to the results, the addition
of particular types of stimuli created a different impact on the sense of presence subscale scores, namely,
on spatial presence, involvement, and experienced realism, but did not influence the overall presence score.
Overall, the results revealed that the addition of stimuli improved enjoyment and knowledge scores and did
not affect the value of the experience scores. We concluded that each stimulus has a differential impact on
the studied variables, demonstrating that its usage should depend on the goal of the experience: smell should
be used to privilege realism and spatial presence, while audio should be adopted when the goal is to elicit
involvement.
Collaborative immersive authoring tool for real-time creation of multisensory VR experiences
With the appearance of innovative virtual reality (VR) technologies, the need to create
immersive content arose. Although there are already some non-immersive solutions for authoring
immersive audio-visual content, there are none that allow the creation of immersive
multisensory content. This work proposes a novel architecture for a collaborative immersive
tool that allows the creation of multisensory VR experiences in real time, thus promoting the
expeditious development, adoption, and use of immersive systems and enabling the building
of custom solutions that can be used in an intuitive manner to support organizations’ business
initiatives. To validate the presented proposal, two approaches for the authoring tools (Desktop
interface and Immersive interface) were subjected to a set of tests and evaluations consisting of
a usability study that demonstrated not only the participants’ acceptance of the authoring tool
but also the importance of using immersive interfaces for the creation of such VR experiences.
Assessing presence in virtual environments: adaptation of the psychometric properties of the presence questionnaire to the Portuguese population
Virtual Reality applications have the goal of transporting their users to a given virtual environment
(VE). Thus, Presence is a consensual metric for evaluating the VEs’ effectiveness. The present study
adapts the Presence Questionnaire (PQ) for the Portuguese-speaking population, maintaining the
validity of the contents and concepts, to ascertain the psychometric properties of the
instrument. The adaptation to Portuguese was achieved through the standard
translation and back-translation process. The sample consisted of 451 individuals (268 males
and 183 females). Factor reliability ranged from 0.63 to 0.86. Confirmatory factor analysis
produced a theoretical model of 21 items distributed among seven factors, where the covariance
between some residual item errors was established. The fit indices obtained were χ²/df = 2.077,
GFI = 0.936, CFI = 0.937, RMSEA = 0.049, P [RMSEA ≤ 0.05], MECVI = 1.070. The results obtained
allowed us to consider that the adapted Portuguese version of the PQ, with 21 items, forms a
robust and valid questionnaire whose use is recommended to evaluate Presence in virtual reality
research programmes, provided that they use samples of European Portuguese speakers.

This work is financed by the ERDF – European Regional
Development Fund through the Operational Programme for
Competitiveness and Internationalisation – COMPETE 2020
Programme and by National Funds through the Portuguese
funding agency, FCT – Fundação para a Ciência e a Tecnologia
within project POCI-01-0145-FEDER-028618 entitled PERFECT
– Perceptual Equivalence in virtual Reality For authEntiC Training.
Backward compatible object detection using HDR image content
Convolutional Neural Network (CNN)-based object detection models have achieved unprecedented accuracy in challenging detection tasks. However, existing detection models (detection heads) trained on 8-bit/pixel/channel low dynamic range (LDR) images are unable to detect relevant objects under lighting conditions where a portion of the image is either under-exposed or over-exposed. Although this issue can be addressed by introducing High Dynamic Range (HDR) content and training existing detection heads on HDR content, there are several major challenges, such as the lack of real-life annotated HDR datasets and the extensive computational resources required for training and the hyper-parameter search. In this paper, we introduce an alternative backwards-compatible methodology to detect objects in challenging lighting conditions using existing CNN-based detection heads. This approach facilitates the use of HDR imaging without the immediate need for creating annotated HDR datasets and the associated expensive retraining procedure. The proposed approach uses HDR imaging to capture relevant details in high contrast scenarios. Subsequently, the scene dynamic range and wider colour gamut are compressed using HDR to LDR mapping techniques such that the salient highlight, shadow, and chroma details are preserved. The mapped LDR image can then be used by existing pre-trained models to extract relevant features required to detect objects in both the under-exposed and over-exposed regions of a scene. In addition, we also conduct an evaluation to study the feasibility of using existing HDR to LDR mapping techniques with existing detection heads trained on standard detection datasets such as PASCAL VOC and MSCOCO. Results show that the images obtained from the mapping techniques are suitable for object detection, and some of them can significantly outperform traditional LDR images.
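To illustrate the backward-compatible idea described above, the sketch below applies a Reinhard-style global tone mapping operator (an assumed stand-in, not the paper's own mapping techniques) to compress a synthetic HDR frame into an 8-bit LDR image that a detection head trained on standard LDR data could then consume unchanged:

```python
import numpy as np

def reinhard_tonemap(hdr: np.ndarray) -> np.ndarray:
    """Global Reinhard-style operator: compress HDR radiance into [0, 1]."""
    # Log-average luminance approximates the scene "key".
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    key = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = hdr * (0.18 / key)          # map the key to middle grey
    white = scaled.max()                 # burn out only the brightest value
    ldr = scaled * (1.0 + scaled / white**2) / (1.0 + scaled)
    return np.clip(ldr, 0.0, 1.0)

# Synthetic HDR frame spanning four orders of magnitude of radiance.
hdr = np.random.default_rng(0).uniform(1e-2, 1e2, size=(64, 64, 3))
ldr8 = (reinhard_tonemap(hdr) * 255).astype(np.uint8)
# ldr8 is now an ordinary 8-bit image; any pre-trained LDR detector
# (e.g. one trained on PASCAL VOC or MSCOCO) can run on it directly.
```

The key property is that detail in both under- and over-exposed regions survives the compression, so the pre-trained detection head sees usable features everywhere in the frame.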
Web Accessibility and Digital Businesses: The Potential Economic Value of Portuguese People with Disability
The lack of data in Portugal is a crucial problem for a full characterization, and thus a full digital integration, of people with disabilities. This is not only a problem of ethical dimension or equal opportunities; it also has an economic dimension, because it excludes a consumer group with economic potential. Hence, this article focuses on the importance of characterizing people with disabilities from a social, economic, and digital perspective. It aims to highlight their disabilities and ageing evolution and their potential value in digital business, and to raise awareness of inclusive design to improve the quality of life of people with disabilities.
Uniform Color Space-Based High Dynamic Range Video Compression
Recently, there has been significant progress in the research and development of high dynamic range (HDR) video technology, and state-of-the-art video pipelines are able to offer higher bit-depth support to capture, store, encode, and display HDR video content. In this paper, we introduce a novel HDR video compression algorithm, which uses a perceptually uniform color opponent space, a novel perceptual transfer function to encode the dynamic range of the scene, and a novel error minimization scheme for accurate chroma reproduction. The proposed algorithm was objectively and subjectively evaluated against four state-of-the-art algorithms. The objective evaluation was conducted across a set of 39 HDR video sequences, using the latest x265 10-bit video codec along with several perceptual and structural quality assessment metrics at 11 different quality levels. Furthermore, a rating-based subjective evaluation was conducted with six sequences at two different output bitrates. Results suggest that the proposed algorithm exhibits the lowest coding error amongst the five algorithms evaluated. Additionally, the rate-distortion characteristics suggest that the proposed algorithm outperforms the existing state-of-the-art at bitrates ≥ 0.4 bits/pixel.
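The perceptual transfer function proposed in the paper is its own contribution; as a reference point for what such an encoding looks like, the sketch below implements the standard SMPTE ST 2084 (PQ) curve, a widely used perceptual transfer function that maps absolute luminance into a perceptually uniform signal before bit-depth quantisation:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_cd_m2: np.ndarray) -> np.ndarray:
    """Map absolute luminance (0..10000 cd/m^2) to a perceptually uniform [0, 1] signal."""
    y = np.clip(luminance_cd_m2 / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

# Encode a few luminance levels and quantise to 10 bits,
# as a 10-bit HDR video pipeline would before feeding the codec.
signal = pq_encode(np.array([0.0, 100.0, 1000.0, 10000.0]))
codes = np.round(signal * 1023).astype(int)
```

Because the curve allocates code values according to perceptual sensitivity rather than linearly, quantisation error after encoding is roughly uniform in perceived brightness, which is the same motivation behind the transfer function used in the proposed compression algorithm.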
Tone mapping HDR panoramas for viewing in Head Mounted Displays
Head-mounted displays (HMDs) enable a user to view a complete environment as if they were there, providing an immersive experience. However, the lighting in a full environment can vary significantly. Panoramic images of scenes with a large range of lighting conditions, captured with conventional Low Dynamic Range (LDR) imaging, can include areas of under- or over-exposed pixels. High Dynamic Range (HDR) imaging, on the other hand, is able to capture the full range of detail in a scene. However, HMDs are not currently HDR-capable, and thus an HDR panorama needs to be tone mapped before it can be displayed on an LDR HMD. While a large number of tone mapping operators have been proposed in the last 25 years, these were not designed for panoramic images or for use with HMDs. This paper undertakes a two-part subjective study to investigate which of the current state-of-the-art tone mappers is most suitable for use with HMDs.