2,018 research outputs found

    (Sub)titles in cinematic virtual reality : a descriptive study

    Virtual reality has attracted the attention of both industry and researchers, and its applications for entertainment and audiovisual content creation are extensive. Filmmakers are experimenting with different techniques to create immersive stories, and subtitle creators and researchers are finding new ways to implement (sub)titles in this new medium. This article presents the state of the art of cinematic virtual reality content and discusses the challenges filmmakers currently face when dealing with this medium, as well as the impact of immersive content on subtitling practices. It then reviews the studies on subtitles in 360º videos carried out so far and their results. Finally, the results of a corpus analysis are presented to illustrate current subtitle practices at The New York Times and the BBC. The results shed light on issues such as position, innovative graphic strategies and the different functions of subtitles, challenging current standard subtitling practices in 2D content.

    Disruptive approaches for subtitling in immersive environments

    The Immersive Accessibility Project (ImAc) explores how accessibility services can be integrated with 360º video, as well as new methods for enabling universal access to immersive content. ImAc focuses on inclusivity and addresses the needs of all users of all ages, including those with sensory or learning disabilities, taking language and user preferences into account. The project moves away from the constraints of existing technologies and explores new methods for creating a personal experience for each consumer. It is not good enough to simply retrofit subtitles into immersive content: this paper attempts to disrupt the industry with new and often controversial methods. The paper provides an overview of the ImAc project and proposes guiding methods for subtitling in immersive environments. We discuss the current state of the art for subtitling in immersive environments and the rendering of subtitles in the ImAc user interface. We then discuss newly implemented experimental rendering modes, including a responsive subtitle approach that dynamically re-blocks subtitles to fit the available space, and explore alternative rendering techniques in which the subtitles are attached to the scene.
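The responsive re-blocking idea mentioned above can be sketched in a few lines. This is a toy illustration, not ImAc's actual algorithm: the function name and the greedy space-only wrapping are ours, whereas a real system would also weigh linguistic break points and safe viewing areas.

```python
def reblock(text: str, max_chars: int) -> list[str]:
    """Greedily re-wrap a subtitle into lines that fit the available width.

    A stand-in for responsive re-blocking: split on spaces and start a new
    line whenever adding the next word would exceed max_chars.
    """
    lines: list[str] = []
    current = ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

When the available space narrows (for example, when a subtitle must share the field of view with other overlays), the same cue text can simply be re-wrapped with a smaller `max_chars` rather than authored again.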

    Subtitles in virtual reality : guidelines for the integration of subtitles in 360º content

    Immersive content has become a popular medium for storytelling. This type of content is typically accessed via a head-mounted display, which places the viewer at the center of the action with the freedom to look around and explore the scene. The criteria for subtitle position in immersive media still need to be defined, and guiding mechanisms are necessary when speakers are not visible and viewers, lacking an audio cue, require visual information to guide them through the virtual scene. The aim of this reception study is to compare different subtitling strategies: always-visible versus fixed-position subtitles, and arrows versus a radar. To do this, feedback on preferences, immersion (using the IPQ questionnaire) and head movements was gathered from 40 participants (20 hearing and 20 hard of hearing). Results show that always-visible subtitles with arrows are the preferred option; always-visible subtitles and arrows also achieved higher IPQ scores than fixed-position subtitles and the radar. Head-movement patterns show that participants move more freely when subtitles are always visible than when they are in a fixed position, meaning that always-visible subtitles make the experience more realistic, because viewers do not feel constrained by the implementation of the subtitles.

    SDH in immersive environments : can subtitles be immersive at all?

    Immersive environments have been emerging for the past few years, and their potential for transforming how entertainment is consumed has raised the interest of both industry and audiences. Powerful technology companies such as Microsoft, Sony, Facebook and HTC are investing in devices such as Microsoft HoloLens, PlayStation VR, Oculus Rift and HTC Vive. In recent years we have also witnessed an explosion of 360º content (videos, movies, documentaries, news, etc.) that can be accessed through virtually any smartphone. However, immersive content, such as 360º videos or virtual reality video games, poses a challenge for audiovisual translation, and an even harder one for media accessibility. How can we make immersive content accessible for everyone and, specifically, for persons with hearing loss? Some filmmakers say that audiences will need to learn a new visual grammar or language to understand how stories in immersive environments are built, as happened when cuts were introduced in film editing. The same will happen with subtitling in immersive content: we will need to relearn how to read subtitles, since immersive media bring new dimensions, such as direction and three-dimensional space. These new dimensions and the freedom of movement attached to virtual worlds present challenges for users who cannot make use of the audio. If users lack the audio cue, how will they know that someone is speaking behind them, or that a sound used as an action trigger is coming from a different location, so that they know to turn their head? Can subtitles be used to draw attention to the focus of the action? In this presentation, we review the nature of immersive content and explain the challenges of implementing subtitles for the deaf and hard of hearing (SDH) in 360º videos. Basic subtitling topics such as the placement of subtitles, as well as new features such as speaker identification systems, are tricky questions.
Some researchers have already raised this issue (Agulló 2018; Brown et al. 2018; Fraile et al. 2018; Montagud et al. 2018; Rothe et al. 2018). Regarding placement, different solutions have been suggested, but they can mainly be summed up in two key concepts: dynamic subtitles and static subtitles (or "evenly spaced" and "following head immediately" subtitles, depending on the terminology used). Dynamic subtitles are those burnt into the video at one or several specific positions. Static subtitles are those linked to the viewer's field of view, which "follow their heads immediately" (Brown et al. 2018) wherever they go. The latter implementation is closer to the integration of traditional subtitles in 2D environments, while the former is closer to innovative subtitling practices such as creative (Foerster 2010; McClarty 2012, 2014) or integrated subtitles (Fox 2016a, 2016b). Speaker identification is a completely new feature for SDH in immersive media: a mechanism needs to be designed to indicate to the audience where the speakers are located in the 360º environment, so that they do not miss the action. Very few researchers have tackled this topic (Rothe et al. 2017), and the discussion about which methods are better is still open and will be presented during the session. To illustrate the discussion, we analyse examples obtained from a corpus of selected 360º videos. Finally, we propose a set of features worth researching to achieve immersive and integrated subtitles in this new medium, based on criteria of accessibility, immersion and usability.
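The core of any speaker identification mechanism in 360º media is deciding, from the viewer's current head orientation, whether the speaker is on screen and, if not, which way to point. A minimal sketch of that decision, assuming only horizontal yaw angles (the function name, the arrow-style cue, and the default field of view are illustrative, not taken from any of the systems cited above):

```python
def speaker_cue(head_yaw_deg: float, speaker_yaw_deg: float,
                fov_deg: float = 90.0) -> str:
    """Decide which guiding cue to show for a speaker in a 360º scene.

    Yaw angles are in degrees on the horizontal circle. The delta is
    normalized to (-180, 180] so that wrap-around (e.g. head at 350º,
    speaker at 10º) yields the shortest turning direction.
    """
    delta = (speaker_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(delta) <= fov_deg / 2:
        return "none"  # speaker is already inside the field of view
    return "right" if delta > 0 else "left"
```

A radar-style cue would use the same normalized delta, but render it as a dot on a compass rather than collapsing it to a left/right arrow.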

    Making interaction with virtual reality accessible : rendering and guiding methods for subtitles

    Accessibility in immersive media is a relevant research topic, still in its infancy. This article explores the appropriateness of two rendering modes (fixed-position and always-visible) and two guiding methods (arrows and auto-positioning) for subtitles in 360º video. All considered conditions have been implemented and integrated in an end-to-end platform (from production to consumption) for their validation and evaluation. A pilot study with end users was prepared and conducted with the goals of determining which options users prefer, which options result in higher presence, and gathering further valuable feedback from end users. The results reflect that, for the considered 360º content types, always-visible subtitles were preferred by viewers and received better presence-questionnaire results than fixed-position subtitles. Regarding guiding methods, participants preferred arrows over auto-positioning, because arrows were considered more intuitive and easier to follow, and they also yielded better presence-questionnaire results.

    Reception of game subtitles : an empirical study

    Other funding: European project Hbb4All from the FP7 CIP-ICTPSP.2013.5.1 # 621014.
    Over the last few years, accessibility to the media has been gathering the attention of scholars, particularly subtitling for the deaf and hard of hearing (SDH) and audio description (AD) for the blind, due to the transition from analogue to digital TV that took place in Europe in 2012. There is a wide array of academic studies focussing on subtitling and SDH in different media, such as TV, cinema and DVD. However, despite the fact that many video games contain cinematic scenes, which are subtitled intralingually, interlingually or both, subtitling practices in game localization remain unexplored, and the existing standards widely applied to subtitling for TV, DVD and cinema are not followed. There is a need for standardisation of game subtitling practices, which would ultimately lead to an enhanced gameplay experience for all users. This paper presents a small-scale exploratory study on the reception of subtitles in video games, using user tests with a questionnaire and eye-tracking technology to determine what kind of subtitles users prefer, focusing on parameters such as presentation, position, character identification and the depiction of sound effects. The final objective is to contribute to the development of best practices and standards in subtitling for this emerging digital medium, which will enhance game accessibility not only for deaf and hard-of-hearing players but for all players.

    Live Captions in Virtual Reality (VR)

    Few VR applications and games implement captioning of speech and audio cues, which inhibits or prevents access for deaf and hard of hearing (DHH) users, new language learners, and other caption users. Additionally, few if any guidelines exist on how to implement live captioning on VR headsets and how it may differ from traditional television captioning. To help fill this gap in knowledge about user preferences among VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (headlocked, lag, and appear) while they watched live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participant preferences were split, but the majority of participants reported feeling comfortable using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, the three types tested were almost equally divided. IPQ results indicated that each behavior had similar immersion ratings; however, participants found headlocked and lag captions more user-friendly than appear captions. We suggest that caption preference may vary depending on how participants use captions, and that providing opportunities for caption customization is best.
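The difference between the headlocked and lag behaviors described above comes down to how quickly the caption's yaw follows the head's yaw each frame. A hedged sketch, assuming angles in degrees and a per-frame smoothing factor of our own choosing (the study does not publish its implementation):

```python
def lag_step(caption_yaw: float, head_yaw: float,
             smoothing: float = 0.1) -> float:
    """One frame of 'lag' caption movement: ease the caption toward the
    current head yaw instead of locking it to the head.

    smoothing in (0, 1]: 1.0 reproduces headlocked behavior (the caption
    tracks the head exactly); smaller values trail behind it. The delta is
    normalized to (-180, 180] so the caption always takes the short way
    around the 360º circle.
    """
    delta = (head_yaw - caption_yaw + 180.0) % 360.0 - 180.0
    return (caption_yaw + smoothing * delta) % 360.0
```

The appear behavior, by contrast, would keep the caption world-fixed and only re-spawn it in front of the viewer when it leaves the field of view, rather than updating it every frame.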

    Retelling narrative in 360º videos : Implications for audio description

    The aim of this article is to question whether the approach for producing audio description (AD) in 2D films needs to be revisited for 360º narrative videos, a new media format characterized by its immersive capacity. To provide answers, a two-step research methodology was designed. First, an extensive literature review was performed. The data obtained during the first step was then used to design and carry out focus groups. The first part of the article discusses the findings from the literature review, comparing standard narratives with 360º narrative videos. It draws some conclusions for audio describers in relation to AD content selection, a key task in the translation of visuals into words. In the second part of the article, data obtained from the focus groups held with describers and AD users is presented. The results suggest possible approaches to AD for 360º content, such as the use of spatial sound and elements of interaction.

    Interactive Fiction in Cinematic Virtual Reality: Epistemology, Creation and Evaluation

    This dissertation presents Interactive Fiction in Cinematic Virtual Reality (IFcVR), an interactive digital narrative (IDN) that brings together cinematic virtual reality (cVR) and the creation of virtual environments through 360° video within an interactive fiction (IF) structure. The work is structured in three components: an epistemological approach to this kind of narrative and media hybrid; the creation process of IFcVR, from development to postproduction; and user evaluation of IFcVR. In order to set the foundations for the creation of interactive VR fiction films, I dissect IFcVR by investigating the aesthetic, narratological and interactive notions that converge and diverge in it, proposing a medium-conscious narratology for this kind of artefact. This analysis led to the production of a functional IFcVR prototype: "ZENA", the first interactive VR film shot in Genoa. ZENA's creation process is reported, proposing some guidelines for interactive and immersive filmmakers. In order to evaluate the effectiveness of IFcVR as an entertaining narrative form and a vehicle for diverse types of messages, this study also proposes a methodology to measure user experience (UX) in IFcVR. The full evaluation protocol gathers both qualitative and quantitative data through ad hoc instruments, and is illustrated through its pilot application on ZENA. Findings show interactors' positive acceptance of IFcVR as an entertaining experience.