
    Reception of game subtitles : an empirical study

    Other funding: European project Hbb4All from the FP7 CIP-ICTPSP.2013.5.1 # 621014.
    Over the last few years, media accessibility has been attracting growing scholarly attention, particularly subtitling for the deaf and hard of hearing (SDH) and audio description (AD) for the blind, following the transition from analogue to digital TV that took place in Europe in 2012. There is a wide array of academic studies focussing on subtitling and SDH in different media, such as TV, cinema, and DVD. However, although many video games contain cinematic scenes that are subtitled intralingually, interlingually, or both, subtitling practices in game localization remain unexplored, and the existing standards widely applied to subtitling for TV, DVD, and cinema are not followed. Game subtitling practices need standardisation, which would ultimately lead to an enhanced gameplay experience for all users. This paper presents a small-scale exploratory study of the reception of subtitles in video games, using user tests based on a questionnaire and eye-tracking technology to determine what kind of subtitles users prefer, focusing on parameters such as presentation, position, character identification, and depiction of sound effects. The final objective is to contribute to the development of best practices and standards in subtitling for this emerging digital medium, enhancing game accessibility not only for deaf and hard of hearing players but for all players.

    Read, watch, listen: a commentary on eye tracking and moving images

    Eye tracking is a research tool with great potential for advancing our understanding of how we watch movies. Questions such as how differences in the movie influence where we look, and how individual differences between viewers alter what we see, can be operationalised and empirically tested using a variety of eye-tracking measures. This special issue collects an inspiring interdisciplinary range of opinions on what eye tracking can (and cannot) bring to film and television studies and practice. In this article I reflect on each of these contributions, with specific focus on three aspects: how subtitling and digital effects can reinvigorate visual attention, how audio can guide and alter our visual experience of film, and how methodological, theoretical, and statistical considerations are paramount when trying to derive conclusions from eye-tracking data.

    Telops for language learning: Japanese language learners’ perceptions of authentic Japanese variety shows and implications for their use in the classroom

    Research on the use of leisure-oriented media products in foreign language learning is not a novelty. Building on insights into the effects of audiovisual input on learners, recent studies have started to explore online learning behaviour. This research employed an exploratory design to examine the perception of a Japanese variety show with intralingual text, known as telops, by Japanese Language Learners (JLLs) and native Japanese speakers, using a multimodal transcript, eye-tracking technology, questionnaires, and field notes. Two main objectives underlie this study: (1) to gain insights into participants’ multimodal perceptions of and attitudes towards the use of such authentic material for language learning, and (2) to gain a better understanding of the distribution of participants’ visual attention between stimuli. Data from 43 JLLs and five native Japanese speakers were analysed. The JLLs were organised into pre-exchange, exchange, and post-exchange groups, while the native Japanese speakers served as the reference group. A thematic analysis was conducted on the open-ended questionnaire responses, and Areas Of Interest (AOIs) were grouped to generate fixation data. The themes suggest that all learner groups feel that telops help them link the stimuli in the television programme, although the pre-exchange and exchange groups experienced some difficulty with the amount and pace of telops. The eye-tracking results show that faces and telops gather the most visual attention from all participant groups. Less clear-cut trends in visual attention are detected when AOIs on telops are grouped according to the degree to which they resemble the corresponding dialogue. This thesis concludes with suggestions as to how such authentic material can complement Japanese language learning.
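    The AOI-based analysis described above (grouping fixations by Area Of Interest to derive fixation counts and dwell times) can be illustrated with a short sketch. The record format, AOI labels, and numbers below are hypothetical and are not taken from the study.

    ```python
    # Minimal sketch of AOI-based fixation aggregation. Each fixation
    # record carries a hypothetical AOI label and a duration in ms.
    from collections import defaultdict

    def aggregate_fixations(fixations):
        """Group fixations by AOI; return count and total dwell time per AOI."""
        stats = defaultdict(lambda: {"count": 0, "dwell_ms": 0})
        for fix in fixations:
            entry = stats[fix["aoi"]]
            entry["count"] += 1
            entry["dwell_ms"] += fix["duration_ms"]
        return dict(stats)

    # Illustrative data only: faces and telops attracting most attention.
    fixations = [
        {"aoi": "face", "duration_ms": 220},
        {"aoi": "telop", "duration_ms": 180},
        {"aoi": "face", "duration_ms": 250},
        {"aoi": "background", "duration_ms": 90},
    ]
    print(aggregate_fixations(fixations))
    ```

    In practice, eye-tracker exports provide fixation coordinates rather than AOI labels, so a real pipeline would first map each fixation onto an AOI polygon before aggregating.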

    Exploring the Educational Potentials of Language Learning with Netflix Tool: An Eye-Tracking Study

    Digitization has revolutionized the home entertainment industry, enhancing the audience’s viewing experience and offering an abundance of choices within a new intercultural and multilingual reality. This pluralistic environment is a still uncharted terrain of resources that could be exploited as popular culture and school curricula reach a decisive juncture. Evaluating the potential of new media learning tools, developed in line with the expansion of over-the-top media services, underpins the objectives of this research. The analysis of eye-movement data depicts viewing patterns on three versions of the same film extract streamed via Netflix. The first was screened with standard interlingual subtitling, and the other two were viewed via the Language Learning with Netflix (LLN) platform, a newly launched tool that allows the simultaneous, dual presentation of both the original dialogue and its translation. This paper aims to explore the proliferation of accessible options among different modes of audiovisual language transfer within an online participatory environment. In the emergent new media culture, the educational potential of bilingual subtitling can challenge well-established borderlines and habit formations of viewership.
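    The dual presentation of original and translated subtitles can be sketched as pairing cues from two subtitle tracks whose time ranges overlap. The cue format and data below are a simplification for illustration, not the actual LLN implementation.

    ```python
    # Sketch: pair original-language and translated subtitle cues by
    # overlapping time ranges (times in seconds). Hypothetical cue format.
    def overlaps(a, b):
        """True if the two cues' time ranges intersect."""
        return a["start"] < b["end"] and b["start"] < a["end"]

    def pair_tracks(original, translation):
        """Return (original_text, translated_text) pairs for overlapping cues."""
        pairs = []
        for o in original:
            for t in translation:
                if overlaps(o, t):
                    pairs.append((o["text"], t["text"]))
        return pairs

    original = [{"start": 0.0, "end": 2.5, "text": "Bonjour."}]
    translation = [{"start": 0.1, "end": 2.4, "text": "Hello."}]
    print(pair_tracks(original, translation))  # [('Bonjour.', 'Hello.')]
    ```

    Real subtitle tracks are not guaranteed to align one-to-one, so a production tool would also need to handle cues that span several counterparts or have no counterpart at all.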

    FrameNet annotation for multimodal corpora: devising a methodology for the semantic representation of text-image interactions in audiovisual productions

    Multimodal analyses have been growing in importance within several approaches to Cognitive Linguistics and applied fields such as Natural Language Understanding. Nonetheless, fine-grained semantic representations of multimodal objects are still lacking, especially in terms of integrating areas such as Natural Language Processing and Computer Vision, which are key for the implementation of multimodality in Computational Linguistics. In this dissertation, we propose a methodology for extending FrameNet annotation to the multimodal domain, since FrameNet can provide fine-grained semantic representations, particularly with a database enriched by Qualia and other interframal and intraframal relations, as is the case with FrameNet Brasil. To make FrameNet Brasil able to conduct multimodal analysis, we outline the hypothesis that, similarly to the way words in a sentence evoke frames and organize their elements in the accompanying syntactic locality, visual elements in video shots may also evoke frames and organize their elements on the screen, or work complementarily with the frame-evocation patterns of the sentences narrated simultaneously with their appearance on screen, providing different profiling and perspective options for meaning construction. The corpus annotated to test the hypothesis is composed of episodes of a Brazilian TV travel series critically acclaimed as an exemplar of good practice in audiovisual composition. The TV genre chosen also provides a novel experimental setting for research on integrated image and text comprehension, since, in this corpus, the text is not a direct description of the image sequence but correlates with it indirectly in a myriad of ways. The dissertation also reports on an eye-tracking experiment conducted to validate the text-oriented annotation approach proposed. The experiment demonstrated that it is not possible to determine that text impacts gaze directly, and this was taken as reinforcing the approach of valorizing the combination of modes. Last, we present the Frame2 dataset, the product of the annotation task carried out on the corpus following the proposed methodology and guidelines. The results demonstrate that, at least for this TV genre but possibly also for others, a fine-grained semantic annotation tackling the diverse correlations that take place in a multimodal setting provides a new perspective in multimodal comprehension modeling. Moreover, multimodal annotation also enriches the development of FrameNets, to the extent that correlations found between modalities can attest to the modeling choices made by those building frame-based resources.
    Funding: CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.
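    The core hypothesis, that both narrated sentences and visual elements in a shot can evoke frames, can be represented with a simple annotation record. The field names and frame labels below are illustrative only and do not reflect the FrameNet Brasil database schema.

    ```python
    # Illustrative multimodal annotation record: a frame evoked either by a
    # word in the narration or by a visual element in a shot. Field names
    # are hypothetical, not the FrameNet Brasil schema.
    from dataclasses import dataclass

    @dataclass
    class FrameAnnotation:
        frame: str      # evoked frame, e.g. "Travel"
        evoker: str     # the word or visual element that evokes it
        modality: str   # "text" or "image"
        shot_id: int    # video shot in which the evocation occurs

    annotations = [
        FrameAnnotation("Travel", "viajar", "text", 12),
        FrameAnnotation("Vehicle", "boat on screen", "image", 12),
    ]

    # Frames evoked in the same shot can then be correlated across modalities.
    same_shot = [a.frame for a in annotations if a.shot_id == 12]
    print(same_shot)  # ['Travel', 'Vehicle']
    ```

    A record of this kind makes cross-modal correlations queryable: one can ask which frames co-occur in a shot across the text and image modalities, which is the kind of evidence the dissertation uses to attest frame-modeling choices.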

    How captions help people learn languages: A working-memory, eye-tracking study

    Captions provide a useful aid to language learners for comprehending videos and learning new vocabulary, in line with theories of multimedia learning. These theories predict that a learner’s working memory (WM) influences the usefulness of captions. In this study, we present two eye-tracking experiments investigating the role of WM in captioned-video viewing behavior and comprehension. In Experiment 1, Spanish-as-a-foreign-language learners differed in caption use according to their level of comprehension and, to a lesser extent, their WM capacities; WM did not impact comprehension. In Experiment 2, English-as-a-second-language learners differed in comprehension according to their WM capacities, and those with high comprehension and high WM used captions less on a second viewing. These findings highlight potential effects of individual differences and have implications for the integration of captioned multimedia in instructed language learning. We discuss how captions may help neutralize some of working memory’s limiting effects on learning.

    Understanding and stimulating the development of perceptual-motor skills in child bicyclists


    The Depiction of Status Through Nonverbal Behavior in Mad Men
