SUBTITLING THE FILM “KHATAMUL MURSALIN WA MUWAJAHUHU ASYADAID” FROM ARABIC INTO INDONESIAN
This portfolio describes the story of, and explains the subtitle translation for, the animated film "Khatamul Mursalin wa Muwajahuhu Asyadaid", uploaded by the ATA Media channel in 2023, from Arabic into Indonesian. The solutions offered aim to help translators understand the process of creating subtitles for a film. The subtitling method is carried out in three stages: film research, preparation, and subtitle creation. The portfolio concludes that making film subtitles starts with finding an Arabic-language film that has never been translated, followed by producing a transcript of the video to be subtitled. For editing and translating the subtitles, the author explains how to install the Subtitle Edit software, how to load a video into Subtitle Edit, and how to create text boxes, as well as the translation techniques used in making the subtitles, such as pure borrowing, adaptation, modulation, amplification, and transposition. The final stage is evaluating the results of the subtitle translation.
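The workflow described above ends in Subtitle Edit, which works with plain-text subtitle formats such as SubRip (.srt). Purely as a hedged illustration (the cue timings and text below are invented placeholders, not taken from the film), a minimal .srt file of the kind such tools produce can be written with standard-library Python:

    # Minimal sketch: write a two-cue SubRip (.srt) file, the plain-text
    # format Subtitle Edit works with. Timings and text are invented
    # placeholders for illustration only.
    cues = [
        ("00:00:01,000", "00:00:04,000", "First translated line."),
        ("00:00:04,500", "00:00:07,200", "Second translated line."),
    ]

    with open("subtitles.srt", "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(cues, start=1):
            f.write(f"{i}\n{start} --> {end}\n{text}\n\n")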
Subtitling the Animated Film Saud Wa Sarah Fii Raudhatil Qur’an, Episodes 1-6
Animation is one of the media used in education, especially in audio-visual learning. Subtitling transfers information from the source language into the target language. The Saud Wa Sarah Fii Raudhatil Qur'an series was released by the YouTube channel Saudwesara from the Kingdom of Saudi Arabia, and the language used is fusha (standard) Arabic. The series follows the daily life of Saud and his sister Sarah and carries character-education and Islamic content for children. The translation of this series combines the semantic and free translation methods and was produced with the Subtitle Edit software. The series was translated into Indonesian so that it can be watched and enjoyed in the target language.
iCanLearn: A Mobile Application For Creating Flashcards And Social Stories™ For Children With Autism
The number of children being diagnosed with Autism Spectrum Disorder (ASD) is on the rise, presenting new challenges for their parents and teachers to overcome. At the same time, mobile computing has been seeping its way into every aspect of our lives in the form of smartphones and tablet computers. It seems only natural to harness the unique medium these devices provide and use it in treatment and intervention for children with autism.
This thesis discusses and evaluates iCanLearn, an iOS flashcard app with enough versatility to construct Social Stories™. iCanLearn provides an engaging, individualized learning experience to children with autism on a single device, but the most powerful way to use iCanLearn is by connecting two or more devices together in a teacher-learner relationship. The evaluation results are presented at the end of the thesis.
The design and implementation of an infrastructure for multimedia digital libraries
We develop an infrastructure for managing, indexing and serving multimedia content in digital libraries. This infrastructure follows the model of the Web, and is therefore distributed in nature. We discuss the design of the Librarian, the component that manages metadata about the content. The management of metadata has been separated from the media servers that manage the content itself, and the extraction of metadata is largely independent of the Librarian. We introduce our extensible data model and the daemon paradigm that are the core pieces of this architecture. We evaluate our initial implementation using a relational database. We conclude with a discussion of the lessons we learned in building this system, and proposals for improving its flexibility, reliability, and performance.
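The daemon paradigm is only named in the abstract; the sketch below is an assumption-laden illustration of the core idea (all names and the queue mechanics are invented here, not the paper's interfaces): extraction daemons derive metadata independently of both the Librarian and the media servers.

    import queue
    import threading

    # Hypothetical sketch of the daemon paradigm: a daemon pulls newly
    # ingested media items from a work queue, extracts metadata, and hands
    # it to a separate metadata store (the Librarian's role). All names
    # are illustrative assumptions, not the paper's actual interfaces.
    work_queue: "queue.Queue[str]" = queue.Queue()
    metadata_store: dict = {}

    def extract_metadata(media_path: str) -> dict:
        # Stand-in for a real extractor (duration, codec, keywords, ...).
        return {"path": media_path, "kind": media_path.rsplit(".", 1)[-1]}

    def daemon_loop() -> None:
        while True:
            path = work_queue.get()
            metadata_store[path] = extract_metadata(path)
            work_queue.task_done()

    threading.Thread(target=daemon_loop, daemon=True).start()
    work_queue.put("lectures/intro.mp4")
    work_queue.join()
    print(metadata_store)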
Impact of automatic segmentation on the quality, productivity and self-reported post-editing effort of intralingual subtitles
This paper describes the evaluation methodology followed to measure the impact of using a machine learning algorithm to automatically segment intralingual subtitles. The segmentation quality, productivity and self-reported post-editing effort achieved with this approach are shown to improve on those obtained with the character-counting technique that is currently the main method for automatic subtitle segmentation. The corpus used to train and test the proposed automated segmentation method is also described and shared with the community, in order to foster further research in this area.
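The character-counting baseline is not spelled out in the abstract; a common variant, sketched below under the assumption of a per-line budget of 42 characters (a frequent broadcast convention), fills each subtitle line greedily until the budget is hit:

    # Hedged sketch of a character-counting segmentation baseline: fill
    # each subtitle line greedily up to a character budget. The default
    # 42-character limit is a common convention, assumed for illustration.
    def segment_by_chars(text: str, max_chars: int = 42) -> list[str]:
        lines: list[str] = []
        current = ""
        for word in text.split():
            candidate = (current + " " + word).strip()
            if len(candidate) <= max_chars or not current:
                current = candidate
            else:
                lines.append(current)
                current = word
        if current:
            lines.append(current)
        return lines

    print(segment_by_chars("This paper describes the evaluation "
                           "methodology followed to measure the impact."))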
Adapting End-to-End Speech Recognition for Readable Subtitles
Automatic speech recognition (ASR) systems are primarily evaluated on transcription accuracy. However, in some use cases such as subtitling, verbatim transcription would reduce output readability given limited screen size and reading time. Therefore, this work focuses on ASR with output compression, a task challenging for supervised approaches due to the scarcity of training data. We first investigate a cascaded system, where an unsupervised compression model is used to post-edit the transcribed speech. We then compare several methods of end-to-end speech recognition under output length constraints. The experiments show that with limited data far less than needed for training a model from scratch, we can adapt a Transformer-based ASR model to incorporate both transcription and compression capabilities. Furthermore, the best performance in terms of WER and ROUGE scores is achieved by explicitly modeling the length constraints within the end-to-end ASR system.
Comment: IWSLT 2020
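The abstract leaves the length-constraint mechanism unspecified; one common device, sketched here purely as an assumption rather than the paper's confirmed method, is to prefix each training target with a length-bucket token so the decoder can be steered toward a character budget at inference time:

    # Hedged illustration of length-token conditioning, one common way to
    # make a seq2seq model respect an output-length budget (assumed here;
    # the paper may implement its constraint differently). Each training
    # target is prefixed with a bucket token for its compressed length.
    def length_bucket(n_chars: int, bucket_size: int = 10) -> str:
        return f"<len_{(n_chars // bucket_size) * bucket_size}>"

    def make_training_pair(transcript: str, compressed: str) -> tuple[str, str]:
        # Source: verbatim transcript; target: bucket token + compressed text.
        return transcript, f"{length_bucket(len(compressed))} {compressed}"

    src, tgt = make_training_pair(
        "so um what we are going to do today is basically review the results",
        "Today we review the results.",
    )
    print(src)
    print(tgt)  # "<len_20> Today we review the results." (28 chars -> bucket 20)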
Concurrent collaborative captioning
Captioned text transcriptions of the spoken word can benefit hearing-impaired people, non-native speakers, anyone when no audio is available (e.g., watching TV at an airport), and anyone who needs to review recordings of what has been said (e.g., at lectures, presentations, and meetings). This paper describes a tool that facilitates concurrent collaborative captioning through the correction of speech recognition errors, providing a sustainable method of making videos accessible to people who find it difficult to understand speech through hearing alone. The tool stores all edits from all users and applies a matching algorithm to compare users' edits and check whether they are in agreement.
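The matching algorithm itself is not described; as a hedged sketch, agreement between two users' corrections of the same caption segment could be scored with a standard similarity ratio (difflib below is an illustrative stand-in, not the tool's actual algorithm):

    from difflib import SequenceMatcher

    # Hedged sketch: score agreement between two users' corrections of
    # the same caption segment. difflib is an illustrative stand-in for
    # the tool's (unspecified) matching algorithm.
    def edits_agree(edit_a: str, edit_b: str, threshold: float = 0.9) -> bool:
        ratio = SequenceMatcher(None, edit_a.lower(), edit_b.lower()).ratio()
        return ratio >= threshold

    print(edits_agree("the quick brown fox", "The quick brown fox"))    # True
    print(edits_agree("the quick brown fox", "a completely new text"))  # False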
