
    A review of technology-enhanced Chinese character teaching and learning in a digital context

    The acquisition of Chinese characters has been widely acknowledged as challenging for learners of Chinese as a foreign language (CFL) due to their unique logographic nature and the time and effort their mastery requires. However, recent advances in instructional technologies show promise in facilitating the teaching and learning of Chinese characters. This paper examines studies of technology-enhanced character teaching and learning (TECTL) through a systematic review of relevant publications produced between 2010 and 2021. The synthesized findings shed light on research undertaken in the TECTL field, identifying a focus on disassembling and re-assembling character components and on the associations among orthography, semantics, and phonology. In addition, learners’ perceptions of technology use and the benefits of various types of technological tools are discussed in detail. Implications for TECTL are also put forward for future pedagogical practice and exploration.

    Multi-modal post-editing of machine translation

    As MT quality continues to improve, more and more translators are switching from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome, while hard-to-spot errors make PE a cognitively demanding activity. Our contribution is three-fold. First, we explore whether interaction modalities other than mouse and keyboard could support PE well by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when high cognitive load (CL) is detected during PE. We therefore developed a sensor framework using a wide range of physiological and behavioral data to estimate perceived CL and tested it in three studies, showing that multimodal eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors.
    Funded by the Deutsche Forschungsgemeinschaft (DFG), Projekt MMP
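The cognition-aware idea above — combining several physiological signals into one load estimate that can trigger extra PE support — can be sketched as follows. This is a minimal illustration, not the thesis's actual framework: the feature names, weights, and threshold are invented for the example.

```python
import math

def estimate_cognitive_load(pupil_z, heart_rate_z, skin_conductance_z):
    """Combine z-scored eye, heart, and skin features into a load score in [0, 1].

    The weights are hand-picked for illustration; a real system would learn
    them from annotated post-editing sessions.
    """
    s = 0.5 * pupil_z + 0.3 * heart_rate_z + 0.2 * skin_conductance_z
    return 1.0 / (1.0 + math.exp(-s))  # squash to [0, 1]

def high_load(score, threshold=0.7):
    """Flag segments where additional translation support could be offered."""
    return score >= threshold
```

With all features at their baseline (z = 0), the score sits at 0.5; elevated readings across modalities push it toward 1 and, past the threshold, would trigger extra support in the editing environment.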

    Description and application of the correlation between gaze and hand for the different hand events occurring during interaction with tablets

    People’s activities naturally involve the coordination of gaze and hand. Research in Human-Computer Interaction (HCI) endeavours to enable users to exploit this multimodality for enhanced interaction. With the abundance of touch screen devices, direct manipulation of an interface has become a dominant interaction technique. Although touch-enabled devices are prolific in both public and private spaces, interactions with these devices do not fully exploit the correlation between gaze and hand. Touch-enabled devices do not employ the richness of the continuous manual activity above their display surface, and a lot of information expressed by users through their hand movements is ignored. This thesis investigates the correlation between gaze and hand during natural interaction with touch-enabled devices to address these issues. To do so, we set three objectives. Firstly, we seek to describe the correlation between gaze and hand in order to understand how they operate together: what is the spatial and temporal relationship between these modalities when users interact with touch-enabled devices? Secondly, we want to know the role of some of the inherent factors brought by the interaction with touch-enabled devices on the correlation between gaze and hand, because identifying what modulates the correlation is crucial to designing more efficient applications: what are the impacts of individual differences, task characteristics, and the features of the on-screen targets? Thirdly, as we want to see whether additional information about the user can be extracted from the correlation between gaze and hand, we investigate the latter for the detection of users’ cognitive state while they interact with touch-enabled devices: can the correlation reveal the users’ hesitation? To meet these objectives, we devised two data collections for gaze and hand. The first data collection covers manual interaction on-screen.
    The second data collection focuses instead on manual interaction in-the-air. We dissect the correlation between gaze and hand using three common hand events users perform while interacting with touch-enabled devices: taps, stationary hand events, and the motion between taps and stationary hand events. We use a tablet as the touch-enabled device because of its medium size and the ease of integrating both eye and hand tracking sensors. We study the correlation between gaze and hand for tap events by collecting gaze estimation data and taps on a tablet in the context of Internet-related tasks, representative of typical activities executed on tablets. The correlation is described in the spatial and temporal dimensions. Individual differences and the effects of task nature and target type are also investigated. To study the correlation between gaze and hand when the hand is stationary, we conducted a data collection in the context of a Memory Game, chosen to generate enough cognitive load during play while requiring the hand to leave the tablet’s surface. We introduce and evaluate three detection algorithms, inspired by eye tracking, based on the analogy between gaze and hand patterns. Afterwards, spatial comparisons between gaze and hand are analysed to describe the correlation. We study the effects of task difficulty and how participants’ hesitation influences the correlation. Since there is no certain way of knowing when a participant hesitates, we approximate hesitation with the failure to match a pair of already-seen tiles. We study the correlation between gaze and hand during hand motion between taps and stationary hand events using the same data collection context as the case mentioned above. We first align gaze and hand data in time and report the correlation coefficients along both the X and Y axes.
    After considering the general case, we examine the impact of the different factors implicated in the context: participants, task difficulty, and the duration and type of the hand motion. Our results show that the correlation between gaze and hand, throughout the interaction, is stronger in the horizontal dimension of the tablet than in its vertical dimension, and that it varies widely across users, especially spatially. We also confirm that the eyes lead the hand during target acquisition. Moreover, we find that the correlation between gaze and hand when the hand is in the air above the tablet’s surface depends on where users look on the tablet. We also show that the correlation between gaze and hand during stationary hand events can indicate the users’ indecision, and that while the hand is moving, the correlation depends on different factors, such as the difficulty of the task performed on the tablet and the nature of the event before/after the motion.
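The per-axis analysis described above — time-align the gaze and hand samples, then report a correlation coefficient for each screen axis — can be sketched with a plain Pearson correlation. The toy data below is invented for illustration; it mimics the thesis's finding that the hand tracks gaze more tightly horizontally than vertically.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy time-aligned samples (same timestamps): hand roughly follows gaze
# on the horizontal axis, but not on the vertical one.
gaze_x = [10, 20, 35, 50, 70]
hand_x = [12, 18, 33, 52, 68]
gaze_y = [5, 6, 5, 7, 6]
hand_y = [40, 10, 35, 15, 30]

r_x = pearson(gaze_x, hand_x)  # near 1: strong horizontal coupling
r_y = pearson(gaze_y, hand_y)  # weak/negative: little vertical coupling
```

In practice the two streams arrive at different sampling rates, so the alignment step (interpolating one signal onto the other's timestamps) matters as much as the coefficient itself.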

    Spartan Daily, May 11, 1979

    Volume 72, Issue 65
    https://scholarworks.sjsu.edu/spartandaily/6493/thumbnail.jp

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, and signature verification, as well as other miscellaneous systems covering biometric management policies, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Interactive neural machine translation

    This is the author’s version of a work that was accepted for publication in Computer Speech & Language. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Speech & Language 00 (2016) 1-20. DOI: 10.1016/j.csl.2016.12.003.
    Despite the promising results achieved in recent years by statistical machine translation, and more precisely by neural machine translation systems, this technology is still not error-free. The outputs of a machine translation system must be corrected by a human agent in a post-editing phase. Interactive protocols foster human-computer collaboration in order to increase productivity. In this work, we integrate neural machine translation into the interactive machine translation framework. Moreover, we propose new interactivity protocols in order to provide the user with an enhanced experience and higher productivity. Results obtained over a simulated benchmark show that interactive neural systems can significantly improve on the classical phrase-based approach in an interactive-predictive machine translation scenario. © 2016 Elsevier Ltd. All rights reserved.
    The authors wish to thank the anonymous reviewers for their careful reading and in-depth criticisms and suggestions. This work was partially funded by the project ALMAMATER (PrometeoII/2014/030). We also acknowledge NVIDIA for the donation of the GPU used in this work.
    Peris Abril, Á.; Domingo-Ballester, M.; Casacuberta Nolla, F. (2017). Interactive neural machine translation. Computer Speech and Language, 1-20. https://doi.org/10.1016/j.csl.2016.12.003
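The interactive-predictive protocol described above follows a prefix-based loop: the system proposes a translation, the user validates a prefix and corrects the first wrong word, and the system regenerates a suffix compatible with that prefix. The sketch below illustrates only the protocol's control flow; the fixed candidate list stands in for a real neural decoder, and all strings are invented.

```python
# Stand-in for an NMT system's n-best hypotheses for one source sentence.
CANDIDATES = [
    "the cat sits on the mat",
    "the cat sat on the mat",
    "a cat sat on a mat",
]

def complete(prefix_words):
    """Return the first hypothesis consistent with the user-validated prefix.

    A real interactive NMT system would instead constrain beam search to
    continue from this prefix.
    """
    for cand in CANDIDATES:
        if cand.split()[: len(prefix_words)] == prefix_words:
            return cand
    return " ".join(prefix_words)  # no compatible hypothesis: keep the prefix

hypothesis = CANDIDATES[0].split()   # system proposal: "the cat sits ..."
prefix = hypothesis[:2] + ["sat"]    # user keeps "the cat", types "sat"
revised = complete(prefix)           # system regenerates the suffix
```

Each user keystroke narrows the search, so a good system amortizes corrections: one fixed word should repair the rest of the sentence without further typing.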

    Relationship-building through embodied feedback: Teacher-student alignment in writing conferences

    Over the last two decades, an impressive amount of work has been done on the interaction that takes place during writing conferences (Ewert, 2009). However, most previous studies focused on the instructional aspects of conference discourse without considering its affective components. Yet conferences are by no means emotionally neutral (Witt & Kerssen-Griep, 2011), as they involve evaluation of student work, correction, directions for improvement, and even criticism—that is, they involve potentially face-threatening acts. Therefore, it is important for teachers to know how to conference with students in non-threatening and affiliative ways. The present study examines 1) the interactional resources, including talk and embodied action (e.g., gaze, facial expression, gesture, body position), that one experienced writing instructor used in writing conferences to respond to student writers and their writing in affiliative ways, and 2) the interactional resources that the teacher used to repair disaffiliative actions—either her own or those of the students—in conference interaction. The data for the study comprise 14 video recordings of conference interaction between one instructor and two students, collected over a 16-week semester in an introductory composition course for international students at a large U.S. university. Data were analyzed using methods from conversation analysis (Jefferson, 1988; Sacks, Schegloff, & Jefferson, 1974; Schegloff, 2007; Schegloff & Sacks, 1973) and multimodal interaction analysis (Nishino & Atkinson, 2015; Norris, 2004, 2013). The conceptual framework adopted in this study is based on the notions of embodied interaction (Streeck, Goodwin, & LeBaron, 2011a, 2011b), embodied participation frameworks (Goodwin, 2000a), and alignment (Atkinson, Churchill, Nishino, & Okada, 2007).
    Findings indicate that the instructor was responsive to the potential for face-threatening acts during conference interaction, and that she effectively employed various interactional resources not only to respond to student writing in affiliative and non-threatening ways, but also to repair disruptions in alignment caused by disaffiliative actions of either participant. This study demonstrates the value of teachers’ embodied actions not only as tools that facilitate instruction but also as resources for maintaining a positive atmosphere in writing conferences. The findings contribute to the existing body of research on writing conferences, feedback, embodied practices in teacher-student interaction, and teacher-student relationships and rapport. The study also has implications for general classroom pedagogy, second language teaching, and second language writing instruction.