
    Context-aware gestures for mixed-initiative text editing UIs

    This is a pre-copyedited, author-produced PDF of an article accepted for publication in Interacting with Computers following peer review. The version of record is available online at: http://dx.doi.org/10.1093/iwc/iwu019. This work focuses on enhancing highly interactive text-editing applications with gestures. Concretely, we study Computer Assisted Transcription of Text Images (CATTI), a handwriting transcription system that follows a corrective feedback paradigm, where the user and the system collaborate efficiently to produce a high-quality text transcription. CATTI-like applications demand fast and accurate gesture recognition, for which we observed that current gesture recognizers are not adequate. In response to this need we developed MinGestures, a parametric context-aware gesture recognizer. Our contributions include a number of stroke features for disambiguating copy-mark gestures from handwritten text, plus the integration of these gestures in a CATTI application. It finally becomes possible to create highly interactive stroke-based text-editing interfaces without having to verify the user's intent on-screen. We performed a formal evaluation with 22 e-pen users and 32 mouse users using a gesture vocabulary of 10 symbols. MinGestures achieved outstanding accuracy (<1% error rate) with very high performance (<1 ms recognition time). We then integrated MinGestures in a CATTI prototype and tested the performance of the interactive handwriting system when it is driven by gestures. Our results show that using gestures in interactive handwriting applications is both advantageous and convenient when gestures are simple but context-aware. Taken together, this work suggests that text-editing interfaces not only can be easily augmented with simple gestures, but also may substantially improve user productivity. This work has been supported by the European Commission through the 7th Framework Programme (tranScriptorium: FP7-ICT-2011-9, project 600707, and CasMaCat: FP7-ICT-2011-7, project 287576). It has also been supported by the Spanish MINECO under grant TIN2012-37475-C02-01 (STraDa), and the Generalitat Valenciana under grant ISIC/2012/004 (AMIIS). Leiva, L.A.; Alabau, V.; Romero Gómez, V.; Toselli, A.H.; Vidal, E. (2015). Context-aware gestures for mixed-initiative text editing UIs. Interacting with Computers, 27(6), 675-696. https://doi.org/10.1093/iwc/iwu019
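
    As a purely illustrative sketch (not code from the paper; the feature choices and thresholds are hypothetical), the snippet below shows how a couple of simple stroke features could separate short, nearly straight copy-mark gestures from handwritten text, in the spirit of the approach summarised above. In the paper's setting the decision is additionally informed by the interaction context, which this sketch does not model.

        # Illustrative only: label a pen stroke as a command gesture or as
        # handwriting from two simple stroke features. Thresholds are made up.
        import math

        def stroke_features(points):
            """points: list of (x, y) samples of a single stroke."""
            xs, ys = zip(*points)
            diagonal = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
            path_len = sum(math.hypot(x2 - x1, y2 - y1)
                           for (x1, y1), (x2, y2) in zip(points, points[1:]))
            straightness = diagonal / path_len if path_len else 0.0
            return diagonal, straightness

        def is_command_gesture(points, max_diag=80.0, min_straightness=0.9):
            """Treat short, nearly straight strokes as copy-mark style gestures."""
            diagonal, straightness = stroke_features(points)
            return diagonal <= max_diag and straightness >= min_straightness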

    A review of assistive technology and writing skills for students with physical and educational disabilities

    In recent years effective instruction in reading for learners with physical and educational disabilities has received great attention in the schools. However, instruction in the corollary skill of writing has received considerably less emphasis. This review paper notes that through the use of assistive technology, students with a variety of physical and educational disabilities can learn to effectively (a) plan and organize their writing, (b) draft and transcribe their work, and (c) edit and revise their narrative and expository writing. With teachers increasingly being held accountable for the development of literacy skills in all students, including those students with physical and educational disabilities, schools are paying substantial and growing attention to reading. The expressive side of the literacy coin, writing skills, is arguably equally significant and worthy of instructional emphasis. However, there are growing indicators that writing has not received enough attention in the national educational reform debate (National Commission on Writing, 2003, 2006; National Writing Project & Nagin, 2006). Along with reading comprehension, writing skill is a powerful predictor of academic success (Graham & Perin, 2007), and is an effective means of developing higher-order thinking skills (National Writing Project & Nagin, 2006). Writing helps learners make sense of the world (e.g., “Letters from Ground Zero,” cited in National Commission on Writing, 2006). Yet to date the teaching of writing skills to students with disabilities, including physical disabilities, has not received the level of curricular emphasis that teaching reading skills has (Graham & Perin, 2007). For students with physical and educational disabilities, stronger writing skills offer a variety of benefits. These include (a) more successful academic inclusion outcomes, (b) transfer of improved literacy skills to reading, and (c) greater pass rates on high stakes academic testing. As more and more careers require greater levels of literacy skills, students with disabilities who are unable to write effectively may find themselves increasingly minimized in these adult roles. Writing is considered to be an essential “threshold skill” for hiring and promotion (National Commission on Writing, 2003), and is a basic requirement for participation in civic life and the global economy (Graham & Perin, 2007; National Commission on Writing, 2003). This paper reviews the use of a variety of assistive technologies in enhancing writing skills in students with physical and educational disabilities.

    Contributions to Pen & Touch Human-Computer Interaction

    Computers are now present everywhere, but their potential is not fully exploited due to a lack of acceptance. This thesis adopts the pen computer paradigm, whose main idea is to replace all input devices with a pen and/or the fingers, on the premise that the rejection stems from unfriendly interaction devices that should be replaced by something easier for the user. This paradigm, which was proposed several years ago, has only recently been fully implemented in products such as smartphones. But computers are effectively illiterate: they do not understand gestures or handwriting, so a recognition step is required to "translate" the meaning of these interactions into computer-understandable language. For this input modality to be actually usable, its recognition accuracy must be high enough. To realistically consider the broader deployment of pen computing, it is necessary to improve the accuracy of handwriting and gesture recognizers. This thesis is devoted to studying different approaches to improve the recognition accuracy of those systems. First, we investigate how to take advantage of interaction-derived information to improve the accuracy of the recognizer. In particular, we focus on interactive transcription of text images. Here the system initially proposes an automatic transcript. If necessary, the user can make some corrections, implicitly validating a correct part of the transcript. The system must then take this validated prefix into account to suggest a suitable new hypothesis. Given that in such an application the user is constantly interacting with the system, it makes sense to adapt this interactive application for use on a pen computer. User corrections are provided by means of pen strokes, and therefore it is necessary to introduce a recognizer in charge of decoding this kind of nondeterministic user feedback. However, this recognizer's performance can be boosted by taking advantage of interaction-derived information, such as the user-validated prefix. Then, this thesis focuses on the study of human movements, in particular hand movements, from a generation point of view, by tapping into the kinematic theory of rapid human movements and the Sigma-Lognormal model. Understanding how the human body generates movements, and particularly the origin of human movement variability, is important in the development of a recognition system. The contribution of this thesis to this topic is important, since a new technique (which improves on previous results) for extracting the Sigma-Lognormal model parameters is presented. Closely related to the previous work, this thesis studies the benefits of using synthetic data for training. The easiest way to train a recognizer is to provide "infinite" data representing all possible variations. In general, the more training data, the smaller the error. But usually it is not possible to increase the size of a training set indefinitely: recruiting participants, collecting data, labeling it, and so on, can be time-consuming and expensive. One way to overcome this problem is to create and use synthetically generated data that looks like human-produced data. We study how to create these synthetic data and explore different approaches to using them, both for handwriting and gesture recognition. The different contributions of this thesis have obtained good results, leading to several publications in international conferences and journals.
Finally, three applications related to the work of this thesis are presented. First, we created Escritorie, a digital desk prototype based on the pen computer paradigm for transcribing handwritten text images. Second, we developed "Gestures à Go Go", a web application for bootstrapping gestures. Finally, we studied another interactive application under the pen computer paradigm, examining how translation reviewing can be done more ergonomically using a pen.
Martín-Albo Simón, D. (2016). Contributions to Pen & Touch Human-Computer Interaction [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68482
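
    For readers unfamiliar with the Sigma-Lognormal model mentioned above, the sketch below shows the standard lognormal speed profile of a single stroke and a movement's speed approximated as the sum of several overlapping strokes (a simplification of the full vector formulation). The parameter values are invented for illustration; this is not the parameter-extraction technique contributed by the thesis.

        # Illustrative sketch of the Sigma-Lognormal speed profile from the
        # kinematic theory of rapid human movements. Parameters are made up.
        import math

        def lognormal_speed(t, t0, mu, sigma, D):
            """Speed contribution of one stroke at time t (zero for t <= t0)."""
            if t <= t0:
                return 0.0
            x = math.log(t - t0)
            return (D / (sigma * math.sqrt(2 * math.pi) * (t - t0))
                    * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)))

        def movement_speed(t, strokes):
            """Total speed as the sum of overlapping lognormal strokes."""
            return sum(lognormal_speed(t, *s) for s in strokes)

        # Two hypothetical strokes, each described by (t0, mu, sigma, D).
        strokes = [(0.00, -1.6, 0.25, 5.0), (0.12, -1.4, 0.30, 3.5)]
        profile = [movement_speed(0.01 * k, strokes) for k in range(100)]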

    Interactive handwriting recognition with limited user effort

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10032-013-0204-5. Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. Although post-editing the output of automatic handwritten text recognition is feasible, it is not clearly better than simply ignoring it and transcribing the document from scratch. A more effective approach is an interactive one, in which the system is guided by the user and the user is assisted by the system, so that the transcription task is completed as efficiently as possible. Nevertheless, in some applications the user effort available to transcribe documents is limited, and full supervision of the system output is not realistic. To circumvent these problems, we propose a novel interactive approach that efficiently employs user effort to transcribe a document by improving three different aspects. Firstly, the system employs a limited amount of effort to supervise only those recognised words that are likely to be incorrect, so user effort is focused on the words for which the system is not confident enough. Secondly, it refines the initial transcription provided to the user by recomputing it constrained to the user's supervisions; in this way, incorrect words in unsupervised parts can be automatically amended without user supervision. Finally, it improves the underlying system models by retraining the system from partially supervised transcriptions. To support these claims, empirical results are presented on two real databases, showing that the proposed approach can notably reduce user effort in the transcription of handwritten text in (old) documents. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No 287755 (transLectures). It was also supported by the Spanish Government (MICINN, MITyC, "Plan E", under grants MIPRCV "Consolider Ingenio 2010", MITTRAL (TIN2009-14633-C03-01), erudito.com (TSI-020110-2009-439), iTrans2 (TIN2009-14511), and FPU (AP2007-02867)), and the Generalitat Valenciana (grants Prometeo/2009/014 and GV/2010/067). Serrano Martinez Santos, N.; Giménez Pastor, A.; Civera Saiz, J.; Sanchis Navarro, J.A.; Juan Císcar, A. (2014). Interactive handwriting recognition with limited user effort. International Journal on Document Analysis and Recognition, 17(1), 47-59. https://doi.org/10.1007/s10032-013-0204-5
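
    A minimal sketch of the confidence-thresholding idea described above (illustrative only: the per-word confidence scores are assumed to come from the recogniser, and the threshold would in practice be tuned to the amount of user effort available):

        # Illustrative only: route just the low-confidence recognised words to
        # the user for supervision; the rest are accepted automatically.
        def select_words_to_supervise(confidences, threshold=0.6):
            """Return indices of recognised words the user is asked to check."""
            return [i for i, c in enumerate(confidences) if c < threshold]

        words = ["the", "quick", "brovvn", "fox"]
        confidences = [0.97, 0.91, 0.42, 0.88]
        to_check = select_words_to_supervise(confidences)   # -> [2]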

    Does Speech-To-Text Assistive Technology Improve the Written Expression of Students with Traumatic Brain Injury?

    Traumatic Brain Injury outcomes vary by individual due to age at the onset of injury, the location of the injury, and the degree to which the deficits appear to be pronounced, among other factors. As an acquired injury to the brain, the neurophysiological consequences are not homogeneous; they are as varied as the individuals who experience them. Persistent impairments in the executive functions of attention, initiation, planning, organizing, and memory are likely to be present in children with moderate to severe TBIs. Issues with sensory and motor skills, language, auditory or visual sensation changes, and variations in emotional behavior may also be present. Germane to this study, motor dysfunction is a common long-term sequela of TBI that manifests in academic difficulties. Borrowing from the learning disability literature, children with motor dysfunction are likely to have transcription deficits, or deficits related to the fine-motor production of written language. This study aimed to compare the effects of handwriting with an assistive technology accommodation on the writing performance of three middle school students with TBIs and writing difficulties. The study utilized an alternating treatments design (ATD), comparing handwritten responses to story prompts with responses recorded using speech-to-text AT. Speech-to-text technology, such as Dragon NaturallySpeaking, converts spoken language into a print format on a computer screen with a high degree of accuracy. In theory, because less effort is spent on transcription, there is a reduction in cognitive load, enabling more time to be spent on generation skills, such as idea development, selecting more complex words that might otherwise be difficult to spell, and grammar. Overall, all three participants showed marked improvement with the application of speech-to-text AT. The results indicate a positive pattern for the AT as an accommodation with these children, who have had mild-to-moderate TBIs, as compared to their written output without the AT accommodation. The findings of this study are robust: through visual analysis of the results, it is evident that the speech-to-text dictation condition was far superior to the handwriting (HW) condition, with an effect size that ranged from +3.4 to +8.8 across participants, indicating a large treatment effect. Perhaps more impressive was the 100 percent non-overlap of data between the two conditions across participants and dependent variables. The application of speech-to-text AT resulted in significantly improved performance across writing indicators in these students with a history of TBIs. Speech-to-text AT may prove to be an excellent accommodation for children with TBI and fine motor skill deficits. The conclusions drawn from the results of this study indicate that the speech-to-text AT was more effective than the handwriting condition for all three participants. By providing this AT, these students each improved in the quality, construction, and duration of their written expression, as evidenced by the significant gains in TWW, WSC, and CWS.
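
    As a purely illustrative aside on the single-case metrics reported above (the scores below are invented, not the study's data), a standardized mean difference and the percentage of non-overlapping data between two alternating conditions can be computed as follows:

        # Illustrative only: effect-size style summaries for an alternating
        # treatments comparison. The session scores below are invented.
        from statistics import mean, stdev

        def standardized_mean_difference(treatment, baseline):
            """Difference in condition means divided by the baseline SD."""
            return (mean(treatment) - mean(baseline)) / stdev(baseline)

        def percent_nonoverlap(treatment, baseline):
            """Share of treatment points exceeding the best baseline point."""
            best_baseline = max(baseline)
            return 100.0 * sum(t > best_baseline for t in treatment) / len(treatment)

        handwriting = [12, 15, 11, 14]        # e.g. total words written per session
        speech_to_text = [34, 40, 37, 45]
        d = standardized_mean_difference(speech_to_text, handwriting)
        pnd = percent_nonoverlap(speech_to_text, handwriting)   # 100.0 here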

    Using Speech Recognition Software to Increase Writing Fluency for Individuals with Physical Disabilities

    Writing is an important skill that is necessary throughout school and life. Many students with physical disabilities, however, have difficulty with writing skills due to disability-specific factors, such as motor coordination problems. Due to the difficulties these individuals have with writing, assistive technology is often utilized. One piece of assistive technology, speech recognition software, may help remove the motor demand of writing and help students become more fluent writers. Past research on the use of speech recognition software, however, reveals little information regarding its impact on individuals with physical disabilities. Therefore, this study involved students of high school age with physical disabilities that affected hand use. Using an alternating treatments design to compare the use of word processing with the use of speech recognition software, this study analyzed first-draft writing samples in the areas of fluency, accuracy, type of word errors, recall of intended meaning, and length. Data on fluency, calculated in words correct per minute (wcpm) indicated that all participants wrote much faster with speech recognition compared to word processing. However, accuracy, calculated as percent correct, was much lower when participants used speech recognition compared to word processing. Word errors and recall of intended meaning were coded based on type and varied across participants. In terms of length, all participants wrote longer drafts when using speech recognition software, primarily because their fluency was higher, and they were able, therefore, to write more words. Although the results of this study indicated that participants wrote more fluently with speech recognition, because their accuracy was low, it is difficult to determine whether or not speech recognition is a viable solution for all individuals with physical disabilities. Therefore, additional research is needed that takes into consideration the editing and error correction time when using speech recognition software
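
    To make the fluency and accuracy measures named above concrete (the numbers are invented, not the study's data): words correct per minute divides the count of correctly produced words by the writing time in minutes, and accuracy is the percentage of total words that are correct.

        # Illustrative arithmetic for the measures used above; values are invented.
        def words_correct_per_minute(words_correct, minutes):
            return words_correct / minutes

        def percent_correct(words_correct, words_total):
            return 100.0 * words_correct / words_total

        # Hypothetical first-draft sample: 120 words in 5 minutes, 96 of them correct.
        wcpm = words_correct_per_minute(96, 5)      # 19.2 words correct per minute
        accuracy = percent_correct(96, 120)         # 80.0 percent correct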

    Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures

    Video lectures are fast becoming an everyday educational resource in higher education. They are being incorporated into existing university curricula around the world, while also emerging as a key component of the open education movement. In 2007, the Universitat Politècnica de València (UPV) implemented its poliMedia lecture capture system for the creation and publication of quality educational video content, and it now has a collection of over 10,000 video objects. In 2011, it embarked on the EU-subsidised transLectures project to add automatic subtitles to these videos in both Spanish and other languages. By doing so, it allows access to their educational content by non-native speakers and the deaf and hard-of-hearing, as well as enabling advanced repository management functions. In this paper, following a short introduction to poliMedia, transLectures and Docència en Xarxa (Teaching Online), the UPV's action plan to boost the use of digital resources at the university, we discuss the three-stage evaluation process carried out with the collaboration of UPV lecturers to find the best interaction protocol for the task of post-editing automatic subtitles. Valor Miró, J.D.; Spencer, R.N.; Pérez González de Martos, A.M.; Garcés Díaz-Munío, G.V.; Turró Ribalta, C.; Civera Saiz, J.; Juan Císcar, A. (2014). Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures. Open Learning: The Journal of Open and Distance Learning, 29(1), 72-85. https://doi.org/10.1080/02680513.2014.909722
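
    The abstract does not name a specific metric, but post-editing effort on automatic subtitles is commonly summarised with word error rate: the word-level edit distance between the automatic transcript and the corrected reference, divided by the number of reference words. The sketch below is illustrative only and is not taken from the transLectures evaluation.

        # Illustrative word error rate via word-level Levenshtein distance.
        def word_error_rate(reference, hypothesis):
            r, h = reference.split(), hypothesis.split()
            # dp[i][j] = edit distance between r[:i] and h[:j]
            dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
            for i in range(len(r) + 1):
                dp[i][0] = i
            for j in range(len(h) + 1):
                dp[0][j] = j
            for i in range(1, len(r) + 1):
                for j in range(1, len(h) + 1):
                    cost = 0 if r[i - 1] == h[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                                   dp[i][j - 1] + 1,        # insertion
                                   dp[i - 1][j - 1] + cost) # substitution
            return dp[len(r)][len(h)] / len(r)

        reference = "we discuss the three stage evaluation process"
        hypothesis = "we discuss three stages evaluation process"
        wer = word_error_rate(reference, hypothesis)   # 2 edits / 7 words ~= 0.29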

    Expanding the Direct and Indirect Effects Model of Writing (DIEW): Reading–writing relations, and dynamic relations as a function of measurement/dimensions of written composition

    Within the context of the Direct and Indirect Effects Model of Writing (Kim & Park, 2019), we examined a dynamic relations hypothesis, which contends that the relations of component skills, including reading comprehension, to written composition vary as a function of dimensions of written composition. Specifically, we investigated (a) whether higher-order cognitive skills (i.e., inference, perspective taking, and monitoring) are differentially related to three dimensions of written composition (writing quality, writing productivity, and correctness in writing); (b) whether reading comprehension is differentially related to the three dimensions of written composition after accounting for oral language, cognition, and transcription skills, and whether reading comprehension mediates the relations of discourse oral language and lexical literacy to the three dimensions of written composition; and (c) whether total effects of oral language, cognition, transcription, and reading comprehension vary for the three dimensions of written composition. Structural equation model results from 350 English-speaking second graders showed that higher-order cognitive skills were differentially related to the three dimensions of written composition. Reading comprehension was related only to writing quality, not to writing productivity or correctness in writing, and reading comprehension differentially mediated the relations of discourse oral language and lexical literacy to writing quality. Total effects of language, cognition, transcription, and reading comprehension varied largely across the three dimensions of written composition. These results support the dynamic relations hypothesis, the role of reading in writing, and the importance of accounting for dimensions of written composition in a theoretical model of writing.
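
    As a hedged illustration of the mediation logic described above (this is not the authors' structural equation model; the variable names, the simulated data, and the use of ordinary least squares are assumptions made for the sketch), the indirect effect of an oral-language predictor on writing quality through reading comprehension can be approximated as the product of two regression coefficients:

        # Illustrative mediation sketch, not the paper's SEM: indirect effect of
        # an oral-language score on writing quality through reading comprehension,
        # estimated as the product a*b of two least-squares coefficients.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 350
        oral_language = rng.normal(size=n)
        reading_comprehension = 0.5 * oral_language + rng.normal(scale=0.8, size=n)
        writing_quality = (0.4 * reading_comprehension + 0.2 * oral_language
                           + rng.normal(scale=0.9, size=n))

        def ols_slope(y, *predictors):
            """Slope of the first predictor in an OLS fit with an intercept."""
            X = np.column_stack([np.ones(len(y)), *predictors])
            coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coefs[1]

        a = ols_slope(reading_comprehension, oral_language)                   # path a
        b = ols_slope(writing_quality, reading_comprehension, oral_language)  # path b
        indirect_effect = a * b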