5 research outputs found

    A Web-Based Demo to Interactive Multimodal Transcription of Historic Text Images

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-04346-8_58
    [EN] Paleography experts spend many hours transcribing historic documents, and state-of-the-art handwritten text recognition systems are not suitable for performing this task automatically. In this paper we present the modifications on a previously developed interactive framework for the transcription of handwritten text. Rather than full automation, this system aims at assisting the user with the recognition-transcription process.
    This work has been supported by the EC (FEDER), the Spanish MEC under grant TIN2006-15694-C02-01 and the research programme Consolider Ingenio 2010 MIPRCV (CSD2007-00018), and by the UPV (FPI fellowship 2006-04).
    Romero Gómez, V.; Leiva Torres, L.A.; Alabau Gonzalvo, V.; Toselli, A.H.; Vidal Ruiz, E. (2009). A Web-Based Demo to Interactive Multimodal Transcription of Historic Text Images. In Research and Advanced Technology for Digital Libraries: 13th European Conference, ECDL 2009, Corfu, Greece, September 27 - October 2, 2009. Proceedings. Springer Verlag (Germany). 459-460. https://doi.org/10.1007/978-3-642-04346-8_58

    Multimodal Interactive Parsing

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38628-2_57
    Probabilistic parsing is a fundamental problem in Computational Linguistics, whose goal is to obtain the syntactic structure associated with a sentence according to a probabilistic grammatical model. Recently, an interactive framework for probabilistic parsing has been introduced, in which the user and the system cooperate to generate error-free parse trees. In an early prototype developed according to this interactive parsing technology, user feedback was provided by means of mouse actions and keyboard strokes. Here we augment the interaction style with support for (non-deterministic) natural handwriting recognition, and provide confidence measures as a visual aid to ease the correction process. Handwriting input seems to be a modality especially suitable for parsing, since the vocabulary involved in the recognition of syntactic labels is fairly limited, so intuitively errors should be few. However, errors may increase as handwriting quality (i.e., calligraphy) degrades. To solve this problem, we introduce a late fusion approach that leverages both on-line and off-line information, corresponding to pen strokes and contextual information from the parse trees. We demonstrate that late fusion can effectively help to disambiguate user intention and improve system accuracy.
    This research has received funding from the EC's 7th Framework Programme (FP7/2007-13) under grant agreement No. 287576 (CasMaCat); from the Spanish MEC under the STraDA project (TIN2012-37475-C02-01) and the MITTRAL project (TIN2009-14633-C03-01); from the GV under the Prometeo project; and from the Universidad del Cauca (Colombia).
    Benedí Ruiz, J.M.; Sánchez Peiró, J.A.; Leiva, L.A.; Sánchez Sáez, R.; Maca, M. (2013). Multimodal Interactive Parsing. In Pattern Recognition and Image Analysis. Springer. 484-491. https://doi.org/10.1007/978-3-642-38628-2_57
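The late fusion described above can be sketched as a log-linear combination of per-label scores from the handwriting recognizer and from the parse-tree context. This is a minimal illustration under assumed names, scores, and interpolation weight; it is not the paper's exact model.

```python
# Minimal late-fusion sketch: combine on-line handwriting evidence with
# off-line contextual evidence from the parse trees. The labels, scores,
# and the weight alpha are illustrative assumptions.
import math

def late_fusion(hw_scores, context_scores, alpha=0.7):
    """Rank candidate syntactic labels by a weighted log-linear mix of
    handwriting scores and parse-tree context scores."""
    labels = set(hw_scores) | set(context_scores)
    eps = 1e-9  # floor so unseen labels are not ruled out entirely
    fused = {
        lab: alpha * math.log(hw_scores.get(lab, eps))
             + (1 - alpha) * math.log(context_scores.get(lab, eps))
        for lab in labels
    }
    return max(fused, key=fused.get)

# Sloppy calligraphy leaves "NP" and "PP" nearly tied for the recognizer...
hw = {"NP": 0.48, "PP": 0.47, "VP": 0.05}
# ...but the surrounding parse tree strongly favours "PP".
ctx = {"NP": 0.10, "PP": 0.85, "VP": 0.05}
print(late_fusion(hw, ctx))  # -> PP
```

With `alpha=1.0` (handwriting only) the same input resolves to "NP", which is how contextual information disambiguates degraded strokes.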

    Diverse Contributions to Implicit Human-Computer Interaction

    When people interact with computers, a great deal of information is conveyed unintentionally. By studying these implicit interactions it is possible to understand which characteristics of the user interface are beneficial (or not), thereby deriving implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its utility. Moreover, such data remove the cost of having to interrupt the user to explicitly submit information about a topic that, in principle, need not be related to their intention in using the system. On the other hand, implicit interactions do not always provide clear and concrete data, so special attention must be paid to how this source of information is managed. The purpose of this research is twofold: 1) to apply a new vision to both the design and the development of applications that can react accordingly to the user's implicit interactions, and 2) to provide a series of methodologies for the evaluation of such interactive systems. Five scenarios illustrate the feasibility and suitability of the thesis framework. Empirical results with real users show that leveraging implicit interaction is both an adequate and a convenient means of improving interactive systems in multiple ways.
    Leiva Torres, L.A. (2012). Diverse Contributions to Implicit Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17803

    Multimodal output combination for transcribing historical handwritten documents

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-23192-1_21
    Transcription of digitised historical documents is an interesting task in the document analysis area. This transcription can be achieved by using Handwritten Text Recognition (HTR) on digitised pages or by using Automatic Speech Recognition (ASR) on the dictation of their contents. Moreover, another option is using both systems in a multimodal combination to obtain a draft transcription, given that combining the outputs of different recognition systems will generally improve recognition accuracy. In this work, we present a new combination method based on Confusion Networks. We check its effectiveness in transcribing a Spanish historical book. Results on both unimodal combination, with different optical (for HTR) and acoustic (for ASR) models, and multimodal combination show a relative reduction in Word and Character Error Rate of 14.3% and 16.6%, respectively, over the HTR baseline.
    Work partially supported by the European Union 7th FP, under grant 600707 (tranScriptorium), and by the Spanish MEC under projects STraDA (TIN2012-37475-C02-01), Active2Trans (TIN2012-31723), and SmartWays (RTC-2014-1466-4).
    Granell Romero, E.; Martínez-Hinarejos, C. (2015). Multimodal output combination for transcribing historical handwritten documents. In Computer Analysis of Images and Patterns. Springer. 246-260. https://doi.org/10.1007/978-3-319-23192-1_21
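The idea of combining HTR and ASR outputs can be sketched as word-level voting over already-aligned hypothesis slots, in the spirit of confusion-network combination. The alignment, words, and scores below are illustrative assumptions; the paper's actual method builds proper confusion networks from recognition lattices.

```python
# Toy sketch of multimodal output combination: each aligned slot holds
# competing words with posterior-like scores from each recognizer, and the
# combined transcription picks the highest weighted sum per slot.
# Words, scores, and weights are illustrative, not the paper's data.

def combine_slots(htr_slot, asr_slot, w_htr=0.5, w_asr=0.5):
    """Merge one aligned slot of {word: score} dicts from two recognizers."""
    words = set(htr_slot) | set(asr_slot)
    score = lambda w: w_htr * htr_slot.get(w, 0.0) + w_asr * asr_slot.get(w, 0.0)
    return max(words, key=score)

def combine(htr_net, asr_net):
    return [combine_slots(h, a) for h, a in zip(htr_net, asr_net)]

# The optical model misreads a word the acoustic model (dictation) gets right.
htr = [{"quando": 0.6, "cuando": 0.4}, {"el": 0.9, "al": 0.1}]
asr = [{"cuando": 0.8, "quando": 0.2}, {"el": 0.7, "al": 0.3}]
print(combine(htr, asr))  # -> ['cuando', 'el']
```

This shows why combination helps: a word that one modality ranks second can still win overall when the other modality is confident about it.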

    Context-aware gestures for mixed-initiative text editing UIs

    This is a pre-copyedited, author-produced PDF of an article accepted for publication in Interacting with Computers following peer review. The version of record is available online at: http://dx.doi.org/10.1093/iwc/iwu019
    [EN] This work focuses on enhancing highly interactive text-editing applications with gestures. Concretely, we study Computer Assisted Transcription of Text Images (CATTI), a handwriting transcription system that follows a corrective feedback paradigm, in which the user and the system collaborate efficiently to produce a high-quality text transcription. CATTI-like applications demand fast and accurate gesture recognition, for which we observed that current gesture recognizers are not adequate. In response to this need we developed MinGestures, a parametric context-aware gesture recognizer. Our contributions include a number of stroke features for disambiguating copy-mark gestures from handwritten text, plus the integration of these gestures in a CATTI application. It thus becomes possible to create highly interactive stroke-based text-editing interfaces without having to verify the user's intent on-screen. We performed a formal evaluation with 22 e-pen users and 32 mouse users using a gesture vocabulary of 10 symbols. MinGestures achieved outstanding accuracy (<1% error rate) with very high performance (<1 ms recognition time). We then integrated MinGestures in a CATTI prototype and tested the performance of the interactive handwriting system when driven by gestures. Our results show that using gestures in interactive handwriting applications is both advantageous and convenient when gestures are simple but context-aware. Taken together, this work suggests that text-editing interfaces can not only be easily augmented with simple gestures, but may also substantially improve user productivity.
    This work has been supported by the European Commission through the 7th Framework Programme (tranScriptorium: FP7-ICT-2011-9, project 600707, and CasMaCat: FP7-ICT-2011-7, project 287576). It has also been supported by the Spanish MINECO under grant TIN2012-37475-C02-01 (STraDA), and the Generalitat Valenciana under grant ISIC/2012/004 (AMIIS).
    Leiva, L.A.; Alabau, V.; Romero Gómez, V.; Toselli, A.H.; Vidal, E. (2015). Context-aware gestures for mixed-initiative text editing UIs. Interacting with Computers. 27(6):675-696. https://doi.org/10.1093/iwc/iwu019
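The kind of stroke features used to separate copy-mark gestures from handwritten text can be illustrated with a cheap "straightness" measure: the ratio of the bounding-box diagonal to the stroke's path length. The features and the threshold below are assumptions for illustration only, not MinGestures' actual feature set.

```python
# Illustrative stroke features for gesture-vs-handwriting disambiguation:
# near-straight marks (e.g. a strike-through) have straightness close to 1,
# while cursive writing wiggles and scores much lower. All values here are
# hypothetical; they are not taken from the paper.
import math

def stroke_features(points):
    xs, ys = zip(*points)
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    diag = math.dist((min(xs), min(ys)), (max(xs), max(ys)))
    return {"path": path, "diag": diag,
            "straightness": diag / path if path else 1.0}

def looks_like_gesture(points, min_straightness=0.8):
    # Near-straight strokes are treated as candidate gestures; wiggly
    # strokes are left to the handwriting recognizer.
    return stroke_features(points)["straightness"] >= min_straightness

straight = [(0, 0), (10, 1), (20, 0)]        # a quick strike-through mark
wiggly = [(0, 0), (5, 8), (10, 0), (15, 8)]  # cursive-like up-and-down stroke
print(looks_like_gesture(straight), looks_like_gesture(wiggly))  # -> True False
```

A recognizer built this way stays context-aware by combining such per-stroke features with where the stroke lands relative to the text being edited.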