
    Escritoire: A Multi-touch Desk with e-Pen Input for Capture, Management and Multimodal Interactive Transcription of Handwritten Documents

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-19390-8_53

    A large number of the documents used every day are still handwritten. However, it is often desirable to transform each of these documents into a digital version for managing, archiving and sharing. Here we present Escritoire, a multi-touch desk that allows the user to capture, transcribe and work with handwritten documents. The desktop is continuously monitored using two cameras. Whenever the user makes a specific hand gesture over a paper, Escritoire takes an image of it. The capture is then automatically preprocessed, yielding an improved representation. Finally, the text image is transcribed using automatic techniques and the transcription is displayed on Escritoire.

    This work was partially supported by the Spanish MEC under FPU scholarship (AP2010-0575), STraDA research project (TIN2012-37475-C02-01) and MITTRAL research project (TIN2009-14633-C03-01); the EU's 7th Framework Programme under tranScriptorium grant agreement (FP7/2007-2013/600707).

    Martín-Albo Simón, D.; Romero Gómez, V.; Vidal Ruiz, E. (2015). Escritoire: A Multi-touch Desk with e-Pen Input for Capture, Management and Multimodal Interactive Transcription of Handwritten Documents. In Pattern Recognition and Image Analysis. Springer. 471-478. https://doi.org/10.1007/978-3-319-19390-8_53
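    The capture workflow described above is easy to picture as a control loop. The following Python sketch illustrates that flow only; every helper in it (detect_gesture, preprocess, transcribe) is a hypothetical stand-in for the corresponding Escritoire component, not the authors' code.

```python
# Minimal sketch of the Escritoire capture-to-transcription flow. Every
# helper below is a hypothetical stand-in for the component named in the
# paper (gesture detection, preprocessing, HTR), not the authors' code.

def detect_gesture(frame):
    """Stand-in for the camera-based hand-gesture detector."""
    return frame.get("gesture") == "capture"

def preprocess(image):
    """Stand-in for the automatic preprocessing that improves the capture."""
    return image

def transcribe(image):
    """Stand-in for the automatic handwritten-text recognition engine."""
    return f"transcription of {image}"

def desk_loop(frames):
    for frame in frames:             # the desktop is continuously monitored
        if detect_gesture(frame):    # a specific hand gesture triggers capture
            clean = preprocess(frame["image"])
            print(transcribe(clean)) # result displayed on the desk

desk_loop([{"image": "page-1.png"}, {"gesture": "capture", "image": "page-2.png"}])
```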

    Multimodal Interactive Transcription of Handwritten Text Images

    This thesis presents a new interactive and multimodal framework for the transcription of handwritten documents. Far from providing the complete transcription automatically, this approach aims to assist the expert in the hard task of transcribing. To date, available handwritten text recognition systems do not provide transcriptions that are acceptable to users, and human intervention is generally required to correct the transcriptions obtained. These systems have proven to be genuinely useful in restricted applications with limited vocabularies (such as the recognition of postal addresses or of numerical amounts on bank cheques), where they achieve acceptable results. However, when working with unconstrained handwritten documents (such as historical manuscripts or spontaneous text), current technology achieves only unacceptable results. The interactive scenario studied in this thesis allows for a more effective solution. In this scenario, the recognition system and the user cooperate to generate the final transcription of the text image. The system uses the text image and a previously validated part of the transcription (the prefix) to propose a possible continuation. The user then finds and corrects the next error produced by the system, thus generating a new, longer prefix, which the system uses to suggest a new hypothesis. The technology used is based on hidden Markov models and n-grams, employed here in the same way as in automatic speech recognition. Some modifications to the conventional definition of n-grams were necessary to take the user's feedback into account in this system.

    Romero Gómez, V. (2010). Multimodal Interactive Transcription of Handwritten Text Images [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8541
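    The prefix-based cooperation described in this abstract boils down to a simple loop: the system completes the validated prefix, the user corrects the first error, and the corrected text becomes the new prefix. A minimal sketch follows, where best_continuation is a hypothetical prefix-conditioned decoder and user_corrects simulates the transcriber; neither is the thesis' system.

```python
# Sketch of the interactive-predictive transcription loop described above.
# `best_continuation` is a hypothetical prefix-conditioned HTR decoder and
# `user_corrects` simulates the transcriber.

def interactive_transcription(image, best_continuation, user_corrects):
    prefix = ""                                # transcription validated so far
    while True:
        hypothesis = prefix + best_continuation(image, prefix)
        correction = user_corrects(hypothesis)
        if correction is None:                 # user accepts the hypothesis
            return hypothesis
        prefix = correction                    # new, longer validated prefix

# Simulated session: the decoder guesses wrong until the prefix constrains it.
truth = "sixteen hundred"
decoder = lambda img, p: truth[len(p):] if p else "sixteen thousand"
user = lambda hyp: None if hyp == truth else "sixteen h"
print(interactive_transcription("page.png", decoder, user))  # -> sixteen hundred
```

    Each pass through the loop strictly extends the validated prefix, so the process terminates as soon as the user accepts a hypothesis.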

    Character-Level Interactions in Multimodal Computer-Assisted Transcription of Text Images

    HTR systems do not achieve acceptable results in unconstrained applications. It is therefore convenient to use a system that allows the user to cooperate with the recognizer in the most comfortable way possible to generate a correct transcription. In this paper, multimodal interaction at character level is studied.

    Martín-Albo Simón, D. (2011). Character-level interactions in multimodal computer-assisted transcription of text images. http://hdl.handle.net/10251/11313

    Contributions to Pen & Touch Human-Computer Interaction

    [EN] Computers are now present everywhere, but their potential is not fully exploited due to a lack of acceptance. In this thesis the pen computer paradigm is adopted, whose main idea is to replace all input devices by a pen and/or the fingers, given that the origin of the rejection is the use of unfriendly interaction devices, which should be replaced by something easier for the user. This paradigm, proposed several years ago, has only recently been fully implemented in products such as smartphones. But computers are effectively illiterate: they do not understand gestures or handwriting, so a recognition step is required to "translate" the meaning of these interactions into computer-understandable language. For this input modality to be actually usable, its recognition accuracy must be high enough. To realistically think about a broader deployment of pen computing, it is necessary to improve the accuracy of handwriting and gesture recognizers. This thesis is devoted to studying different approaches to improve the recognition accuracy of those systems. First, we investigate how to take advantage of interaction-derived information to improve the accuracy of the recognizer. In particular, we focus on interactive transcription of text images. Here the system initially proposes an automatic transcript. If necessary, the user can make some corrections, implicitly validating a correct part of the transcript. The system must then take this validated prefix into account to suggest a suitable new hypothesis. Given that in such an application the user is constantly interacting with the system, it makes sense to adapt this interactive application for use on a pen computer. User corrections are provided by means of pen strokes, and it is therefore necessary to introduce a recognizer in charge of decoding this kind of nondeterministic user feedback. This recognizer's performance can be boosted by taking advantage of interaction-derived information, such as the user-validated prefix. The thesis then focuses on the study of human movements, in particular hand movements, from a generation point of view, by tapping into the kinematic theory of rapid human movements and the Sigma-Lognormal model. Understanding how the human body generates movements, and in particular the origin of human movement variability, is important for the development of a recognition system. The contribution of this thesis to this topic is important, since a new technique to extract the Sigma-Lognormal model parameters, which improves on previous results, is presented. Closely related to the previous work, this thesis studies the benefits of using synthetic training data. The easiest way to train a recognizer is to provide "infinite" data representing all possible variations. In general, the more training data, the smaller the error. But it is usually not possible to increase the size of a training set indefinitely: the participant recruitment, data collection, labeling, etc. needed to achieve this can be time-consuming and expensive. One way to overcome this problem is to create and use synthetically generated data that look like human data. We study how to create these synthetic data and explore different approaches to using them, both for handwriting and for gesture recognition. The different contributions of this thesis have obtained good results, producing several publications in international conferences and journals. Finally, three applications related to the work of this thesis are presented. First, we created Escritoire, a digital desk prototype based on the pen computer paradigm for transcribing handwritten text images. Second, we developed "Gestures à Go Go", a web application for bootstrapping gestures. Finally, we studied another interactive application under the pen computer paradigm: how translation reviewing can be done more ergonomically using a pen.

    Martín-Albo Simón, D. (2016). Contributions to Pen & Touch Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68482
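    For readers unfamiliar with the Sigma-Lognormal model mentioned in this thesis: each elementary movement contributes a lognormal speed pulse, and a stroke's speed profile is, to a first approximation, the sum of those pulses. The sketch below evaluates that standard equation with made-up parameters; it simplifies by summing speed magnitudes rather than velocity vectors, and it is not the thesis' parameter-extraction technique.

```python
# Sketch: speed profile of the Sigma-Lognormal model, |v(t)| as a sum of
# lognormal pulses (standard model form; parameter values are made up).
import math

def lognormal_speed(t, D, t0, mu, sigma):
    """Speed contribution of one elementary movement at time t."""
    if t <= t0:
        return 0.0
    x = math.log(t - t0) - mu
    return (D / (sigma * math.sqrt(2 * math.pi) * (t - t0))
            * math.exp(-x * x / (2 * sigma * sigma)))

def sigma_lognormal_speed(t, components):
    """Approximate |v(t)| for a stroke made of overlapping movements."""
    return sum(lognormal_speed(t, *c) for c in components)

# Two overlapping movements with illustrative parameters (D, t0, mu, sigma):
stroke = [(1.0, 0.0, -1.7, 0.3), (0.8, 0.15, -1.6, 0.25)]
for t in (0.1, 0.2, 0.3, 0.4):
    print(t, round(sigma_lognormal_speed(t, stroke), 3))
```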

    Context-aware gestures for mixed-initiative text editing UIs

    This is a pre-copyedited, author-produced PDF of an article accepted for publication in Interacting with Computers following peer review. The version of record is available online at: http://dx.doi.org/10.1093/iwc/iwu019

    [EN] This work focuses on enhancing highly interactive text-editing applications with gestures. Concretely, we study Computer Assisted Transcription of Text Images (CATTI), a handwriting transcription system that follows a corrective feedback paradigm, where the user and the system collaborate efficiently to produce a high-quality text transcription. CATTI-like applications demand fast and accurate gesture recognition, and we observed that current gesture recognizers are not adequate. In response to this need we developed MinGestures, a parametric context-aware gesture recognizer. Our contributions include a number of stroke features for disambiguating copy-mark gestures from handwritten text, plus the integration of these gestures in a CATTI application. It finally becomes possible to create highly interactive stroke-based text-editing interfaces without having to verify the user's intent on-screen. We performed a formal evaluation with 22 e-pen users and 32 mouse users using a gesture vocabulary of 10 symbols. MinGestures achieved outstanding accuracy (<1% error rate) with very high performance (<1 ms recognition time). We then integrated MinGestures in a CATTI prototype and tested the performance of the interactive handwriting system when it is driven by gestures. Our results show that using gestures in interactive handwriting applications is both advantageous and convenient when gestures are simple but context-aware. Taken together, this work suggests that text-editing interfaces not only can be easily augmented with simple gestures, but may also substantially improve user productivity.

    This work has been supported by the European Commission through the 7th Framework Program (tranScriptorium: FP7-ICT-2011-9, project 600707 and CasMaCat: FP7-ICT-2011-7, project 287576). It has also been supported by the Spanish MINECO under grant TIN2012-37475-C02-01 (STraDa), and the Generalitat Valenciana under grant ISIC/2012/004 (AMIIS).

    Leiva, LA.; Alabau, V.; Romero Gómez, V.; Toselli, AH.; Vidal, E. (2015). Context-aware gestures for mixed-initiative text editing UIs. Interacting with Computers. 27(6):675-696. https://doi.org/10.1093/iwc/iwu019
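    The abstract does not enumerate MinGestures' stroke features, but the general idea of separating copy-mark gestures from handwriting with cheap geometric cues can be sketched as follows; the feature set, threshold values, and decision rule here are illustrative assumptions only.

```python
# Illustrative geometric stroke features of the kind used to tell quick
# editing gestures apart from handwriting. Features/thresholds are assumptions.
import math

def stroke_features(points):
    """points: list of (x, y, t) samples for one pen stroke."""
    xs, ys, ts = zip(*points)
    path = sum(math.dist(points[i][:2], points[i + 1][:2])
               for i in range(len(points) - 1))
    chord = math.dist(points[0][:2], points[-1][:2])
    return {
        "straightness": chord / path if path else 1.0,  # 1.0 = straight line
        "diagonal": math.hypot(max(xs) - min(xs), max(ys) - min(ys)),
        "duration": ts[-1] - ts[0],
    }

def looks_like_gesture(points, straightness_min=0.9, max_duration=0.3):
    """Toy rule: short, nearly straight strokes are gesture candidates."""
    f = stroke_features(points)
    return f["straightness"] >= straightness_min and f["duration"] < max_duration

print(looks_like_gesture([(0, 0, 0.0), (5, 0, 0.1), (10, 1, 0.2)]))  # -> True
```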

    Multimodality, interactivity, and crowdsourcing for document transcription

    This is the peer reviewed version of the following article: Granell, Emilio; Romero, Verónica; Martínez-Hinarejos, Carlos-D. (2018). Multimodality, interactivity, and crowdsourcing for document transcription. Computational Intelligence, 34(2), 398-419. DOI: 10.1111/coin.12169, which has been published in final form at http://doi.org/10.1111/coin.12169. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.

    [EN] Knowledge mining from documents usually uses document engineering techniques that allow the user to access the information contained in documents of interest. In this framework, transcription may provide efficient access to the contents of handwritten documents. Manual transcription is a time-consuming task that can be sped up by different mechanisms. A first possibility is employing state-of-the-art handwritten text recognition systems to obtain an initial draft transcription that can be manually amended. A second option is employing crowdsourcing to obtain a massive but not error-free draft transcription. In this case, when collaborators employ mobile devices, speech dictation can be used as a transcription source, and speech and handwritten text recognition can be fused to provide a better draft transcription, which can be amended with even less effort. A final option is using interactive assistive frameworks, where the automatic system that provides the draft transcription and the transcriber cooperate to generate the final transcription. The novel contributions presented in this work include the study of data fusion in a multimodal crowdsourcing framework and its integration with an interactive system. The use of the proposed solutions reduces the required transcription effort and optimizes the overall performance and usability, allowing for a better transcription process.

    Projects: READ, Grant/Award Number 674943 (European Union's H2020); Smart Ways, Grant/Award Number RTC-2014-1466-4 (MINECO); CoMUN-HaT, Grant/Award Number TIN2015-70924-C2-1-R (MINECO/FEDER).

    Granell, E.; Romero, V.; Martínez-Hinarejos, C. (2018). Multimodality, interactivity, and crowdsourcing for document transcription. Computational Intelligence. 34(2):398-419. https://doi.org/10.1111/coin.12169
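    One way to picture the fusion of handwriting and speech drafts mentioned above is word-level voting across aligned hypotheses, in the spirit of ROVER-style combination. The sketch below is a deliberate simplification (it assumes the two drafts are already aligned word by word, with per-word confidences) and is not the combination method evaluated in the article.

```python
# Toy word-level fusion of an HTR draft and an ASR draft, in the spirit of
# ROVER-style voting. Assumes hypotheses are already aligned word by word;
# the confidences and tie-breaking rule are illustrative assumptions.

def fuse(htr_words, asr_words, htr_conf, asr_conf):
    fused = []
    for (hw, hc), (aw, ac) in zip(zip(htr_words, htr_conf),
                                  zip(asr_words, asr_conf)):
        fused.append(hw if hc >= ac else aw)  # keep the more confident word
    return fused

htr = ["the", "quick", "brawn", "fox"]
asr = ["the", "quit", "brown", "fox"]
print(" ".join(fuse(htr, asr, [0.9, 0.8, 0.4, 0.9], [0.9, 0.3, 0.8, 0.9])))
# -> "the quick brown fox"
```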

    Character-level interaction in multimodal computer-assisted transcription of text images

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-21257-4_85

    To date, automatic handwriting text recognition systems are far from perfect, and heavy human intervention is often required to check and correct their results. As an alternative, an interactive framework that integrates human knowledge into the transcription process has been presented in previous works. In this work, multimodal interaction at character level is studied. Until now, multimodal interaction had been studied only at whole-word level; however, character-level pen-stroke interactions may lead to more ergonomic and friendly interfaces. Empirical tests show that this approach can save significant amounts of user effort with respect to both fully manual transcription and non-interactive post-editing correction.

    Work supported by the Spanish Government (MICINN and "Plan E") under the MITTRAL (TIN2009-14633-C03-01) research project and under the research programme Consolider Ingenio 2010: MIPRCV (CSD2007-00018), and by the Generalitat Valenciana under grant Prometeo/2009/014.

    Martín-Albo Simón, D.; Romero Gómez, V.; Toselli, AH.; Vidal, E. (2011). Character-level interaction in multimodal computer-assisted transcription of text images. In Pattern Recognition and Image Analysis. Springer Verlag (Germany). 684-691. https://doi.org/10.1007/978-3-642-21257-4_85
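    Character-level interaction implies that the user-validated prefix can end in the middle of a word, so the decoder must honour a character prefix rather than a word prefix. As a toy illustration, filtering an n-best list stands in below for the paper's prefix-conditioned HMM decoding; the hypotheses and scores are made up.

```python
# Toy illustration of character-level prefix constraints: the system must
# propose a continuation consistent with a prefix that may end mid-word.

def best_with_prefix(nbest, prefix):
    """Return the highest-scoring hypothesis compatible with the prefix."""
    for hyp, score in sorted(nbest, key=lambda p: -p[1]):
        if hyp.startswith(prefix):
            return hyp
    return prefix  # no compatible hypothesis: keep the validated text

nbest = [("interactive transcription", 0.7), ("interactive transaction", 0.8)]
print(best_with_prefix(nbest, "interactive transc"))  # mid-word prefix
# -> "interactive transcription"
```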

    Image speech combination for interactive computer assisted transcription of handwritten documents

    [EN] Handwritten document transcription aims to obtain the contents of a document in order to provide efficient information access to, among others, digitised historical documents. The increasing number of historical documents published by libraries and archives makes this an important task. In this context, the use of image processing and understanding techniques in conjunction with assistive technologies reduces the time and human effort required to obtain the final perfect transcription. The assistive transcription system proposes a hypothesis, usually derived from a recognition process on the handwritten text image. The professional transcriber's feedback can then be used to obtain an improved hypothesis and speed up the final transcription. In this framework, a speech signal corresponding to the dictation of the handwritten text can be used as an additional source of information. This multimodal approach, which combines the image of the handwritten text with the speech of the dictation of its contents, could improve the hypotheses (initial and improved) offered to the transcriber. In this paper we study the feasibility of a multimodal interactive transcription system for an assistive paradigm known as Computer Assisted Transcription of Text Images. Different techniques are tested for obtaining the multimodal combination in this framework. The proposed multimodal approach yields a significant reduction of transcription effort with some multimodal combination techniques, allowing for a faster transcription process.

    Work partially supported by projects READ-674943 (European Union's H2020), SmartWays-RTC-2014-1466-4 (MINECO, Spain), and CoMUN-HaT-TIN2015-70924-C2-1-R (MINECO/FEDER), and by Generalitat Valenciana (GVA), Spain under reference PROMETEOII/2014/030.

    Granell, E.; Romero, V.; Martínez-Hinarejos, C. (2019). Image speech combination for interactive computer assisted transcription of handwritten documents. Computer Vision and Image Understanding. 180:74-83. https://doi.org/10.1016/j.cviu.2019.01.009
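    Complementing the word-voting sketch shown earlier, another common flavour of multimodal combination is rescoring hypotheses with a log-linear mixture of the image and speech model scores. The weight, the smoothing floor, and the toy scores below are illustrative assumptions, not the techniques compared in this paper.

```python
# Toy log-linear rescoring: hypotheses proposed by either modality are
# rescored with a weighted sum of image (HTR) and speech (ASR) log-scores.
import math

def rescore(htr_scores, asr_scores, alpha=0.6, floor=1e-9):
    """htr_scores/asr_scores: dicts mapping hypothesis -> probability."""
    best, best_score = None, -math.inf
    for hyp in set(htr_scores) | set(asr_scores):
        score = (alpha * math.log(htr_scores.get(hyp, floor))
                 + (1 - alpha) * math.log(asr_scores.get(hyp, floor)))
        if score > best_score:
            best, best_score = hyp, score
    return best

htr = {"the brown fox": 0.5, "the brawn fox": 0.4}
asr = {"the brown fox": 0.6, "the crown fox": 0.3}
print(rescore(htr, asr))  # -> "the brown fox"
```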