4 research outputs found

    Combining multiple translation systems for spoken language understanding portability

    [EN] We are interested in the problem of learning Spoken Language Understanding (SLU) models for multiple target languages. Learning such models requires annotated corpora, and porting to different languages would require corpora with parallel text translation and semantic annotations. In this paper we investigate how to learn an SLU model in a target language starting from no target text and no semantic annotation. Our proposed algorithm is based on the idea of exploiting the diversity (with regard to performance and coverage) of multiple translation systems to transfer statistically stable word-to-concept mappings for the Romance language pair French-Spanish. Each translation system performs differently at the lexical level (with respect to BLEU). The best translation system performance for the semantic task is obtained by combining the systems at different stages of the portability methodology. We have evaluated the portability algorithms on the French MEDIA corpus, using French as the source language and Spanish as the target language. The experiments show the effectiveness of the proposed methods with respect to the source language SLU baseline.

    This work was partially supported by the Spanish MICINN under contract TIN2011-28169-C05-01 and by the Vic. d'Investigacio of the UPV under contracts PAID-00-09 and PAID-06-10. The author's work was partially funded by the FP7 PORTDIAL project n. 296170.

    García-Granada, F.; Hurtado Oliver, L.F.; Segarra Soriano, E.; Sanchís Arnal, E.; Riccardi, G. (2012). Combining multiple translation systems for spoken language understanding portability. IEEE. 194-198. https://doi.org/10.1109/SLT.2012.6424221
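    The abstract only sketches how the word-to-concept mappings are transferred. The following Python fragment is a rough, purely illustrative sketch (not the authors' implementation): each annotated source segment is translated by every available system, the translations are word-aligned to the source, and a concept is kept for a target word only when the accumulated evidence passes a stability threshold. The `translate` callables, the `align_fn` word aligner, and the `min_votes` threshold are assumptions introduced here for illustration.

    ```python
    from collections import Counter, defaultdict

    def project_concepts(source_segments, translators, align_fn, min_votes=2):
        """Collect target-word -> concept mappings that are stable across systems.

        source_segments: (source_phrase, concept) pairs from the annotated corpus.
        translators: callables mapping a source phrase to a target-language phrase.
        align_fn: callable returning (source_word, target_word) alignment pairs
                  (hypothetical stand-in for any word aligner).
        """
        votes = defaultdict(Counter)  # target word -> concept vote counts
        for phrase, concept in source_segments:
            for translate in translators:
                target_phrase = translate(phrase)
                for _src_word, tgt_word in align_fn(phrase, target_phrase):
                    votes[tgt_word][concept] += 1  # one vote per occurrence per system
        stable = {}
        for tgt_word, counter in votes.items():
            best_concept, count = counter.most_common(1)[0]
            if count >= min_votes:  # keep only the "statistically stable" mappings
                stable[tgt_word] = best_concept
        return stable
    ```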

    ASLP-MULAN: Audio speech and language processing for multimedia analytics

    Our aim is to combine audio, speech, and language technologies with big data techniques. Several automatic audio, speech, and language technologies are available or are reaching a sufficient degree of maturity to contribute to this objective: automatic speech transcription, query by spoken example, spoken information retrieval, natural language processing, transcription and description of unstructured multimedia content, multimedia file summarization, spoken emotion detection and sentiment analysis, speech and text understanding, etc. It seems worthwhile to combine these technologies and apply them to automatically captured data streams from sources such as YouTube, Facebook, Twitter, online newspapers, and web search engines, in order to automatically generate reports that include both scientifically grounded scores and subjective but relevant summary statements on trend analysis and on the general public's perceived satisfaction with a product, a company, or another entity.

    ASLP-MULAN: Procesado de audio, habla y lenguaje para análisis de información multimedia

    [EN] Our aim is to combine audio, speech, and language technologies with big data techniques. Several automatic audio, speech, and language technologies are available or are reaching a sufficient degree of maturity to contribute to this objective: automatic speech transcription, query by spoken example, spoken information retrieval, natural language processing, transcription and description of unstructured multimedia content, multimedia file summarization, spoken emotion detection and sentiment analysis, speech and text understanding, etc. It seems worthwhile to combine these technologies and apply them to automatically captured data streams from sources such as YouTube, Facebook, Twitter, online newspapers, and web search engines, in order to automatically generate reports that include both scientifically grounded scores and subjective but relevant summary statements on trend analysis and on the general public's perceived satisfaction with a product, a company, or another entity.

    This project is funded by the Ministerio de Economía y Competitividad under grant TIN2014-54288-C4, and four research groups are involved: ELiRF (Universitat Politècnica de València), ViVoLab (Universidad de Zaragoza), SPIN (Universidad del País Vasco), and GTH (Universidad Politécnica de Madrid).

    Ferreiros Lopez, J.; Pardo Muñoz, J.M.; Hurtado Oliver, L.F.; Segarra Soriano, E.; Ortega Giménez, A.; Lleida, E.; Torres, M.I., ... (2016). ASLP-MULAN: Audio speech and language processing for multimedia analytics. Procesamiento del Lenguaje Natural. (57):147-150. http://hdl.handle.net/10251/84803
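    Since the project abstract stays at the level of intentions, the following Python skeleton is only an illustration of the kind of per-entity report described above (a quantitative score plus a subjective summary); the functions `transcribe`, `sentiment`, and `summarise`, as well as the data layout, are assumptions and not project deliverables.

    ```python
    from statistics import mean

    def entity_report(entity, sources, transcribe, sentiment, summarise):
        """Aggregate multi-source items into a simple report for one entity.

        sources: iterable of (source_name, items); each item is a dict carrying
                 either raw "text" or an audio payload flagged by "is_audio".
        """
        scores, snippets = [], []
        for source_name, items in sources:
            for item in items:
                text = transcribe(item) if item.get("is_audio") else item["text"]
                scores.append(sentiment(text))        # e.g. polarity in [-1, 1]
                snippets.append((source_name, text))
        return {
            "entity": entity,
            "mean_sentiment": mean(scores) if scores else None,  # quantitative score
            "summary": summarise(snippets),                      # subjective summary
        }
    ```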

    A Train-on-Target Strategy for Multilingual Spoken Language Understanding

    [EN] There are two main strategies to adapt a Spoken Language Understanding system to deal with languages different from the original (source) language: test-on-source and train-on-target. In the train-on-target approach, a new understanding model is trained in the target language, which is the language in which the test utterances are pronounced. To do this, a segmented and semantically labeled training set for each new language is needed. In this work, we use several general-purpose translators to obtain the translation of the training set and we apply an alignment process to automatically segment the training sentences. We have applied this train-on-target approach to estimate the understanding module of a Spoken Dialog System for the DIHANA task, which consists of an information system about train timetables and fares in Spanish. We present an evaluation of our train-on-target multilingual approach for two target languages, French and English.

    This work has been partially funded by the project ASLP-MULAN: Audio, Speech and Language Processing for Multimedia Analytics (MEC TIN2014-54288-C4-3-R).

    García-Granada, F.; Segarra Soriano, E.; Millán, C.; Sanchís Arnal, E.; Hurtado Oliver, L.F. (2016). A Train-on-Target Strategy for Multilingual Spoken Language Understanding. Lecture Notes in Computer Science. 10077:224-233. https://doi.org/10.1007/978-3-319-49169-1_22
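    The abstract describes the train-on-target pipeline only informally (translate the segmented training set with several translators, align, and transfer the segment labels). The Python fragment below is a rough sketch of that idea under simplifying assumptions: `translators` stands in for the general-purpose translation systems, `align_words` for the alignment process, and the label-transfer heuristic is deliberately naive; it is not the authors' code.

    ```python
    def build_target_corpus(source_corpus, translators, align_words):
        """Translate a segmented source corpus and transfer segment labels.

        source_corpus: sentences given as lists of (segment_words, label) pairs
                       (e.g. semantically segmented DIHANA training sentences).
        Returns one labelled target-language copy of each sentence per translator.
        """
        target_corpus = []
        for sentence in source_corpus:
            source_words = [w for segment, _ in sentence for w in segment]
            for translate in translators:               # general-purpose translators
                target_words = translate(" ".join(source_words)).split()
                links = align_words(source_words, target_words)  # (src_idx, tgt_idx) pairs
                target_sentence, start = [], 0
                for segment, label in sentence:
                    src_idxs = set(range(start, start + len(segment)))
                    tgt_idxs = sorted(t for s, t in links if s in src_idxs)
                    if tgt_idxs:  # transfer the label to the aligned target span
                        target_sentence.append(([target_words[t] for t in tgt_idxs], label))
                    start += len(segment)
                target_corpus.append(target_sentence)
        return target_corpus
    ```

    A segment-based understanding model can then be trained on this automatically built target-language corpus in the same way as on the original source-language data.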
