356 research outputs found

    Obstructive Pulmonary Disease

    Chronic non-communicable diseases (NCDs) are not transmitted from person to person. They are of long duration and generally evolve slowly. The four main types of non-communicable diseases are cardiovascular diseases (such as heart attacks and strokes), cancer, chronic respiratory diseases (such as chronic obstructive pulmonary disease and asthma) and diabetes. Over the last decade, mortality from NCDs has remained constant or increased. Chronic obstructive pulmonary disease (COPD) is a lung disorder characterized by an obstruction of the airways that is generally progressive and irreversible. Its incidence is higher among people exposed to tobacco smoke, and its main symptom is a reduction in respiratory capacity, which advances slowly over the years, causes a considerable deterioration in the quality of life of those affected, and can lead to premature death. NCDs are responsible for more than 60% of deaths, 80% of which occur in low- and middle-income countries. The worldwide prevalence of COPD ranges between 5 and 10%; it has increased in recent decades, is more frequent in men than in women, and increases with age. COPD is one of the leading causes of death and disability worldwide and has a significant physical and emotional impact on those who suffer from it. Globally, COPD ranked sixth among causes of death with 2.2 million deaths, and an upward trend was projected, making it the third leading cause of death by 2020. According to the most frequently cited estimates of the World Health Organization (WHO), more than 210 million people currently live with COPD, and it is expected to become the third leading cause of death worldwide by 203

    Advanced Speech Communication System for Deaf People

    This paper describes the development of an Advanced Speech Communication System for Deaf People and its field evaluation in a real application domain: the renewal of Driver’s License. The system is composed of two modules. The first one is a Spanish into Spanish Sign Language (LSE: Lengua de Signos Española) translation module made up of a speech recognizer, a natural language translator (for converting a word sequence into a sequence of signs), and a 3D avatar animation module (for playing back the signs). The second module is a Spoken Spanish generator from sign writing, composed of a visual interface (for specifying a sequence of signs), a language translator (for generating the sequence of words in Spanish), and finally, a text-to-speech converter. For language translation, the system integrates three technologies: an example-based strategy, a rule-based translation method and a statistical translator. This paper also includes a detailed description of the evaluation carried out in the Local Traffic Office in the city of Toledo (Spain), involving real government employees and deaf people. This evaluation includes objective measurements from the system and subjective information from questionnaires.

    Factored Translation Models for improving a Speech into Sign Language Translation System

    This paper proposes the use of Factored Translation Models (FTMs) for improving a Speech into Sign Language Translation System. These FTMs allow syntactic-semantic information to be incorporated during the translation process, which significantly reduces the translation error rate. The paper also analyses different alternatives for dealing with non-relevant words. The speech into sign language translation system has been developed and evaluated in a specific application domain: the renewal of Identity Documents and Driver’s License. The translation system uses a phrase-based translation system (Moses). The evaluation results reveal that the BLEU (BiLingual Evaluation Understudy) score has improved from 69.1% to 73.9% and the mSER (multiple references Sign Error Rate) has been reduced from 30.6% to 24.8%.
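The multiple-reference Sign Error Rate (mSER) used above can be illustrated with a short sketch. This is not the authors' evaluation code: it assumes the common definition of an error rate as the Levenshtein distance between the hypothesis sign sequence and its closest reference, normalized by reference length; the sign glosses are invented for the example.

```python
# Minimal mSER sketch (assumed definition, hypothetical glosses).

def edit_distance(hyp, ref):
    """Levenshtein distance over sign tokens (substitutions, insertions, deletions)."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(hyp)][len(ref)]

def mser(hypothesis, references):
    """Error rate against the closest of the multiple references (lower is better)."""
    return min(edit_distance(hypothesis, ref) / len(ref) for ref in references)

hyp = ["DNI", "RENOVAR", "TU"]
refs = [["DNI", "RENOVAR", "QUERER", "TU"], ["TU", "DNI", "RENOVAR"]]
print(mser(hyp, refs))  # distance 1 to the first reference, so 1/4 = 0.25
```

Scoring against multiple references, as the mSER does, avoids penalizing valid alternative sign orderings of the same sentence.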

    A Bayesian Networks Approach for Dialog Modeling: The Fusion BN

    Bayesian networks (BNs) are suitable for mixed-initiative dialog modeling, allowing a more flexible and natural spoken interaction. This solution can be applied to identify the intention of the user, considering the concepts extracted from the last utterance and the dialog context. Subsequently, in order to make a correct decision about how the dialog should continue, unnecessary, missing, wrong, optional and required concepts have to be detected according to the inferred goals. This information is useful to properly drive the dialog: prompting for missing concepts, clarifying wrong concepts, ignoring unnecessary concepts and retrieving those that are required or optional. This paper presents a novel BN approach in which a single BN is obtained from N goal-specific BNs through a fusion process. The new fusion BN enables a single concept analysis that is more consistent with the whole dialog context.
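The concept analysis step described above (once a goal has been inferred) can be sketched with plain set operations. The goal specifications and concept names below are hypothetical, and the sketch deliberately omits the BN inference itself; it only shows how extracted concepts split into required, optional, unnecessary and missing with respect to an inferred goal.

```python
# Sketch of goal-driven concept analysis (hypothetical goal specs, not the paper's BN).

GOALS = {
    "renew_id":  {"required": {"DOC_TYPE", "ID_NUMBER"}, "optional": {"APPOINTMENT"}},
    "book_slot": {"required": {"DATE", "TIME"},          "optional": {"OFFICE"}},
}

def analyse_concepts(goal, extracted):
    spec = GOALS[goal]
    extracted = set(extracted)
    return {
        "required":    extracted & spec["required"],   # present and needed by the goal
        "optional":    extracted & spec["optional"],   # present, kept if useful
        "unnecessary": extracted - spec["required"] - spec["optional"],  # ignored
        "missing":     spec["required"] - extracted,   # the dialog prompts for these
    }

result = analyse_concepts("renew_id", ["DOC_TYPE", "OFFICE"])
print(result["missing"])      # the dialog manager would prompt for ID_NUMBER
print(result["unnecessary"])  # OFFICE is ignored for this goal
```

A dialog manager built this way asks only for what the inferred goal still lacks, which is what makes mixed-initiative interaction feel natural.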

    Automatic Understanding of ATC Speech: Study of Prospectives and Field Experiments for Several Controller Positions

    Although there has been a lot of interest in recognizing and understanding air traffic control (ATC) speech, none of the published works have reported detailed field-data results. We have developed a system able to identify the language spoken and to recognize and understand sentences in both Spanish and English, and we present field results for several in-tower controller positions. To the best of our knowledge, this is the first time that field ATC speech (not simulated) has been captured, processed, and analyzed. The use of stochastic grammars accommodates the variations on the standard phraseology that appear in field data. The robust understanding algorithm developed achieves 95% concept accuracy from ATC text input. It also allows changes in the presentation order of the concepts and the correction of errors introduced by the speech recognition engine, improving the percentage of fully correctly understood sentences by 17% and 25% absolute for English and Spanish, respectively, in relation to the percentages of fully correctly recognized sentences. An analysis of errors due to the spontaneity of the speech, and a comparison with read speech, is also carried out. A 96% word accuracy for read speech drops to 86% word accuracy on field ATC data for Spanish in the "clearances" task, confirming that field data are needed to estimate the performance of a system. A literature review and a critical discussion of the possibilities of speech recognition and understanding technology applied to ATC speech are also given.

    Speech to sign language translation system for Spanish

    This paper describes the development of, and the first experiments in, a Spanish to sign language translation system in a real domain. The developed system focuses on the sentences spoken by an official when assisting people applying for, or renewing, their Identity Card. The system translates official explanations into Spanish Sign Language (LSE: Lengua de Signos Española) for Deaf people. The translation system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the hand movements). Two proposals for natural language translation have been evaluated: a rule-based translation module (which computes sign confidence measures from the word confidence measures obtained in the speech recognition module) and a statistical translation module (in this case, parallel corpora were used for training the statistical model). The best configuration reported 31.6% SER (Sign Error Rate) and 0.5780 BLEU (BiLingual Evaluation Understudy). The paper also describes the eSIGN 3D avatar animation module (which takes the sign confidence into account), and the limitations found when implementing a strategy for reducing the delay between the spoken utterance and the sign sequence animation.
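The abstract states that sign confidences are derived from the recognizer's word confidences, but not how. One common, conservative choice is to score each sign by the minimum confidence of the words aligned to it; the sketch below assumes that rule (it is not necessarily the authors' formula), with invented glosses and scores.

```python
# Assumed rule: a sign is only as reliable as its least reliable source word.

def sign_confidence(word_confidences):
    """Confidence of one sign from the confidences of its aligned words."""
    return min(word_confidences)

def translate_with_confidence(alignment):
    """alignment: list of (sign, [confidences of the words producing that sign])."""
    return [(sign, sign_confidence(confs)) for sign, confs in alignment]

out = translate_with_confidence([("DNI", [0.93]), ("RENOVAR", [0.81, 0.77])])
print(out)  # [('DNI', 0.93), ('RENOVAR', 0.77)]
```

An avatar module can then flag or skip signs whose confidence falls below a threshold, rather than animating a likely misrecognition.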

    Development of a Guide Robot with an Integrated Dialogue System and Emotion Expression: The ROBINT Project

    This article presents the incorporation of a spoken dialogue system into an autonomous robot, conceived as an interactive element in a science museum, capable of giving guided tours and holding simple dialogues with its visitors. To make its behaviour more appealing, the robot has been endowed with features (such as gestural expressiveness and emotional speech synthesis) that humanize its interventions. The speech recognizer is a speaker-independent subsystem (it can recognize anyone's speech) that incorporates confidence measures to improve recognition performance, since they provide very effective filtering of spurious speech. The understanding system uses rule-based learning, which allows it to infer explicit information from a set of examples, without the need to previously build a grammar or a set of rules to guide the understanding module. These subsystems were previously evaluated on a voice-control task for a HIFI system, using our robot as the interface, obtaining 95.9% correctly recognized words and 92.8% correctly recognized concepts. As for the text-to-speech system, a set of segmental and prosodic modifications of a neutral voice has been implemented, leading to the generation of emotions in the voice synthesized by the robot, such as happiness, anger, sadness or surprise. The reliability of these emotions was measured with several perceptual experiments, which yield identification rates above 70% for most emotions (87% for sadness, 79.1% for surprise).

    Spanish Expressive Voices: corpus for emotion research in Spanish

    A new emotional multimedia database has been recorded and aligned. The database comprises speech and video recordings of one actor and one actress simulating a neutral state and the Big Six emotions: happiness, sadness, anger, surprise, fear and disgust. Thanks to its careful design and its size (more than 100 minutes per emotion), the recorded database allows comprehensive studies on emotional speech synthesis, prosodic modelling, speech conversion, far-field speech recognition, and speech- and video-based emotion identification. The database has been automatically labelled for prosodic purposes (5% was manually revised). The whole database has been validated through objective and perceptual tests, achieving a validation score as high as 89%.

    Cranial and extracranial giant cell arteritis do not exhibit differences in the IL6 -174 G/C gene polymorphism

    Since interleukin-6 (IL-6) is a pivotal proinflammatory cytokine implicated in the pathogenesis of giant cell arteritis (GCA), we aimed to determine the potential association of the functional IL6 -174 G/C polymorphism with GCA, as well as whether this single-base variation in the promoter region of the human IL6 gene may account for differences in the clinical spectrum of GCA between cranial and extracranial large vessel vasculitis (LVV)-GCA.

    SARS-CoV-2 mutant spectra reveal differences between COVID-19 severity categories

    Work presented at the XVI Congreso Nacional de Virología, held in Málaga (Spain) from 6 to 9 September 2022.

    RNA virus populations are composed of complex mixtures of genomes that are termed mutant spectra. SARS-CoV-2 replicates as a viral quasispecies, and mutations that are detected at low frequencies in a host can become dominant in subsequent variants. We have studied the mutant spectrum complexity of SARS-CoV-2 populations derived from thirty nasopharyngeal swabs of patients infected during the first wave (April 2020) at the Hospital Universitario Fundación Jiménez Díaz. The patients were classified according to COVID-19 severity as mild (non-hospitalized), moderate (hospitalized) and exitus (hospitalized with ICU admission, who died of COVID-19). Using ultra-deep sequencing technologies (MiSeq, Illumina), we examined four amplicons of the nsp12 (polymerase)-coding region and two amplicons of the spike-coding region. Ultra-deep sequencing data were analyzed with different cut-off frequencies for mutation detection. The average number of different point mutations, the number of mutations per haplotype and several diversity indices were significantly higher in SARS-CoV-2 isolated from patients who developed mild disease. A feature that we noted in the SARS-CoV-2 mutant spectra from diagnostic samples is the remarkable absence of mutations at intermediate frequencies and an overwhelming abundance of mutations at frequencies lower than 10%. Thus, decreasing the cut-off frequency for mutation detection from 0.5% to 0.1% revealed a 50- to 100-fold increase in the number of different mutations. The significantly higher frequency of mutations in virus from patients who developed mild disease, compared with moderate or severe disease, was maintained with the 0.1% cut-off frequency. To evaluate whether the frequency repertoire of amino acid substitutions differed between SARS-CoV-2 and the well-characterized hepatitis C virus (HCV), we performed a comparative study of mutant spectra from infected patients using the same bioinformatics pipelines. HCV did not show the deficit of intermediate-frequency substitutions that was observed with SARS-CoV-2. This difference was maintained when two functionally equivalent proteins, the corresponding viral polymerases, were compared. In conclusion, SARS-CoV-2 mutant spectra are rich reservoirs of mutants, whose complexity is not uniform among clinical isolates. Virus from patients who developed mild disease may be a source of new variants that may acquire epidemiological relevance.

    This work was supported by Instituto de Salud Carlos III, Spanish Ministry of Science and Innovation (COVID-19 Research Call COV20/00181), and co-financed by the European Regional Development Fund ‘A way to achieve Europe’. The work was also supported by grants CSIC-COV19-014 from Consejo Superior de Investigaciones Científicas (CSIC), project 525/C/2021 from Fundació La Marató de TV3, PID2020-113888RB-I00 from Ministerio de Ciencia e Innovación, BFU2017-91384-EXP from Ministerio de Ciencia, Innovación y Universidades (MCIU), PI18/00210 and PI21/00139 from Instituto de Salud Carlos III, and S2018/BAA-4370 (PLATESA2 from Comunidad de Madrid/FEDER). C.P., M.C., and P.M. are supported by the Miguel Servet programme of the Instituto de Salud Carlos III (CPII19/00001, CPII17/00006, and CP16/00116, respectively), co-financed by the European Regional Development Fund (ERDF). CIBERehd (Centro de Investigación en Red de Enfermedades Hepáticas y Digestivas) is funded by Instituto de Salud Carlos III. Institutional grants from the Fundación Ramón Areces and Banco Santander to the CBMSO are also acknowledged. The team at CBMSO belongs to the Global Virus Network (GVN). B.M.-G. is supported by predoctoral contract PFIS FI19/00119 from Instituto de Salud Carlos III (Ministerio de Sanidad y Consumo), co-financed by Fondo Social Europeo (FSE). R.L.-V. is supported by predoctoral contract PEJD-2019-PRE/BMD-16414 from Comunidad de Madrid. C.G.-C. is supported by predoctoral contract PRE2018-083422 from MCIU. B.S. was supported by a predoctoral research fellowship (Doctorados Industriales, DI-17-09134) from Spanish MINECO.
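The effect of lowering the detection cut-off described in the abstract can be sketched in a few lines. The mutation labels and frequencies below are synthetic, not the study's data; the sketch only shows how a spectrum dominated by sub-1% mutations yields many more distinct mutations when the cut-off drops from 0.5% to 0.1%.

```python
# Sketch: counting distinct mutations at different cut-off frequencies
# (synthetic mutant spectrum, illustrative labels only).

def mutations_above_cutoff(mutation_freqs, cutoff):
    """mutation_freqs: {mutation: frequency of the mutation in the read population}."""
    return {m for m, f in mutation_freqs.items() if f >= cutoff}

# Few intermediate-frequency mutations, many below 1%, mimicking the
# frequency distribution the abstract describes for SARS-CoV-2.
spectrum = {"A1052G": 0.62, "C2310T": 0.004, "G788A": 0.003, "T901C": 0.0015}

print(len(mutations_above_cutoff(spectrum, 0.005)))  # 0.5% cut-off: 1 mutation
print(len(mutations_above_cutoff(spectrum, 0.001)))  # 0.1% cut-off: 4 mutations
```

With most mutations sitting below 0.5%, the choice of cut-off dominates the apparent complexity of the spectrum, which is why the study repeats its comparisons at both thresholds.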