5 research outputs found

    Reconocimiento de comandos de voz en español orientado al control de una silla de ruedas

    This paper presents a computer application that recognizes Spanish voice commands from a closed, speaker-independent vocabulary. The Spanish language model adopted is the one provided by Microsoft® SAPI (Speech Application Programming Interface). This language model was restricted to the grammar covering only the functions available to the user of the automated wheelchair studied by the Automática research group of the Universidad Autónoma de Manizales. Tests measuring the recognition system's performance were run separately by speaker gender and in three environments whose noise-level ranges were differentiated according to current Colombian legislation on maximum permissible ambient noise levels. Notably, the recognition obtained is speaker independent and requires none of the extensive prior training that other tools demand.
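
    The abstract does not reproduce the grammar itself. As a rough sketch of the technique it describes, the snippet below builds a closed-vocabulary SAPI grammar through SAPI 5's COM automation interface from Python (via pywin32), following the well-known shared-recognizer event pattern. It assumes Windows with a Spanish SAPI recognizer installed, and the five command words are placeholders, not the paper's actual vocabulary.

        # Minimal sketch: closed-vocabulary command recognition with Microsoft
        # SAPI via its COM automation interface (requires pywin32 and an
        # installed Spanish SAPI recognizer). Command words are illustrative.
        import time
        import pythoncom
        import win32com.client

        COMMANDS = ["avanzar", "retroceder", "izquierda", "derecha", "detener"]

        class RecoEvents:
            def OnRecognition(self, StreamNumber, StreamPosition,
                              RecognitionType, Result):
                # Wrap the raw COM result to read the recognized text.
                phrase = win32com.client.Dispatch(Result).PhraseInfo.GetText()
                print("Recognized command:", phrase)

        context = win32com.client.DispatchWithEvents(
            "SAPI.SpSharedRecoContext", RecoEvents)
        grammar = context.CreateGrammar()
        grammar.DictationSetState(0)            # disable free dictation
        rule = grammar.Rules.Add("wheelchair",
                                 1 + 32)        # SRATopLevel | SRADynamic
        rule.Clear()
        for word in COMMANDS:
            # Each command word is a single transition to the rule's end state.
            rule.InitialState.AddWordTransition(None, word)
        grammar.Rules.Commit()
        grammar.CmdSetRuleState("wheelchair", 1)  # SGDSActive

        while True:                             # pump COM events
            pythoncom.PumpWaitingMessages()
            time.sleep(0.05)

    Because the rule enumerates only the command words, the recognizer never hypothesizes out-of-vocabulary text, which is what makes speaker-independent use practical without per-user training.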

    Hybrid wheelchair controller for handicapped and quadriplegic patients

    In this dissertation, a hybrid wheelchair controller for handicapped and quadriplegic patients is proposed. The system has two sub-controllers: a voice controller and a head-tilt controller. It aims to help quadriplegic, handicapped, elderly, and paralyzed patients control a robotic wheelchair using voice commands and head movements instead of a traditional joystick. The multi-input design makes the system more flexible in adapting to whatever body signals remain available, and the low-cost design allows more patients to use the system.
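
    The abstract does not detail how the two sub-controllers are combined. A minimal sketch of one plausible arbitration scheme, assuming hypothetical signal names and scaling constants, treats voice as a discrete override and head tilt as the proportional channel:

        # Hypothetical fusion of a discrete voice channel and a proportional
        # head-tilt channel into normalized wheelchair drive setpoints.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Drive:
            speed: float   # normalized forward speed, -1.0 .. 1.0
            turn: float    # normalized turn rate,     -1.0 .. 1.0

        VOICE_MAP = {"forward": Drive(0.5, 0.0), "back": Drive(-0.3, 0.0),
                     "left": Drive(0.0, -0.5), "right": Drive(0.0, 0.5),
                     "stop": Drive(0.0, 0.0)}

        def mix_inputs(voice_cmd: Optional[str], pitch_deg: float,
                       roll_deg: float, dead_zone: float = 5.0,
                       full_scale: float = 25.0) -> Drive:
            """Voice acts as a switch-like override; tilt is proportional."""
            if voice_cmd in VOICE_MAP:
                return VOICE_MAP[voice_cmd]
            def scale(angle: float) -> float:
                if abs(angle) < dead_zone:   # ignore small, unintended tilts
                    return 0.0
                return max(-1.0, min(1.0, angle / full_scale))
            return Drive(scale(pitch_deg), scale(roll_deg))

        print(mix_inputs(None, pitch_deg=12.0, roll_deg=-3.0))    # tilt drives
        print(mix_inputs("stop", pitch_deg=12.0, roll_deg=-3.0))  # voice wins

    Giving the discrete channel priority is one simple way to realize the flexibility the abstract claims: a user can fall back on whichever input is available at the moment.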

    Reconocimiento de comandos de voz en español orientado al control de una silla de ruedas

    The purpose of a speech recognition system is to take the acoustic waveform of the human voice as input and produce an equivalent word string as output [1]. To achieve this, the voice signal enters a signal-processing module that extracts salient feature vectors, which are then sent to the decoder; the decoder uses both an acoustic model and a language model to generate the word sequence most likely to match the input feature vectors [2]. The acoustic model is essential in defining the system's behavior. It is built from a speech corpus (voice files containing data from a large population of speakers, together with their transcriptions) collected in the same language in which recognition will be performed; the more robust the corpus, the better the performance. Although several software tools exist for building speech recognition applications, the fact that this project targets commands in Spanish limits the choice, and development was ultimately done with Microsoft's SAPI, which already offers mature support for that language. Other tools, such as "Julius", only provide complete acoustic models for Japanese and a few other languages, mainly English.
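
    The decoding step described above is conventionally a maximum a posteriori search. A standard formulation (stated here for clarity; the abstract gives it only in words) is

        \hat{W} = \arg\max_{W} P(W \mid O) = \arg\max_{W} P(O \mid W)\, P(W),

    where O denotes the sequence of acoustic feature vectors, P(O | W) is the acoustic model, and P(W) is the language model. Restricting the system to a closed command grammar amounts to letting P(W) assign probability only to the permitted command phrases.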

    Design and evaluation of a multimodal assistive technology using tongue commands, head movements, and speech recognition for people with tetraplegia

    People with high-level (C1-C4) spinal cord injury (SCI) cannot use their limbs to perform the activities of daily life without assistance. Current assistive technologies (ATs) use remaining capabilities (tongue, muscle, brain, speech, sniffing) as input methods to help them control devices such as computers and smartphones. However, these ATs are not very efficient compared to the gold standards (mouse and keyboard, touch interfaces, joysticks, and so forth) used in everyday life. Therefore, in this work, a novel multimodal assistive system is designed to provide better accessibility more intuitively. The multimodal Tongue Drive System (mTDS) utilizes three key remaining abilities (speech, tongue, and head movements) to help people with tetraplegia control their environments, such as accessing computers and smartphones or driving wheelchairs. Tongue commands serve as discrete, switch-like inputs, head movements as proportional, continuous inputs, and speech recognition allows typing text faster than any keyboard, together emulating a combined mouse-keyboard system for computer and smartphone access. Novel signal-processing algorithms are developed and implemented in the wearable unit to provide universal access to multiple devices from the wireless mTDS. Non-disabled subjects participated in multiple studies to assess the efficacy of mTDS against the gold standards, and people with tetraplegia evaluated how readily the technology can be learned. Significant improvements are observed in accuracy and speed across different computer-access and wheelchair-mobility tasks. Thus, with sufficient learning of mTDS, it is feasible to narrow the performance gap between non-disabled users and people with tetraplegia relative to existing ATs.
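
    The dissertation's exact command mapping is not given in the abstract. Purely as an illustration of the discrete-versus-proportional split it describes, the sketch below maps hypothetical tongue positions to switch events and head orientation to cursor velocity; all names and constants are invented for the example.

        # Hypothetical mTDS-style input mapping: tongue commands are discrete
        # switch events, head yaw/pitch drive the cursor proportionally.
        import math

        TONGUE_EVENTS = {"left_cheek": "LEFT_CLICK",
                         "right_cheek": "RIGHT_CLICK",
                         "palate": "TOGGLE_SPEECH_TYPING"}

        def cursor_velocity(yaw_deg, pitch_deg, gain=8.0, dead_zone=2.0):
            """Map head yaw/pitch (degrees from neutral) to cursor px/frame."""
            def axis(a):
                if abs(a) < dead_zone:
                    return 0.0
                return gain * (a - math.copysign(dead_zone, a))
            return axis(yaw_deg), axis(-pitch_deg)  # screen y grows downward

        def handle_tongue(command):
            """Discrete, switch-like events from detected tongue positions."""
            return TONGUE_EVENTS.get(command, "NO_OP")

        print(cursor_velocity(6.0, -3.5))   # proportional channel
        print(handle_tongue("left_cheek"))  # discrete channel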

    Plataforma embarcada de reconhecimento automático da fala para o auxílio de pessoas com mobilidade reduzida

    The pursuit of greater independence and autonomy for people with disabilities has proved decisive in improving their quality of life through the use of assistive technologies. Speech is the most basic, common, and efficient form of human communication, so voice command input can be an alternative by which people with reduced mobility who retain good speech ability control a computer or other devices. The goal of this work is to develop a voice command interface, based on automatic speech recognition, that can be easily adapted to and incorporated into systems and tools for controlling the home environment (home automation). To this end, two development approaches were carried out. The first was a pilot experiment intended to build an initial base of knowledge for developing applications that use voice command recognition. This stage relied on a dedicated hardware module that receives voice commands directly through a microphone, forming a speaker-dependent system able to recognize isolated-word commands for controlling the lights of an RGB LED. The second approach integrates open-hardware and free, open-source software components, with voice commands delivered to the system through a smartphone configured with a VoIP (Voice over IP) softphone. In this case, the softphone registers with the Asterisk communication server, which implements a telephone exchange with an interactive voice response (IVR) unit. Integrated with the server is the Julius speech recognition engine. These components run on the low-cost BeagleBone Black platform. The system is speaker dependent and able to recognize three-word phrases for controlling the lighting, television, and door access of a hypothetical home environment consisting of a living room, kitchen, bedroom, bathroom, and outdoor area. Test results indicate accuracy rates of 95.9% and 94.77% for the interfaces developed in the first and second approaches, respectively. These figures suggest that the recognition modules developed are viable for implementing assistive technology solutions.
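
    The thesis abstract does not list the grammar of the three-word phrases. As a small sketch of the dispatch step in the second approach, assuming a "<room> <device> <action>" phrase structure and illustrative Portuguese word lists, a recognized phrase from Julius could be validated and routed like this:

        # Hypothetical handling of a recognized three-word phrase: validate it
        # against the closed vocabulary, then dispatch the parsed intent. The
        # word lists are illustrative; the thesis does not give the grammar.
        ROOMS = {"sala", "cozinha", "quarto", "banheiro", "externa"}
        DEVICES = {"luz", "televisao", "porta"}
        ACTIONS = {"ligar", "desligar", "abrir", "fechar"}

        def dispatch(phrase: str) -> str:
            words = phrase.lower().split()
            if len(words) != 3:
                return "rejected: expected a three-word phrase"
            room, device, action = words
            if room not in ROOMS or device not in DEVICES \
                    or action not in ACTIONS:
                return f"rejected: unknown token in {words!r}"
            # A real system would drive relays / IR / GPIO here (e.g. on the
            # BeagleBone Black); this sketch only reports the parsed intent.
            return f"ok: {action} {device} in {room}"

        print(dispatch("quarto luz ligar"))
        print(dispatch("sala porta abrir"))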