6 research outputs found

    Show Your Voice – an Automatic Sign Language Interpreter (Mutasd a hangod – automatikus jeltolmács)

    In the Tolmácskesztyű (Interpreter Glove) project we are building an assistive device with which people with speech and hearing impairments can use hand movements, i.e. gestures, to connect with hearing people in everyday life. The device is an innovative hardware-software system consisting of a glove that senses hand motion and software that recognizes the hand signs and performs linguistic processing. The Tolmácskesztyű system works as a simultaneous sign language interpreter: with its help, people with these disabilities can communicate with hearing people in their native language, sign language. The Tolmácskesztyű application reads the signed text aloud, enabling continuous communication between deaf or speech-impaired users and hearing people who do not know sign language.
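
    A minimal sketch, in Python, of the glove-to-speech pipeline described above: sensor frames from the glove are classified into signs, and the recognized words are read aloud. All names, the sensor format, and the use of the off-the-shelf pyttsx3 text-to-speech engine are illustrative assumptions; the abstract does not publish the project's actual interfaces.

    # Hypothetical glove-to-speech pipeline; every interface here is an
    # assumption made for illustration, not the Tolmácskesztyű codebase.
    import numpy as np
    import pyttsx3  # off-the-shelf text-to-speech engine

    def read_glove_frame() -> np.ndarray:
        """Hypothetical: one frame of flex/motion readings from the glove."""
        return np.random.rand(10)  # placeholder for real sensor input

    def classify_sign(frame: np.ndarray) -> str:
        """Hypothetical gesture classifier mapping a sensor frame to a word."""
        return "hello"  # a trained recognition model would go here

    engine = pyttsx3.init()
    word = classify_sign(read_glove_frame())
    engine.say(word)      # speak the recognized sign aloud
    engine.runAndWait()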

    A Weft Knit Data Glove

    Rehabilitation of stroke survivors can be expedited by employing an exoskeleton. The exercises are designed so that both hands move in synergy; motion-capture data from the healthy hand is therefore often used to derive control behaviour for the exoskeleton. Data gloves can provide a low-cost solution for motion capture of the joints in the hand. However, current data gloves are bulky, inaccurate, or inconsistent. These disadvantages stem from the conventional glove design, which relies on externally attached sensors that degrade over time and cause inaccuracies. This paper presents a weft knit data glove whose sensors and support structure are manufactured in the same fabrication process, removing the need for an external attachment. The glove is made by knitting a multifilament conductive yarn together with an elastomeric yarn using WholeGarment technology. Furthermore, we present a detailed electromechanical model of the sensors alongside its experimental validation. Additionally, the reliability of the glove is verified experimentally. Lastly, machine learning algorithms are implemented to classify hand posture on the basis of sensor data histograms.
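
    A short sketch of the final classification step, assuming each sample is a window of normalized readings from the knitted sensors that is summarized as per-sensor histograms before being fed to an off-the-shelf classifier. The sensor count, window length, bin count, and the choice of a random forest are assumptions for illustration; the abstract does not specify the paper's algorithms.

    # Histogram features from (time, sensors) windows, then a classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def histogram_features(window: np.ndarray, bins: int = 16) -> np.ndarray:
        """Concatenate per-sensor histograms of one (time, sensors) window."""
        feats = [np.histogram(window[:, s], bins=bins, range=(0.0, 1.0))[0]
                 for s in range(window.shape[1])]
        return np.concatenate(feats).astype(float)

    # Toy data: 200 windows of 50 samples from 5 sensors, 4 posture classes.
    rng = np.random.default_rng(0)
    X = np.stack([histogram_features(rng.random((50, 5))) for _ in range(200)])
    y = rng.integers(0, 4, size=200)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:3]))  # predicted posture labels for three windows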

    Effects of different push-to-talk solutions on driving performance

    Police officers have been using the Project54 system in their vehicles for a number of years, and have recently started using the handheld version of Project54 outside their vehicles as well. There is a need to connect these two instances of the system into a continuous user interface. At the same time, research has shown that PTT button location affects driving performance. This thesis investigates the difference between the old, fixed PTT button and a new wireless PTT glove that can be used both inside and outside the car. The thesis describes the design of the glove and the driving simulator experiment conducted to investigate the glove's merit. The main results show that the glove allows more freedom of operation, appears to be easier and more efficient to operate, and reduces drivers' visual distraction.

    SignPic: a mobile system for sign language detection using machine learning (SignPic: sistema móvel para deteção de língua gestual utilizando Machine Learning)

    The work proposed in this dissertation aims to help bridge the communication barrier between people who communicate in Portuguese Sign Language and people who communicate in spoken language. In sign language, the shape, position, and movement of the hands, as well as facial expressions and body movements, all play important roles in communication. The motivation for this topic is the difficulty of communication between people who understand and use Portuguese Sign Language and people who communicate only in spoken language, which prevents Portuguese Sign Language gestures from being correctly understood by hearing people. Because no dataset covering the Portuguese Sign Language alphabet was found for developing the system, a dataset of the American Sign Language alphabet was used instead. In this context, an application was developed that identifies and translates American Sign Language gestures performed by a person, using the conventional video camera found in most current smartphones. To achieve this goal, the software relies on Artificial Intelligence techniques known as Deep Learning. A pre-trained model called MobileNet v2, available in the model zoo provided by the PyTorch library, was used for training on the dataset. The proposed system was evaluated using statistics saved during the training of several classifier models, as well as tests that used an iPhone camera to capture images that were then classified on the device. It was concluded that, although the classifier reached 99% accuracy during validation, the developed system is still far from being able to bridge the communication barrier between people who communicate in American Sign Language and people who communicate in spoken language.
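
    The abstract names the concrete model: MobileNet v2 from the model zoo of the PyTorch ecosystem (torchvision), fine-tuned to classify alphabet letters. A minimal sketch of that adaptation step follows; the class count of 26 and the dummy input are assumptions, since the dissertation's exact training setup is not given here.

    # Adapt pre-trained MobileNet v2 to an assumed 26-letter alphabet task.
    import torch
    from torch import nn
    from torchvision import models

    num_letters = 26  # assumption: one class per static alphabet letter

    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    # Replace the final classifier layer with one sized for our classes.
    model.classifier[1] = nn.Linear(model.last_channel, num_letters)

    # One forward pass on a dummy 224x224 RGB batch to check output shape.
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 26])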