    Sign language video retrieval with free-form textual queries

    Systems that can efficiently search collections of sign language videos have been highlighted as a useful application of sign language technology. However, the problem of searching videos beyond individual keywords has received limited attention in the literature. To address this gap, we introduce the task of sign language retrieval with textual queries: given a written query (e.g. a sentence) and a large collection of sign language videos, the objective is to find the signing video that best matches the written query. We propose to tackle this task by learning cross-modal embeddings on the recently introduced large-scale How2Sign dataset of American Sign Language (ASL). We identify that a key bottleneck in the performance of the system is the quality of the sign video embedding, which suffers from a scarcity of labelled training data. We therefore propose SPOT-ALIGN, a framework that interleaves iterative rounds of sign spotting and feature alignment to expand the scope and scale of the available training data. We validate the effectiveness of SPOT-ALIGN for learning a robust sign video embedding through improvements in both sign recognition and the proposed video retrieval task.

    This work was supported by the project PID2020-117142GB-I00, funded by MCIN/AEI/10.13039/501100011033, the ANR project CorVis ANR-21-CE23-0003-01, and gifts from Google and Adobe. AD received support from la Caixa Foundation (ID 100010434), fellowship code LCF/BQ/IN18/11660029.
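
    A minimal sketch of the cross-modal retrieval setup described above: a dual encoder projects pre-extracted text and video features into a shared embedding space, is trained with a symmetric contrastive loss, and answers a query by ranking videos by cosine similarity. The feature dimensions, projection heads, and loss below are illustrative assumptions, not the architecture from the paper.

```python
# Hedged sketch of text-to-sign-video retrieval via cross-modal embeddings.
# Dimensions, encoders, and the InfoNCE loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, text_dim=768, video_dim=1024, embed_dim=256):
        super().__init__()
        # Project pre-extracted text and video features into a shared space.
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.video_proj = nn.Linear(video_dim, embed_dim)

    def forward(self, text_feats, video_feats):
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        return t, v

def contrastive_loss(t, v, temperature=0.07):
    # Symmetric InfoNCE: matching (text, video) pairs lie on the diagonal
    # of the batch similarity matrix.
    logits = t @ v.T / temperature
    targets = torch.arange(t.size(0), device=t.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

def retrieve(query_emb, video_embs, k=5):
    # Rank all videos in the collection by cosine similarity to the query.
    sims = video_embs @ query_emb
    return sims.topk(k).indices
```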

    Data and methods for a visual understanding of sign languages

    Signed languages are complete and natural languages used as the first or preferred mode of communication by millions of people worldwide. Unfortunately, they continue to be marginalized languages. Designing, building, and evaluating models that work on sign languages presents compelling research challenges and requires interdisciplinary and collaborative efforts. Recent advances in Machine Learning (ML) and Artificial Intelligence (AI) have the potential to enable better accessibility for sign language users and to narrow the existing communication barrier between the Deaf community and non-sign language users. However, recent AI-powered technologies still do not account for sign language in their pipelines, mainly because sign languages are visual languages that use manual and non-manual features to convey information and do not have a standard written form. The goal of this thesis is therefore to contribute to the development of new technologies that account for sign language, by creating large-scale multimodal resources suitable for training modern data-hungry machine learning models and by developing automatic systems for computer vision tasks aimed at a better visual understanding of sign languages.

    In Part I, we introduce the How2Sign dataset, a large-scale collection of multimodal and multiview sign language videos in American Sign Language. In Part II, we contribute to the development of technologies that account for sign languages. In Chapter 4, we present Spot-Align, a framework based on sign spotting methods for automatically annotating sign instances in continuous sign language; we demonstrate the benefits of this framework and establish a baseline for the sign language recognition task on the How2Sign dataset. In Chapter 5, we leverage the different annotations and modalities of How2Sign to explore sign language video retrieval by learning cross-modal embeddings. Finally, in Chapter 6, we explore sign language video generation by applying Generative Adversarial Networks to the sign language domain, and we assess if and how well sign language users can understand automatically generated sign language videos by proposing an evaluation protocol based on How2Sign topics and English translations.
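
    As a rough illustration of the sign-spotting idea behind the automatic annotation described above, the sketch below slides a window over per-frame features of a continuous signing video and flags windows whose pooled embedding is close to an isolated-sign exemplar. The feature shapes, pooling, and threshold are hypothetical; the actual Spot-Align framework additionally interleaves spotting with feature alignment.

```python
# Illustrative sign spotting over continuous signing video; all shapes,
# the mean pooling, and the threshold are assumptions for this sketch.
import numpy as np

def spot_sign(video_feats, exemplar, window=16, stride=4, threshold=0.8):
    """video_feats: (T, D) per-frame features; exemplar: (D,) unit vector."""
    hits = []
    for start in range(0, len(video_feats) - window + 1, stride):
        clip = video_feats[start:start + window].mean(axis=0)
        clip = clip / (np.linalg.norm(clip) + 1e-8)  # L2-normalize the window
        score = float(clip @ exemplar)               # cosine similarity
        if score >= threshold:
            hits.append((start, start + window, score))
    return hits  # candidate sign instances to use as automatic annotations
```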

    Estimation of hand configurations from sign language videos (Käden konfiguraatioiden estimointi viittomakielisistä videoista)

    A computer vision system is presented that can locate and classify the handshape in an individual sign language video frame using a synthetic 3D model. The system requires no training data; only phonetically motivated descriptions of sign language hand configuration classes are required. Experiments were conducted on realistically low-quality sign language video dictionary footage to test various features and metrics for fitting the camera parameters of a fixed synthetic hand model so that the rendered model best matches the input frame. Histogram of Oriented Gradients (HOG) features with the Euclidean distance turned out to be suitable for this purpose. A novel approach called Trimmed HOGs, paired with the Earth Mover's Distance, as well as simple contours and Canny edges with the chamfer distance, also performed favorably. Minimizing the cost function built from these measures with gradient descent optimization further improved the camera parameter fitting results. Classifying handshape images into hand configuration classes with nearest-neighbor classifiers, built on the chamfer distance between contours and Canny edges and on the χ² distance between Pyramidal HOG descriptors, yielded reasonable accuracy. Although the system displayed only moderate success rates in the full 26-class scenario, it reached nearly perfect discriminatory accuracy in a binary classification case and up to 40% accuracy when images from a restricted set of 12 classes were classified into six hand configuration groups. Considering that the footage used to evaluate the system was of very poor quality, the evaluated methods may, with future improvements, serve as the basis for a practical system for automatic annotation of sign language video corpora.
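
    The camera-parameter fitting step described above can be sketched as minimizing a HOG-based distance between the input frame and a rendering of the synthetic hand model. The render() function, HOG settings, and the finite-difference gradient approximation are assumptions for illustration; the thesis's exact cost functions and optimizer settings may differ.

```python
# Hedged sketch: fit camera parameters of a synthetic hand model by
# gradient descent on a HOG distance. render() is a hypothetical function
# returning a grayscale image with the same shape as the input frame.
import numpy as np
from skimage.feature import hog

def hog_cost(frame, render, params):
    # Euclidean distance between HOG descriptors of input and rendering.
    h1 = hog(frame, orientations=9, pixels_per_cell=(8, 8))
    h2 = hog(render(params), orientations=9, pixels_per_cell=(8, 8))
    return np.linalg.norm(h1 - h2)

def fit_camera(frame, render, params, lr=0.05, eps=1e-2, steps=100):
    # Rendering is not differentiable here, so approximate the gradient
    # with central finite differences over the camera parameters.
    params = params.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            d = np.zeros_like(params)
            d[i] = eps
            grad[i] = (hog_cost(frame, render, params + d) -
                       hog_cost(frame, render, params - d)) / (2 * eps)
        params -= lr * grad  # descend toward a better model-to-frame match
    return params
```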

    An exploratory study of informal science learning by children ages 2-12 years at selected U.S. children's gardens

    This exploratory study was conducted at four children’s gardens in major botanical gardens across the United States to determine whether children became more aware and knowledgeable of plants while visiting these gardens. This was assessed from the perspectives of the children’s garden stakeholders, here the children and parents who visited the gardens, whose views were gathered through on-site observations and interviews. The purposive sample comprised 64 participants, including 40 children (19 girls and 21 boys, ages 2-12 years); the adults were 18 mothers, 3 fathers, 3 grandmothers, and 1 grandfather. The 40 children were observed and 30 children were interviewed; a total of 25 parents or guardians were interviewed. The study determined that the children’s learning was contextual, i.e., influenced by the garden and the participatory garden features they visited. For example, children who visited features facilitated by a trained volunteer that taught plant concepts were able to repeat and explain the lesson, while in gardens that provided opportunities for independent exploration with natural components such as water, children made some very advanced observations about plants. The study also found that children’s previous experiences with plants heightened their awareness of plants in the children’s garden: especially on their walks through the regular botanical garden areas to the children’s garden, many children noticed and asked questions about plants.