
    Machine learning approaches to video activity recognition: from computer vision to signal processing

    244 p. The research presented focuses on classification techniques for two different but related tasks, such that the second can be considered part of the first: human action recognition in videos and sign language recognition. In the first part, the starting hypothesis is that transforming the signals of a video with the Common Spatial Patterns algorithm (CSP, commonly used in electroencephalography systems) can yield new features that are useful for the subsequent classification of the videos with supervised classifiers. Experiments have been carried out on several databases, including one created during this research from the point of view of a humanoid robot, with the aim of deploying the developed recognition system to improve human-robot interaction. In the second part, the techniques developed earlier are applied to sign language recognition; in addition, a method based on the decomposition of signs is proposed to recognize them, adding the possibility of better explainability. The final goal is to develop a sign language tutor capable of guiding users through the learning process, making them aware of the errors they make and the reasons for those errors.

    Development of a generalizable prototype for image analysis and examination based on a Deep Learning model: implemented on the Nvidia Jetson TX1 platform

    In this project we have worked with different methods used in computer vision with the objective of using them for image classification. To that end, image pre-processing has been carried out and different classification models have been developed. Specifically, different machine learning and deep learning techniques have been developed and tested, analyzing the results obtained in the classification of a set of images. DIGITS, a software tool offered by NVIDIA, has also been used to run further tests. In addition, we have worked with different hardware used for deep learning, such as the Jetson TX1, in order to gain first contact with the technologies used in mobile robots.

    HAKA: HierArchical Knowledge Acquisition in a sign language tutor

    Communication between people from different communities can sometimes be hampered by a lack of knowledge of each other's language. Many people need to learn a language to ensure fluid communication, or want to do so out of intellectual curiosity. Tutor tools have been developed to assist language learners. In this paper we present a tutor for learning the 42 basic hand configurations of Spanish Sign Language, as well as more than one hundred common words. The tutor captures the user's image from an off-the-shelf webcam and challenges her to perform the hand configuration she chooses to practice. The system finds, among the 42 configurations in its database, the one closest to the configuration performed by the user and shows it to her, helping her improve through real-time knowledge of her errors. Similarities between configurations are computed using Procrustes analysis. A table of the most frequent mistakes is also recorded and made available to the user. The user may then choose a word and practice the hand configurations needed for that word. Sign languages have historically been neglected, and deaf people still face important challenges in their daily activities. This research is a first step in the development of a Spanish Sign Language tutor, and the tool is available as open source. A multidimensional scaling analysis of the clustering of the 42 hand configurations induced by Procrustes similarity is also presented. This work has been partially funded by the Basque Government, Spain, under Grant number IT1427-22; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
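
    The abstract does not reproduce the tutor's code, but the matching step it describes (finding, under Procrustes similarity, the database hand configuration closest to the one performed by the user) can be sketched in a few lines. The following Python sketch is illustrative only: the landmark format, the function names and the use of scipy.spatial.procrustes as the disparity measure are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import procrustes

def procrustes_distance(config_a, config_b):
    """Procrustes disparity between two sets of hand landmarks.

    Both inputs are (n_landmarks, 2) arrays; the disparity is the sum of
    squared differences after optimal translation, scaling and rotation.
    """
    _, _, disparity = procrustes(config_a, config_b)
    return disparity

def closest_configuration(user_config, database):
    """Return the name of the database configuration closest to the user's.

    `database` maps configuration names (e.g. the 42 hand shapes) to
    (n_landmarks, 2) landmark arrays.
    """
    return min(database, key=lambda name: procrustes_distance(user_config, database[name]))

# Toy usage with random landmarks standing in for real webcam detections.
rng = np.random.default_rng(0)
database = {f"config_{i}": rng.normal(size=(21, 2)) for i in range(42)}
user_config = database["config_7"] + rng.normal(scale=0.01, size=(21, 2))
print(closest_configuration(user_config, database))  # should print config_7
```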

    dbcsp: User-friendly R package for Distance-Based Common Spatial Patterns

    Common Spatial Patterns (CSP) is a widely used method for analysing electroencephalography (EEG) data in the supervised classification of brain activity. More generally, it can be useful for distinguishing between multivariate signals recorded during a time span for two different classes. CSP is based on the simultaneous diagonalization of the average covariance matrices of the signals from both classes, and it allows the data to be projected into a low-dimensional subspace. Once the data are represented in a low-dimensional subspace, a classification step must be carried out. The original CSP method is based on the Euclidean distance between signals; here we extend it so that it can be applied with any distance appropriate for the data at hand. Both the classical CSP and the new Distance-Based CSP (DB-CSP) are implemented in an R package called dbcsp.
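
    As a rough illustration of the CSP computation described above, the simultaneous diagonalization of the two average covariance matrices can be posed as a generalized eigenvalue problem. The sketch below is in Python rather than R, so it does not show the dbcsp API; the trial format (channels x time), the function names and the number of retained components are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

def average_covariance(trials):
    """Average normalized spatial covariance over a list of (channels, time) trials."""
    covs = []
    for x in trials:
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, n_components=3):
    """Spatial filters from simultaneous diagonalization of class covariances.

    Solves the generalized eigenproblem  C_a w = lambda (C_a + C_b) w  and keeps
    the eigenvectors with the largest and smallest eigenvalues, i.e. the
    directions where the variance ratio between the two classes is most extreme.
    """
    c_a = average_covariance(trials_a)
    c_b = average_covariance(trials_b)
    eigvals, eigvecs = eigh(c_a, c_a + c_b)   # eigenvalues in ascending order
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_components], order[-n_components:]])
    return eigvecs[:, picks]                  # shape (channels, 2 * n_components)

def log_variance_features(trial, filters):
    """Project one (channels, time) trial and return log-variance features."""
    projected = filters.T @ trial
    var = projected.var(axis=1)
    return np.log(var / var.sum())
```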

    Using Common Spatial Patterns to Select Relevant Pixels for Video Activity Recognition

    Itsaso Rodríguez-Moreno, José María Martínez-Otzeta, Basilio Sierra, Itziar Irigoien, Igor Rodriguez-Rodriguez and Izaro Goienetxea. Department of Computer Science and Artificial Intelligence, University of the Basque Country, Manuel Lardizabal 1, 20018 Donostia-San Sebastián, Spain. Appl. Sci. 2020, 10(22), 8075; https://doi.org/10.3390/app10228075. Video activity recognition, despite being an emerging task, has been the subject of important research because of its many everyday applications. Video camera surveillance could benefit greatly from advances in this field. In the area of robotics, the tasks of autonomous navigation or social interaction could also take advantage of the knowledge extracted from live video recordings. In this paper, a new approach to video action recognition is presented. The new technique consists of taking a method usually used in Brain Computer Interface (BCI) systems for electroencephalography (EEG) and adapting it to this problem. After describing the technique, the results achieved are shown and a comparison with another method is carried out to analyze the performance of our new approach. This work has been partially funded by the Basque Government, Research Teams grant number IT900-16, ELKARTEK 3KIA project KK-2020/00049, and the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), and the European Regional Development Fund (FEDER), grant number RTI2018-093337-B-I100 (MCIU/AEI/FEDER, UE). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
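
    The abstract does not spell out how the CSP filters are adapted to video, but one plausible reading of the title is that each pixel's intensity over time is treated as one channel of a multivariate signal, so that CSP filter weights can be used to rank pixels. The sketch below follows that reading; the flattening step and the ranking by maximum absolute filter weight are assumptions made for illustration, not the authors' published method.

```python
import numpy as np

def video_to_signals(frames):
    """Flatten a grayscale video of shape (time, height, width) so that every
    pixel becomes one channel of a multivariate signal of shape (pixels, time)."""
    t, h, w = frames.shape
    return frames.reshape(t, h * w).T

def relevant_pixels(filters, frame_shape, n_pixels=100):
    """Rank pixels by the largest absolute weight they receive in any CSP filter.

    `filters` has shape (pixels, n_filters), e.g. as returned by a CSP routine
    such as the csp_filters sketch above; returns (rows, cols) index arrays of
    the n_pixels highest-ranked pixels.
    """
    importance = np.abs(filters).max(axis=1)
    top = np.argsort(importance)[::-1][:n_pixels]
    return np.unravel_index(top, frame_shape)
```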

    RANSAC for Robotic Applications: A Survey

    Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust method for estimating the parameters of a model from data contaminated by a sizable percentage of outliers. In its simplest form, the process starts with the sampling of the minimum data needed to perform an estimation, followed by an evaluation of its adequacy, and further repetitions of this process until some stopping criterion is met. Multiple variants have been proposed in which this workflow is modified, typically tweaking one or several of these steps to improve computing time or the quality of the parameter estimates. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC family methods with a special interest in applications in robotics. This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737.
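
    The basic workflow summarized above (sample a minimal subset, fit a candidate model, count its inliers, repeat, keep the best-supported candidate) can be made concrete with a small example. The following Python sketch fits a 2-D line and is not taken from the survey; the inlier threshold, the number of iterations and the final least-squares refit are illustrative choices.

```python
import numpy as np

def ransac_line(points, n_iters=500, inlier_threshold=0.05, rng=None):
    """Fit a 2-D line y = a*x + b with RANSAC.

    points: (n, 2) array. Each iteration samples the minimal set (2 points),
    fits a candidate line, counts the points within `inlier_threshold` of it,
    and keeps the candidate supported by the most inliers.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):            # degenerate (vertical) sample, skip it
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < inlier_threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    # Optional refinement: least-squares refit on the inliers of the best model.
    if best_model is not None and best_inliers.sum() >= 2:
        a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
        best_model = (a, b)
    return best_model, best_inliers
```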

    New perspectives on corpora amylacea in the human brain

    Corpora amylacea are structures of unknown origin and function that appear with age in human brains and are profuse in selected brain areas in several neurodegenerative conditions. They are composed of glucose polymers and may contain waste elements derived from different cell types. As we previously found for particular polyglucosan bodies in the mouse brain, we report here that corpora amylacea present some neo-epitopes that can be recognized by natural antibodies, a kind of antibody involved in tissue homeostasis. We hypothesize that corpora amylacea, and probably some other polyglucosan bodies, are waste containers in which deleterious or residual products are isolated to be later eliminated through the action of the innate immune system. In any case, the presence of neo-epitopes on these structures and the existence of natural antibodies directed against them could become a new focal point for the study of both age-related and degenerative brain processes.

    Robotics as a didactic tool for students with autism spectrum disorders: a systematic review

    This article describes the quantitative and qualitative results of a study aimed at identifying trends and innovation opportunities in the field of socio-educational robotics used as a didactic tool, with the goal of developing the skills, abilities and competencies of students with specific educational support needs who attend mainstream school classes, specifically students with autism spectrum disorders. A systematic review of the literature was carried out through a rigorously defined search strategy. The results obtained allow us to identify advances in terms of didactic models based on the use of robotics as an educational tool; pedagogical activities and didactic resources; evaluation criteria, strategies and instruments; and experiences of application in real school contexts.

    Study of the transport of substances across the blood-brain barrier with the 8D3 anti-transferrin receptor antibody

    The complete book is available at: http://hdl.handle.net/2445/128014. Numerous strategies have been proposed to overcome the blood-brain barrier (BBB) and efficiently deliver therapeutic agents to the brain. One of these strategies consists of linking the pharmacologically active substance to a molecular vector that acts as a molecular Trojan horse and is capable of crossing the BBB using a receptor-mediated transcellular transport system of the brain capillary endothelial cells (BCECs). The transferrin receptor (TfR) is involved in a transcytosis process in these cells, and the 8D3 monoclonal antibody (mAb), directed against the mouse TfR, is able to induce a receptor response. Thus, the 8D3 antibody could be a potential molecular Trojan horse to transport pharmacologically active substances across the BBB. On this basis, a series of experiments were performed in which the 8D3 antibody was conjugated to different cargoes, the resulting constructs were administered in vivo to mice, and the distribution and intracellular mechanisms that these constructs undergo at the BBB were studied. Our results indicated a TfR-mediated and clathrin-dependent internalization process by which the 8D3-cargo constructs enter the BCECs. The resulting endocytic vesicles follow at least two different routes. On the one hand, most vesicles enter intracellular processes of vesicular fusion and rearrangement in which the cargo is guided to late endosomes, multivesicular bodies or lysosomes. On the other hand, a small but not negligible percentage of the vesicles follow a different route in which they fuse with the abluminal membrane and open towards the basal lamina, indicating a potential route for the delivery of therapeutic substances. In this route, however, the 8D3-cargo constructs remain fixed to the abluminal membrane, indicating that the 8D3 remains linked to the TfR and the cargo does not go beyond the basal membrane. Altogether, different optimization approaches need to be developed for efficient drug delivery, but receptor-mediated transport (RMT) continues to be one of the most promising strategies for overcoming the BBB.