
    Adaptive, Multisensorial, Physiological and Social: The Next Generation of Telerehabilitation Systems

    Some people require special treatment to rehabilitate physical, cognitive or even social capabilities after an accident or degenerative illness. However, the ever-increasing cost of caring for an aging population, many of whom suffer from chronic diseases, is straining the finances of healthcare systems across Europe. This situation has drawn a great deal of attention to the development of telerehabilitation (TR) systems, designed to take rehabilitation beyond hospitals and care centers. In this article, we propose the features that should be addressed in the development of TR systems: they should consider adaptive, multisensorial, physiological and social aspects. To this end, the Vi-SMARt research project is evaluating whether and how technologies such as virtual reality (VR), multisensorial feedback and telemonitoring may be exploited in the development of the next generation of TR systems. Beyond traditional aural and visual feedback, exploiting the haptic sense through devices such as haptic gloves or wristbands can provide patients with additional guidance during the rehabilitation process. For telemonitoring, electroencephalography (EEG) devices show promise, not only for monitoring patients' emotions, but also for obtaining neurofeedback useful for controlling their interaction with the system and thus providing a better rehabilitation experience.

    Quadrotor team modeling and control for DLO transportation

    This thesis proposes a dynamic model for the transport of deformable linear objects (DLOs) by a team of quadrotors. Three factors are involved in this model: (i) a dynamic model of the linear object to be transported; (ii) a dynamic model of the quadrotor that accounts for the passive dynamics and the effects of the DLO; (iii) a control strategy for efficient and robust transport. Two main tasks are distinguished: (a) reaching a quasi-stationary configuration with an equivalent load distribution shared among all the robots; (b) executing the transport of the whole system in a horizontal plane. Transport is carried out in a follow-the-leader column configuration, but each individual quadrotor must be robust enough to cope with all the nonlinearities caused by the DLO dynamics and by external disturbances such as wind. The quadrotor controllers have been designed to ensure the stability of the system and its fast convergence. Real-time and non-real-time control strategies have been compared and tested to assess their quality and their ability to adapt to the system's changing dynamic conditions. The scalability of the system has also been studied.
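The follow-the-leader column configuration described above can be illustrated with a toy sketch. This is not the thesis model: the DLO dynamics, wind disturbances and the full quadrotor dynamics are omitted, and the gains, spacing and speed are assumed values. Each follower simply tracks the robot ahead of it at a fixed offset with a proportional velocity controller.

```python
import numpy as np

DT = 0.01     # integration step [s] (assumed)
K_P = 2.0     # proportional gain (assumed)
OFFSET = 1.0  # desired spacing between consecutive quadrotors [m] (assumed)

def simulate(n_robots=4, steps=2000, leader_speed=0.5):
    """1-D column formation: leader advances, then holds; followers converge."""
    x = np.zeros(n_robots)  # positions along the column axis
    for k in range(steps):
        if k < steps // 2:
            x[0] += leader_speed * DT            # leader moves, then stops
        for i in range(1, n_robots):
            err = (x[i - 1] - OFFSET) - x[i]     # gap error to the robot ahead
            x[i] += K_P * err * DT               # proportional correction
    return x

final = simulate()
gaps = final[:-1] - final[1:]  # converge to OFFSET once the leader halts
```

While the leader is moving, pure proportional control leaves a steady-state lag of `leader_speed / K_P` per link; once the leader stops, the gaps converge to `OFFSET`.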

    Computational Approaches to Explainable Artificial Intelligence:Advances in Theory, Applications and Trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in AI and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state of the art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

    Aprendizaje máquina aplicado a la segmentación de imágenes ecográficas de la arteria carótida para la medida del grosor íntima-media

    Cardiovascular diseases are the leading cause of mortality, morbidity and disability worldwide. A large proportion of these diseases result from atherosclerosis, an illness that affects arterial blood vessels, causing the hardening and loss of elasticity of the arterial walls. Atherosclerosis is characterized by the thickening of the innermost layer of the arterial walls due to the accumulation of fatty material, cholesterol and other substances. It therefore narrows the arterial lumen, hindering normal blood flow. In the long term, it can lead to total occlusion of the affected vessel, preventing the flow of oxygen to the irrigated area and causing severe cardiovascular accidents. An early diagnosis of atherosclerosis is thus crucial for preventive purposes. In this sense, the intima-media thickness (IMT) of the common carotid artery is considered an early and reliable indicator of atherosclerosis and, therefore, of cardiovascular risk. The walls of blood vessels consist of three layers, from innermost to outermost: intima, media and adventitia. The IMT is defined as the distance between the lumen-intima and media-adventitia interfaces, and it is assessed by means of ultrasound images showing longitudinal cuts of the common carotid artery. This imaging modality is noninvasive and relatively low-cost, although it tends to be quite noisy and highly operator-dependent. Moreover, the IMT is usually measured manually by a specialist, who marks pairs of points on the image. These aspects give a subjective character to the IMT measurement and affect its reproducibility.
The motivation of this Ph.D. thesis is the improvement of the IMT evaluation process in ultrasound images of the common carotid artery. The main objective is to explore and propose different solutions based on Machine Learning for segmenting these images, in order to detect the lumen-intima and media-adventitia interfaces in the posterior wall of the vessel and measure the IMT without user interaction. The proposed strategies are therefore suitable both for diagnosis in daily clinical practice and for studies on large numbers of images. In particular, the IMT evaluation process is carried out in three fully automatic stages. The first stage is a pre-processing of the ultrasound image in which the region of interest (ROI), i.e. the far wall of the common carotid artery, is detected. The second stage identifies the interfaces defining the IMT. Finally, a post-processing stage refines the results and defines the final contours on which the IMT is measured. Two different proposals have been studied for ROI detection: one based on Mathematical Morphology and the other based on Machine Learning. Once the ROI is detected, the segmentation of the lumen-intima and media-adventitia interfaces is posed as a Pattern Recognition problem and solved with Machine Learning techniques. Four different configurations have been developed, using distinct architectures, training algorithms, representations of the input information and output space definitions. Segmentation is thus reduced to a classification of the pixels belonging to the ROI. The post-processing stage has been adapted to each of the proposed segmentation strategies to detect and eliminate possible misclassifications automatically.
An important part of the study is dedicated to the validation of the developed techniques. For this purpose, 79 images acquired with the same ultrasound equipment, but using different probes and different spatial resolutions, have been used. Two experts performed the manual segmentation of all the images. Taking as ground truth the average of four manual segmentations, two from each expert, the segmentation errors of the four strategies have been evaluated. The validation process is completed with a comparison between automatic and manual IMT measurements, characterized by means of box plots, linear regression analysis, Bland-Altman plots and different statistical parameters. The developed procedures have proven to be robust against the noise and artifacts that may appear in the ultrasound images. They also adapt to the anatomical and instrumental variability of the images, achieving a correct segmentation regardless of the appearance of the artery. The mean errors obtained are similar to, or even lower than, those of other automatic and semi-automatic methods reported in the literature. Moreover, as a result of using learning machines, the segmentation process stands out for its computational efficiency.
Doctoral programme in Information and Communication Technologies
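The core measurement, the distance between the lumen-intima (LI) and media-adventitia (MA) interfaces, can be sketched on a single synthetic intensity column. This is a hedged illustration, not the thesis pipeline: the ML pixel classifier is replaced by a naive edge detector, and the pixel spacing and intensity values are assumed.

```python
import numpy as np

PIXEL_MM = 0.06  # assumed axial pixel spacing [mm/pixel]

def measure_imt(column, pixel_mm=PIXEL_MM):
    """Locate the two strongest dark-to-bright transitions (taken as the
    LI and MA interfaces) and return their separation in millimetres."""
    grad = np.diff(column.astype(float))   # dark-to-bright edges are positive
    idx = np.argsort(grad)[-2:]            # two largest positive gradients
    li, ma = np.sort(idx)
    return (ma - li) * pixel_mm

# Synthetic far-wall column: dark lumen, bright intima, darker media,
# bright adventitia (LI at pixel 40, MA at pixel 50).
col = np.concatenate([
    np.full(40, 10),    # lumen
    np.full(2, 120),    # intima
    np.full(8, 40),     # media
    np.full(30, 150),   # adventitia
])
imt_mm = measure_imt(col)   # 10 pixels * 0.06 mm/pixel = 0.6 mm
```

On real, noisy B-mode images this gradient heuristic fails, which is precisely why the thesis casts interface detection as a pixel classification problem with a dedicated post-processing stage.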

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as identifying a disease from the signs and symptoms of the patient. To this end, the required information is gathered from different sources such as the physical examination, the medical history and general information about the patient. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms are capable of detecting hidden trends among the features of a database. Hence, classifying medical datasets with smart techniques paves the way for more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools and methods, this research applies the machine learning algorithms Classification and Regression Tree (CART), Random Forest (RF) and Extremely Randomized Trees, or Extra Trees (ET), to develop classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to build and applies to both quantitative and qualitative data. For classification problems, RF and ET combine a number of weak learners, such as CART, into stronger models. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD), and the databanks gathered in Ghaem Hospital's dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. The RF and ET methods were employed to classify the breast cancer type based on the WBCD; the developed models were found to forecast the WBCD type with 100% accuracy in all cases. The CART methodology was employed to choose the proper treatment approach for warts, as well as for CAD diagnosis.
The findings of the error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, with no model in the literature rivaling them. The outcome of this study supports the idea that methods like CART, RF and ET not only improve diagnostic precision, but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the input data, more extensive databases with a greater number of independent parameters may be required for further practical applications of the developed models.
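The RF and ET ensembles described above can be sketched with scikit-learn's bundled copy of the Wisconsin Breast Cancer data. This is an illustration only: the hyperparameters, split and random seed are assumed, not the authors' setup, so the scores will not match the paper's reported 100% accuracy.

```python
# Sketch: Random Forest and Extra Trees classifiers on the WBCD,
# each ensemble aggregating many CART-style weak learners.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

scores = {}
for name, model in [
    ("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("ET", ExtraTreesClassifier(n_estimators=200, random_state=0)),
]:
    model.fit(X_tr, y_tr)                     # grow the tree ensemble
    scores[name] = accuracy_score(y_te, model.predict(X_te))
```

A held-out test split, as above, is the minimal safeguard for the sensitivity to data quality and quantity that the abstract warns about.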

    Brain activity reconstruction from non-stationary M/EEG data using spatiotemporal constraints

    Magneto/electroencephalography (M/EEG)-based neuroimaging is a widely used noninvasive technique for the functional analysis of neuronal activity. Among its most prominent advantages are its very low implementation cost and its high temporal resolution. However, the number of locations at which the magnetic/electric fields are measured is relatively small (a couple of hundred at best), while the discretized brain activity generators (sources) number several thousand. This corresponds to an ill-posed mathematical problem commonly known as the M/EEG inverse problem. To solve such problems, additional a priori information must be assumed in order to obtain a unique and optimal solution. In the present work, a methodology is proposed to improve the accuracy and interpretability of the inverse problem solution using physiologically motivated assumptions. First, a method constraining the solution to a sparse representation in the space-time domain is introduced, together with a set of methodologies for tuning the parameters involved. Second, we propose a new source connectivity approach that explicitly includes spatiotemporal information about the neural activity extracted from the M/EEG recordings. The proposed methods are compared with state-of-the-art techniques in a simulated environment and are then validated on real-world data. In general, the contributed approaches are efficient and competitive with state-of-the-art brain mapping methods.
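The ill-posedness described above (far fewer sensors than sources) can be made concrete with a toy baseline. This is not the thesis method: it is the classical Tikhonov-regularised minimum-norm estimate, s_hat = L^T (L L^T + lambda*I)^(-1) y, with an assumed random leadfield and assumed dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 500     # assumed: many more sources than sensors

# Leadfield L maps source activity to sensor measurements (forward model).
L = rng.standard_normal((n_sensors, n_sources))
s_true = np.zeros(n_sources)
s_true[[50, 300]] = [1.0, -1.0]    # two sparse active sources
y = L @ s_true + 0.01 * rng.standard_normal(n_sensors)  # noisy recording

# Minimum-norm estimate: the a priori assumption is "smallest-energy source
# configuration", with lam trading data fit against that prior.
lam = 1.0
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

Minimum-norm solutions smear activity over many sources, which motivates the stronger priors the work proposes: sparse space-time representations and explicit source-connectivity constraints.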

    Ambient Intelligence Environment for Home Cognitive Telerehabilitation

    Higher life expectancy is increasing the number of age-related cognitive impairment cases. It is also relevant that, as some authors claim, physical exercise may be considered an adjunctive therapy to improve cognition and memory after strokes. Thus, integrating physical and cognitive therapies could offer potential benefits. In addition, these therapies are generally considered boring, so it is important to include features that improve patients' motivation. As a result, computer-assisted cognitive rehabilitation systems and serious games for health are more and more common. To achieve a continuous, efficient and sustainable rehabilitation, patients will have to carry out part of the rehabilitation in their own homes. However, current home systems lack the therapist's presence, which leads to two major challenges. First, such systems need sensors and actuators that compensate for the absence of the therapist's eyes and hands. Second, they need to capture and apply the therapist's expertise. With this aim, and building on our previous proposals, we propose an ambient intelligence environment for cognitive rehabilitation at home that combines physical and cognitive activities by implementing a Fuzzy Inference System (FIS) which gathers, as far as possible, the knowledge of a rehabilitation expert. Moreover, smart sensors and actuators attempt to make up for the absence of the therapist. The proposed system also features a remote monitoring tool, so that the therapist can supervise the patients' exercises. Finally, an evaluation is presented in which experts in the rehabilitation field showed their satisfaction with the proposed system. This work was partially supported by the Spanish Ministerio de Economía y Competitividad/FEDER under grant TIN2016-79100-R. Miguel Oliver holds an FPU scholarship (FPU13/03141) from the Spanish Government.
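How a FIS can encode a therapist's rules can be sketched in a few lines. This is a hypothetical zero-order Sugeno-style example with invented inputs, rules and membership functions, not the system's actual knowledge base: it maps a patient's error rate and fatigue to an exercise-difficulty adjustment.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_difficulty(error_rate, fatigue):
    """Inputs in [0, 1]; returns a difficulty adjustment in [-1, 1]."""
    # Fuzzify: degree to which each input is "low" or "high".
    err_low = tri(error_rate, -0.5, 0.0, 0.5)
    err_high = tri(error_rate, 0.5, 1.0, 1.5)
    fat_low = tri(fatigue, -0.5, 0.0, 0.5)
    fat_high = tri(fatigue, 0.5, 1.0, 1.5)
    # Expert-style rules (assumed): low error AND low fatigue -> raise
    # difficulty (+1); high error OR high fatigue -> lower it (-1).
    w_up = min(err_low, fat_low)
    w_down = max(err_high, fat_high)
    if w_up + w_down == 0.0:
        return 0.0
    # Defuzzify: weighted average of the singleton rule outputs.
    return (w_up * 1.0 + w_down * -1.0) / (w_up + w_down)
```

In a real deployment the inputs would come from the environment's smart sensors, and the rule base and memberships would be elicited from the rehabilitation expert rather than hard-coded.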