84 research outputs found

    Clinical Decision Support Systems with Game-based Environments, Monitoring Symptoms of Parkinson’s Disease with Exergames

    Parkinson’s Disease (PD) is a disorder caused by progressive neuronal degeneration, resulting in several physical and cognitive symptoms that worsen over time. Like many other chronic diseases, it requires constant monitoring to adjust medication and therapy, because PD symptomatology and progression vary significantly between patients. At present, this monitoring requires substantial participation from caregivers and numerous clinic visits. Personal diaries and questionnaires serve as data sources for medication and therapeutic adjustments, but their subjectivity leads to suboptimal clinical decisions. Therefore, more objective data sources are required to better monitor the progress of individual PD patients. Clinical decision support systems are a potential contribution towards more objective monitoring of PD. These systems employ sensors and classification techniques to provide caregivers with objective information for their decision-making, enabling more objective assessments of patient improvement or deterioration and, in turn, better adjusted medication and therapeutic plans. However, encouraging patients to actively and regularly provide data for remote monitoring remains a significant challenge. To address this challenge, the goal of this thesis is to combine clinical decision support systems with game-based environments. More specifically, serious games in the form of exergames (active video games that involve physical exercise) shall be used to deliver objective data for PD monitoring and therapy. Exergames increase engagement while combining physical and cognitive tasks. This combination, known as dual-tasking, has been shown to improve rehabilitation outcomes in PD: recent randomized clinical trials on exergame-based rehabilitation in PD show improvements in clinical outcomes that are equal or superior to those of traditional rehabilitation.
In this thesis, we present an exergame-based clinical decision support system model to monitor symptoms of PD. This model provides both objective information on PD symptoms and an engaging environment for the patients. The model is elaborated, prototypically implemented and validated in the context of two of the most prominent symptoms of PD: (1) balance and gait, as well as (2) hand tremor and slowness of movement (bradykinesia). While balance and gait impairments increase the risk of falling, hand tremors and bradykinesia affect hand dexterity. We employ Wii Balance Boards and Leap Motion sensors, and digitalize aspects of current clinical standards used to assess PD symptoms. In addition, we present two dual-tasking exergames: PDDanceCity for balance and gait, and PDPuzzleTable for tremor and bradykinesia. We evaluate the capability of our system for assessing the risk of falling and the severity of tremor in comparison with clinical standards. We also explore the statistical significance and effect size of the data we collect from PD patients and healthy controls. We demonstrate that the presented approach can predict an increased risk of falling and estimate tremor severity. In addition, the target population shows good acceptance of PDDanceCity and PDPuzzleTable. In summary, our results indicate a clear feasibility to implement this system for PD. Nevertheless, long-term randomized clinical trials are required to evaluate the potential of PDDanceCity and PDPuzzleTable for physical and cognitive rehabilitation effects.
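The balance-and-gait side of this thesis rests on Wii Balance Board recordings of postural sway. A metric commonly used for fall-risk screening in this literature is the path length of the centre of pressure (COP). The sketch below is illustrative only; the function name and the four-frame recording are hypothetical, not taken from the thesis:

```python
import math

def sway_path_length(cop_samples):
    """Total distance travelled by the centre of pressure (COP).

    cop_samples: list of (x, y) COP coordinates in cm, one per frame.
    A longer sway path over a fixed recording window is commonly
    associated with poorer postural control and higher fall risk.
    """
    return sum(math.dist(a, b) for a, b in zip(cop_samples, cop_samples[1:]))

# Hypothetical 4-frame recording: the COP drifts 1 cm right, then 1 cm up.
samples = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (1.0, 1.0)]
print(sway_path_length(samples))  # 2.0
```

A real system would first convert the board's four corner load-cell readings into COP coordinates and sample at a fixed rate; this sketch starts from the COP series directly.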

    Hand tracking for clinical applications: validation of the Google MediaPipe Hand (GMH) and the depth-enhanced GMH-D frameworks

    Accurate 3D tracking of hand and finger movements poses significant challenges in computer vision. The potential applications span multiple domains, including human-computer interaction, virtual reality, industry, and medicine. While gesture recognition has achieved remarkable accuracy, quantifying fine movements remains a hurdle, particularly in clinical applications, where the assessment of hand dysfunctions and rehabilitation training outcomes necessitates precise measurements. Several novel and lightweight frameworks based on Deep Learning have emerged to address this issue; however, their performance in accurately and reliably measuring finger movements requires validation against well-established gold standard systems. In this paper, the aim is to validate the hand-tracking framework implemented by Google MediaPipe Hand (GMH) and an innovative enhanced version, GMH-D, which exploits the depth estimation of an RGB-Depth camera to achieve more accurate tracking of 3D movements. Three dynamic exercises commonly administered by clinicians to assess hand dysfunctions, namely Hand Opening-Closing, Single Finger Tapping and Multiple Finger Tapping, are considered. Results demonstrate high temporal and spectral consistency of both frameworks with the gold standard. However, the enhanced GMH-D framework exhibits superior accuracy in spatial measurements compared to the baseline GMH, for both slow and fast movements. Overall, our study contributes to the advancement of hand tracking technology, establishes a validation procedure as good practice for proving the efficacy of deep-learning-based hand tracking, and proves the effectiveness of GMH-D as a reliable framework for assessing 3D hand movements in clinical applications.
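The Single Finger Tapping exercise named above reduces, at its core, to tracking the thumb-index aperture over time. Below is a minimal sketch of that signal on a MediaPipe-Hand-style 21-landmark model; the frames are synthetic and the helper names are ours, not part of the GMH/GMH-D frameworks:

```python
import math

# Landmark indices in a MediaPipe-Hand-style 21-point hand model.
THUMB_TIP, INDEX_TIP = 4, 8

def tap_aperture(frames):
    """Thumb-index distance per frame: the raw signal behind the
    Single Finger Tapping exercise.

    frames: list of landmark sets, each a list of 21 (x, y, z) tuples,
    shaped like the output of a GMH/GMH-D-style tracker (synthetic here).
    """
    return [math.dist(f[THUMB_TIP], f[INDEX_TIP]) for f in frames]

def make_frame(aperture):
    """Synthetic frame: all landmarks at the origin except the index tip."""
    pts = [(0.0, 0.0, 0.0)] * 21
    pts[INDEX_TIP] = (0.0, aperture, 0.0)
    return pts

# Two taps: the aperture opens and closes twice.
frames = [make_frame(a) for a in (0.01, 0.08, 0.02, 0.09)]
signal = tap_aperture(frames)
print(round(max(signal) - min(signal), 3))  # tap amplitude: 0.08
```

From a signal like this, tap amplitude, frequency and decrement can be derived; validating those spatial quantities against a gold standard is exactly where the depth-enhanced GMH-D variant is reported to outperform the baseline.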

    Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review

    The assessment of the surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore subjective in nature. Nevertheless, recent advances in IoT, the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. 101 out of an initial list of 537 papers were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms. 
In particular: 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVM (Support Vector Machines) and Neural Networks are the preferred statistical methods and algorithms for processing the data collected, while new opportunities are opening up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators with a lexicon of words and visualizations, although there is considerable room for research in the context of feedback and visualizations, taking, for example, ideas from learning analytics.

This work was supported in part by the FEDER/Ministerio de Ciencia, Innovación y Universidades; Agencia Estatal de Investigación, through the Smartlet Project under Grant TIN2017-85179-C3-1-R, and in part by the Madrid Regional Government through the e-Madrid-CM Project under Grant S2018/TCS-4307, a project which is co-funded by the European Structural Funds (FSE and FEDER). Partial support has also been received from the European Commission through Erasmus+ Capacity Building in the Field of Higher Education projects, more specifically through projects LALA (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP), InnovaT (598758-EPP-1-2018-1-AT-EPPKA2-CBHE-JP), and PROF-XXI (609767-EPP-1-2019-1-ES-EPPKA2-CBHE-JP).
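Several of the indicators listed, such as path length, number of sub-movements (local peaks in the speed profile) and total time, can be computed directly from a sampled tool-tip trajectory. The sketch below is a rough illustration, not taken from any of the reviewed papers; the function name and the trajectory are hypothetical:

```python
import math

def skill_indicators(trajectory, dt):
    """Three indicators from the review: path length, number of
    sub-movements (local peaks in the speed profile), and total time.

    trajectory: list of (x, y, z) tool-tip positions sampled every dt seconds.
    """
    steps = [math.dist(a, b) for a, b in zip(trajectory, trajectory[1:])]
    path_length = sum(steps)
    speed = [s / dt for s in steps]
    submovements = sum(
        1 for i in range(1, len(speed) - 1)
        if speed[i] > speed[i - 1] and speed[i] > speed[i + 1]
    )
    total_time = dt * (len(trajectory) - 1)
    return path_length, submovements, total_time

# Hypothetical 1D tool path sampled at 10 Hz: two bursts of fast motion.
traj = [(0.0, 0, 0), (1.0, 0, 0), (3.0, 0, 0),
        (4.0, 0, 0), (6.0, 0, 0), (7.0, 0, 0)]
print(skill_indicators(traj, 0.1))  # (path length, sub-movements, total time)
```

Experts typically show shorter path lengths and fewer sub-movements for the same task, which is what lets classifiers such as SVMs separate the expertise levels mentioned above.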

    GestureMoRo: an algorithm for autonomous mobile robot teleoperation based on gesture recognition

    Gestures are a common way for people to communicate. Gesture-based teleoperation control systems tend to be simple to operate and suitable for most people’s daily use. This paper employed a Leap Motion sensor to develop a mobile robot control system based on gesture recognition, which establishes connections through a client/server structure. The principles of gesture recognition in the system were studied, and a self-developed algorithm, GestureMoRo, was designed to map gestures to mobile robot commands. Moreover, in order to avoid unstable, fluctuating movement of the mobile robot caused by palm shaking, a Gaussian filter was used to smooth and denoise the collected gesture data, which effectively improved the robustness and stability of the mobile robot’s locomotion. Finally, the teleoperation control strategy from gestures to the WATER2 mobile robot was realized, and the effectiveness and practicality of the designed system were verified through multiple experiments.
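The denoising step described above, a Gaussian filter over the palm samples before they drive the robot, amounts to a 1D weighted moving average. The following is an illustrative reimplementation, not the authors' code; the sigma and radius values are arbitrary:

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete, normalised Gaussian kernel of width 2 * radius + 1."""
    w = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

def smooth(samples, sigma=1.0, radius=2):
    """Smooth a 1D stream of palm coordinates, clamping at the edges."""
    kernel = gaussian_kernel(sigma, radius)
    n = len(samples)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += w * samples[idx]
        out.append(acc)
    return out

# A single-sample spike (hand jitter) is strongly attenuated.
noisy = [0.0, 0.0, 1.0, 0.0, 0.0]
print(smooth(noisy)[2] < 1.0)  # True
```

Because the kernel is normalised, a steady palm position passes through unchanged while brief tremor spikes are damped, which is the stability property the paper attributes to this filtering stage.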

    Review of three-dimensional human-computer interaction with focus on the leap motion controller

    Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. Thus, the purpose of this paper is to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their application areas and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

    Updates of Wearing Devices (WDs) In Healthcare, And Disease Monitoring

    With a growing and aging population, the rising prevalence of chronic illnesses, and consistently rising medical costs, the health care system is undergoing a crucial transition from the conventional hospital-centered model to an individual-centered one. Since the twentieth century, wearable sensors have become widespread in medical care and biomedical monitoring systems, enabling continuous measurement of biomarkers for monitoring of disease state and wellbeing, clinical diagnostics, and assessment in biological fluids such as saliva, blood, and sweat. Recently, development has centered on electrochemical and optical biosensors, alongside advances in the non-invasive monitoring of biomarkers, bacteria, hormones, and more. Wearable devices have evolved to combine multiplexed biosensing with microfluidic sampling and transport systems, integrated with flexible materials and body attachments for improved wearability and simplicity. These wearables hold promise and can yield a deeper understanding of the relationships between analyte concentrations in blood or non-invasive biofluids and feedback to the patient, which is critically important for timely diagnosis, treatment, and control of diseases. However, cohort validation studies and performance assessments of wearable biosensors are needed to support their clinical acceptance. In the current review, we discuss the significance, features, types, challenges, and applications of wearable devices for biological fluids, for the prevention of disease and real-time monitoring of human health. We summarize the various wearable devices that have been developed for health care monitoring and discuss their future potential in detail.

    Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state-of-the-art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

