
    Neuromechanical Biomarkers for Robotic Neurorehabilitation

    One of the current challenges for translational rehabilitation research is to develop strategies that deliver accurate evaluation, prediction, patient selection, and decision-making in clinical practice. In this regard, robot-assisted interventions have gained popularity because they can provide an objective and quantifiable assessment of motor performance by taking kinematic parameters into account. Neurophysiological parameters have also been proposed for this purpose, thanks to recent advances in non-invasive signal processing techniques. In addition, other parameters linked to motor learning and to the brain plasticity occurring during rehabilitation have been explored, in the search for a more holistic rehabilitation approach. However, the majority of research in this area is still exploratory. These parameters have shown the potential to become "biomarkers", defined as quantifiable indicators of physiological/pathological processes and of responses to therapeutic interventions; as such, they could ultimately be used to enhance robot-assisted treatments. While research on biomarkers has grown in recent years, there is still a need for a better understanding and quantification of the neuromechanical processes involved in rehabilitation. In particular, potential neuromechanical biomarkers have not yet been operationalized into clinical algorithms. In this scenario, a new framework called "Rehabilomics" has been proposed to describe rehabilitation research that exploits biomarkers in its design. This study provides an overview of the state of the art of biomarkers related to robotic neurorehabilitation, focusing on translational studies and underlining the need for comprehensive approaches that can take biomarker research into clinical practice. We then summarize some promising biomarkers under investigation in the current literature and provide examples of their current and/or potential applications in neurorehabilitation. Finally, we outline the main challenges and future directions in the field, briefly discussing their potential evolution and prospects.
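    Purely as an illustration of the kind of kinematic parameter mentioned above (this is an assumption for illustration, not a metric taken from the paper), the following minimal sketch computes the log dimensionless jerk, a commonly used movement-smoothness index, from a sampled end-effector speed profile:

```python
import numpy as np

def log_dimensionless_jerk(speed, dt):
    """Log dimensionless jerk (LDLJ) of a speed profile.

    More negative values indicate smoother movement.
    speed: 1-D array of end-effector speed samples [m/s]; dt: sampling interval [s].
    """
    duration = len(speed) * dt
    v_peak = np.max(speed)
    accel = np.gradient(speed, dt)              # d(speed)/dt
    jerk_integral = np.trapz(accel ** 2, dx=dt)
    return -np.log((duration ** 3 / v_peak ** 2) * jerk_integral)

# Example: a bell-shaped speed profile sampled at 100 Hz
t = np.linspace(0, 1, 100)
speed = 30 * t ** 2 * (1 - t) ** 2              # peaks mid-movement
print(log_dimensionless_jerk(speed, dt=0.01))
```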

    EXplainable Artificial Intelligence: enabling AI in neurosciences and beyond

    The adoption of AI models in medicine and neurosciences has the potential to play a significant role not only in bringing scientific advancements but also in clinical decision-making. However, concerns mount over the potential biases AI could carry, which could have far-reaching consequences, particularly in a critical field like biomedicine. Achieving usable intelligence is challenging because it is fundamental not only to learn from prior data, extract knowledge and guarantee generalization capabilities, but also to disentangle the underlying explanatory factors in order to deeply understand the variables leading to the final decisions. There has hence been a call for approaches that open the AI 'black box' and increase trust and reliability in the decision-making capabilities of AI algorithms. Such approaches are commonly referred to as eXplainable AI (XAI) and are starting to be applied in medical fields, even if not yet fully exploited. With this thesis we aim to contribute to enabling the use of AI in medicine and neurosciences by taking two fundamental steps: (i) practically pervading AI models with XAI, and (ii) rigorously validating XAI models. The first step was achieved, on the one hand, by focusing on the XAI taxonomy and proposing guidelines specific to AI and XAI applications in the neuroscience domain; on the other hand, we addressed concrete problems by proposing XAI solutions to decode brain modulations in neurodegeneration, relying on the morphological, microstructural and functional changes occurring at different disease stages as well as their connections with the genotype substrate. The second step was likewise achieved by first defining four attributes related to XAI validation, namely stability, consistency, understandability and plausibility. Each attribute refers to a different aspect of XAI, ranging from the assessment of explanation stability across different XAI methods, or across highly collinear inputs, to the alignment of the obtained explanations with the state-of-the-art literature. We then proposed different validation techniques aimed at practically fulfilling such requirements. With this thesis we contribute to the advancement of research into XAI, aiming to increase awareness and critical use of AI methods and to open the way to real-life applications, enabling the development of personalized medicine and treatment through a data-driven and objective approach to healthcare.
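    As a minimal illustration of the stability attribute described above (the attribution scores below are invented placeholders, not results from the thesis), one simple way to quantify agreement between the explanations produced by two XAI methods is a rank correlation of their per-feature attributions:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-feature attribution scores for one subject,
# produced by two different XAI methods (e.g. SHAP and LIME).
attr_method_a = np.array([0.42, 0.10, -0.05, 0.31, 0.02])
attr_method_b = np.array([0.39, 0.14, -0.02, 0.27, 0.05])

# One possible stability score: rank agreement of absolute importances.
rho, _ = spearmanr(np.abs(attr_method_a), np.abs(attr_method_b))
print(f"cross-method stability (Spearman rho): {rho:.2f}")
```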

    Remote sensing based assessment of fire severity


    Comparative Analysis of the Implementation of Support Vector Machines and Long Short-Term Memory Artificial Neural Networks in Municipal Solid Waste Management Models in Megacities

    The development of methodologies to support decision-making in municipal solid waste (MSW) management processes is of great interest for municipal administrations. Artificial intelligence (AI) techniques provide multiple tools for designing algorithms that analyze data objectively while producing highly precise models. Support vector machines and neural networks are among the AI techniques offering optimization solutions at different stages of the management process. In this paper, an implementation and comparison of the results obtained by two AI methods on a solid waste management problem is presented: support vector machine (SVM) and long short-term memory (LSTM) network techniques. The implementation of the LSTM took into account different configurations, temporal filtering and annual calculations over solid waste collection periods. Results show that the SVM method properly fits the selected data and yields consistent regression curves, even with very limited training data, leading to more accurate results than those obtained by the LSTM method.
    Thanks are due to the Final Disposal Area of the Special Administrative Unit of Public Services of Bogota and the National Planning Department (DNP) for their support in providing data to perform this research.
    Solano-Meza, J.; Orjuela Yepes, D.; Rodrigo-Ilarri, J.; Rodrigo-Clavero, M. (2023). Comparative Analysis of the Implementation of Support Vector Machines and Long Short-Term Memory Artificial Neural Networks in Municipal Solid Waste Management Models in Megacities. International Journal of Environmental Research and Public Health (Online). 20(5):1-21. https://doi.org/10.3390/ijerph2005425612120
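    For readers who want a feel for the SVM side of such a comparison, the sketch below fits a support vector regressor to a small synthetic collection series with scikit-learn; the data, kernel and hyperparameters are assumptions for illustration, not the configuration used in the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic monthly waste-collection series (a stand-in, not the Bogota data).
rng = np.random.default_rng(0)
months = np.arange(60).reshape(-1, 1)
trend = 500 + 2.0 * months.ravel()
seasonal = 30 * np.sin(2 * np.pi * months.ravel() / 12)
tonnage = trend + seasonal + rng.normal(0, 10, 60)

# Deliberately small training window to mimic "very limited training data".
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(months[:36], tonnage[:36])

pred = model.predict(months[36:])
rmse = np.sqrt(np.mean((pred - tonnage[36:]) ** 2))
print(f"RMSE on held-out months: {rmse:.1f} t")
```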

    Standardization and Control for Confounding in Observational Studies: A Historical Perspective

    Control for confounders in observational studies was generally handled through stratification and standardization until the 1960s. Standardization typically reweights the stratum-specific rates so that exposure categories become comparable. With the development first of loglinear models, and soon also of nonlinear regression techniques (logistic regression, failure time regression) that the emerging computers could handle, regression modelling became the preferred approach, just as was already the case with multiple regression analysis for continuous outcomes. Since the mid-1990s it has become increasingly obvious that weighting methods are still often useful, sometimes even necessary. Against this background we aim to describe the emergence of the modelling approach and the refinement of the weighting approach for confounder control. Comment: Published at http://dx.doi.org/10.1214/13-STS453 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
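    To make the reweighting idea concrete, the standard (direct) standardization of stratum-specific rates can be written as below; this is the textbook form, not an equation quoted from the article:

```latex
% r_{ek}: rate in exposure group e within confounder stratum k
% w_k:    weight of stratum k in a common reference population
R_e^{\mathrm{std}} \;=\; \sum_{k} w_k \, r_{ek},
\qquad \sum_{k} w_k = 1 .
```

    Comparing the standardized rates of two exposure groups then compares them under a common confounder-stratum distribution, which is exactly the reweighting described above.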

    On the Intersection of Explainable and Reliable AI for physical fatigue prediction

    In the era of Industry 4.0, the use of Artificial Intelligence (AI) is widespread in occupational settings. Since human safety is at stake, the explainability and trustworthiness of AI are even more important than achieving high accuracy. In this paper, eXplainable AI (XAI) is investigated to detect physical fatigue during a simulated manual material handling task. Besides comparing global rule-based XAI models (LLM and DT) to black-box models (NN, SVM, XGBoost) in terms of performance, we also compare global models with local ones (LIME over XGBoost). Surprisingly, the global and local approaches reach similar conclusions in terms of feature importance. Moreover, an expansion from local rules to global rules is designed for Anchors by introducing an appropriate optimization method (Anchors coverage is enlarged from an originally low value, 11%, up to 43%). As far as trustworthiness is concerned, rule sensitivity analysis drives the identification of optimized regions in the feature space where physical fatigue is predicted with zero statistical error. The discovery of such "non-fatigue regions" helps to certify organizational and clinical decision-making.
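    A minimal sketch of the local pipeline mentioned above (LIME explaining an XGBoost classifier) is shown below; the feature names, data and class labels are invented placeholders rather than the study's manual-material-handling variables:

```python
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data standing in for the manual-material-handling features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 200) > 0).astype(int)
feature_names = ["lift_rate", "load_kg", "trunk_flexion", "heart_rate"]  # assumed names

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no fatigue", "fatigue"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())   # local feature contributions for this one instance
```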

    DeepHeart: Semi-Supervised Sequence Learning for Cardiovascular Risk Prediction

    We train and validate a semi-supervised, multi-task LSTM on 57,675 person-weeks of data from off-the-shelf wearable heart rate sensors, showing high accuracy at detecting multiple medical conditions, including diabetes (0.8451), high cholesterol (0.7441), high blood pressure (0.8086), and sleep apnea (0.8298). We compare two semi-supervised training methods, semi-supervised sequence learning and heuristic pretraining, and show they outperform hand-engineered biomarkers from the medical literature. We believe our work suggests a new approach to patient risk stratification based on cardiovascular risk scores derived from popular wearables such as Fitbit, Apple Watch, or Android Wear. Comment: Presented at AAAI 2018
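    As a rough architectural sketch of a multi-task LSTM risk model of this kind (layer sizes, input channels and output heads are assumptions, not the authors' exact DeepHeart configuration), one could write:

```python
import tensorflow as tf

# Shared LSTM encoder over a week of hourly heart-rate / step-count features,
# with one sigmoid head per condition (multi-task risk prediction).
seq = tf.keras.Input(shape=(168, 2), name="hr_and_steps")   # 7 days x 24 h, 2 channels
h = tf.keras.layers.LSTM(64)(seq)
heads = {name: tf.keras.layers.Dense(1, activation="sigmoid", name=name)(h)
         for name in ["diabetes", "high_cholesterol", "hypertension", "sleep_apnea"]}

model = tf.keras.Model(seq, heads)
model.compile(optimizer="adam",
              loss={name: "binary_crossentropy" for name in heads})
model.summary()
```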