213 research outputs found

    Machine Learning in Falls Prediction; A cognition-based predictor of falls for the acute neurological in-patient population

    Background Information: Falls are associated with high direct and indirect costs, and significant morbidity and mortality for patients. Pathological falls usually result from a compromised motor system and/or cognition. Very little research has been conducted on predicting falls based on this premise. Aims: To demonstrate that cognitive and motor tests can be used to create a robust predictive tool for falls. Methods: Three tests of attention and executive function (Stroop, Trail Making, and Semantic Fluency), a measure of physical function (Walk-12), a series of questions (concerning recent falls, surgery and physical function) and demographic information were collected from a cohort of 323 patients at a tertiary neurological center. The principal outcome was a fall during the in-patient stay (n = 54). Data-driven predictive modelling was employed to identify the statistical modelling strategies that are most accurate in predicting falls and that yield the most parsimonious models of clinical relevance. Results: The Trail Making Test was identified as the best predictor of falls. Moreover, the addition of any other variables to the results of the Trail Making Test did not improve the prediction (Wilcoxon signed-rank p < .001). The best statistical strategy for predicting falls was the random forest (Wilcoxon signed-rank p < .001), based solely on results of the Trail Making Test. Tuning of the model yielded the following optimized values: 68% (± 7.7) sensitivity and 90% (± 2.3) specificity, with a positive predictive value of 60%, when the relevant data are available. Conclusion: Predictive modelling has identified a simple yet powerful machine learning prediction strategy based on a single clinical test, the Trail Making Test. Predictive evaluation shows this strategy to be robust, suggesting predictive modelling and machine learning as the standard for future predictive tools.
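    Operating characteristics like the reported 68% sensitivity, 90% specificity and 60% positive predictive value all derive from a 2x2 confusion matrix. A minimal sketch in Python; the cell counts below are invented for illustration (chosen only so that the 54 fallers among 323 patients and the reported percentages are approximately reproduced), not the study's raw data:

```python
def classifier_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)  # true-positive rate among actual fallers
    specificity = tn / (tn + fp)  # true-negative rate among non-fallers
    ppv = tp / (tp + fp)          # of patients flagged as at-risk, how many actually fell
    return sensitivity, specificity, ppv

# Illustrative counts only: 54 fallers (tp + fn) and 269 non-fallers (fp + tn).
sens, spec, ppv = classifier_metrics(tp=37, fp=25, fn=17, tn=244)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%}")
```

    Note that with only 54 fallers in 323 patients, the PPV is far below the sensitivity: even a fairly specific test produces many false alarms when the event is uncommon.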

    The Role of Cognitive Factors in Predicting Balance and Fall Risk in a Neuro-Rehabilitation Setting

    INTRODUCTION: There is a consistent body of evidence supporting the role of cognitive functions, particularly executive function, in falls among the elderly and in neurological conditions that become more frequent with ageing. The aim of our study was to assess the ability of different domains of cognitive function to predict balance and fall risk in a sample of adults with various neurological conditions in a rehabilitation setting. METHODS: This was a prospective cohort study conducted in a single centre in the UK. 114 participants consecutively admitted to a Neuro-Rehabilitation Unit were prospectively assessed for fall accidents. Baseline assessment included a measure of balance (Berg Balance Scale) and a battery of standard cognitive tests measuring executive function, speed of information processing, verbal and visual memory, visual perception and intellectual function. The outcomes of interest were the risk of becoming a faller, balance and fall rate. RESULTS: Two tests of executive function were significantly associated with fall risk: the Stroop Colour Word Test (IRR 1.01, 95% CI 1.00-1.03) and the number of errors on part B of the Trail Making Test (IRR 1.23, 95% CI 1.03-1.49). Composite scores of the executive function, speed of information processing and visual memory domains were associated with a 2- to 3-fold increased likelihood of having better balance (OR 2.74, 95% CI 1.08 to 6.94; OR 2.72, 95% CI 1.16 to 6.36; and OR 2.44, 95% CI 1.11 to 5.35, respectively). CONCLUSIONS: Our results show that specific subcomponents of executive function are able to predict fall risk, while more global cognitive dysfunction is associated with poorer balance.
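    An incidence rate ratio multiplies the expected event rate per unit increase in the predictor, so its effect compounds. As a numeric illustration (my example, not data from the study): with IRR 1.23 per Trail Making Test part B error, a patient making k errors has an expected fall rate scaled by 1.23^k relative to a patient making none. The 0.10 baseline rate below is hypothetical:

```python
def scaled_rate(base_rate, irr, k):
    """Expected event rate after k unit increases in a predictor, given a
    per-unit incidence rate ratio (IRR) from a Poisson-type rate model."""
    return base_rate * irr ** k

# Hypothetical baseline of 0.10 falls per patient stay; IRR 1.23 per TMT-B error.
for errors in (0, 1, 3, 5):
    print(errors, round(scaled_rate(0.10, 1.23, errors), 3))
```

    Five errors roughly triple the expected rate (1.23^5 ≈ 2.82), which is why even a modest-looking per-error IRR can be clinically meaningful.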

    Health related quality of life in COVID-19 survivors discharged from acute hospitals: results of a short-form 36-item survey [version 1; peer review: awaiting peer review]

    Background: Health-related quality of life (HRQL) is important for evaluating the impact of a disease in the longer term across the physical and psychological domains of human functioning. The aim of this study was to evaluate HRQL in COVID-19 survivors in Italy using the Short Form 36-item questionnaire (SF-36). Methods: This was an observational study involving adults discharged home following a coronavirus disease 2019 (COVID-19)-related hospital admission. Baseline demographic and clinical data, including the Cumulative Illness Rating Scale (CIRS) and the Hospital Anxiety and Depression Scale (HADS), were collected. The validated Italian version of the SF-36 was administered cross-sectionally. The SF-36 contains eight scales measuring limitations in physical and social functioning, the impact on roles and activities, fatigue, emotional wellbeing, pain and general health perception. Results: A total of 35 patients, with a mean age of 60 years, completed the SF-36. The results showed difficulties across the physical and psychological domains, particularly affecting the return to previous roles and activities. A higher burden of co-morbidities, as well as more severe muscle weakness, was associated with lower physical functioning. Younger, rather than older, age correlated with a greater perceived limitation in physical functioning and vitality. Conclusions: COVID-19 survivors, particularly those of working age, may need support for resuming their premorbid level of functioning and returning to work.
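    Each of the eight SF-36 scales is conventionally reported on a 0-100 scale (higher = better health) by linearly transforming the raw sum of its recoded items. A minimal sketch of that transform, assuming the standard (raw − minimum) / range × 100 scoring formula; the example values are illustrative only:

```python
def sf36_scale_score(raw, raw_min, raw_range):
    """Linearly transform a raw SF-36 scale sum to the 0-100 reporting scale.
    raw_min and raw_range depend on the scale (its item count and response options)."""
    return (raw - raw_min) / raw_range * 100

# Illustration: a 10-item scale with items scored 1-3 has raw scores 10..30 (range 20).
print(sf36_scale_score(raw=25, raw_min=10, raw_range=20))  # 75.0
```

    This per-scale 0-100 normalisation is what allows comparison of limitation across domains such as physical functioning and vitality.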

    Effects of Different Up-Dosing Regimens for Hymenoptera Venom Immunotherapy on Serum CTLA-4 and IL-10

    BACKGROUND: Cytotoxic T lymphocyte-associated antigen-4 (CTLA-4) is involved in the activation pathways of T lymphocytes. It has been shown that the circulating form of CTLA-4 is elevated in patients with hymenoptera allergy and can be downregulated by immunotherapy. OBJECTIVE: To assess the effects on CTLA-4 of venom immunotherapy (VIT) given with different induction protocols: conventional (6 weeks), rush (3 days) or ultra-rush (1 day). METHODS: Sera from patients with hymenoptera allergy were collected at baseline and at the end of the induction phase. CTLA-4 and IL-10 were assayed in the same samples. A subset of patients was also assayed after 12 months of VIT maintenance. RESULTS: Ninety-four patients were studied. Of them, 50 underwent the conventional induction, 20 the rush and 24 the ultra-rush. Soluble CTLA-4 (sCTLA-4) was detectable in all patients at baseline and significantly decreased at the end of the induction, irrespective of its duration. Of note, a significant decrease of sCTLA-4 could be seen already at 24 hours. In parallel, IL-10 significantly increased at the end of the induction. At 12 months, sCTLA-4 remained low, whereas IL-10 returned to baseline values. CONCLUSIONS: Serum CTLA-4 is an early marker of the immunological effects of venom immunotherapy, and its changes persist after one year of maintenance treatment.

    The Trail Making test: a study of its ability to predict falls in the acute neurological in-patient population

    OBJECTIVE: To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. DESIGN: Prospective cohort study. SETTING: Tertiary neurological and neurosurgical center. SUBJECTS: In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. MAIN MEASURES: A binary (Y/N) outcome for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function). RESULTS: The principal outcome was a fall during the in-patient stay (n = 54). The Trail Making Test was identified as the best predictor of falls. Moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine learning-based strategies, for example, random forests, neural networks, and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (± 7.7) sensitivity and 90% (± 2.3) specificity. CONCLUSION: This study identifies a simple yet powerful machine learning (random forest)-based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making Test.
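    The appeal of a single-test predictor is that, at its simplest, it reduces to a threshold rule on one measurement. The toy sketch below illustrates that idea on Trail Making Test part B completion times; the 180 s cut-off and the example times are invented for illustration (the study itself fitted a random forest, not a fixed threshold):

```python
def predict_fall_risk(tmt_b_seconds, threshold=180.0):
    """Toy single-feature fall-risk rule: flag patients whose Trail Making Test
    part B completion time exceeds a threshold. The 180 s cut-off is illustrative only."""
    return tmt_b_seconds > threshold

# Hypothetical TMT-B times in seconds for four patients.
times = [95.0, 210.0, 150.0, 300.0]
flags = [predict_fall_risk(t) for t in times]
print(flags)  # [False, True, False, True]
```

    A random forest over the same single feature effectively learns a small set of such thresholds from the data rather than fixing one in advance, which is what makes the strategy both simple to deploy and data driven.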

    Distributed physical sensors network for the protection of critical infrastructures against physical attacks

    The SCOUT project is based on the use of multiple innovative and low-impact technologies for the protection of space control ground stations and satellite links against physical and cyber attacks, and for intelligent reconfiguration of the ground station network (including the ground node of the satellite link) in case one or more nodes fail. The SCOUT sub-system devoted to protection against physical attacks, SENSNET, is presented here. It is designed as a network of sensor networks that combines DAB- and DVB-T-based passive radar, noise radar, Ku-band radar, infrared cameras, and RFID technologies. The problem of the data link architecture is addressed and the proposed solution is described.

    Diagnostic performance and comparison of ultrasensitive and conventional rapid diagnostic test, thick blood smear and quantitative PCR for detection of low-density Plasmodium falciparum infections during a controlled human malaria infection study in Equatorial Guinea

    BACKGROUND: Progress towards malaria elimination has stagnated, partly because infections persisting at low parasite densities comprise a large reservoir contributing to ongoing malaria transmission and are difficult to detect. This study compared the performance of an ultrasensitive rapid diagnostic test (uRDT) designed to detect low-density infections with a conventional RDT (cRDT), expert microscopy using Giemsa-stained thick blood smears (TBS), and quantitative polymerase chain reaction (qPCR) during a controlled human malaria infection (CHMI) study conducted in malaria-exposed adults (NCT03590340). METHODS: Blood samples were collected from healthy Equatoguineans aged 18-35 years beginning on day 8 after CHMI with 3.2 × 10³ cryopreserved, infectious Plasmodium falciparum sporozoites (PfSPZ Challenge, strain NF54) administered by direct venous inoculation. qPCR (18S ribosomal DNA), uRDT (Alere Malaria Ag P.f.), cRDT [Carestart Malaria Pf/PAN (PfHRP2/pLDH)], and TBS were performed daily until the volunteer became TBS positive and treatment was administered. qPCR was the reference for the presence of Plasmodium falciparum parasites. RESULTS: 279 samples were collected from 24 participants; 123 were positive by qPCR. TBS detected 24/123 (19.5% sensitivity [95% CI 13.1-27.8%]), uRDT 21/123 (17.1% sensitivity [95% CI 11.1-25.1%]), and cRDT 10/123 (8.1% sensitivity [95% CI 4.2-14.8%]); all were 100% specific and did not detect any positive samples not detected by qPCR. TBS and uRDT were more sensitive than cRDT (TBS vs. cRDT p = 0.015; uRDT vs. cRDT p = 0.053), detecting parasitaemias as low as 3.7 parasites/µL (p/µL) (TBS and uRDT) compared to 5.6 p/µL (cRDT), based on TBS density measurements. TBS, uRDT and cRDT did not detect any of the 70/123 samples positive by qPCR below 5.86 p/µL, the qPCR density corresponding to 3.7 p/µL by TBS. The median prepatent periods in days (ranges) were 14.5 (10-20), 18.0 (15-28), 18.0 (15-20) and 18.0 (16-24) for qPCR, TBS, uRDT and cRDT, respectively; qPCR detected parasitaemia significantly earlier (3.5 days) than the other tests. CONCLUSIONS: TBS and uRDT had similar sensitivities, both were more sensitive than cRDT, and neither matched qPCR for detecting low-density parasitaemia. uRDT could be considered an alternative to TBS in selected applications, such as CHMI or field diagnosis, where qualitative, dichotomous results for malaria infection might be sufficient.
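    Sensitivity figures such as 24/123 = 19.5% can be checked along with a binomial confidence interval. A sketch using the Wilson score interval; the paper does not state which CI method it used, so these bounds come out slightly different from the reported 13.1-27.8%:

```python
import math

def sensitivity_with_wilson_ci(tp, positives, z=1.96):
    """Point estimate and Wilson score 95% CI for sensitivity = tp / positives."""
    p = tp / positives
    denom = 1 + z * z / positives
    centre = p + z * z / (2 * positives)
    margin = z * math.sqrt(p * (1 - p) / positives + z * z / (4 * positives ** 2))
    return p, (centre - margin) / denom, (centre + margin) / denom

p, lo, hi = sensitivity_with_wilson_ci(tp=24, positives=123)
print(f"{p:.1%} [{lo:.1%}-{hi:.1%}]")  # 19.5% [13.5%-27.4%]
```

    The wide intervals here reflect the reference-positive sample size (n = 123): with only 10 true positives, the cRDT estimate of 8.1% carries an interval spanning nearly 11 percentage points.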