NIPS - Not Even Wrong? A Systematic Review of Empirically Complete Demonstrations of Algorithmic Effectiveness in the Machine Learning and Artificial Intelligence Literature
Objective: To determine the completeness of argumentative steps necessary to
conclude effectiveness of an algorithm in a sample of current ML/AI supervised
learning literature.
Data Sources: Papers published in the proceedings of the Neural Information
Processing Systems conference (NeurIPS, née NIPS) where the official record
showed a 2017 year of publication.
Eligibility Criteria: Studies reporting a (semi-)supervised model, or
pre-processing fused with (semi-)supervised models for tabular data.
Study Appraisal: Three reviewers applied the assessment criteria to determine
argumentative completeness. The criteria were split into three groups:
experiments (e.g. real and/or synthetic data), baselines (e.g. uninformed
and/or state-of-the-art) and quantitative comparison (e.g. performance
quantifiers with confidence intervals and formal comparison of the algorithm
against baselines).
Results: Of the 121 eligible manuscripts (from the sample of 679 abstracts),
99% used real-world data and 29% used synthetic data. 91% of manuscripts did
not report an uninformed baseline, and 55% reported a state-of-the-art baseline.
32% reported confidence intervals for performance, but none provided references
or exposition for how these were calculated. 3% reported formal comparisons.
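The finding that confidence intervals were reported without any exposition of how they were computed invites a brief note on one common way to obtain them. The sketch below uses the percentile bootstrap on invented per-example outcomes; it is illustrative only and not a method drawn from any of the reviewed papers.

```python
import random

random.seed(0)

# Hypothetical per-example correctness indicators for some classifier:
# 1 = correct prediction, 0 = incorrect (invented data, 85% observed accuracy).
outcomes = [1] * 85 + [0] * 15

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean of `data`."""
    means = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))  # sample with replacement
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(outcomes)
print(f"accuracy 0.85, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Stating the resampling scheme (here, 10,000 percentile-bootstrap resamples over examples) is precisely the kind of exposition the review found missing.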
Limitations: The use of one journal as the primary information source may not
be representative of all ML/AI literature. However, the NeurIPS conference is
recognised to be amongst the top tier concerning ML/AI studies, so it is
reasonable to consider its corpus to be representative of high-quality
research.
Conclusion: Using the 2017 sample of the NeurIPS supervised learning corpus
as an indicator for the quality and trustworthiness of current ML/AI research,
it appears that complete argumentative chains in demonstrations of algorithmic
effectiveness are rare.
Machine Learning in Falls Prediction; A cognition-based predictor of falls for the acute neurological in-patient population
Background Information: Falls are associated with high direct and indirect
costs, and significant morbidity and mortality for patients. Pathological falls
are usually a result of a compromised motor system, and/or cognition. Very
little research has been conducted on predicting falls based on this premise.
Aims: To demonstrate that cognitive and motor tests can be used to create a
robust predictive tool for falls.
Methods: Three tests of attention and executive function (Stroop, Trail
Making, and Semantic Fluency), a measure of physical function (Walk-12), a
series of questions (concerning recent falls, surgery and physical function)
and demographic information were collected from a cohort of 323 patients at a
tertiary neurological center. The principal outcome was a fall during the
in-patient stay (n = 54). Data-driven, predictive modelling was employed to
identify the statistical modelling strategies which are most accurate in
predicting falls, and which yield the most parsimonious models of clinical
relevance.
Results: The Trail test was identified as the best predictor of falls.
Moreover, the addition of any other variables to the results of the Trail test
did not improve the prediction (Wilcoxon signed-rank p < .001). The best
statistical strategy for predicting falls was the random forest (Wilcoxon
signed-rank p < .001), based solely on results of the Trail test. Tuning of the
model resulted in the following optimized values: 68% (± 7.7) sensitivity, 90%
(± 2.3) specificity, with a positive predictive value of 60%, when the
relevant data are available.
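For readers unfamiliar with the quantities reported above, sensitivity, specificity, and positive predictive value all derive from a 2×2 confusion matrix. The counts below are invented, chosen only to land near the reported figures; they are not the study's data.

```python
# Invented confusion-matrix counts for illustration (323 patients, 54 fallers),
# chosen to roughly match the figures reported above -- not the study's data.
tp, fn = 37, 17    # fallers correctly predicted to fall / missed
tn, fp = 243, 26   # non-fallers correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)   # proportion of actual fallers caught
specificity = tn / (tn + fp)   # proportion of non-fallers correctly cleared
ppv = tp / (tp + fp)           # chance that a flagged patient actually falls

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, PPV={ppv:.1%}")
```

Note that PPV, unlike sensitivity and specificity, depends on the prevalence of falls in the cohort, which is why it is reported separately above.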
Conclusion: Predictive modelling has identified a simple yet powerful machine
learning prediction strategy based on a single clinical test, the Trail test.
Predictive evaluation shows this strategy to be robust, suggesting predictive
modelling and machine learning as the standard for future predictive tools.
Unsteady Oscillatory Flow and Heat Transfer in a Horizontal Composite Porous Medium Channel
The problem of unsteady oscillatory flow and heat transfer in a horizontal composite porous medium channel is studied. The flow is modeled using the Darcy-Brinkman equation. The viscous and Darcian dissipation terms are also included in the energy equation. The partial differential equations governing the flow and heat transfer are solved analytically using two-term harmonic and non-harmonic functions in both regions of the channel. The effects of physical parameters such as the porous medium parameter, viscosity ratio, oscillation amplitude, conductivity ratio, Prandtl number and Eckert number on the velocity and/or temperature fields are shown graphically. It is observed that both the velocity and temperature fields in the channel decrease as either the porous medium parameter or the viscosity ratio increases, while they increase with increases in the oscillation amplitude. Also, increasing the thermal conductivity ratio is found to suppress the temperature in both regions of the channel. Increasing the Prandtl and Eckert numbers is likewise found to decrease the thermal state in the channel.
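The Darcy-Brinkman momentum balance referred to above is commonly written, for one-dimensional unsteady flow, in a form along these lines (generic textbook notation, not necessarily the paper's own symbols):

```latex
\rho \frac{\partial u}{\partial t} =
  -\frac{\partial p}{\partial x}
  + \mu_{\mathrm{eff}} \frac{\partial^{2} u}{\partial y^{2}}
  - \frac{\mu}{K}\, u
```

where $u$ is the axial velocity, $\mu_{\mathrm{eff}}$ the effective (Brinkman) viscosity, $\mu$ the fluid viscosity, and $K$ the permeability; the ratio $\mu_{\mathrm{eff}}/\mu$ corresponds to the viscosity ratio whose influence on the flow is examined above.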
Model updating after interventions paradoxically introduces bias
Machine learning is increasingly being used to generate prediction models for
use in a number of real-world settings, from credit risk assessment to clinical
decision support. Recent discussions have highlighted potential problems in the
updating of a predictive score for a binary outcome when an existing predictive
score forms part of the standard workflow, driving interventions. In this
setting, the existing score induces an additional causative pathway which leads
to miscalibration when the original score is replaced. We propose a general
causal framework to describe and address this problem, and demonstrate an
equivalent formulation as a partially observed Markov decision process. We use
this model to demonstrate the impact of such `naive updating' when performed
repeatedly. Namely, we show that successive predictive scores may converge to a
point where they predict their own effect, or may eventually tend toward a
stable oscillation between two values, and we argue that neither outcome is
desirable. Furthermore, we demonstrate that even if model-fitting procedures
improve, actual performance may worsen. We complement these findings with a
discussion of several potential routes to overcome these issues.
Comment: Sections of this preprint on 'Successive adjuvancy' (section 4,
theorem 2, figures 4 and 5, and associated discussions) were not included in
the originally submitted version of this paper due to length. This material
does not appear in the published version of this manuscript, and the reader
should be aware that these sections did not undergo peer review.
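The convergence and oscillation behaviours described above can be illustrated with a deliberately crude scalar model. This is an assumption made for illustration, not the paper's causal or POMDP formulation: each round, the score is naively refit to the outcome rate observed while the previous score was driving interventions.

```python
def naive_update(s, base_risk, effect):
    """One round of 'naive updating' in a toy scalar model (an illustrative
    assumption, not the paper's formulation): interventions triggered by
    score s scale the outcome rate by (1 - effect * s), clamped to [0, 1],
    and the refit score simply equals that observed rate."""
    return min(1.0, max(0.0, base_risk * (1.0 - effect * s)))

def iterate(base_risk, effect, rounds=50):
    s = base_risk  # initial score matches the pre-intervention risk
    trajectory = [s]
    for _ in range(rounds):
        s = naive_update(s, base_risk, effect)
        trajectory.append(s)
    return trajectory

# Mild intervention effect: successive scores converge to a fixed point
# where the score 'predicts its own effect'.
converging = iterate(base_risk=0.5, effect=1.0)
# Strong intervention effect: successive scores settle into a stable
# oscillation between two values.
oscillating = iterate(base_risk=0.8, effect=2.0)

print(converging[-1])
print(oscillating[-2:])
```

With a mild effect the iteration contracts to a self-fulfilling fixed point; with a strong effect it alternates between two values, mirroring the two undesirable outcomes described above.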
The Trail Making test : a study of its ability to predict falls in the acute neurological in-patient population
Objective:
To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls.
Design:
Prospective cohort study.
Setting:
Tertiary neurological and neurosurgical center.
Subjects:
In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care.
Main Measures:
Binary (Y/N) for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function).
Results:
The principal outcome was a fall during the in-patient stay (n = 54). The Trail test was identified as the best predictor of falls. Moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine learning-based strategies, for example, random forests, neural networks, and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (± 7.7) sensitivity and 90% (± 2.3) specificity.
Conclusion:
This study identifies a simple yet powerful machine learning (random forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making test.
Neuropsychiatric disorders among Syrian and Iraqi refugees in Jordan: A retrospective cohort study 2012-2013
Background: The burden of neuropsychiatric disorders in refugees is likely high, but little has been reported on the neuropsychiatric disorders that affect Syrian and Iraqi refugees in a country of first asylum. This analysis aimed to study the cost and burden of neuropsychiatric disorders among refugees from Syria and Iraq requiring exceptional, United Nations-funded care in a country of first asylum. Methods: The United Nations High Commissioner for Refugees works with multi-disciplinary, in-country exceptional care committees to review refugees’ applications for emergency or exceptional medical care. Neuropsychiatric diagnoses among refugee applicants were identified through a retrospective review of applications to the Jordanian Exceptional Care Committee (2012-2013). Diagnoses were made using International Classification of Diseases, 10th edition codes rendered by treating physicians.