Enhancing clinical decision-making in low-resource settings: comparing mortality risk scores for adult critical care patients in Lesotho during the COVID-19 pandemic
Abstract
Introduction:
Lesotho faced substantial health-system challenges during the COVID-19 pandemic. Mortality and severe-illness risk scores can aid patient triage and resource allocation. This study evaluates the performance of these scores in Lesotho's COVID-19 context. By investigating the factors that distinguish mortality from survival and assessing score effectiveness, we address the gap in understanding the applicability of these scores in low-resource settings during the pandemic.
Methods:
Berea and Mafeteng hospitals were Lesotho's main COVID-19 treatment centers during the pandemic. This retrospective cohort study focused on adult critical care admissions and on predicting their survival outcomes using mortality risk scores. Logistic regression was used to estimate the odds of death by clinical features and three mortality risk scores: the Universal Vital Assessment (UVA), the Modified Early Warning Score (MEWS), and the Mortality Probability Admission Model (MPM). Predicted probabilities and ROC curves were used to evaluate model performance, with optimal thresholds determined for classification.
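The modelling pipeline described in the methods — logistic regression on a risk score, ROC evaluation, and an optimal classification threshold — can be sketched roughly as follows. This is a minimal illustration with synthetic data, not the study's code; all variable names are assumptions, and Youden's J is assumed as the threshold criterion.

```python
# Minimal sketch: logistic regression on a risk score, C-statistic via ROC,
# and a Youden's J optimal threshold. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
risk_score = rng.normal(5, 2, n)                  # stand-in for UVA/MEWS/mMPM24
p_death = 1 / (1 + np.exp(-(0.4 * risk_score - 2.5)))
died = rng.binomial(1, p_death)                   # simulated outcomes

model = LogisticRegression().fit(risk_score.reshape(-1, 1), died)
prob = model.predict_proba(risk_score.reshape(-1, 1))[:, 1]

auc = roc_auc_score(died, prob)                   # the C-statistic
fpr, tpr, thresholds = roc_curve(died, prob)
optimal = thresholds[np.argmax(tpr - fpr)]        # Youden's J threshold
print(f"C-statistic: {auc:.2f}, optimal probability threshold: {optimal:.2f}")
```

A C-statistic near 0.5 indicates no discrimination, which is why the values reported in the results below are described as poor.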
Results:
From March 2020 to May 2022, 1,426 patients were admitted and 449 deaths were recorded. About 59% of patients tested positive for COVID-19 (n=844), 23% were suspected cases (n=330), and the remainder tested negative (18%, n=252). UVA's high- and medium-risk categories were both associated with greater odds of death (aOR: 2.81, 95% CI: 6.01, 13.87 and aOR: 1.82, 95% CI: 2.14, 3.97, respectively). MEWS and mMPM24 scores showed increased mortality odds in high versus low categories (aOR: 1.78, 95% CI: 1.26, 2.57 and aOR: 2.31, 95% CI: 1.3, 4.0, respectively). All three mortality risk scores had poor discrimination (mMPM24 C-statistic: 0.55; UVA: 0.65; MEWS: 0.53). UVA predicted mortality consistently across all COVID-19 statuses, with most patients falling into the medium-risk category (n=860), unlike MEWS and mMPM24, under which the majority were classified as high risk (n=1,228 and n=1,224, respectively).
Conclusion:
Amid COVID-19 waves, these scores can help guide interventions toward those most in need. However, their utility depends on continuous validation. By recognizing their role in risk stratification and addressing their inherent limitations, whether due to data availability or score accuracy, these tools can be used to distribute constrained resources efficiently.
Deciding on appropriate use of force: human-machine interaction in weapons systems and emerging norms
This article considers the role of norms in the debate on autonomous weapons systems (AWS). It argues that the academic and political discussion is largely dominated by considerations of how AWS relate to norms institutionalised in international law. While this debate has produced insights on legal and ethical norms and sounded out options for possible regulation or a ban, it neglects to investigate how complex human-machine interactions in weapons systems can set standards for the appropriate use of force, standards which are politically and normatively relevant but emerge outside formal, deliberative law-setting. While such procedural norms are already emerging in the practice of contemporary warfare, the increasing technological complexity of AI-driven weapons will add to their political-normative relevance. I argue that public deliberation about, and political oversight and accountability of, the use of force is at risk of being subsumed and normalised by functional procedures and perceptions. This could profoundly shape the future of remote warfare and security policy.
Using ontologies to enhance human understandability of global post-hoc explanations of black-box models
The interest in explainable artificial intelligence has grown strongly in recent years because of the need to convey safety and trust in the 'how' and 'why' of automated decision-making to users. While a plethora of approaches has been developed, only a few focus on how to use domain knowledge and how it influences users' understanding of explanations. In this paper, we show that using ontologies can improve the human understandability of global post-hoc explanations presented in the form of decision trees. In particular, we introduce Trepan Reloaded, which builds on Trepan, an algorithm that extracts surrogate decision trees from black-box models. Trepan Reloaded incorporates ontologies, which model domain knowledge, into the extraction process to improve the understandability of explanations. We tested the understandability of the extracted explanations in a user study with four different tasks, evaluating the results in terms of response times and correctness, subjective ease of understanding and confidence, and similarity of free-text responses. The results show that decision trees generated with Trepan Reloaded, which take domain knowledge into account, are significantly more understandable than those generated by standard Trepan. This enhanced understandability of post-hoc explanations is achieved with little compromise in the accuracy with which the surrogate decision trees replicate the behaviour of the original neural network models.
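The surrogate-tree idea underlying Trepan can be sketched as follows: fit an interpretable decision tree to the predictions of a black-box model rather than to the true labels. This is a simplified stand-in, not the Trepan or Trepan Reloaded algorithm itself — real Trepan additionally uses m-of-n splits and query sampling, and the dataset and parameters here are illustrative assumptions.

```python
# Simplified surrogate-tree extraction: train a black-box model, then fit a
# shallow decision tree to the black box's *predicted* labels and measure
# fidelity (agreement between surrogate and black box). Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X, y)
bb_labels = black_box.predict(X)        # labels assigned by the black box

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, bb_labels)             # tree mimics the black box, not y

fidelity = (surrogate.predict(X) == bb_labels).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
```

The shallow tree is what gets shown to users; fidelity quantifies the "little compromise on accuracy" trade-off the abstract refers to.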
Coh-Metrix: Analysis of text on cohesion and language
Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, Coh-Metrix returns the measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as text, Excel, and SPSS formats). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
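The contrast drawn in the last sentence can be made concrete: classic readability formulas such as Flesch Reading Ease use only sentence length and word length (in syllables), ignoring the cohesion and discourse features Coh-Metrix adds. The sketch below is not part of Coh-Metrix; the vowel-group syllable counter is a deliberately crude heuristic assumed for illustration.

```python
# Toy Flesch Reading Ease calculator: scores depend only on sentence length
# and syllables per word, the surface features classic formulas rely on.
import re

def count_syllables(word: str) -> int:
    # Count runs of consecutive vowels as a rough syllable estimate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)      # avg sentence length
            - 84.6 * (syllables / len(words)))      # avg syllables per word

print(flesch_reading_ease("The cat sat. The dog ran."))
```

Two texts with identical surface statistics but very different cohesion would score the same here, which is the gap Coh-Metrix's cohesion measures are designed to fill.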
The preventive effect of kefir on relative CD4 T-cell and IL-10 levels in the spleen of ovalbumin-induced BALB/c mice (Mus musculus)
An allergy is an excessive bodily response to an antigen (allergen) and is a type 1 hypersensitivity reaction mediated by lymphocytes together with soluble cytokine proteins. Allergy prevention may be attempted by administering kefir, as kefir contains lactic acid bacteria, yeasts, and bioactive peptides. This study used a laboratory experimental post-test-only control design with a completely randomized design (CRD). BALB/c mice were divided into 5 groups with 4 replicates each. The negative control group consisted of healthy mice given a placebo of physiological NaCl orally (0.5 ml/mouse) on days 8–21. The positive control, P1, P2, and P3 groups were given ovalbumin at a dose of 20 μg/mouse intraperitoneally on days 15 and 22, and were then re-challenged with OVA orally (60 mg/mouse) on day 29. Groups P1, P2, and P3 received preventive kefir at graded doses of 300 mg/kg BW, 600 mg/kg BW, and 900 mg/kg BW on days 8–21. Necropsy was performed on day 30 and the spleen was collected to measure relative CD4 T-cell and IL-10 levels by flow cytometry. The quantitative flow cytometry data were analyzed using one-way ANOVA followed by Tukey's test at α = 0.05. In this study, kefir at a dose of 900 mg/kg BW for 14 days was the effective dose for reducing relative CD4 T-cell and IL-10 levels in ovalbumin-induced BALB/c mice. The study concludes that kefir can reduce the inflammatory reaction caused by ovalbumin induction.
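The statistical step described in this abstract — one-way ANOVA across dose groups followed by Tukey's HSD post-hoc test at α = 0.05 — can be sketched as below. The group values are synthetic placeholders, not the study's data, and only three of the five groups are shown for brevity.

```python
# One-way ANOVA followed by Tukey's HSD, mirroring the analysis described
# in the abstract. All numbers are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Relative CD4 T-cell levels (%) for four replicates per group (synthetic).
control_neg = rng.normal(20, 1.5, 4)   # healthy, placebo
control_pos = rng.normal(30, 1.5, 4)   # ovalbumin only
kefir_900   = rng.normal(22, 1.5, 4)   # ovalbumin + kefir 900 mg/kg BW

f_stat, p_value = stats.f_oneway(control_neg, control_pos, kefir_900)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Post-hoc pairwise comparisons, as in the study's Tukey step.
    tukey = stats.tukey_hsd(control_neg, control_pos, kefir_900)
    print(tukey)
```

ANOVA alone only establishes that some group means differ; the Tukey step identifies which pairs of doses differ while controlling the family-wise error rate.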