37 research outputs found

    Epizootic Rabbit Enteropathy (ERE): A Review of Current Knowledge

    This literature review deals with epizootic rabbit enteropathy (ERE), a condition that is potentially fatal to infected animals and continues to threaten the rabbit production industry internationally. The documented history of the condition is reviewed, together with what is known regarding the aetiology of the disease and the candidate organisms that appear to be associated with its onset, although none can yet be implicated as the causal agent. Approaches to reduce the incidence of the condition (combining both husbandry practices and nutritional considerations), together with potential post-onset treatments and management strategies, are also discussed.

    Identification of undiagnosed atrial fibrillation patients using a machine learning risk prediction algorithm and diagnostic testing (PULsE-AI): Study protocol for a randomised controlled trial.

    Atrial fibrillation (AF) is associated with an increased risk of stroke, enhanced stroke severity, and other comorbidities. However, AF is often asymptomatic and frequently remains undiagnosed until complications occur. Current screening approaches for AF lack either cost-effectiveness or diagnostic sensitivity; thus, there is interest in tools that could be used for population screening. An AF risk prediction algorithm, developed using machine learning from a UK dataset of 2,994,837 patients, was found to be more effective than existing models at identifying patients at risk of AF. The aim of this trial is therefore to assess the effectiveness of the risk prediction algorithm combined with diagnostic testing for the identification of AF in a real-world primary care setting. Eligible participants (aged ≥30 years and without an existing AF diagnosis) registered at participating UK general practices will be randomised into intervention and control arms. Intervention arm participants identified at highest risk of developing AF (algorithm risk score ≥7.4%) will be invited for a 12-lead electrocardiogram (ECG) followed by two weeks of home-based ECG monitoring with a KardiaMobile device. Control arm participants will be managed routinely and used for comparison. The primary outcome is the number of AF diagnoses in the intervention arm compared with the control arm during the research window. If the trial is successful, the risk prediction algorithm could be implemented throughout primary care to narrow the population considered at highest risk of AF who could benefit from more intensive screening. Trial registration: NCT04045639.
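    The participant flow described in the protocol summary can be sketched in a few lines. Only the eligibility criteria (aged ≥30 years, no existing AF diagnosis) and the 7.4% risk threshold come from the abstract; the function names, the 1:1 randomisation scheme, and everything else below are hypothetical illustration, not the trial's actual implementation.

    ```python
    import random

    # Risk threshold from the protocol summary: algorithm score >= 7.4%.
    RISK_THRESHOLD = 0.074

    def is_eligible(age: int, has_af_diagnosis: bool) -> bool:
        """Eligible: aged >= 30 years and without an existing AF diagnosis."""
        return age >= 30 and not has_af_diagnosis

    def assign_arm(rng: random.Random) -> str:
        """1:1 randomisation into intervention or control (scheme assumed)."""
        return rng.choice(["intervention", "control"])

    def invite_for_ecg(arm: str, risk_score: float) -> bool:
        """Only highest-risk intervention-arm participants are invited for a
        12-lead ECG plus two weeks of home-based ECG monitoring; control-arm
        participants are managed routinely."""
        return arm == "intervention" and risk_score >= RISK_THRESHOLD
    ```

    The two-stage gate (arm first, then risk score) mirrors the design: control-arm participants are never invited regardless of their score, which is what allows the arms to be compared on diagnosis counts.
    
    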

    Cardiovascular risk assessment scores for people with diabetes: a systematic review

    People with type 2 diabetes have an increased risk of cardiovascular disease (CVD). Multivariate cardiovascular risk scores have been used in many countries to identify individuals who are at high risk of CVD. These risk scores include those originally developed in individuals with diabetes and those developed in a general population. This article reviews the published evidence for the performance of CVD risk scores in diabetic patients by: (1) examining the overall rationale for using risk scores; (2) systematically reviewing the literature on available scores; and (3) exploring methodological issues surrounding the development, validation and comparison of risk scores. The predictive performance of cardiovascular risk scores varies substantially between different populations. There is little evidence to suggest that risk scores developed in individuals with diabetes estimate cardiovascular risk more accurately than those developed in the general population. The inconsistency in the methods used in evaluation studies makes it difficult to compare and summarise the predictive ability of risk scores. Overall, CVD risk scores rank individuals reasonably accurately and are therefore useful in the management of diabetes with regard to targeting therapy to patients at highest risk. However, due to the uncertainty in estimating true risk, care is needed when using scores to communicate absolute CVD risk to individuals.
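    The claim that risk scores "rank individuals reasonably accurately" is conventionally quantified with a concordance (C) statistic: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal sketch, with risk estimates and outcomes invented purely for illustration:

    ```python
    from itertools import product

    def c_statistic(scores, outcomes):
        """Concordance (C) statistic over all case/non-case pairs.
        Concordant pairs score 1, ties 0.5, discordant pairs 0."""
        cases = [s for s, y in zip(scores, outcomes) if y == 1]
        noncases = [s for s, y in zip(scores, outcomes) if y == 0]
        pairs = list(product(cases, noncases))
        concordant = sum(1.0 if c > n else 0.5 if c == n else 0.0
                         for c, n in pairs)
        return concordant / len(pairs)

    # Hypothetical 10-year CVD risk estimates and observed events
    # (invented data; not from any study in this listing):
    risks = [0.05, 0.22, 0.11, 0.40, 0.08, 0.31]
    events = [0, 1, 0, 1, 0, 0]
    # c_statistic(risks, events) -> 0.875
    ```

    A C-statistic of 0.5 is chance-level ranking and 1.0 is perfect discrimination; note that good discrimination says nothing about calibration, which is why the review cautions against communicating absolute risk from a score's output alone.
    
    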

    Team Dynamics Theory: Nomological network among cohesion, team mental models, coordination, and collective efficacy

    I put forth a theoretical framework, namely Team Dynamics Theory (TDT), to address the need for a parsimonious yet integrated, explanatory and systemic view of team dynamics. In TDT, I integrate team processes and outputs and explain their relationships within a systemic view of team dynamics. Specifically, I propose a generative nomological network linking cohesion, team mental models, coordination, collective efficacy, and team outcomes. From this nomological conceptualization, I illustrate how myriad alternative models can be derived to account for variance in different working teams, each comprised of unique members and embedded in singular contexts. I outline TDT’s applied implications for team development, the enhancement of team functioning, and the profiling of team resilience. I conclude by discussing how TDT’s ontological and nomological propositions can be tested through various theoretical inquiries, methodological approaches, and intervention-based studies.

    Predicting atrial fibrillation in primary care using machine learning

    Background: Atrial fibrillation (AF) is the most common sustained heart arrhythmia. However, as many cases are asymptomatic, a large proportion of patients remain undiagnosed until serious complications arise. Efficient, cost-effective detection of undiagnosed patients may be supported by risk-prediction models relating patient factors to AF risk. However, there is a need for an implementable risk model that is contemporaneous and informed by routinely collected patient data, reflecting the real-world pathology of AF. Methods: This study sought to develop and evaluate novel and conventional statistical and machine learning models for risk prediction of AF. This was a retrospective cohort study of adults (aged ≥30 years) without a history of AF, listed on the Clinical Practice Research Datalink, from January 2006 to December 2016. Models evaluated included published risk models (Framingham, ARIC, CHARGE-AF), machine learning models that evaluated baseline and time-updated information (neural network, LASSO, random forests, support vector machines), and Cox regression. Results: Analysis of 2,994,837 individuals (3.2% AF) identified time-varying neural networks as the optimal model, achieving an AUROC of 0.827 vs. 0.725 and a number needed to screen of 9 vs. 13 patients at 75% sensitivity, compared with the best existing model, CHARGE-AF. The optimal model confirmed known baseline risk factors (age, previous cardiovascular disease, antihypertensive medication usage) and identified additional time-varying predictors (proximity of cardiovascular events, body mass index (both levels and changes), pulse pressure, and the frequency of blood pressure measurements). Conclusion: The optimal time-varying machine learning model exhibited greater predictive performance than existing AF risk models and reflected known and new patient risk factors for AF.
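    The "number needed to screen" (NNS) metric reported above (9 vs. 13 at 75% sensitivity) follows from simple arithmetic: NNS is the number of patients flagged as high risk per true AF case detected. A sketch with invented counts; only the 3.2% prevalence figure comes from the abstract, and the false-positive counts are chosen purely to reproduce NNS values of 9 and 13:

    ```python
    def number_needed_to_screen(true_positives: int, false_positives: int) -> float:
        """Patients flagged as high risk per incident AF case detected."""
        flagged = true_positives + false_positives
        return flagged / true_positives

    # Hypothetical cohort of 1,000 patients with 32 incident AF cases
    # (3.2%, the prevalence reported in the abstract). At 75% sensitivity
    # a model detects 24 of the 32 cases; the number of non-cases flagged
    # alongside them (invented here) determines the NNS.
    nns_model_a = number_needed_to_screen(24, 192)  # 216 flagged / 24 cases = 9.0
    nns_model_b = number_needed_to_screen(24, 288)  # 312 flagged / 24 cases = 13.0
    ```

    Holding sensitivity fixed makes the comparison fair: the better-discriminating model needs a smaller flagged pool to capture the same fraction of true cases, which is what a drop from 13 to 9 represents in practice.
    
    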