Automation bias in electronic prescribing
© 2017 The Author(s). Background: Clinical decision support (CDS) in e-prescribing can improve safety by alerting clinicians to potential errors, but it introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but it has not yet been tested in e-prescribing. This study tests for the presence of AB in e-prescribing and for the impact of task complexity and interruptions on AB. Methods: One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. Results: Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70) and 39.2% (p < .0001, n = 120) for low, low + interruption and high complexity scenarios respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82) and 26.7% (p < .0001, n = 120). Participants accepted false positive alerts (commission errors) at rates of 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82) and 51.7% (p < .0001, n = 120). Task complexity and interruptions had no impact on AB. Conclusions: This study found evidence of AB omission and commission errors in e-prescribing. Verification of CDS alerts is key to avoiding AB errors, but interventions focused on verification have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS alerts.
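The headline results above are differences in error proportions between CDS conditions. As a rough illustration of how one such contrast can be tested, the sketch below runs a chi-square test on a 2x2 table of omission errors; the counts are hypothetical placeholders, not the study's data.

```python
# Hypothetical illustration only; not the study's data or analysis code.
# Compare omission-error rates between correct-CDS and no-CDS conditions.
from scipy.stats import chi2_contingency

#            omission error, no error
table = [[20, 100],   # correct CDS (hypothetical counts)
         [66,  54]]   # no CDS      (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```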
Inhibition of Tendon Cell Proliferation and Matrix Glycosaminoglycan Synthesis by Non-Steroidal Anti-Inflammatory Drugs in vitro
The purpose of this study was to investigate the effects of some commonly used non-steroidal anti-inflammatory drugs (NSAIDs) on human tendon. Explants of human digital flexor and patella tendons were cultured in medium containing pharmacological concentrations of NSAIDs. Cell proliferation was measured by incorporation of ³H-thymidine and glycosaminoglycan synthesis was measured by incorporation of ³⁵S-sulphate. Diclofenac and aceclofenac had no significant effect on either tendon cell proliferation or glycosaminoglycan synthesis. Indomethacin and naproxen inhibited cell proliferation in patella tendons and inhibited glycosaminoglycan synthesis in both digital flexor and patella tendons. If these findings apply to the in vivo situation, these NSAIDs should be used with caution in the treatment of pain after tendon injury and surgery
Associations between double-checking and medication administration errors: A direct observational study of paediatric inpatients
Background: Double-checking the administration of medications has been standard practice in paediatric hospitals around the world for decades. While the practice is widespread, evidence of its effectiveness in reducing errors or harm is scarce. Objectives: To measure the association between double-checking and the occurrence and potential severity of medication administration errors (MAEs); check duration; and factors associated with double-checking adherence. Methods: Direct observational study of 298 nurses administering 5140 medication doses to 1523 patients across nine wards in a paediatric hospital. Independent observers recorded details of administrations and double-checking (independent; primed, where one nurse shares information that may influence the checking nurse; incomplete; or none) in real time during weekdays and weekends between 07:00 and 22:00. Observational medication data were compared with patients' medical records by a reviewer (blinded to checking status) to identify MAEs. MAEs were rated for potential severity. Observations included administrations where double-checking was mandated or optional. Multivariable regression examined the association between double-checking, MAEs and potential severity, and the factors associated with policy adherence. Results: For 3563 administrations double-checking was mandated. Of these, 36 (1·0%) received independent double-checks, 3296 (92·5%) primed and 231 (6·5%) no or incomplete double-checks. For 1577 administrations double-checking was not mandatory, but in 26·3% (n=416) nurses chose to double-check. Where double-checking was mandated there was no significant association between double-checking and MAEs (OR 0·89 (0·65-1·21); p=0·44), or potential MAE severity (OR 0·86 (0·65-1·15); p=0·31). Where double-checking was not mandated but performed, MAEs were less likely to occur (OR 0·71 (0·54-0·95); p=0·02) and had lower potential severity (OR 0·75 (0·57-0·99); p=0·04). Each double-check took an average of 6·4 min (107 hours per 1000 administrations). Conclusions: Compliance with mandated double-checking was very high, but it was rarely independent. Primed double-checking was highly prevalent but, compared with single-checking, conferred no benefit in terms of reduced errors or severity. Our findings raise questions about whether, when and how double-checking policies deliver safety benefits, and whether they warrant the considerable resource investment required in modern clinical settings
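The abstract reports its associations as odds ratios from multivariable regression. A minimal sketch of that style of analysis on a hypothetical data frame (the variable names and covariate below are stand-ins, not the study's) might look like this:

```python
# Hypothetical sketch of an odds-ratio analysis; not the study's code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "mae": rng.integers(0, 2, n),             # 1 = medication administration error
    "double_checked": rng.integers(0, 2, n),  # 1 = dose was double-checked
    "weekend": rng.integers(0, 2, n),         # example covariate
})

fit = smf.logit("mae ~ double_checked + weekend", data=df).fit(disp=False)
or_est = np.exp(fit.params["double_checked"])
ci = np.exp(fit.conf_int().loc["double_checked"])
print(f"OR = {or_est:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")

# Sanity check of the reported time cost:
print(6.4 * 1000 / 60, "hours per 1000 administrations")  # ~107
```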
Immune-mediated competition in rodent malaria is most likely caused by induced changes in innate immune clearance of merozoites
Malarial infections are often genetically diverse, leading to competitive interactions between parasites. A quantitative understanding of the competition between strains is essential to understand a wide range of issues, including the evolution of virulence and drug resistance. In this study, we use dynamical-model-based Bayesian inference to investigate the cause of competitive suppression of an avirulent clone of Plasmodium chabaudi (AS) by a virulent clone (AJ) in immuno-deficient and immuno-competent mice. We test whether competitive suppression is caused by clone-specific differences in one or more of the following processes: adaptive immune clearance of merozoites and parasitised red blood cells (RBCs), background loss of merozoites and parasitised RBCs, RBC age preference, RBC infection rate, burst size, and within-RBC interference. These processes were parameterised in dynamical mathematical models and fitted to experimental data. We found that just one parameter, μ, the ratio of the background loss rate of merozoites to the invasion rate of mature RBCs, needed to be clone-specific to predict the data. Interestingly, μ was found to be the same for both clones in single-clone infections, but different between the clones in mixed infections. The size of this difference was largest in immuno-competent mice and smallest in immuno-deficient mice, which explains why competitive suppression was alleviated in immuno-deficient mice. We found that competitive suppression acts early in infection, even before the day of peak parasitaemia. These results lead us to argue that the innate immune response clearing merozoites is the most likely, but not necessarily the only, mediator of competitive interactions between virulent and avirulent clones. Moreover, in mixed infections we predict an interaction between the clones and the innate immune response that induces changes in the strength of its clearance of merozoites. What this interaction is remains unknown, but future refinement of the model, challenged with other datasets, may lead to its discovery
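The key parameter μ, the ratio of the merozoites' background loss rate to their RBC invasion rate, sits naturally in a standard within-host model. The sketch below is my own stylized simplification (three compartments, hypothetical parameter values), not the authors' fitted model:

```python
# Stylized within-host malaria dynamics; an illustrative simplification,
# not the fitted model from the paper. All parameter values are hypothetical.
from scipy.integrate import solve_ivp

beta = 1e-6   # invasion rate of mature RBCs by merozoites
mu = 2e6      # ratio of background merozoite loss rate to invasion rate,
              # so background loss per merozoite = beta * mu
alpha = 1.0   # burst rate of infected RBCs (per day)
B = 10.0      # burst size (merozoites released per infected RBC)
lam = 0.025   # RBC replenishment rate (per day)
R0 = 5e6      # healthy RBC density

def rhs(t, y):
    R, I, M = y                     # uninfected RBCs, infected RBCs, merozoites
    invasion = beta * R * M
    dR = lam * (R0 - R) - invasion
    dI = invasion - alpha * I
    dM = B * alpha * I - invasion - beta * mu * M
    return [dR, dI, dM]

sol = solve_ivp(rhs, (0, 20), [R0, 0.0, 1e3], max_step=0.1)
print("peak infected-RBC density:", sol.y[1].max())
```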
Male Wistar rats show individual differences in an animal model of conformity
Conformity refers to the act of changing one’s behaviour to match that of others. Recent studies in humans have shown that individual differences exist in conformity and that these differences are related to differences in neuronal activity. To understand the neuronal mechanisms in more detail, animal tests to assess conformity are needed. Here, we used a test of conformity that has previously been evaluated in female, but not male, rats, and assessed the nature of individual differences in conformity. Male Wistar rats were given the opportunity to learn that two diets differed in palatability. They were subsequently exposed to a demonstrator rat that had consumed the less palatable food, and were then exposed to the same diets again. Just like female rats, male rats decreased their preference for the more palatable food after interaction with demonstrator rats that had eaten the less palatable food. Individual differences existed in this shift, and these were only weakly related to an interaction between the animals’ own initial preference and the amount consumed by the demonstrator rat. The data show that this conformity test in rats is a promising tool for studying the neurobiology of conformity
Effectiveness of an electronic patient-centred self-management tool for gout sufferers: A cluster randomised controlled trial protocol
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted. Introduction: Gout is increasing in prevalence despite effective therapies that lower serum urate concentrations to 0.36 mmol/L or less, a level which, if sustained, significantly reduces acute attacks of gout. Adherence to urate-lowering therapy (ULT) is poor, with rates of less than 50% 1 year after initiation of ULT. Attempts to increase adherence in gout patients have been disappointing. We aim to evaluate the effectiveness of a personal, self-management 'smartphone' application (app) in achieving target serum urate concentrations in people with gout. We hypothesise that personalised feedback of serum urate concentrations will improve adherence to ULT. Methods and analysis: Setting and design: Primary care. A prospective, cluster randomised (by general practitioner (GP) practices), controlled trial. Participants: GP practices will be randomised to either intervention or control clusters, with their patients allocated to the same cluster. Intervention: The intervention group will have access to the Healthy.me app tailored for the self-management of gout. The control group patients will have access to the same app modified to remove all functions except the Gout Attack Diary. Primary and secondary outcomes: The primary outcome is the proportion of patients whose serum urate concentrations are less than or equal to 0.36 mmol/L after 6 months. Secondary outcomes will be the proportion of patients achieving target urate concentrations at 12 months, ULT adherence rates, serum urate concentrations at 6 and 12 months, rates of attacks of gout, quality of life estimations, and process and economic evaluations. The study is designed to detect a ≥30% improvement in the intervention group above the expected 50% achievement of target serum urate at 6 months in the control group: power 0.80, significance level 0.05, assumed 'dropout' rate 20%. Ethics and dissemination: This study has been approved by the University of New South Wales Human Research Ethics Committee. Study findings will be disseminated at international conferences and in peer-reviewed journals. Trial registration number: ACTRN12616000455460
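The stated design target (detecting an improvement from the expected 50% control-group achievement to 80% in the intervention group, power 0.80, alpha 0.05, 20% dropout) can be loosely reproduced with a standard two-proportion power calculation. This is a back-of-envelope sketch, not the trial's official calculation; in particular it ignores the design effect that cluster randomisation by GP practice would add:

```python
# Back-of-envelope sample-size check; not the trial's official calculation.
# Ignores the cluster-randomisation design effect.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.80, 0.50)   # Cohen's h for 80% vs 50%
n_per_arm = NormalIndPower().solve_power(effect_size=effect,
                                         power=0.80, alpha=0.05)
n_allowing_dropout = n_per_arm / (1 - 0.20)  # inflate for 20% dropout
print(f"~{n_per_arm:.0f} per arm; ~{n_allowing_dropout:.0f} after dropout inflation")
```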
Combining a leadership course and multi-source feedback has no effect on leadership skills of leaders in postgraduate medical education. An intervention study with a control group
Background: Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. Against this background, we aimed to study the additional effect of a leadership course following a multi-source feedback (MSF) procedure, compared with multi-source feedback alone, particularly regarding the development of leadership skills over time. Methods: Study participants were consultants responsible for postgraduate medical education at clinical departments. Study design: pre-post measures with an intervention and a control group. The intervention was participation in a seven-day leadership course. Multi-source feedback scores from the consultants responsible for education and their respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using the Mann-Whitney U-test and multivariate analysis of variance. Results: There were no differences in multi-source feedback scores at one-year follow-up compared with baseline measurements, in either the intervention or the control group (p = 0.149). Conclusion: The study indicates that a leadership course following an MSF procedure, compared with MSF alone, does not improve the leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time, and the one-year time frame may have been too short to show improvement in the leadership skills of consultants responsible for education. Further studies are needed to investigate whether other combinations of initiatives to develop leadership might have more impact in the clinical setting.
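For illustration, the group comparison named in the methods can be reproduced in a few lines; the score changes below are hypothetical, not the study's data:

```python
# Hypothetical illustration of the Mann-Whitney U comparison; not study data.
from scipy.stats import mannwhitneyu

change_intervention = [0.2, -0.1, 0.0, 0.3, -0.2, 0.1, 0.0]  # 1-year MSF score changes
change_control      = [0.1,  0.0, 0.2, -0.1, 0.0, 0.1, 0.2]

u, p = mannwhitneyu(change_intervention, change_control, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```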
Co-designing a dashboard of predictive analytics and decision support to drive care quality and client outcomes in aged care: a mixed-method study protocol
Introduction: There is a clear need for improved care quality and quality monitoring in aged care. Aged care providers collect an abundance of data, yet rarely are these data integrated and transformed in real time into actionable information to support evidence-based care, nor are they shared with older people and informal caregivers. This protocol describes the co-design and testing of a dashboard in residential aged care facilities (nursing or care homes) and community-based aged care settings (formal care provided at home or in the community). The dashboard will comprise integrated data to provide an 'at-a-glance' overview of aged care clients, indicators to identify clients at risk of fall-related hospitalisations and poor quality of life, and evidence-based decision support to minimise these risks. Longer term plans for dashboard implementation and evaluation are also outlined. Methods: This mixed-method study will involve (1) co-designing dashboard features with aged care staff, clients, informal caregivers and general practitioners (GPs), (2) integrating aged care data silos and developing risk models, and (3) testing dashboard prototypes with users. The dashboard features will be informed by direct observations of routine work, interviews, focus groups and co-design groups with users, and a community forum. Multivariable discrete-time survival models will be used to develop risk indicators, using predictors from linked historical aged care and hospital data. Dashboard prototype testing will comprise interviews, focus groups and walk-through scenarios using a think-aloud approach with staff members, clients and informal caregivers, and a GP workshop. Ethics and dissemination: This study has received ethical approval from the New South Wales (NSW) Population & Health Services Research Ethics Committee and Macquarie University's Human Research Ethics Committee. The research findings will be presented to the aged care provider, who will share results with staff members, clients, residents and informal caregivers. Findings will be disseminated as peer-reviewed journal articles, policy briefs and conference presentations
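The protocol's risk indicators will come from multivariable discrete-time survival models. One common way to fit such a model is logistic regression on person-period data; the sketch below uses synthetic data with a hypothetical 'frailty' predictor, purely to show the layout:

```python
# Discrete-time survival as logistic regression on person-period data.
# Synthetic data and a hypothetical predictor; not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300  # person-period rows (one row per client per observation period)
pp = pd.DataFrame({
    "period": rng.integers(1, 5, n),    # observation period 1-4
    "frailty": rng.integers(0, 4, n),   # hypothetical risk score
})
logit_p = -3.0 + 0.3 * pp["period"] + 0.5 * pp["frailty"]
pp["event"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# per-period hazard of, e.g., a fall-related hospitalisation
fit = smf.logit("event ~ period + frailty", data=pp).fit(disp=False)
print(np.exp(fit.params))  # odds ratios
```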
Prospective object search in dogs: mixed evidence for knowledge of What and Where
We investigated whether two dogs that had been specially trained to retrieve objects by their names were able to integrate information about the identity (What) as well as the location (Where) of those objects so that they could plan their search accordingly. In a first study, two sets of objects were placed in two separate rooms and subjects were asked to retrieve the objects, one after the other. Both dogs remembered the identity of the objects as they reliably retrieved the correct objects. One of the dogs was also able to integrate information about the object’s location as he chose the correct location in which the object had been placed. Further investigation of the second dog’s behavior revealed that she followed a more stereotyped search strategy. Despite this variation in performance, this study provides evidence for the memory of What and Where in a domestic dog and shows the prospective use of such information in a search task
Pain control after total knee arthroplasty: a randomized trial comparing local infiltration anesthesia and continuous femoral block
Local infiltration analgesia (LIA) is a new multimodal wound infiltration method. It has attracted growing interest in recent years and is widely used all over the world for treating postoperative pain after knee and hip arthroplasty. The method is based on systematic infiltration of a mixture of ropivacaine, a long-acting local anesthetic; ketorolac, a cyclooxygenase inhibitor (NSAID); and adrenalin around all structures subject to surgical trauma in knee and hip arthroplasty.

Two patient cohorts, 40 patients scheduled for elective total knee arthroplasty (TKA) and 15 patients scheduled for total hip arthroplasty (THA), contributed to the work presented in this thesis. In a randomized trial, the efficacy of LIA in TKA with regard to pain at rest and upon movement was compared with femoral block. Both methods resulted in high-quality pain relief and similar morphine consumption, but fewer patients in the LIA group reported pain of 7/10 on any occasion during the 24 h monitoring period (paper I).

In the same patient cohort, the maximal total plasma concentration of ropivacaine was below the established toxic threshold for most patients, although a few reached potentially toxic concentrations of 1.4-1.7 mg/L. The time to maximal detected plasma concentration was around 4-6 h after release of the tourniquet in TKA (paper II).

All patients in the THA cohort were subjected to the routine LIA protocol. In these patients, both the total and the unbound plasma concentrations of ropivacaine were determined, and were below the established toxic threshold. As ropivacaine binds to α-1 acid glycoprotein (AAG), we assessed the possibility that increased AAG may decrease the unbound concentration of ropivacaine. A 40% increase in AAG was detected during the first 24 h after surgery; however, the fraction of unbound ropivacaine remained the same. There was a trend towards increased Cmax of ropivacaine with increasing age and decreasing creatinine clearance, but the statistical power was too low to draw any conclusions (paper III).

Administration of 30 mg ketorolac according to the LIA protocol, in both TKA and THA, resulted in a Cmax similar to that previously reported after 10 mg of intramuscular ketorolac (paper II, paper IV). Neither age, body weight, BMI, nor creatinine clearance correlated with the maximal ketorolac plasma concentration or the total exposure to ketorolac (AUC) (paper IV).

In conclusion, LIA provides good postoperative analgesia, similar to that of femoral block, after total knee arthroplasty. The plasma concentration of ropivacaine appears to be below toxic levels in most TKA patients, and the unbound plasma concentration of ropivacaine in THA appears to be below the toxic level. The use of ketorolac in LIA may not be safer than other routes of administration, and similar restrictions should be applied in patients at risk of developing side effects
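The pharmacokinetic quantities the thesis reports (Cmax, time to Cmax, AUC) are read off a concentration-time profile. Below is a generic sketch with hypothetical sampling times and ropivacaine concentrations, using the trapezoidal rule for AUC:

```python
# Generic PK summary from a concentration-time profile; the times and
# concentrations are hypothetical, not the thesis data.
import numpy as np

t = np.array([0, 1, 2, 4, 6, 8, 12, 24], dtype=float)   # hours after tourniquet release
c = np.array([0, 0.4, 0.7, 1.1, 1.2, 1.0, 0.7, 0.3])    # total plasma conc. (mg/L)

cmax = c.max()
tmax = t[c.argmax()]
auc = float(((c[1:] + c[:-1]) / 2 * np.diff(t)).sum())   # trapezoidal AUC, mg*h/L
print(f"Cmax = {cmax} mg/L at t = {tmax} h; AUC(0-24 h) = {auc:.1f} mg*h/L")
```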