270 research outputs found
Establishing gold standard approaches to rapid tranquillisation: a review and discussion of the evidence on the safety and efficacy of medications currently used
Background: Rapid tranquillisation is used when control of agitation, aggression or excitement is required. Throughout the UK there is no consensus on the choice of drugs for first-line treatment. The development of the NICE guideline on the management of violent behaviour involving psychiatric inpatients included a systematic examination of the literature on the effectiveness and safety of rapid tranquillisation (NICE, 2005). This paper presents the key findings from that review and the key guideline recommendations generated, and discusses the implications for practice of more recent research and information.
Aims: To examine the evidence on the efficacy and safety of medications used for rapid tranquillisation in inpatient psychiatric settings.
Method: Systematic review of current guidelines and phase III randomised, controlled trials of medication used for rapid tranquillisation. Formal consensus methods were used to generate clinically relevant recommendations to support safe and effective prescribing of rapid tranquillisation in the development of a NICE guideline.
Findings: There is a lack of high-quality clinical trial evidence from the UK, and therefore a ‘gold standard’ medication regimen for rapid tranquillisation has not been established.
Rapid tranquillisation and clinical practice: The NICE guideline produced 35 recommendations on rapid tranquillisation practice for the UK, with the primary aim of calming the service user to enable the use of psychosocial techniques.
Conclusions and implications for clinical practice: Further UK specific research is urgently needed that provides the clinician with a hierarchy of options for the clinical practice of rapid tranquillisation
Examining the reversibility of long-term behavioral disruptions in progeny of maternal SSRI exposure
Serotonergic dysregulation is implicated in numerous psychiatric disorders. Serotonin plays widespread trophic roles during neurodevelopment; thus, perturbations to this system during development may increase the risk for neurodevelopmental disorders. Epidemiological studies have examined the association between selective serotonin reuptake inhibitor (SSRI) treatment during pregnancy and increased autism spectrum disorder (ASD) risk in offspring. It is unclear from these studies whether ASD susceptibility is purely related to maternal psychiatric diagnosis, or whether treatment poses additional risk. We sought to determine whether maternal SSRI treatment, alone or in combination with a genetically vulnerable background, was sufficient to induce offspring behavioral disruptions relevant to ASD. We exposed C57BL/6J or Celf6(+/-) mouse dams to fluoxetine (FLX) during different periods of gestation and lactation and characterized offspring on tasks assessing social communicative interaction and repetitive behavior patterns, including sensory sensitivities. We demonstrate robust reductions in pup ultrasonic vocalizations (USVs) and alterations in social hierarchy behaviors, as well as perseverative behaviors and tactile hypersensitivity. Celf6 mutant mice demonstrate social communicative deficits and perseverative behaviors, without further interaction with FLX. FLX re-exposure in adulthood ameliorates the tactile hypersensitivity yet exacerbates the dominance phenotype. This suggests that acute deficiencies in serotonin levels likely underlie the abnormal responses to sensory stimuli, while the social alterations are instead due to altered development of social circuits. These findings indicate that maternal FLX treatment, independent of maternal stress, can induce behavioral disruptions in mammalian offspring, thus contributing to our understanding of the developmental role of the serotonin system and the possible risks to offspring of SSRI treatment during pregnancy
Clinical Practice Guidelines for Recall and Maintenance of Patients with Tooth-Borne and Implant-Borne Dental Restorations
Purpose
To provide guidelines for patient recall regimen, professional maintenance regimen, and at-home maintenance regimen for patients with tooth-borne and implant-borne removable and fixed restorations.
Materials and Methods
The American College of Prosthodontists (ACP) convened a scientific panel of experts appointed by the ACP, American Dental Association (ADA), Academy of General Dentistry (AGD), and American Dental Hygienists Association (ADHA) who critically evaluated and debated recently published findings from two systematic reviews on this topic. The major outcomes and consequences considered during formulation of the clinical practice guidelines (CPGs) were risk for failure of tooth- and implant-borne restorations. The panel conducted a round table discussion of the proposed guidelines, which were debated in detail. Feedback was used to supplement and refine the proposed guidelines, and consensus was attained.
Results
A set of CPGs was developed for tooth-borne restorations and implant-borne restorations. Each CPG comprised (1) patient recall, (2) professional maintenance, and (3) at-home maintenance. For tooth-borne restorations, the professional maintenance and at-home maintenance CPGs were subdivided for removable and fixed restorations. For implant-borne restorations, the professional maintenance CPGs were subdivided for removable and fixed restorations and further divided into biological maintenance and mechanical maintenance for each type of restoration. The at-home maintenance CPGs were subdivided for removable and fixed restorations.
Conclusions
The clinical practice guidelines presented in this document were initially developed using the two systematic reviews. Additional guidelines were developed using expert opinion and consensus, which included discussion of the best clinical practices, clinical feasibility, and risk-benefit ratio to the patient. To the authors’ knowledge, these are the first CPGs addressing patient recall regimen, professional maintenance regimen, and at-home maintenance regimen for patients with tooth-borne and implant-borne restorations. This document serves as a baseline with the expectation of future modifications when additional evidence becomes available
A Systematic Review of Recall Regimen and Maintenance Regimen of Patients with Dental Restorations. Part 2: Implant-Borne Restorations
Purpose
To evaluate the current scientific evidence on patient recall and maintenance of implant-supported restorations, to standardize patient care regimens and improve maintenance of oral health. An additional purpose was to examine areas of deficiency in the current scientific literature and provide recommendations for future studies.
Materials and Methods
An electronic search for articles in the English language literature from the past 10 years was performed independently by multiple investigators using a systematic search process. After application of predetermined inclusion and exclusion criteria, the final list of articles was reviewed to meet the objectives of this review.
Results
The initial electronic search resulted in 2816 titles. The systematic application of inclusion and exclusion criteria resulted in 14 articles that satisfied the study objectives. An additional 6 articles were added through a supplemental search process, for a total of 20 studies. Of these, 11 were randomized controlled clinical trials, and 9 were observational studies. The majority of the studies (15 of 20) were conducted in the past 5 years, and most were conducted in Europe (15), followed by Asia (2), South America (1), the United States (1), and the Middle East (1). Results from the qualitative data on a combined 1088 patients indicated that improvements in outcomes of recall and maintenance regimens were related to (1) patient/treatment characteristics (type of prosthesis, type of prosthetic components, and type of restorative materials); (2) specific oral topical agents or oral hygiene aids (electric toothbrush, interdental brush, chlorhexidine, triclosan, water flossers); and (3) professional intervention (oral hygiene maintenance and maintenance of the prosthesis).
Conclusions
There is minimal evidence related to recall regimens in patients with implant-borne removable and fixed restorations; however, a considerable body of evidence indicates that patients with implant-borne removable and fixed restorations require lifelong professional recall regimens to provide biological and mechanical maintenance, customized for each patient. Current evidence also demonstrates that the use of specific oral topical agents and oral hygiene aids can improve professional and at-home maintenance of implant-borne restorations. There is evidence to demonstrate differences in mechanical and biological maintenance needs due to differences in prosthetic materials and designs. Deficiencies in the existing evidence underscore the need to create clinical practice guidelines for recall and maintenance of patients with implant-borne dental restorations
Gulls as Sources of Environmental Contamination by Colistin-resistant Bacteria
In 2015, the mcr-1 gene, which confers resistance to colistin, an antibiotic of last resort used in treating multidrug-resistant bacterial infections in humans, was discovered in Escherichia coli in domestic swine in China. Since then, mcr-1 has been found in other human and animal populations, including wild gulls. Because gulls could disseminate the mcr-1 gene, we conducted an experiment to assess whether gulls are readily colonized with mcr-1-positive E. coli and to characterize their shedding patterns, transmission among conspecifics, and environmental deposition. Shedding of mcr-1 E. coli by small gull flocks followed a lognormal curve, and gulls shed one strain at >10^1 log10 CFU/g in their feces for 16.4 days, which persisted in the environment for 29.3 days. Because gulls are mobile and can shed antimicrobial-resistant bacteria for extended periods, gulls may facilitate transmission of mcr-1-positive E. coli to humans and livestock through fecal contamination of water, public areas and agricultural operations
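The lognormal shedding dynamic described above can be sketched numerically. This is only an illustration of the curve shape: the peak level, peak timing, spread, and detection limit below are hypothetical parameters, not values fitted by the study.

```python
import math

def shedding_level(t_days, peak=6.0, t_peak=3.0, sigma=0.9):
    """Illustrative lognormal-shaped shedding curve (log10 CFU/g vs. days).

    All parameters are hypothetical; the study reports only that shedding
    followed a lognormal curve and lasted roughly 16.4 days in gulls.
    """
    if t_days <= 0:
        return 0.0
    # Lognormal shape: symmetric in log(time), peaking at t_peak
    return peak * math.exp(-(math.log(t_days / t_peak) ** 2) / (2 * sigma ** 2))

def days_above(limit, step=0.1, horizon=60.0):
    """Number of days the modeled shedding level stays above a detection limit."""
    t, total = step, 0.0
    while t < horizon:
        if shedding_level(t) > limit:
            total += step
        t += step
    return total
```

A shedding-duration estimate of the kind reported in the abstract corresponds to evaluating `days_above` at the assay's limit of detection.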
Functional strength training versus movement performance therapy for upper limb motor recovery early after stroke: a RCT
Background: Not all stroke survivors respond to the same form of physical therapy in the same way early after stroke. The response is variable, and a detailed understanding of the interaction between specific physical therapies and neural structure and function is needed. Objectives: To determine whether upper limb recovery is enhanced more by functional strength training (FST) than by movement performance therapy (MPT), to identify the differences in the neural correlates of response to (1) FST and (2) MPT, and to determine whether or not pretreatment neural characteristics can predict recovery in response to (1) FST and (2) MPT. Design: Randomised, controlled, observer-blind, multicentre trial with embedded explanatory investigations. An independent facility used computer-generated randomisation for participants’ group allocation. Setting: In-patient rehabilitation, participants’ homes, university movement analysis facilities and NHS or university neuroimaging departments in the UK. Participants: People who were between 2 and 60 days after stroke in the territory of the anterior cerebral circulation, with some voluntary muscle contraction in the more affected upper limb but not full function. Interventions: Routine rehabilitation [conventional physical therapy (CPT)] plus either MPT or FST in equal doses during a 6-week intervention phase. FST was progressive resistive exercise provided during training of functional tasks. MPT was therapist ‘hands-on’ sensory input and guidance for the production of smooth and accurate movement. Main outcomes: Action Research Arm Test (ARAT) score for clinical efficacy. Neural measures were made of corticocortical connectivity [fractional anisotropy (FA) from the corpus callosum midline], corticospinal connectivity (asymmetry of corticospinal tract FA) and resting motor thresholds of the paretic biceps brachii (pBB) and extensor carpi radialis muscles (derived from transcranial magnetic stimulation).
Analysis: Changes in ARAT score were analysed using analysis of covariance models adjusted for baseline variables and randomisation strata. Correlation coefficients were calculated between change in neural measures and change in ARAT score per group and for the whole sample. An interaction term was calculated for each baseline neural measure and ARAT score change from baseline to outcome. Results: A total of 288 participants were randomised [mean age 72.2 (standard deviation 12.5) years; mean ARAT score 25.5 (18.2); n = 283]. For the 240 participants with ARAT measurements at baseline and outcome, the mean change scores were FST + CPT = 9.70 (11.72) and MPT + CPT = 7.90 (9.18). The group difference did not reach statistical significance (least squares mean difference 1.35, 95% confidence interval –1.20 to 3.90; p = 0.298). Correlations between ARAT change scores and baseline neural values ranged from –0.147 (p = 0.385) for whole-sample corticospinal connectivity (n = 37) to 0.199 (p = 0.320) for MPT + CPT resting motor threshold of pBB (n = 27). No statistically significant interaction effects were found between baseline neural variables and change in ARAT score. There were no differences between groups in adverse events. Limitations: The number of participants in the embedded explanatory investigation was lower than expected. Conclusions: The small difference in upper limb improvement in response to FST and MPT did not reach statistical significance. Baseline neural measures neither correlated with upper limb recovery nor predicted therapy response. Future work: Investigation of the variability of response to specific physical therapies in people early after stroke should continue. Trial registration: Current Controlled Trials ISRCTN19090862 and National Research Ethics Service reference number 11/EE/0524.
Funding: This project was funded by the Efficacy and Mechanism Evaluation programme, a Medical Research Council and National Institute for Health Research partnership
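The analysis-of-covariance approach described above (change score modelled on baseline value plus treatment group) can be sketched with simulated data. The sample size, baseline mean/SD and group effect below are loosely drawn from the abstract, but the data are synthetic, not the trial's.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 240                                            # participants with both measurements
baseline = rng.normal(25.5, 18.2, n).clip(0, 57)   # simulated baseline ARAT scores
group = np.repeat([0, 1], n // 2)                  # 0 = MPT + CPT, 1 = FST + CPT

# Synthetic change score: depends weakly on baseline plus a small group effect
change = 8.0 - 0.05 * baseline + 1.35 * group + rng.normal(0, 10, n)

# ANCOVA as an ordinary linear model: change ~ intercept + baseline + group
X = np.column_stack([np.ones(n), baseline, group])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
adjusted_diff = beta[2]   # covariate-adjusted FST-vs-MPT group difference
```

Adjusting for the baseline covariate is what distinguishes this from a plain comparison of mean change scores; the real analysis additionally adjusted for randomisation strata.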
The impact of an intervention to introduce malaria rapid diagnostic tests on fever case management in a high transmission setting in Uganda: A mixed-methods cluster-randomized trial (PRIME).
Rapid diagnostic tests for malaria (mRDTs) have been scaled up widely across Africa. The PRIME study evaluated an intervention aiming to improve fever case management using mRDTs at public health centers in Uganda. A cluster-randomized trial was conducted from 2010 to 2013 in Tororo, a high malaria transmission setting. Twenty public health centers were randomized in a 1:1 ratio to intervention or control. The intervention included training in health center management, fever case management with mRDTs, and patient-centered services, plus provision of mRDTs and artemether-lumefantrine (AL) when stocks ran low. Three rounds of interviews were conducted with caregivers of children under five years of age as they exited health centers (N = 1400); reference mRDTs were done in children with fever (N = 1336). Health worker perspectives on mRDTs were elicited through semi-structured questionnaires (N = 49) and in-depth interviews (N = 10). The primary outcome was inappropriate treatment of malaria, defined as the proportion of febrile children who were not treated according to guidelines based on the reference mRDT. There was no difference in inappropriate treatment of malaria between the intervention and control arms (24.0% versus 29.7%; adjusted risk ratio 0.81, 95% CI: 0.56 to 1.17; p = 0.24). Most children (76.0%) tested positive by reference mRDT, but many were not prescribed AL (22.5% intervention versus 25.9% control; p = 0.53). Inappropriate treatment of children testing negative by reference mRDT with AL was also common (31.3% intervention vs 42.4% control; p = 0.29). Health workers appreciated mRDTs but felt that integrating testing into practice was challenging given constraints on time and infrastructure. The PRIME intervention did not have the desired impact on inappropriate treatment of malaria for children under five. In this high transmission setting, use of mRDTs did not lead to the reductions in antimalarial prescribing seen elsewhere.
Broader investment in health systems, including infrastructure and staffing, will be required to improve fever case management
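The trial's primary comparison is reported as an adjusted risk ratio from a cluster-randomized analysis. As a simpler illustration of the underlying quantity, an unadjusted risk ratio with a Wald confidence interval on the log scale can be computed as follows; the per-arm counts are hypothetical, chosen only to reproduce the reported 24.0% vs 29.7% proportions.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio (arm A vs. arm B) with a Wald 95% CI on the log scale."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts matching the reported proportions (24.0% vs ~29.7%)
rr, lo, hi = risk_ratio(168, 700, 208, 700)
```

Note that this unadjusted interval will not match the reported adjusted one: a cluster-randomized design requires the analysis to account for within-center correlation and covariates, which widens or shifts the interval.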
Measuring The Evolutionary Rate Of Cooling Of ZZ Ceti
We have finally measured the evolutionary rate of cooling of the pulsating hydrogen-atmosphere (DA) white dwarf ZZ Ceti (Ross 548), as reflected by the drift rate of the 213.13260694 s period. Using 41 yr of time-series photometry from 1970 November to 2012 January, we determine the rate of change of this period with time to be dP/dt = (5.2 ± 1.4) × 10^-15 s/s employing the O - C method and (5.45 ± 0.79) × 10^-15 s/s using a direct nonlinear least squares fit to the entire lightcurve. We adopt the dP/dt obtained from the nonlinear least squares program as our final determination, but augment the corresponding uncertainty to a more realistic value, ultimately arriving at the measurement dP/dt = (5.5 ± 1.0) × 10^-15 s/s. After correcting for proper motion, the evolutionary rate of cooling of ZZ Ceti is computed to be (3.3 ± 1.1) × 10^-15 s/s. This value is consistent within uncertainties with the measurement of (4.19 ± 0.73) × 10^-15 s/s for another similar pulsating DA white dwarf, G 117-B15A. Measuring the cooling rate of ZZ Ceti helps us refine our stellar structure and evolutionary models, as cooling depends mainly on the core composition and stellar mass. Calibrating white dwarf cooling curves with this measurement will reduce the theoretical uncertainties involved in white dwarf cosmochronometry. Should the 213.13 s period be trapped in the hydrogen envelope, then our determination of its drift rate compared to the expected evolutionary rate suggests an additional source of stellar cooling. Attributing the excess cooling to the emission of axions imposes a constraint on the mass of the hypothetical axion particle. (Funding: NSF AST-1008734, AST-0909107; Norman Hackerman Advanced Research Program 003658-0252-2009)
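The O - C technique used above can be illustrated with a small numeric sketch: a constant period drift dP/dt makes the timing residuals grow as a parabola, O - C ≈ (1/2) (dP/dt / P) t^2, so fitting the quadratic term of the residuals recovers the drift. The sketch below uses the abstract's P and dP/dt but synthetic, noiseless residuals.

```python
import numpy as np

P = 213.1326          # pulsation period of ZZ Ceti, in seconds (from the abstract)
dPdt = 5.5e-15        # adopted period drift, s/s (from the abstract)
SEC_PER_YR = 365.25 * 86400

# 41 years of synthetic O - C residuals (seconds), noiseless for clarity
t_yr = np.linspace(0.0, 41.0, 200)
o_minus_c = 0.5 * (dPdt / P) * (t_yr * SEC_PER_YR) ** 2

# Fit a parabola in years; the quadratic coefficient a satisfies
#   a = 0.5 * (dPdt / P) * SEC_PER_YR**2
a, b, c = np.polyfit(t_yr, o_minus_c, 2)
dPdt_recovered = 2.0 * a * P / SEC_PER_YR ** 2
```

With real photometry the residuals are noisy and the uncertainty on the quadratic term sets the uncertainty on dP/dt, which is why four decades of coverage were needed for a drift this small.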