Anti-nausea effects and pharmacokinetics of ondansetron, maropitant and metoclopramide in a low-dose cisplatin model of nausea and vomiting in the dog: a blinded crossover study
Nausea is a subjective sensation that is difficult to measure in non-verbal species. The aims of this study were to determine the efficacy of three classes of antiemetic drugs in a novel low-dose cisplatin model of nausea and vomiting, and to measure change in the potential nausea biomarkers arginine vasopressin (AVP) and cortisol. A four-period blinded crossover study was conducted in eight healthy beagle dogs of both sexes. Dogs were administered 18 mg/m2 cisplatin intravenously, followed 45 min later by a 15 min infusion of either placebo (saline) or antiemetic treatment with ondansetron (0.5 mg/kg; 5-HT3 antagonist), maropitant (1 mg/kg; NK1 antagonist) or metoclopramide (0.5 mg/kg; D2 antagonist). The number of vomits and nausea-associated behaviours, scored on a visual analogue scale, were recorded every 15 min for 8 h after cisplatin administration. Plasma samples were collected to measure AVP, cortisol and antiemetic drug concentrations.
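Because the cisplatin dose above is expressed per square metre of body surface area rather than per kilogram, a brief Python sketch of the conversion may help; it assumes the commonly used canine allometric constant Km = 10.1 and illustrative body weights, neither of which is taken from the study.

```python
# Illustrative conversion of an 18 mg/m2 dose to a total dose in mg, assuming
# the commonly used canine body surface area formula BSA = Km * W_g**(2/3) * 1e-4
# with Km = 10.1; the weights below are hypothetical, not from the study.

def canine_bsa_m2(weight_kg: float, km: float = 10.1) -> float:
    """Approximate body surface area (m^2) for a dog of the given weight."""
    return km * (weight_kg * 1000.0) ** (2.0 / 3.0) * 1e-4

def total_dose_mg(weight_kg: float, dose_per_m2: float = 18.0) -> float:
    """Total dose in mg for a dose rate expressed per m^2 of body surface area."""
    return dose_per_m2 * canine_bsa_m2(weight_kg)

if __name__ == "__main__":
    for w in (9.0, 12.0):  # hypothetical beagle weights
        print(f"{w:.0f} kg dog: BSA ~ {canine_bsa_m2(w):.2f} m^2, "
              f"cisplatin ~ {total_dose_mg(w):.1f} mg")
```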
MICU2, a Paralog of MICU1, Resides within the Mitochondrial Uniporter Complex to Regulate Calcium Handling
Mitochondrial calcium uptake is present in nearly all vertebrate tissues and is believed to be critical in shaping calcium signaling, regulating ATP synthesis and controlling cell death. Calcium uptake occurs through a channel called the uniporter that resides in the inner mitochondrial membrane. Recently, we used comparative genomics to identify MICU1 and MCU as the key regulatory and putative pore-forming subunits of this channel, respectively. Using bioinformatics, we now report that the human genome encodes two additional paralogs of MICU1, which we call MICU2 and MICU3, each of which likely arose by gene duplication and exhibits distinct patterns of organ expression. We demonstrate that MICU1 and MICU2 are expressed in HeLa and HEK293T cells, and provide multiple lines of biochemical evidence that MCU, MICU1 and MICU2 reside within a complex and cross-stabilize each other's protein expression in a cell-type dependent manner. Using in vivo RNAi technology to silence MICU1, MICU2 or both proteins in mouse liver, we observe an additive impairment in calcium handling without adversely impacting mitochondrial respiration or membrane potential. The results identify MICU2 as a new component of the uniporter complex that may contribute to the tissue-specific regulation of this channel. National Institutes of Health (U.S.) (GM0077465); National Institutes of Health (U.S.) (DK080261).
Accuracy of monitors used for blood pressure checks in English retail pharmacies: a cross-sectional observational study
BACKGROUND: Free blood pressure (BP) checks offered by community pharmacies provide a potentially useful opportunity to diagnose and/or manage hypertension, but the accuracy of the sphygmomanometers in use is currently unknown. AIM: To assess the accuracy of validated automatic BP monitors used for BP checks in a UK retail pharmacy chain. DESIGN AND SETTING: Cross-sectional, observational study in 52 pharmacies from one chain in a range of locations (inner city, suburban, and rural) in central England. METHOD: Monitor accuracy was compared with a calibrated reference device (Omron PA-350) at 50 mmHg intervals across the range 0–300 mmHg (static pressure test); a difference from the reference monitor of more than 3 mmHg at any interval was considered a failure. The results were analysed by usage rates and length of time in service. RESULTS: Of 61 BP monitors tested, eight (13%) failed (that is, were >3 mmHg from reference), all of which underestimated BP. The monitor failure rate varied by length of time in use (2/38, 5% for <18 months vs 4/14, 29% for >18 months; P = 0.038) and, to some extent but non-significantly, by usage rate (4/22, 18% in monitors used more than once daily vs 2/33, 6% in those used less frequently; P = 0.204). CONCLUSION: BP monitors in a pharmacy setting fail at similar rates to those in general practice. Annual calibration checks for blood pressure monitors are needed, even for new monitors, as these data indicate declining performance from 18 months onwards.
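The static pressure test described above reduces to a simple tolerance check at fixed reference pressures; the sketch below shows that pass/fail logic in Python, assuming failure means a deviation of more than 3 mmHg at any interval (the readings are invented for illustration, not study data).

```python
# Pass/fail logic for a static pressure test: a monitor fails if any reading
# differs from the calibrated reference by more than 3 mmHg. Readings invented.
REFERENCE_MMHG = list(range(0, 301, 50))  # 0, 50, ..., 300 mmHg test points

def passes_static_test(monitor_readings, reference=REFERENCE_MMHG, tol_mmhg=3):
    """Return True if every reading is within tol_mmhg of its reference value."""
    return all(abs(m - r) <= tol_mmhg for m, r in zip(monitor_readings, reference))

if __name__ == "__main__":
    accurate_monitor = [0, 51, 99, 148, 201, 250, 299]  # all within 3 mmHg
    worn_monitor = [0, 48, 95, 146, 195, 244, 294]      # underestimates by up to 6 mmHg
    print(passes_static_test(accurate_monitor))  # True
    print(passes_static_test(worn_monitor))      # False
```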
A critical evaluation of predictive models for rooted soil strength with application to predicting the seismic deformation of rooted slopes
This paper presents a comparative study of three different classes of model for estimating the reinforcing effect of plant roots in soil, namely (i) the fibre pull-out model, (ii) fibre break models (including Wu and Waldron's Model (WWM) and the Fibre Bundle Model (FBM)) and (iii) beam bending or p-y models (specifically Beam on a Non-linear Winkler Foundation (BNWF) models). First, a model predicting root reinforcement at different potential slip-plane depths, assuming pull-out to be the dominant mechanism, was proposed. The resulting root reinforcement values were then compared with those derived from the other two types of model. The estimated rooted soil strength distributions were then incorporated within a fully dynamic, plane-strain continuum finite element model to assess the consequences of the choice of rooted soil strength model for the global seismic stability of a vegetated slope (assessed via accumulated slip during earthquake shaking). For the particular case considered in this paper (no roots were observed to have broken after shearing), root cohesion predicted by the pull-out model is much closer to that of the BNWF model, but is largely over-predicted by the family of fibre break models. In terms of the effects on the stability of vegetated slopes, there exists a threshold value of root cohesion beyond which the critical slip plane bypasses the rooted zones rather than passing through them; further increases in root cohesion beyond this value have minimal effect on global slope behaviour. This implies that root cohesion significantly over-predicted by fibre break models, when these are used to model roots with non-negligible bending stiffness, may still provide a reasonable prediction of overall behaviour, so long as the critical failure mechanism already bypasses the root-reinforced zones.
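For readers unfamiliar with the models being compared, the simplest fibre break model, Wu and Waldron's Model, is usually written as below; this is the textbook form in conventional symbols, not an equation reproduced from the paper.

```latex
% Wu and Waldron's Model (WWM): additional shear strength (root cohesion) c_r
% from roots crossing the shear plane, assuming all roots mobilise their full
% tensile strength and break simultaneously (conventional symbols):
%   T_r     mobilised root tensile strength
%   A_r/A   root area ratio at the shear plane
%   \theta  root distortion angle;  \phi'  soil angle of internal friction
\[
  c_r = T_r \,\frac{A_r}{A}\,\bigl(\cos\theta\,\tan\phi' + \sin\theta\bigr)
      \;\approx\; 1.2\,T_r\,\frac{A_r}{A}
\]
```

The simultaneous-breakage assumption makes this an upper-bound estimate, which is consistent with the over-prediction by fibre break models reported in the abstract.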
Leaf colour as a signal of chemical defence to insect herbivores in wild cabbage (Brassica oleracea)
Leaf colour has been proposed to signal levels of host defence to insect herbivores, but we lack the data on herbivory, leaf colour and levels of defence in wild host populations necessary to test this hypothesis. Such a test requires measurements of leaf spectra as they would be sensed by herbivore visual systems, as well as simultaneous measurements of chemical defences and herbivore responses to leaf colour in natural host-herbivore populations. In a large-scale field survey of wild cabbage (Brassica oleracea) populations, we show that variation in leaf colour and brightness, measured according to herbivore spectral sensitivities, predicts both levels of chemical defences (glucosinolates) and the abundance of specialist lepidopteran (Pieris rapae) and hemipteran (Brevicoryne brassicae) herbivores. In subsequent experiments, P. rapae larvae achieved faster growth and greater pupal mass when feeding on plants with bluer leaves, which contained lower levels of aliphatic glucosinolates. Glucosinolate-mediated effects on larval performance may thus contribute to the association between P. rapae herbivory and leaf colour observed in the field. However, preference tests found no evidence that adult butterflies selected host plants based on leaf coloration. In the field, B. brassicae abundance varied with leaf brightness, but greenhouse experiments were unable to identify any effects of brightness on aphid preference or performance. Our findings suggest that although leaf colour reflects both levels of host defences and herbivore abundance in the field, the ability of herbivores to respond to colour signals may be limited, even in species where performance is correlated with leaf colour.
Association of guideline and policy changes with incidence of lifestyle advice and treatment for uncomplicated mild hypertension in Primary Care: a longitudinal cohort study in the Clinical Practice Research Datalink
Objectives: Evidence to support initiation of pharmacological treatment in patients with uncomplicated (low-risk) mild hypertension is inconclusive. As such, clinical guidelines are contradictory and healthcare policy has changed regularly. The aim of this study was to determine the incidence of lifestyle advice and drug therapy in this population, and whether secular trends were associated with policy changes. Design: Longitudinal cohort study. Setting: Primary care practices contributing to the Clinical Practice Research Datalink in England. Participants: Data were extracted from the linked electronic health records of patients aged 18–74 years, with stage 1 hypertension (blood pressure between 140/90 and 159/99 mm Hg), no cardiovascular disease (CVD) risk factors and no treatment, from 1998 to 2015. Patients exited if follow-up records became unavailable, they progressed to stage 2 hypertension, developed a CVD risk factor, or received lifestyle advice/treatment. Primary outcome measures: The association between policy changes and incidence of lifestyle advice or treatment, examined using an interrupted time-series analysis. Results: A total of 108 843 patients were defined as having uncomplicated mild hypertension (mean age 51.9±12.9 years, 60.0% female). Patients spent a median 2.6 years (IQR 0.9–5.5) in the study, after which 12.2% (95% CI 12.0% to 12.4%) were given lifestyle advice, 29.9% (95% CI 29.7% to 30.2%) were prescribed medication and 19.4% (95% CI 19.2% to 19.6%) were given both. The introduction of the Quality and Outcomes Framework (QOF) and subsequent changes to QOF indicators were followed by significant increases in the incidence of lifestyle advice. Treatment prescriptions decreased slightly over time, but were not associated with policy changes. Conclusions: Despite secular trends that accord with UK guidance, many patients are still prescribed treatment for mild hypertension. Adequately powered studies are needed to determine whether this is appropriate.
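The interrupted time-series analysis mentioned above is, in essence, a segmented regression with terms for the pre-existing trend, an immediate level change at the policy date, and a change in trend thereafter. A minimal Python sketch using statsmodels is given below; the synthetic data, single change-point and variable names are illustrative assumptions, not the study's actual model specification.

```python
# Minimal segmented-regression sketch of an interrupted time series with one
# policy change-point; the data and column names are synthetic/illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"quarter": np.arange(40)})             # 40 quarterly periods
change_point = 24                                         # hypothetical policy date
df["post"] = (df["quarter"] >= change_point).astype(int)  # 1 after the change
df["time_since_change"] = np.maximum(df["quarter"] - change_point, 0)

# Synthetic outcome: baseline trend plus a level change and a slope change.
df["incidence"] = (10.0 + 0.05 * df["quarter"]
                   + 1.5 * df["post"]
                   + 0.2 * df["time_since_change"]
                   + rng.normal(0.0, 0.5, len(df)))

# 'post' estimates the immediate level change at the policy date;
# 'time_since_change' estimates the change in slope afterwards.
model = smf.ols("incidence ~ quarter + post + time_since_change", data=df).fit()
print(model.params)
```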
Accuracy of blood pressure monitors owned by patients with hypertension (ACCU-RATE study)
Background
Home blood pressure (BP) monitoring is recommended in guidelines and increasingly popular with patients and health care professionals, but the accuracy of patients’ own monitors in real world use is not known.
Aim
To assess the accuracy of home BP monitors used by people with hypertension, and investigate factors affecting accuracy.
Design and Setting
Patients on the hypertension register at seven practices in central England were surveyed to ascertain if they owned a monitor and wanted it tested.
Method
Monitor accuracy was compared with a calibrated reference device at 50 mmHg intervals between 0 and 280/300 mmHg (static pressure test); a difference from the reference monitor of more than 3 mmHg at any interval was considered a failure. Cuff performance was also assessed. Results were analysed by usage rate, length of time in service, make and model, monitor validation status, cost, and any previous testing.
Results
251 (76%, 95% CI 71-80%) of 331 tested devices passed all tests (monitors and cuffs), and 86% passed the static pressure test; deficiencies were primarily due to overestimation. 40% of testable monitors were unvalidated. The pass rate on the static pressure test was greater in validated monitors (96% [95% CI 94-98%] vs 64% [95% CI 58-69%]), in those retailing for over £10, and in those in use for less than four years. 12% of cuffs failed.
Conclusion
Patients’ own BP monitor failure rate was similar to that in studies performed in professional settings, though cuff failure was more frequent. Clinicians can be confident of the accuracy of patients’ own BP monitors if validated and less than five years old. This work represents independent research commissioned by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research funding scheme (RP-PG-1209-10051). The views expressed in this study are those of the authors and not necessarily of the NHS, the NIHR or the Department of Health. RJM was supported by an NIHR Professorship (NIHR-RP-02-12-015) and by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Oxford at Oxford Health NHS Foundation Trust. FDRH is part-funded as Director of the National Institute for Health Research (NIHR) School for Primary Care Research (SPCR), Theme Leader of the NIHR Oxford Biomedical Research Centre (BRC), and Director of the NIHR CLAHRC Oxford. JM is an NIHR Senior Investigator. No funding for this study was received from any monitor manufacturer.
Impact of Changes to National Hypertension Guidelines on Hypertension Management and Outcomes in the United Kingdom.
In recent years, national and international guidelines have recommended the use of out-of-office blood pressure monitoring for diagnosing hypertension. Despite evidence of cost-effectiveness, critics expressed concerns that this would increase cardiovascular morbidity. We assessed the impact of these changes on the incidence of hypertension, out-of-office monitoring and cardiovascular morbidity using routine clinical data from English general practices, linked to inpatient hospital, mortality, and socio-economic status data. We studied 3 937 191 adults with a median follow-up of 4.2 years (49% men, mean age 39.7 years) between April 1, 2006 and March 31, 2017. Interrupted time-series analysis was used to examine the impact of changes to English hypertension guidelines in 2011 on the incidence of hypertension (primary outcome). Secondary outcomes included the rate of out-of-office monitoring and cardiovascular events. Across the study period, the incidence of hypertension fell from 2.1 to 1.4 per 100 person-years. The change in guidance in 2011 was not associated with an immediate change in incidence (change in rate = 0.01 [95% CI, -0.18 to 0.20]) but did result in a levelling out of the downward trend (change in yearly trend = 0.09 [95% CI, 0.04 to 0.15]). Ambulatory monitoring increased significantly in 2011/2012 (change in rate = 0.52 [95% CI, 0.43 to 0.60]). The rate of cardiovascular events remained unchanged (change in rate = -0.02 [95% CI, -0.05 to 0.02]). In summary, changes to hypertension guidelines in 2011 were associated with a stabilisation in incidence and no increase in cardiovascular events. Guidelines should continue to recommend out-of-office monitoring for the diagnosis of hypertension.
Accuracy of blood-pressure monitors owned by patients with hypertension (ACCU-RATE study): a cross-sectional, observational study in central England.
BACKGROUND: Home blood-pressure (BP) monitoring is recommended in guidelines and is increasingly popular with patients and health professionals, but the accuracy of patients' own monitors in real-world use is not known. AIM: To assess the accuracy of home BP monitors used by people with hypertension, and to investigate factors affecting accuracy. DESIGN AND SETTING: Cross-sectional, observational study in urban and suburban settings in central England. METHOD: Patients (n = 6891) on the hypertension register at seven practices in the West Midlands, England, were surveyed to ascertain whether they owned a BP monitor and wanted it tested. Monitor accuracy was compared with a calibrated reference device at 50 mmHg intervals between 0 and 280/300 mmHg (static pressure test); a difference from the reference monitor of more than 3 mmHg at any interval was considered a failure. Cuff performance was also assessed. Results were analysed by frequency of use, length of time in service, make and model, monitor validation status, purchase price, and any previous testing. RESULTS: In total, 251 (76%, 95% confidence interval [CI] = 71 to 80%) of 331 tested devices passed all tests (monitors and cuffs), and 86% (95% CI = 82 to 90%) passed the static pressure test; deficiencies were primarily because monitors overestimated BP. A total of 40% of testable monitors were not validated. The pass rate on the static pressure test was greater in validated monitors (96%, 95% CI = 94 to 98%) than in unvalidated monitors (64%, 95% CI = 58 to 69%), in those retailing for >£10 (90%, 95% CI = 86 to 94%) than in those retailing for ≤£10 (66%, 95% CI = 51 to 80%), and in those in use for ≤4 years (95%, 95% CI = 91 to 98%) than in those in use for >4 years (74%, 95% CI = 67 to 82%). All in all, 12% of cuffs failed. CONCLUSION: Patients' own BP monitor failure rate was similar to that demonstrated in studies performed in professional settings, although cuff failure was more frequent. Clinicians can be confident of the accuracy of patients' own BP monitors if the devices are validated and ≤4 years old.
Optimising Management of Patients with Heart Failure with Preserved Ejection Fraction in Primary Care (OPTIMISE-HFpEF): rationale and protocol for a multi-method study.
BACKGROUND: Heart failure with preserved ejection fraction (HFpEF) is less well understood than heart failure with reduced ejection fraction (HFrEF), with greater diagnostic difficulty and management uncertainty. AIM: The primary aim is to develop an optimised programme that is informed by the needs and experiences of people with HFpEF and healthcare providers. This article presents the rationale and protocol for the Optimising Management of Patients with Heart Failure with Preserved Ejection Fraction in Primary Care (OPTIMISE-HFpEF) research programme. DESIGN & SETTING: This is a multi-method programme of research conducted in the UK. METHOD: OPTIMISE-HFpEF is a multi-site programme of research with three distinct work packages (WPs). WP1 is a systematic review of heart failure disease management programmes (HF-DMPs) tested in patients with HFpEF. WP2 has three components (a, b, c) that enable the characteristics, needs, and experiences of people with HFpEF, their carers, and healthcare providers to be understood. Qualitative enquiry (WP2a) with patients and providers will be conducted in three UK sites exploring patient and provider perspectives, with an additional qualitative component (WP2c) in one site to focus on transitions in care and carer perspectives. A longitudinal cohort study (WP2b), recruiting from four UK sites, will allow patients to be characterised and their illness trajectory observed across 1 year of follow-up. Finally, WP3 will synthesise the findings and conduct work to gain consensus on how best to identify and manage this patient group. RESULTS: Results from the four work packages will be synthesised to produce a summary of key learning points and possible solutions (optimised programme) which will be presented to a broad spectrum of stakeholders to gain consensus on a way forward. CONCLUSION: HFpEF is often described as the greatest unmet need in cardiology. The OPTIMISE-HFpEF programme aims to address this need in primary care, which is arguably the most appropriate setting for managing HFpEF. NIHR National School for Primary Care Research.
