Rapid antigen detection and molecular tests for group A streptococcal infections for acute sore throat: systematic reviews and economic evaluation
Background
Sore throat is a common condition caused by an infection of the airway. Most cases are of a viral nature; however, a number of these infections may be caused by the group A Streptococcus bacterium. Most viral and bacterial sore throat infections resolve spontaneously within a few weeks. Point-of-care testing in primary care has been recognised as an emerging technology for aiding targeted antibiotic prescribing for sore throat in cases that do not spontaneously resolve.
Objective
To systematically review the evidence for 21 point-of-care tests for detecting group A Streptococcus bacteria and to develop a de novo economic model comparing the cost-effectiveness of point-of-care tests used alongside clinical scoring tools with that of clinical scoring tools alone for patients managed in primary care and hospital settings.
Data sources
Multiple electronic databases were searched from inception to March 2019. The following databases were searched in November and December 2018 and searches were updated in March 2019: MEDLINE [via OvidSP (Health First, Rockledge, FL, USA)], MEDLINE In-Process & Other Non-Indexed Citations (via OvidSP), MEDLINE Epub Ahead of Print (via OvidSP), MEDLINE Daily Update (via OvidSP), EMBASE (via OvidSP), Cochrane Database of Systematic Reviews [via Wiley Online Library (John Wiley & Sons, Inc., Hoboken, NJ, USA)], Cochrane Central Register of Controlled Trials (CENTRAL) (via Wiley Online Library), Database of Abstracts of Reviews of Effects (DARE) (via Centre for Reviews and Dissemination), Health Technology Assessment database (via the Centre for Reviews and Dissemination), Science Citation Index and Conference Proceedings [via the Web of Science™ (Clarivate Analytics, Philadelphia, PA, USA)] and the PROSPERO International Prospective Register of Systematic Reviews (via the Centre for Reviews and Dissemination).
Review methods
Eligible studies included those of people aged ≥ 5 years presenting with sore throat symptoms, studies comparing point-of-care testing with antibiotic-prescribing decisions, studies of test accuracy and studies of cost-effectiveness. Quality assessment of eligible studies was undertaken. Meta-analysis of sensitivity and specificity was carried out for tests with sufficient data. A decision tree model estimated costs and quality-adjusted life-years from an NHS and Personal Social Services perspective.
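The review does not state which pooling method was used; as an illustration only, the sketch below pools study-level sensitivity estimates on the logit scale with a DerSimonian-Laird random-effects model, using entirely hypothetical study counts (the same approach would apply to specificity).

```python
# Illustrative sketch only: DerSimonian-Laird random-effects pooling of
# sensitivity on the logit scale. The review does not state its exact pooling
# method, and the study counts below are hypothetical.
import math

# (true positives, false negatives) from hypothetical accuracy studies
studies = [(86, 10), (120, 22), (45, 4), (230, 41)]

logits, variances = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    logits.append(math.log(sens / (1 - sens)))
    variances.append(1 / tp + 1 / fn)        # approximate variance of logit(sens)

# Fixed-effect (inverse-variance) pooled estimate, needed for the Q statistic
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights and pooled sensitivity, back-transformed from logits
w_re = [1 / (v + tau2) for v in variances]
pooled_logit = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
pooled_sens = 1 / (1 + math.exp(-pooled_logit))
print(f"pooled sensitivity = {pooled_sens:.3f}, tau^2 = {tau2:.3f}")
```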
Results
The searches identified 38 studies of clinical effectiveness and three studies of cost-effectiveness. Twenty-six full-text articles and abstracts reported on the test accuracy of point-of-care tests and/or clinical scores with biological culture as a reference standard. In the population of interest (patients with Centor/McIsaac scores of ≥ 3 points or FeverPAIN scores of ≥ 4 points), point estimates ranged from 0.829 to 0.946 for sensitivity and from 0.849 to 0.991 for specificity. There was considerable heterogeneity, even for studies using the same point-of-care test, suggesting that it is unlikely that any single study will have accurately captured a test's true performance. There is some randomised controlled trial evidence to suggest that the use of rapid antigen detection tests may help to reduce antibiotic-prescribing rates. Sensitivity and specificity estimates for each test in each age group and care setting combination were obtained using meta-analyses where appropriate. Any apparent differences in test accuracy may not be attributable to the tests themselves and may instead reflect known differences between the studies, latent characteristics or chance. Fourteen of the 21 tests reviewed were included in the economic modelling, and none was cost-effective within the current National Institute for Health and Care Excellence cost-effectiveness thresholds. Uncertainties in the cost-effectiveness estimates included model parameter inputs and assumptions that increase the cost of testing, and the penalty applied for antibiotic overprescription.
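The report's decision-tree model is not reproduced here; the minimal sketch below only illustrates how expected costs and QALYs for a "clinical score plus point-of-care test" strategy can be set against a "score alone" comparator and a willingness-to-pay threshold. Every parameter value is a hypothetical placeholder, not an input taken from the report.

```python
# Minimal decision-tree sketch, not the report's model: every number below is a
# hypothetical placeholder, used only to show how the strategies are compared.

P_GAS = 0.35             # prevalence of group A Streptococcus among high scorers
COST_ABX = 3.0           # antibiotic course, GBP
QALY_TREAT_GAS = 0.0005  # QALY gain from treating a true streptococcal infection
QALY_HARM_ABX = 0.0001   # QALY loss per unnecessary prescription (side-effects)

def strategy(p_rx_if_gas, p_rx_if_not, extra_cost=0.0):
    """Expected cost and QALYs per patient for a given prescribing rule."""
    cost = extra_cost + (P_GAS * p_rx_if_gas + (1 - P_GAS) * p_rx_if_not) * COST_ABX
    qalys = (P_GAS * p_rx_if_gas * QALY_TREAT_GAS
             - (1 - P_GAS) * p_rx_if_not * QALY_HARM_ABX)
    return cost, qalys

# Score alone: everyone above the score threshold is prescribed antibiotics.
c0, q0 = strategy(p_rx_if_gas=1.0, p_rx_if_not=1.0)
# Score plus point-of-care test: prescribe only to test-positives
# (hypothetical sensitivity 0.90, specificity 0.92, test cost 12 GBP).
c1, q1 = strategy(p_rx_if_gas=0.90, p_rx_if_not=0.08, extra_cost=12.0)

icer = (c1 - c0) / (q1 - q0)
print(f"ICER ≈ £{icer:,.0f} per QALY gained (NICE threshold ~£20,000-30,000)")
```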
Limitations
No information was identified for the elderly population or pharmacy setting. It was not possible to identify which test is the most accurate owing to the paucity of evidence.
Conclusions
The systematic review and the cost-effectiveness models identified uncertainties around the adoption of point-of-care tests in primary and secondary care settings. Although sensitivity and specificity estimates are promising, we have little information to establish the most accurate point-of-care test. Further research is needed to understand the test accuracy of point-of-care tests in the proposed NHS pathway and in comparable settings and patient groups.
Neurotrophins, cytokines and oxidative stress mediators, and mood state in bipolar disorder: systematic review and meta-analyses
BACKGROUND: A reliable biomarker signature for bipolar disorder that is sensitive to illness phase would be of considerable clinical benefit. Among circulating blood-derived markers, there has been a significant amount of research into inflammatory markers, neurotrophins and oxidative stress markers. AIMS: To synthesise and interpret existing evidence on inflammatory markers, neurotrophins and oxidative stress markers in bipolar disorder, focusing on the mood phase of illness. METHOD: Following PRISMA (Preferred Reporting Items for Systematic reviews and Meta-analyses) guidelines, a systematic review was conducted of studies investigating peripheral biomarkers in bipolar disorder compared with healthy controls. We searched Medline, Embase, PsycINFO, SciELO and Web of Science, and separated studies by bipolar mood phase (mania, depression and euthymia). Extracted data on each biomarker in separate mood phases were synthesised using random-effects model meta-analyses. RESULTS: In total, 53 studies were included, comprising 2467 cases and 2360 controls. Fourteen biomarkers were identified from meta-analyses of three or more studies. No single biomarker differentiated mood phase in bipolar disorder. Biomarker meta-analyses suggest that a combination of high-sensitivity C-reactive protein/interleukin-6, brain-derived neurotrophic factor/tumour necrosis factor (TNF)-α and soluble TNF-α receptor 1 can differentiate specific mood phases in bipolar disorder. Several other biomarkers of interest were identified. CONCLUSIONS: Combining biomarker results could differentiate individuals with bipolar disorder from healthy controls and indicate a specific mood-phase signature. Future research should seek to test these combinations of biomarkers in longitudinal studies. Declaration of interest: None.
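As a rough illustration of the calculation underlying such case-control meta-analyses (not the authors' code), the sketch below converts hypothetical biomarker summaries into standardised mean differences (Hedges' g) and pools them with inverse-variance weights; the review's actual analyses used random-effects models for each biomarker and mood phase.

```python
# Illustration only: Hedges' g for case-control biomarker comparisons, pooled
# with fixed-effect inverse-variance weights. All study summaries are hypothetical.
import math

# (n_cases, mean_cases, sd_cases, n_controls, mean_controls, sd_controls)
studies = [(40, 5.2, 2.1, 38, 4.1, 1.9),
           (65, 4.8, 2.4, 70, 4.0, 2.2),
           (28, 5.9, 2.0, 30, 4.5, 2.3)]

effects, variances = [], []
for n1, m1, s1, n2, m2, s2 in studies:
    sd_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)              # small-sample correction factor
    g = j * d                                    # Hedges' g
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    effects.append(g)
    variances.append(var_g)

weights = [1 / v for v in variances]
pooled_g = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled g = {pooled_g:.2f} "
      f"(95% CI {pooled_g - 1.96 * se:.2f} to {pooled_g + 1.96 * se:.2f})")
```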
Dysregulation of the hypothalamic pituitary adrenal (HPA) axis and physical performance at older ages: an individual participant meta-analysis
The association between functioning of the hypothalamic pituitary adrenal (HPA) axis and physical performance at older ages remains poorly understood. We carried out meta-analyses to test the hypothesis that dysregulation of the HPA axis, as indexed by patterns of diurnal cortisol release, is associated with worse physical performance. Data from six adult cohorts (ages 50–92 years) were included in a two-stage meta-analysis of individual participant data. We analysed each study separately using linear and logistic regression models and then used meta-analytic methods to pool the results. Physical performance outcome measures were walking speed, balance time, chair rise time and grip strength. Exposure measures were morning (serum and salivary) and evening (salivary) cortisol. Total sample sizes in meta-analyses ranged from n = 2146 for associations between the morning cortisol awakening response and balance to n = 8448 for associations between morning cortisol and walking speed. A larger diurnal drop was associated with faster walking speed (standardised coefficient per SD increase 0.052, 95% confidence interval (CI) 0.029, 0.076, p < 0.001; age and gender adjusted) and a quicker chair rise time (standardised coefficient per SD increase −0.075, 95% CI −0.116, −0.034, p < 0.001; age and gender adjusted). There was little evidence of associations with balance or grip strength. Greater diurnal decline of the HPA axis is associated with better physical performance in later life. This may reflect a causal effect of the HPA axis on performance, or other ageing-related factors may be associated with both reduced HPA reactivity and performance.
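A minimal sketch of the two-stage approach described above, assuming hypothetical per-cohort data frames with columns named walking_speed, cortisol_drop, age and sex (these names are assumptions, not the study's variables): each cohort is modelled separately and the study-specific coefficients are then pooled with inverse-variance weights. The paper's random-effects pooling and standardisation steps are omitted for brevity.

```python
# Two-stage individual-participant sketch (illustrative only): fit the same
# adjusted regression in each cohort, then pool the coefficients.
# Column names and the cohort data frames are hypothetical assumptions.
import math
import statsmodels.formula.api as smf

def stage_one(cohort_df):
    """Study-specific association of walking speed with diurnal cortisol drop."""
    model = smf.ols("walking_speed ~ cortisol_drop + age + sex", data=cohort_df).fit()
    return model.params["cortisol_drop"], model.bse["cortisol_drop"]

def stage_two(estimates):
    """Inverse-variance (fixed-effect) pooling of the per-cohort coefficients."""
    weights = [1 / se**2 for _, se in estimates]
    pooled = sum(w * b for w, (b, _) in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# cohorts = [df_cohort1, df_cohort2, ...]   # one pandas DataFrame per study
# beta, se = stage_two([stage_one(df) for df in cohorts])
# print(f"pooled coefficient per unit cortisol drop: {beta:.3f} (SE {se:.3f})")
```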
Childhood socioeconomic position and objectively measured physical capability levels in adulthood: a systematic review and meta-analysis
Background: Grip strength, walking speed, chair rising and standing balance time are objective measures of physical capability that characterise current health and predict survival in older populations. Socioeconomic position (SEP) in childhood may influence the peak level of physical capability achieved in early adulthood, thereby affecting levels in later adulthood. We have undertaken a systematic review with meta-analyses to test the hypothesis that adverse childhood SEP is associated with lower levels of objectively measured physical capability in adulthood.
Methods and Findings: Relevant studies published by May 2010 were identified through literature searches using EMBASE and MEDLINE. Unpublished results were obtained from study investigators. Results were provided by all study investigators in a standard format and pooled using random-effects meta-analyses. Nineteen studies were included in the review. Total sample sizes in meta-analyses ranged from N = 17,215 for chair rise time to N = 1,061,855 for grip strength. Although heterogeneity was detected, there was consistent evidence in age-adjusted models that lower childhood SEP was associated with modest reductions in physical capability levels in adulthood: comparing the lowest with the highest childhood SEP, there was a reduction in grip strength of 0.13 standard deviations (95% CI: 0.06, 0.21), a reduction in mean walking speed of 0.07 m/s (0.05, 0.10), an increase in mean chair rise time of 6% (4%, 8%) and an odds ratio of an inability to balance for 5 s of 1.26 (1.02, 1.55). Adjustment for the potential mediating factors, adult SEP and body size, attenuated associations greatly. However, despite this attenuation, for walking speed and chair rise time there was still evidence of moderate associations.
Conclusions: Policies targeting socioeconomic inequalities in childhood may have additional benefits in promoting the maintenance of independence in later life.
Internal validation of STRmix™ – A multi-laboratory response to PCAST
We report a large compilation of the internal validations of the probabilistic genotyping software STRmix™. Thirty-one laboratories contributed data, resulting in 2825 mixtures comprising three to six donors and a wide range of multiplexes, equipment, mixture proportions and templates. Previously reported trends in the likelihood ratio (LR) were confirmed, including less discriminatory LRs occurring for both donors and non-donors at low template (for the donor in question) and at high contributor number. We were unable to isolate an effect of allelic sharing; any apparent effect appears to be largely confounded with increased contributor number.
Body mass index, muscle strength and physical performance in older adults from eight cohort studies: the HALCyon programme.
Objective
To investigate the associations of body mass index (BMI) and grip strength with objective measures of physical performance (chair rise time, walking speed and balance) including an assessment of sex differences and non-linearity.
Methods
Cross-sectional data were used from eight UK cohort studies (total N = 16 444) participating in the Healthy Ageing across the Life Course (HALCyon) research programme, with participants ranging in age from 50 to 90+ years at the time of physical capability assessment. Regression models were fitted within each study, and meta-analysis methods were used to pool regression coefficients across studies and to assess the extent of heterogeneity between studies.
Results
Higher BMI was associated with poorer performance on chair rise (N = 10 773), walking speed (N = 9 761) and standing balance (N = 13 921) tests. Higher BMI was associated with stronger grip strength in men only. Stronger grip strength was associated with better performance on all tests, with a tendency for the associations to be stronger in women than in men; for example, the increase in walking speed per kg of grip strength was 0.43 cm/s (0.14, 0.71) greater in women than in men. Both BMI and grip strength remained independently related to performance after mutual adjustment, but there was no evidence of effect modification. Both BMI and grip strength exhibited non-linear relations with performance, with those in the lowest fifth of grip strength and the highest fifth of BMI having particularly poor performance. Findings were similar when waist circumference was examined in place of BMI.
Conclusion
Older men and women with weak muscle strength and high BMI have considerably poorer performance than others, and associations were observed even in the youngest cohort (age 53 years). Although causality cannot be inferred from observational cross-sectional studies, our findings suggest the likely benefit of early assessment and interventions to reduce fat mass and improve muscle strength in the prevention of future functional limitations.
Use of Repeated Blood Pressure and Cholesterol Measurements to Improve Cardiovascular Disease Risk Prediction: An Individual-Participant-Data Meta-Analysis
The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data encompassing 1962-2014) with more than 1 million measurements of systolic blood pressure, total cholesterol, and high-density lipoprotein cholesterol. Over a median of 12 years of follow-up, 21,170 CVD events occurred. Risk prediction models using cumulative mean values of repeated measurements and summary measures from longitudinal modeling of the repeated measurements were compared with models using measurements from a single time point. Risk discrimination (C-index) and net reclassification were calculated, and changes in C-indices were meta-analyzed across studies. Compared with the single-time-point model, the cumulative-means and longitudinal models increased the C-index by 0.0040 (95% confidence interval (CI): 0.0023, 0.0057) and 0.0023 (95% CI: 0.0005, 0.0042), respectively. Reclassification was also improved in both models: compared with the single-time-point model, overall net reclassification improvements were 0.0369 (95% CI: 0.0303, 0.0436) for the cumulative-means model and 0.0177 (95% CI: 0.0110, 0.0243) for the longitudinal model. In conclusion, incorporating repeated measurements of blood pressure and cholesterol into CVD risk prediction models slightly improves risk prediction.
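As a small, self-contained illustration of the reclassification metric reported above (not the authors' analysis code), the sketch below computes a continuous net reclassification improvement from two sets of predicted risks; all predicted risks and event labels are made up.

```python
# Continuous (category-free) net reclassification improvement: illustration
# only, with made-up predicted risks from a "single time point" model and a
# "cumulative means" model for the same individuals.

def continuous_nri(risk_old, risk_new, event):
    """NRI = P(up|event) - P(down|event) + P(down|non-event) - P(up|non-event)."""
    up_e = down_e = up_ne = down_ne = n_e = n_ne = 0
    for old, new, ev in zip(risk_old, risk_new, event):
        if ev:
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:
            n_ne += 1
            up_ne += new > old
            down_ne += new < old
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

risk_old = [0.05, 0.12, 0.20, 0.08, 0.30, 0.15]
risk_new = [0.07, 0.13, 0.26, 0.06, 0.28, 0.14]
event    = [1, 0, 1, 0, 1, 0]
print(f"continuous NRI = {continuous_nri(risk_old, risk_new, event):.3f}")
```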
Cardiovascular Risk Factors Associated With Venous Thromboembolism.
IMPORTANCE: It is uncertain to what extent established cardiovascular risk factors are associated with venous thromboembolism (VTE). OBJECTIVE: To estimate the associations of major cardiovascular risk factors with VTE, ie, deep vein thrombosis and pulmonary embolism. DESIGN, SETTING, AND PARTICIPANTS: This study included individual participant data, mostly from essentially population-based cohort studies, from the Emerging Risk Factors Collaboration (ERFC; 731 728 participants; 75 cohorts; years of baseline surveys, February 1960 to June 2008; latest date of follow-up, December 2015) and the UK Biobank (421 537 participants; years of baseline surveys, March 2006 to September 2010; latest date of follow-up, February 2016). Participants without cardiovascular disease at baseline were included. Data were analyzed from June 2017 to September 2018. EXPOSURES: A panel of several established cardiovascular risk factors. MAIN OUTCOMES AND MEASURES: Hazard ratios (HRs) per 1-SD higher usual risk factor levels (or presence/absence). Incident fatal outcomes in ERFC (VTE, 1041; coronary heart disease [CHD], 25 131) and incident fatal/nonfatal outcomes in UK Biobank (VTE, 2321; CHD, 3385). Hazard ratios were adjusted for age, sex, smoking status, diabetes, and body mass index (BMI). RESULTS: Of the 731 728 participants from the ERFC, 403 396 (55.1%) were female, and the mean (SD) age at the time of the survey was 51.9 (9.0) years; of the 421 537 participants from the UK Biobank, 233 699 (55.4%) were female, and the mean (SD) age at the time of the survey was 56.4 (8.1) years. Risk factors for VTE included older age (ERFC: HR per decade, 2.67; 95% CI, 2.45-2.91; UK Biobank: HR, 1.81; 95% CI, 1.71-1.92), current smoking (ERFC: HR, 1.38; 95% CI, 1.20-1.58; UK Biobank: HR, 1.23; 95% CI, 1.08-1.40), and BMI (ERFC: HR per 1-SD higher BMI, 1.43; 95% CI, 1.35-1.50; UK Biobank: HR, 1.37; 95% CI, 1.32-1.41). For these factors, there were similar HRs for pulmonary embolism and deep vein thrombosis in UK Biobank (except that adiposity was more strongly associated with pulmonary embolism) and similar HRs for unprovoked vs provoked VTE. Apart from adiposity, these risk factors were less strongly associated with VTE than with CHD. There were inconsistent associations of VTE with diabetes and blood pressure across ERFC and UK Biobank, and there was limited ability to study lipid and inflammation markers. CONCLUSIONS AND RELEVANCE: Older age, smoking, and adiposity were consistently associated with higher VTE risk. This research has been conducted using the UK Biobank resource under Application Number 26865. This work was supported by underpinning grants from the UK Medical Research Council (grant G0800270), the British Heart Foundation (grant SP/09/002), the British Heart Foundation Cambridge Cardiovascular Centre of Excellence, the UK National Institute for Health Research Cambridge Biomedical Research Centre, the European Research Council (grant 268834), the European Commission Framework Programme 7 (grant HEALTH-F2-2012-279233), and Health Data Research UK. Dr Danesh holds a British Heart Foundation Personal Chair and a National Institute for Health Research Senior Investigator Award.
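As an illustrative sketch of how a hazard ratio per 1-SD higher risk factor level can be obtained, the example below fits a Cox model to a fully synthetic data set using the lifelines package; the data, variable names and effect sizes are assumptions, not the study's data or code.

```python
# Illustration only: hazard ratio per 1-SD higher BMI from a Cox model fitted
# to synthetic data with the lifelines package (not the study's analysis code).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
bmi = rng.normal(27, 4.5, n)
age = rng.uniform(40, 70, n)
# Synthetic follow-up times in which higher BMI and older age shorten time to event
hazard = np.exp(0.08 * (bmi - 27) / 4.5 + 0.03 * (age - 55))
time = rng.exponential(20 / hazard)
event = (time < 10).astype(int)            # administrative censoring at 10 years
time = np.minimum(time, 10)

df = pd.DataFrame({
    "bmi_z": (bmi - bmi.mean()) / bmi.std(),   # standardise so the HR is per 1 SD
    "age": age,
    "time": time,
    "event": event,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
hr_per_sd = np.exp(cph.params_["bmi_z"])
print(f"HR per 1-SD higher BMI ≈ {hr_per_sd:.2f}")
```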
Longer-term efficiency and safety of increasing the frequency of whole blood donation (INTERVAL): extension study of a randomised trial of 20 757 blood donors
Background:
The INTERVAL trial showed that, over a 2-year period, inter-donation intervals for whole blood donation can be safely reduced to meet blood shortages. We extended the INTERVAL trial for a further 2 years to evaluate the longer-term risks and benefits of varying inter-donation intervals, and to compare routine versus more intensive reminders to help donors keep appointments.
Methods:
The INTERVAL trial was a parallel group, pragmatic, randomised trial that recruited blood donors aged 18 years or older from 25 static donor centres of NHS Blood and Transplant across England, UK. Here we report on the prespecified analyses after 4 years of follow-up. Participants were whole blood donors who agreed to continue trial participation on their originally allocated inter-donation intervals (men: 12, 10, and 8 weeks; women: 16, 14, and 12 weeks). They were further block-randomised (1:1) to routine versus more intensive reminders using computer-generated random sequences. The prespecified primary outcome was units of blood collected per year analysed in the intention-to-treat population. Secondary outcomes related to safety were quality of life, self-reported symptoms potentially related to donation, haemoglobin and ferritin concentrations, and deferrals because of low haemoglobin and other factors. This trial is registered with ISRCTN, number ISRCTN24760606, and has completed.
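The abstract states only that allocation used computer-generated random sequences with 1:1 block randomisation; the sketch below is a generic permuted-block illustration of that kind of allocation, not the trial's actual randomisation program, and the block size and seed are arbitrary assumptions.

```python
# Generic permuted-block randomisation sketch (1:1, block size 4); this is an
# illustration only, not the INTERVAL trial's allocation program.
import random

def permuted_block_sequence(n_participants, arms=("routine", "intensive"),
                            block_size=4, seed=2014):
    """Return a 1:1 allocation sequence built from shuffled balanced blocks."""
    assert block_size % len(arms) == 0, "block size must balance the arms"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)                 # computer-generated random order
        sequence.extend(block)
    return sequence[:n_participants]

allocation = permuted_block_sequence(10)
print(allocation)    # e.g. ['intensive', 'routine', 'routine', 'intensive', ...]
```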
Findings:
Between Oct 19, 2014, and May 3, 2016, 20 757 of the 38 035 invited blood donors (10 843 [58%] men, 9914 [51%] women) participated in the extension study. 10 378 (50%) were randomly assigned to routine reminders and 10 379 (50%) were randomly assigned to more intensive reminders. Median follow-up was 1·1 years (IQR 0·7–1·3). Compared with routine reminders, more intensive reminders increased blood collection by a mean of 0·11 units per year (95% CI 0·04–0·17; p=0·0003) in men and 0·06 units per year (0·01–0·11; p=0·0094) in women. During the extension study, each week shorter inter-donation interval increased blood collection by a mean of 0·23 units per year (0·21–0·25) in men and 0·14 units per year (0·12–0·15) in women (both p<0·0001). More frequent donation resulted in more deferrals for low haemoglobin (odds ratio per week shorter inter-donation interval 1·19 [95% CI 1·15–1·22] in men and 1·10 [1·06–1·14] in women), and lower mean haemoglobin (difference per week shorter inter-donation interval −0·84 g/L [95% CI −0·99 to −0·70] in men and −0·45 g/L [–0·59 to −0·31] in women) and ferritin concentrations (percentage difference per week shorter inter-donation interval −6·5% [95% CI −7·6 to −5·5] in men and −5·3% [–6·5 to −4·2] in women; all p<0·0001). No differences were observed in quality of life, serious adverse events, or self-reported symptoms (p>0·0001 for tests of linear trend by inter-donation intervals) other than a higher reported frequency of doctor-diagnosed low iron concentrations and prescription of iron supplements in men (p<0·0001).
Interpretation:
During a period of up to 4 years, shorter inter-donation intervals and more intensive reminders resulted in more blood being collected without a detectable effect on donors' mental and physical wellbeing. However, donors had decreased haemoglobin concentrations and more self-reported symptoms compared with the initial 2 years of the trial. Our findings suggest that blood collection services could safely use shorter donation intervals and more intensive reminders to meet shortages, for donors who maintain adequate haemoglobin concentrations and iron stores.
Funding:
NHS Blood and Transplant, UK National Institute for Health Research, UK Medical Research Council, and British Heart Foundation