
    The impact of citrate introduction at UK syringe exchange programmes: a retrospective cohort study in Cheshire and Merseyside, UK

    Background: In 2003, it became legal in the UK for syringe exchange programmes (SEPs) to provide citrate to injecting drug users to solubilise heroin. Little work has been undertaken on the effect of policy change on SEP function. Here, we examine whether the introduction of citrate in Cheshire and Merseyside SEPs has altered the number of heroin/crack injectors accessing SEPs, the frequency at which heroin/crack injectors visited SEPs and the number of syringes dispensed. Methods: Eleven SEPs in Cheshire and Merseyside commenced citrate provision in 2003. SEP-specific data for the six months before and six months after citrate was introduced were extracted from routine monitoring systems relating to heroin and crack injectors. Analyses compared all individuals attending pre and post citrate, and matched analyses compared only those individuals attending in both periods (defined as 'longitudinal attenders'). Non-parametric tests were used throughout. Results: Neither new (first seen in either six-month period) nor established clients visited SEPs more frequently post citrate. New clients collected significantly fewer syringes per visit post citrate than pre citrate (14.5 vs 10.0; z = 1.992, P < 0.05). Matched-pair analysis showed that the median number of visits for 'longitudinal attenders' (i.e. those who attended in both the pre and post citrate periods) increased from four pre citrate to five post citrate (z = 2.187, P < 0.05), but the number of syringes collected remained unchanged. These changes were not due to seasonal variation or other changes in service configuration. Conclusion: The introduction of citrate did not negatively affect SEP attendance. 'Longitudinal attenders' visited SEPs more frequently post citrate, providing staff with greater opportunity for intervention and referral. As the number of syringes they collected each visit remained unchanged, the total number of clean syringes made available to this group of injectors increased very slightly between the pre and post citrate periods. However, new clients collected significantly fewer syringes post citrate than pre citrate, possibly due to staff concerns regarding the amount of citrate (and thus syringes) to dispense safely to new clients. These concerns should not be allowed to negatively affect the number of syringes dispensed.
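
    The comparisons above rest on standard non-parametric tests. The SciPy sketch below illustrates the matched ('longitudinal attender') and unmatched comparisons on simulated data; the sample sizes, distributions and variable names are assumptions, not the study's records.

```python
# Illustrative sketch of the non-parametric comparisons described above.
# The data below are simulated; the study's per-client records are not public.
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)

# Matched analysis: visits per 'longitudinal attender' in the six months
# before and after citrate introduction (paired observations per client).
visits_pre = rng.poisson(4, size=200)
visits_post = rng.poisson(5, size=200)
stat, p = wilcoxon(visits_pre, visits_post)
print(f"Wilcoxon signed-rank test on paired visit counts: p = {p:.3f}")

# Unmatched analysis: syringes collected per visit by new clients
# pre vs post citrate (independent groups).
syringes_pre = rng.poisson(14, size=150)
syringes_post = rng.poisson(10, size=150)
stat, p = mannwhitneyu(syringes_pre, syringes_post, alternative="two-sided")
print(f"Mann-Whitney U test on syringes per visit: p = {p:.3f}")
```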

    Guided graded exercise self-help plus specialist medical care versus specialist medical care alone for chronic fatigue syndrome (GETSET): a pragmatic randomised controlled trial

    Background: Graded exercise therapy is an effective and safe treatment for chronic fatigue syndrome, but it is therapist intensive and availability is limited. We aimed to test the efficacy and safety of graded exercise delivered as guided self-help. Methods: In this pragmatic randomised controlled trial, we recruited adult patients (18 years and older) who met the UK National Institute for Health and Care Excellence criteria for chronic fatigue syndrome from two secondary-care clinics in the UK. Patients were randomly assigned to receive specialist medical care (SMC) alone (control group) or SMC with additional guided graded exercise self-help (GES). Block randomisation (randomly varying block sizes) was done at the level of the individual with a computer-generated sequence and was stratified by centre, depression score, and severity of physical disability. Patients and physiotherapists were necessarily unmasked to intervention assignment; the statistician was masked to intervention assignment. SMC was delivered by specialist doctors but was not standardised; GES consisted of a self-help booklet describing a six-step graded exercise programme that would take roughly 12 weeks to complete, and up to four guidance sessions with a physiotherapist over 8 weeks (maximum 90 min in total). Primary outcomes were fatigue (measured by the Chalder Fatigue Questionnaire) and physical function (assessed by the Short Form-36 physical function subscale); both were self-rated by patients at 12 weeks after randomisation and analysed in all randomised patients with outcome data at follow-up (ie, by modified intention to treat). We recorded adverse events, including serious adverse reactions to trial interventions. We used multiple linear regression analysis to compare SMC with GES, adjusting for baseline and stratification factors. This trial is registered at ISRCTN, number ISRCTN22975026. Findings: Between May 15, 2012, and Dec 24, 2014, we recruited 211 eligible patients, of whom 107 were assigned to the GES group and 104 to the control group. At 12 weeks, mean fatigue score was 19·1 (SD 7·6) in the GES group and 22·9 (6·9) in the control group (adjusted difference −4·2 points, 95% CI −6·1 to −2·3, p<0·0001; effect size 0·53), and mean physical function score was 55·7 (23·3) in the GES group and 50·8 (25·3) in the control group (adjusted difference 6·3 points, 1·8 to 10·8, p=0·006; 0·20). No serious adverse reactions were recorded and other safety measures did not differ between the groups, after allowing for missing data. Interpretation: GES is a safe intervention that might reduce fatigue and, to a lesser extent, physical disability for patients with chronic fatigue syndrome. These findings need confirmation and extension to other health-care settings.
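
    The adjusted between-group differences reported above come from multiple linear regression of the 12-week score on trial arm, baseline score and the stratification factors. The statsmodels sketch below illustrates that style of model on simulated data; the column names and effect sizes are assumptions, not the trial's analysis code.

```python
# Hypothetical sketch of the adjusted between-group comparison described above.
# The data are simulated; column names and effect sizes are not from the trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 211
df = pd.DataFrame({
    "group": rng.choice(["GES", "SMC"], size=n),
    "centre": rng.choice(["A", "B"], size=n),
    "depression": rng.choice(["yes", "no"], size=n),
    "disability": rng.choice(["mild", "severe"], size=n),
    "fatigue_baseline": rng.normal(26, 5, size=n),
})
# Simulate a 12-week fatigue score with a modest treatment effect.
df["fatigue_12wk"] = (
    0.6 * df["fatigue_baseline"]
    - 4.0 * (df["group"] == "GES")
    + rng.normal(0, 6, size=n)
)

# Multiple linear regression of the 12-week score on trial arm,
# adjusted for baseline score and the stratification factors.
model = smf.ols(
    "fatigue_12wk ~ C(group) + fatigue_baseline + C(centre) + C(depression) + C(disability)",
    data=df,
).fit()
# The coefficient on the group term estimates the adjusted difference between arms.
print(model.summary())
```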

    Effect of telecare on use of health and social care services: findings from the Whole Systems Demonstrator cluster randomised trial

    Objective: to assess the impact of telecare on the use of social and health care. Part of the evaluation of the Whole Systems Demonstrator trial. Participants and setting: a total of 2,600 people with social care needs were recruited from 217 general practices in three areas in England. Design: a cluster randomised trial comparing telecare with usual care, general practice being the unit of randomisation. Participants were followed up for 12 months and analyses were conducted as intention-to-treat. Data sources: trial data were linked at the person level to administrative data sets on care funded at least in part by local authorities or the National Health Service. Main outcome measures: the proportion of people admitted to hospital within 12 months. Secondary endpoints included mortality, rates of secondary care use (seven different metrics), contacts with general practitioners and practice nurses, proportion of people admitted to permanent residential or nursing care, weeks in domiciliary social care and notional costs. Results: 46.8% of intervention participants were admitted to hospital, compared with 49.2% of controls. Unadjusted differences were not statistically significant (odds ratio: 0.90, 95% CI: 0.75–1.07, P = 0.211). They reached statistical significance after adjusting for baseline covariates, but this was not replicated when adjusting for the predictive risk score. Secondary metrics including impacts on social care use were not statistically significant. Conclusions: telecare as implemented in the Whole Systems Demonstrator trial did not lead to significant reductions in service use, at least in terms of results assessed over 12 months
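
    The unadjusted odds ratio of 0.90 can be illustrated with a simple 2 × 2 table. The counts below are back-calculated approximately from the published percentages and overall sample size, so they are illustrative rather than the trial's exact figures, and the sketch ignores the adjustments for clustering and baseline covariates.

```python
# Illustrative 2x2 odds-ratio calculation in the spirit of the unadjusted
# comparison above. The counts are approximate reconstructions, not the trial's
# data, and the published analysis also accounted for clustering by practice.
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

#                  admitted  not admitted
table = np.array([[600,      682],      # telecare arm (hypothetical counts)
                  [640,      660]])     # usual-care arm (hypothetical counts)

t22 = Table2x2(table)
lo, hi = t22.oddsratio_confint()
print(f"Unadjusted odds ratio: {t22.oddsratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```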

    Effect of telehealth on quality of life and psychological outcomes over 12 months (Whole Systems Demonstrator telehealth questionnaire study): nested study of patient reported outcomes in a pragmatic, cluster randomised controlled trial

    Objective: To assess the effect of second generation, home based telehealth on health related quality of life, anxiety, and depressive symptoms over 12 months in patients with long term conditions. Design: A study of patient reported outcomes (the Whole Systems Demonstrator telehealth questionnaire study; baseline n=1573) was nested in a pragmatic, cluster randomised trial of telehealth (the Whole Systems Demonstrator telehealth trial, n=3230). General practice was the unit of randomisation, and telehealth was compared with usual care. Data were collected at baseline, four months (short term), and 12 months (long term). Primary intention to treat analyses tested treatment effectiveness; multilevel models controlled for clustering by general practice and a range of covariates. Analyses were conducted for 759 participants who completed questionnaire measures at all three time points (complete case cohort) and 1201 who completed the baseline assessment plus at least one other assessment (available case cohort). Secondary per protocol analyses tested treatment efficacy and included 633 and 1108 participants in the complete case and available case cohorts, respectively. Setting: Provision of primary and secondary care via general practices, specialist nurses, and hospital clinics in three diverse regions of England (Cornwall, Kent, and Newham), with established integrated health and social care systems. Participants: Patients with chronic obstructive pulmonary disease (COPD), diabetes, or heart failure recruited between May 2008 and December 2009. Main outcome measures: Generic, health related quality of life (assessed by physical and mental health component scores of the SF-12, and the EQ-5D), anxiety (assessed by the six item Brief State-Trait Anxiety Inventory), and depressive symptoms (assessed by the 10 item Centre for Epidemiological Studies Depression Scale). Results: In the intention to treat analyses, differences between treatment groups were small and non-significant for all outcomes in the complete case (0.480≤P≤0.904) and available case (0.181≤P≤0.905) cohorts. The magnitude of differences between trial arms did not reach the trial defined, minimal clinically important difference (0.3 standardised mean difference) for any outcome in either cohort at four or 12 months. Per protocol analyses replicated the primary analyses; the main effect of trial arm (telehealth v usual care) was non-significant for all outcomes (complete case cohort 0.273≤P≤0.761; available case cohort 0.145≤P≤0.696). Conclusions: Second generation, home based telehealth as implemented in the Whole Systems Demonstrator Evaluation was not effective or efficacious compared with usual care only. Telehealth did not improve quality of life or psychological outcomes for patients with chronic obstructive pulmonary disease, diabetes, or heart failure over 12 months. The findings suggest that concerns about a potentially deleterious effect of telehealth are unfounded for most patients. Trial registration: ISRCTN43002091.
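
    The primary analyses used multilevel models to control for clustering by general practice. The statsmodels sketch below shows that kind of model, with a random intercept per practice, fitted to simulated data; the column names and values are invented for illustration.

```python
# Hypothetical sketch of a mixed-effects model with a random intercept for
# general practice, in the spirit of the multilevel analysis described above.
# Data and column names are simulated, not from the trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_practices, per_practice = 30, 25
practice = np.repeat(np.arange(n_practices), per_practice)          # cluster id per patient
arm = np.repeat(rng.integers(0, 2, size=n_practices), per_practice) # cluster-level allocation
baseline = rng.normal(40, 10, size=practice.size)
practice_effect = rng.normal(0, 2, size=n_practices)[practice]      # shared within-practice shift
outcome = baseline + practice_effect + rng.normal(0, 8, size=practice.size)

df = pd.DataFrame({"practice": practice, "arm": arm,
                   "baseline": baseline, "sf12_12m": outcome})

# Random intercept for each practice accounts for clustering by general practice.
model = smf.mixedlm("sf12_12m ~ arm + baseline", data=df, groups=df["practice"]).fit()
print(model.summary())
```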

    Level of agreement between frequently used cardiovascular risk calculators in people living with HIV

    Objectives: The aim of the study was to describe agreement between the QRISK2, Framingham and Data Collection on Adverse Events of Anti‐HIV Drugs (D:A:D) cardiovascular disease (CVD) risk calculators in a large UK study of people living with HIV (PLWH). Methods: PLWH enrolled in the Pharmacokinetic and Clinical Observations in People over Fifty (POPPY) study without a prior CVD event were included in this study. QRISK2, Framingham CVD and the full and reduced D:A:D CVD scores were calculated; participants were stratified into 'low', 'intermediate' and 'high' (> 20%) risk categories for each. Agreement between scores was assessed using weighted kappas and Bland–Altman plots. Results: The 730 included participants were predominantly male (636; 87.1%) and of white ethnicity (645; 88.5%), with a median age of 53 [interquartile range (IQR) 49–59] years. The median calculated 10‐year CVD risk was 11.9% (IQR 6.8–18.4%), 8.9% (IQR 4.6–15.0%), 8.5% (IQR 4.8–14.6%) and 6.9% (IQR 4.1–11.1%) when using the Framingham, QRISK2, and full and reduced D:A:D scores, respectively. Agreement between the different scores was generally moderate, with the highest level of agreement being between the Framingham and QRISK2 scores (weighted kappa = 0.65) but with most other kappa coefficients in the 0.50–0.60 range. Conclusions: Estimates of predicted 10‐year CVD risk obtained with commonly used CVD risk prediction tools demonstrate, in general, only moderate agreement among PLWH in the UK. While further validation with clinical endpoints is required, our findings suggest that care should be taken when interpreting any score alone.
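
    Agreement between pairs of risk scores was assessed with weighted kappas and Bland–Altman plots. The sketch below shows both on simulated risk estimates; the risk-band cut-points and the linear weighting scheme are assumptions made for illustration, not taken from the paper.

```python
# Illustrative agreement analysis between two simulated CVD risk scores.
# The risk values, cut-points and choice of linear weights are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
framingham = rng.gamma(shape=3, scale=4, size=730)     # hypothetical 10-year risks (%)
qrisk2 = framingham * rng.normal(0.8, 0.2, size=730)   # correlated second score

def categorise(risk):
    # Three ordered risk bands; the exact cut-points here are illustrative.
    return np.digitize(risk, bins=[10, 20])            # 0 = low, 1 = intermediate, 2 = high

kappa = cohen_kappa_score(categorise(framingham), categorise(qrisk2), weights="linear")
print(f"Weighted kappa: {kappa:.2f}")

# Bland-Altman plot: difference between scores against their mean.
mean_risk = (framingham + qrisk2) / 2
diff = framingham - qrisk2
plt.scatter(mean_risk, diff, s=8)
plt.axhline(diff.mean(), color="k")
plt.axhline(diff.mean() + 1.96 * diff.std(), color="k", linestyle="--")
plt.axhline(diff.mean() - 1.96 * diff.std(), color="k", linestyle="--")
plt.xlabel("Mean of the two risk estimates (%)")
plt.ylabel("Difference between estimates (%)")
plt.show()
```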

    Depression, lifestyle factors and cognitive function in people living with HIV and comparable HIV-negative controls

    We investigated whether differences in cognitive performance between people living with HIV (PLWH) and comparable HIV-negative people were mediated or moderated by depressive symptoms and lifestyle factors. METHODS: A cross-sectional study of 637 'older' PLWH aged ≥ 50 years, 340 'younger' PLWH aged < 50 years and 276 demographically matched HIV-negative controls aged ≥ 50 years enrolled in the Pharmacokinetic and Clinical Observations in People over Fifty (POPPY) study was performed. Cognitive function was assessed using a computerized battery (CogState). Scores were standardized into Z-scores [mean = 0; standard deviation (SD) = 1] and averaged to obtain a global Z-score. Depressive symptoms were evaluated via the Patient Health Questionnaire (PHQ-9). Differences between the three groups and the effects of depression, sociodemographic factors and lifestyle factors on cognitive performance were evaluated using median regression. All analyses accounted for age, gender, ethnicity and level of education. RESULTS: After adjustment for sociodemographic factors, older and younger PLWH had poorer overall cognitive scores than older HIV-negative controls (P < 0.001 and P = 0.006, respectively). Moderate or severe depressive symptoms were more prevalent in both older (27%; P < 0.001) and younger (21%; P < 0.001) PLWH compared with controls (8%). Depressive symptoms (P < 0.001) and use of hashish (P = 0.01) were associated with lower cognitive function; alcohol consumption (P = 0.02) was associated with better cognitive scores. After further adjustment for these factors, the difference between older PLWH and HIV-negative controls was no longer significant (P = 0.08), while that between younger PLWH and older HIV-negative controls remained significant (P = 0.01). CONCLUSIONS: Poorer cognitive performances in PLWH compared with HIV-negative individuals were, in part, mediated by the greater prevalence of depressive symptoms and recreational drug use reported by PLWH
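
    The global cognitive outcome described above is built by standardising each CogState subtest into a Z-score, averaging the Z-scores, and then testing group effects by median regression. The sketch below reproduces that pipeline on simulated data; the column names and covariate set are assumptions.

```python
# Illustrative sketch of the cognitive-outcome construction and median
# regression described above. Data and column names are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "group": rng.choice(["older_PLWH", "younger_PLWH", "control"], size=n),
    "age": rng.normal(55, 8, size=n),
    "test1": rng.normal(500, 100, size=n),   # raw scores on three hypothetical subtests
    "test2": rng.normal(300, 50, size=n),
    "test3": rng.normal(80, 20, size=n),
})

# Standardise each subtest to a Z-score (mean 0, SD 1) and average them
# into a single global cognitive Z-score.
tests = ["test1", "test2", "test3"]
df["global_z"] = ((df[tests] - df[tests].mean()) / df[tests].std()).mean(axis=1)

# Median (quantile) regression of the global score on group and covariates.
model = smf.quantreg("global_z ~ C(group) + age", data=df).fit(q=0.5)
print(model.summary())
```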

    Heterozygous missense variants of LMX1A lead to nonsyndromic hearing impairment and vestibular dysfunction

    Unraveling the causes and pathomechanisms of progressive disorders is essential for the development of therapeutic strategies. Here, we identified heterozygous pathogenic missense variants of LMX1A in two families of Dutch origin with progressive nonsyndromic hearing impairment (HI), using whole exome sequencing. One variant, c.721G>C (p.Val241Leu), occurred de novo and is predicted to affect the homeodomain of LMX1A, which is essential for DNA binding. The second variant, c.290G>C (p.Cys97Ser), is predicted to affect a zinc-binding residue of the second LIM domain that is involved in protein–protein interactions. Bi-allelic deleterious variants of Lmx1a are associated with a complex phenotype in mice, including deafness and vestibular defects, due to arrest of inner ear development. Although Lmx1a mouse mutants demonstrate neurological, skeletal, pigmentation and reproductive system abnormalities, no syndromic features were present in the participating subjects of either family. LMX1A has previously been suggested as a candidate gene for intellectual disability, but our data do not support this, as affected subjects displayed normal cognition. Large variability was observed in the age of onset, (a)symmetry, severity and progression rate of HI. About half of the affected individuals displayed vestibular dysfunction and experienced symptoms thereof. The late-onset progressive phenotype and the absence of cochleovestibular malformations on computed tomography scans indicate that heterozygous defects of LMX1A do not result in severe developmental abnormalities in humans. We propose that a single LMX1A wild-type copy is sufficient for normal development but insufficient for maintenance of cochleovestibular function. Alternatively, minor cochleovestibular developmental abnormalities could eventually lead to the progressive phenotype seen in the families.
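
    The protein-level annotation p.Val241Leu follows from the coding change c.721G>C by simple codon arithmetic: codon number (721 - 1) // 3 + 1 = 241, with the substitution at the first base of that codon, turning a valine codon (GTN) into a leucine codon (CTN). The small sketch below works through that arithmetic; the specific reference codon used is an assumption for illustration.

```python
# Codon arithmetic relating the coding change c.721G>C to p.Val241Leu.
# The specific valine codon (GTG here) is an assumption; any GTN codon gives
# the same Val -> Leu result for a G>C change at its first base.
CODON_TABLE = {"GTG": "Val", "CTG": "Leu"}  # minimal table, just the codons needed

cds_position = 721                              # 1-based position in the coding sequence (c.721)
codon_number = (cds_position - 1) // 3 + 1      # -> 241
offset_in_codon = (cds_position - 1) % 3        # -> 0, i.e. first base of the codon

ref_codon = "GTG"                               # assumed valine codon at residue 241
alt_codon = ref_codon[:offset_in_codon] + "C" + ref_codon[offset_in_codon + 1:]

print(f"c.{cds_position}G>C falls in codon {codon_number}, base {offset_in_codon + 1}")
print(f"p.{CODON_TABLE[ref_codon]}{codon_number}{CODON_TABLE[alt_codon]}")
# Output:
# c.721G>C falls in codon 241, base 1
# p.Val241Leu
```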

    Arrhythmia and death following percutaneous revascularization in ischemic left ventricular dysfunction: Prespecified analyses from the REVIVED-BCIS2 trial

    BACKGROUND: Ventricular arrhythmia is an important cause of mortality in patients with ischemic left ventricular dysfunction. Revascularization with coronary artery bypass graft or percutaneous coronary intervention is often recommended for these patients before implantation of a cardiac defibrillator because it is assumed that this may reduce the incidence of fatal and potentially fatal ventricular arrhythmias, although this premise has not been evaluated in a randomized trial to date. METHODS: Patients with severe left ventricular dysfunction, extensive coronary disease, and viable myocardium were randomly assigned to receive either percutaneous coronary intervention (PCI) plus optimal medical and device therapy (OMT) or OMT alone. The composite primary outcome was all-cause death or aborted sudden death (defined as an appropriate implantable cardioverter defibrillator therapy or a resuscitated cardiac arrest) at a minimum of 24 months, analyzed as time to first event on an intention-to-treat basis. Secondary outcomes included cardiovascular death or aborted sudden death, appropriate implantable cardioverter defibrillator (ICD) therapy or sustained ventricular arrhythmia, and number of appropriate ICD therapies. RESULTS: Between August 28, 2013, and March 19, 2020, 700 patients were enrolled across 40 centers in the United Kingdom. A total of 347 patients were assigned to the PCI+OMT group and 353 to the OMT alone group. The mean age of participants was 69 years; 88% were male; 56% had hypertension; 41% had diabetes; and 53% had a clinical history of myocardial infarction. The median left ventricular ejection fraction was 28%; 53.1% had an implantable defibrillator inserted before randomization or during follow-up. All-cause death or aborted sudden death occurred in 144 patients (41.6%) in the PCI group and 142 patients (40.2%) in the OMT group (hazard ratio, 1.03 [95% CI, 0.82–1.30]; P =0.80). There was no between-group difference in the occurrence of any of the secondary outcomes. CONCLUSIONS: PCI was not associated with a reduction in all-cause mortality or aborted sudden death. In patients with ischemic cardiomyopathy, PCI is not beneficial solely for the purpose of reducing potentially fatal ventricular arrhythmias. REGISTRATION: URL: https://www.clinicaltrials.gov ; Unique identifier: NCT01920048
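
    The primary outcome was analysed as time to first event and summarised as a hazard ratio. The lifelines sketch below illustrates that kind of Cox proportional-hazards analysis on simulated data; the column names, event rates and follow-up times are assumptions, not the trial dataset.

```python
# Hypothetical time-to-first-event analysis in the spirit of the primary
# outcome above. The simulated data and column names are not from the trial.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 700
pci_arm = rng.integers(0, 2, size=n)              # 1 = PCI + OMT, 0 = OMT alone
event_time = rng.exponential(scale=60, size=n)    # months to composite event (simulated)
censor_time = rng.uniform(24, 84, size=n)         # administrative censoring (simulated)

df = pd.DataFrame({
    "pci_arm": pci_arm,
    "time": np.minimum(event_time, censor_time),          # observed follow-up time
    "event": (event_time <= censor_time).astype(int),     # 1 = death or aborted sudden death
})

# Cox proportional-hazards model; exp(coef) for pci_arm is the hazard ratio
# for PCI + OMT versus OMT alone (the trial reported HR 1.03, 95% CI 0.82-1.30).
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```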