
    Efficiency and safety of varying the frequency of whole blood donation (INTERVAL): a randomised trial of 45 000 donors

    Background: Limits on the frequency of whole blood donation exist primarily to safeguard donor health. However, there is substantial variation across blood services in the maximum frequency of donations allowed. We compared standard practice in the UK with shorter inter-donation intervals used in other countries.

    Methods: In this parallel group, pragmatic, randomised trial, we recruited whole blood donors aged 18 years or older from 25 centres across England, UK. By use of a computer-based algorithm, men were randomly assigned (1:1:1) to 12-week (standard) versus 10-week versus 8-week inter-donation intervals, and women were randomly assigned (1:1:1) to 16-week (standard) versus 14-week versus 12-week intervals. Participants were not masked to their allocated intervention group. The primary outcome was the number of donations over 2 years. Secondary outcomes related to safety were quality of life, symptoms potentially related to donation, physical activity, cognitive function, haemoglobin and ferritin concentrations, and deferrals because of low haemoglobin. This trial is registered with ISRCTN, number ISRCTN24760606, and is ongoing but no longer recruiting participants.

    Findings: 45 263 whole blood donors (22 466 men, 22 797 women) were recruited between June 11, 2012, and June 15, 2014. Data were analysed for 45 042 (99·5%) participants. Men were randomly assigned to the 12-week (n=7452) versus 10-week (n=7449) versus 8-week (n=7456) groups; and women to the 16-week (n=7550) versus 14-week (n=7567) versus 12-week (n=7568) groups. In men, compared with the 12-week group, the mean amount of blood collected per donor over 2 years increased by 1·69 units (95% CI 1·59–1·80; approximately 795 mL) in the 8-week group and by 0·79 units (0·69–0·88; approximately 370 mL) in the 10-week group (p<0·0001 for both). In women, compared with the 16-week group, it increased by 0·84 units (95% CI 0·76–0·91; approximately 395 mL) in the 12-week group and by 0·46 units (0·39–0·53; approximately 215 mL) in the 14-week group (p<0·0001 for both). No significant differences were observed in quality of life, physical activity, or cognitive function across randomised groups. However, compared with the standard-frequency groups, more frequent donation resulted in more donation-related symptoms (eg, tiredness, breathlessness, feeling faint, dizziness, and restless legs, especially among men for all listed symptoms), lower mean haemoglobin and ferritin concentrations, and more deferrals for low haemoglobin (p<0·0001 for each).

    Interpretation: Over 2 years, more frequent donation than is standard practice in the UK collected substantially more blood without having a major effect on donors' quality of life, physical activity, or cognitive function, but resulted in more donation-related symptoms, deferrals, and iron deficiency.

    Funding: NHS Blood and Transplant, National Institute for Health Research, UK Medical Research Council, and British Heart Foundation.
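The abstract reports the extra blood collected both in units and in approximate millilitres. A quick sketch checks that the two scales are mutually consistent, assuming a whole-blood unit of roughly 470 mL (a standard UK figure; the abstract itself states only the mL approximations):

```python
# Cross-check the unit-to-volume conversions reported in the abstract.
# UNIT_ML = 470 is an assumed standard whole-blood unit volume, not a
# figure given in the abstract.
UNIT_ML = 470

# (mean extra units over 2 years, approximate mL reported in the abstract)
reported = {
    "men, 8-week vs 12-week":    (1.69, 795),
    "men, 10-week vs 12-week":   (0.79, 370),
    "women, 12-week vs 16-week": (0.84, 395),
    "women, 14-week vs 16-week": (0.46, 215),
}

for label, (units, approx_ml) in reported.items():
    implied = units * UNIT_ML
    print(f"{label}: {units} units -> {implied:.0f} mL (abstract: ~{approx_ml} mL)")
```

Each implied volume lands within a few millilitres of the abstract's rounded figure, which supports the ~470 mL/unit reading.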

    Longer-term efficiency and safety of increasing the frequency of whole blood donation (INTERVAL): extension study of a randomised trial of 20 757 blood donors

    Background: The INTERVAL trial showed that, over a 2-year period, inter-donation intervals for whole blood donation can be safely reduced to meet blood shortages. We extended the INTERVAL trial for a further 2 years to evaluate the longer-term risks and benefits of varying inter-donation intervals, and to compare routine versus more intensive reminders to help donors keep appointments.

    Methods: The INTERVAL trial was a parallel group, pragmatic, randomised trial that recruited blood donors aged 18 years or older from 25 static donor centres of NHS Blood and Transplant across England, UK. Here we report on the prespecified analyses after 4 years of follow-up. Participants were whole blood donors who agreed to continue trial participation on their originally allocated inter-donation intervals (men: 12, 10, and 8 weeks; women: 16, 14, and 12 weeks). They were further block-randomised (1:1) to routine versus more intensive reminders using computer-generated random sequences. The prespecified primary outcome was units of blood collected per year analysed in the intention-to-treat population. Secondary outcomes related to safety were quality of life, self-reported symptoms potentially related to donation, haemoglobin and ferritin concentrations, and deferrals because of low haemoglobin and other factors. This trial is registered with ISRCTN, number ISRCTN24760606, and has completed.

    Findings: Between Oct 19, 2014, and May 3, 2016, 20 757 of the 38 035 invited blood donors (10 843 [58%] men, 9914 [51%] women) participated in the extension study. 10 378 (50%) were randomly assigned to routine reminders and 10 379 (50%) were randomly assigned to more intensive reminders. Median follow-up was 1·1 years (IQR 0·7–1·3). Compared with routine reminders, more intensive reminders increased blood collection by a mean of 0·11 units per year (95% CI 0·04–0·17; p=0·0003) in men and 0·06 units per year (0·01–0·11; p=0·0094) in women. During the extension study, each week shorter inter-donation interval increased blood collection by a mean of 0·23 units per year (0·21–0·25) in men and 0·14 units per year (0·12–0·15) in women (both p<0·0001). More frequent donation resulted in more deferrals for low haemoglobin (odds ratio per week shorter inter-donation interval 1·19 [95% CI 1·15–1·22] in men and 1·10 [1·06–1·14] in women), lower mean haemoglobin (difference per week shorter inter-donation interval −0·84 g/L [95% CI −0·99 to −0·70] in men and −0·45 g/L [−0·59 to −0·31] in women), and lower ferritin concentrations (percentage difference per week shorter inter-donation interval −6·5% [95% CI −7·6 to −5·5] in men and −5·3% [−6·5 to −4·2] in women; all p<0·0001). No differences were observed in quality of life, serious adverse events, or self-reported symptoms (p>0·0001 for tests of linear trend by inter-donation intervals), other than a higher reported frequency of doctor-diagnosed low iron concentrations and prescription of iron supplements in men (p<0·0001).

    Interpretation: During a period of up to 4 years, shorter inter-donation intervals and more intensive reminders resulted in more blood being collected without a detectable effect on donors' mental and physical wellbeing. However, donors had decreased haemoglobin concentrations and more self-reported symptoms compared with the initial 2 years of the trial. Our findings suggest that blood collection services could safely use shorter donation intervals and more intensive reminders to meet shortages, for donors who maintain adequate haemoglobin concentrations and iron stores.

    Funding: NHS Blood and Transplant, UK National Institute for Health Research, UK Medical Research Council, and British Heart Foundation.
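The per-week gradients above can be composed for a concrete contrast. A minimal sketch, assuming the per-week effects add for mean outcomes and multiply for odds ratios and percentage differences (the conventional reading of linear-trend estimates; the 12-to-8-week contrast is an illustrative choice, not one reported directly here):

```python
# Apply the extension study's per-week-shorter-interval gradients to a
# man moving from the 12-week standard to the 8-week interval.
# Assumption: effects compose linearly (means) / multiplicatively
# (odds ratios, percentage changes) over the 4-week difference.
weeks_shorter = 4

extra_units_per_year = weeks_shorter * 0.23          # +0.23 units/year per week
hb_change_g_per_l    = weeks_shorter * -0.84         # -0.84 g/L per week
ferritin_ratio       = (1 - 0.065) ** weeks_shorter  # -6.5% per week
deferral_odds_ratio  = 1.19 ** weeks_shorter         # OR 1.19 per week

print(f"extra collection:   {extra_units_per_year:.2f} units/year")
print(f"haemoglobin change: {hb_change_g_per_l:.2f} g/L")
print(f"ferritin change:    {(ferritin_ratio - 1) * 100:.1f}%")
print(f"deferral odds ratio: {deferral_odds_ratio:.2f}")
```

Under these assumptions the 8-week interval yields roughly 0.92 extra units per year at the cost of about 3.4 g/L lower haemoglobin, roughly a quarter lower ferritin, and about double the odds of a low-haemoglobin deferral.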

    New Product Diffusion Acceleration: Measurement and Analysis

    It is a popular contention that products launched today diffuse faster than products launched in the past. However, the evidence of diffusion acceleration is rather scant, and the methodology used in previous studies has several weaknesses. Also, little is known about why such acceleration would have occurred. This study investigates changes in diffusion speed in the United States over a period of 74 years (1923–1996) using data on 31 electrical household durables. It defines diffusion speed as the time it takes to go from one penetration level to a higher level, and measures speed using the slope coefficient of the logistic diffusion model. This metric relates unambiguously both to speed as just defined and to the empirical growth rate, a measure of instantaneous penetration growth. The data are analyzed using a single-stage hierarchical modeling approach for all products simultaneously, in which parameters capturing the adoption ceilings are estimated jointly with diffusion speed parameters. The variance in diffusion speed across and within products is represented separately but analyzed simultaneously. The focus of this study is on description and explanation rather than forecasting or normative prescription. There are three main findings.

    1. On average, there has been an increase in diffusion speed that is statistically significant and rather sizable. For the set of 31 consumer durables, the average value of the slope parameter in the logistic model's hazard function was roughly 0.48, increasing by about 0.09 every 10 years. It took an innovation reaching 5% household penetration in 1946 an estimated 13.8 years to go from 10% to 90% of its estimated maximum adoption ceiling. For an innovation reaching 5% penetration in 1980, that time would have been halved to 6.9 years. This corresponds to a compound growth rate in diffusion speed of roughly 2% per year between 1946 and 1980.

    2. Economic conditions and demographic change are related to diffusion speed. Whether the innovation is an expensive item also has a sizable effect. Finally, products that required large investments in complementary infrastructure (radio, black and white television, color television, cellular telephone) and products for which multiple competing standards were available early on (PCs and VCRs) diffused faster than other products once 5% household penetration had been achieved.

    3. Almost all the variance in diffusion speed among the products in this study can be explained by (1) the systematic increase in purchasing power and variations in the business cycle (unemployment), (2) demographic changes, and (3) the changing nature of the products studied (e.g., products with competing standards appear only late in the data set). After controlling for these factors, no systematic trend in diffusion speed remains unaccounted for.

    These findings are of interest to researchers attempting to identify patterns of difference and similarity among the diffusion paths of many innovations, either by jointly modeling the diffusion of multiple products (as in this study) or by retrospective meta-analysis. The finding that purchasing power, demographics, and the nature of the products capture nearly all the variance is of particular interest. Specifically, one does not need to invoke unobserved changes in tastes and values, as some researchers have done, to account for long-term changes in the speed at which households adopt new products. The findings also suggest that new product diffusion modelers should attempt to control not only for marketing mix variables but also for broader environmental factors. The hierarchical model structure and the findings on the systematic variance in diffusion speed across products are also of interest for forecasting applications when very little or no data are available.

    Keywords: Diffusion, New Product Research, Empirical Generalizations, Hierarchical Models, Multilevel Analysis
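The timing figures in finding 1 follow from the geometry of the logistic curve: for F(t) = m / (1 + exp(-b(t - tau))), the travel time from 10% to 90% of the ceiling m is ln(81)/b, independent of m and tau. A short sketch (the slope values here are back-calculated from the reported travel times, not stated in the abstract) recovers the reported halving and the roughly 2% compound growth rate:

```python
import math

# Logistic curve F(t) = m / (1 + exp(-b*(t - tau))).
# Solving F = 0.1*m and F = 0.9*m for t gives b*(t - tau) = -ln(9)
# and +ln(9), so the 10%-to-90% travel time is 2*ln(9)/b = ln(81)/b.
def implied_slope(years_10_to_90):
    return math.log(81) / years_10_to_90

b_1946 = implied_slope(13.8)  # abstract: 13.8 years for a 1946 innovation
b_1980 = implied_slope(6.9)   # abstract: 6.9 years for a 1980 innovation

# Compound annual growth rate in the slope (diffusion speed), 1946 -> 1980.
cagr = (b_1980 / b_1946) ** (1 / (1980 - 1946)) - 1

print(f"implied slope, 1946 cohort: {b_1946:.3f} per year")
print(f"implied slope, 1980 cohort: {b_1980:.3f} per year")
print(f"compound growth in speed:   {cagr * 100:.1f}% per year")
```

The implied slopes (about 0.32 and 0.64) also differ by roughly 0.32 over 34 years, i.e. about 0.09 per decade, matching the abstract's reported trend in the slope parameter.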