
    Online Bin Covering with Advice

    The bin covering problem asks for covering a maximum number of bins with an online sequence of n items of different sizes in the range (0, 1]; a bin is said to be covered if it receives items of total size at least 1. We study this problem in the advice setting and provide tight bounds for the size of advice required to achieve optimal solutions. Moreover, we show that any algorithm with advice of size o(log log n) has a competitive ratio of at most 0.5. In other words, advice of size o(log log n) is useless for improving the competitive ratio of 0.5, attainable by an online algorithm without advice. This result highlights a difference between the bin covering and the bin packing problems in the advice model: for the bin packing problem, there are several algorithms with advice of constant size that outperform online algorithms without advice. Furthermore, we show that advice of size O(log log n) is sufficient to achieve a competitive ratio that is arbitrarily close to 0.5333… and hence strictly better than the best ratio of 0.5 attainable by purely online algorithms. The technicalities involved in introducing and analyzing this algorithm are quite different from the existing results for the bin packing problem and confirm the different nature of these two problems. Finally, we show that a linear number of bits of advice is necessary to achieve any competitive ratio better than 15/16 for the online bin covering problem. (24 pages, 3 figures)
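    The 0.5 baseline mentioned above can be achieved without advice by a simple greedy strategy that keeps a single open bin and closes it as soon as its total size reaches 1. The sketch below is a minimal illustration of that baseline (a Dual-Next-Fit-style greedy), not the paper's advice-based algorithm.

```python
def greedy_bin_covering(items):
    """Greedy online bin covering: add each arriving item to the current bin
    and count the bin as covered once its total size reaches 1."""
    covered = 0       # number of bins covered so far
    current = 0.0     # total size in the currently open bin
    for size in items:   # items arrive online, one at a time, sizes in (0, 1]
        current += size
        if current >= 1.0:
            covered += 1
            current = 0.0
    return covered

# Example sequence of item sizes in (0, 1]
print(greedy_bin_covering([0.6, 0.6, 0.3, 0.8, 0.5]))  # -> 2 covered bins
```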

    Methodological criteria for the assessment of moderators in systematic reviews of randomised controlled trials: a consensus study

    Background: Current methodological guidelines provide advice about the assessment of subgroup analyses within RCTs, but do not specify explicit criteria for assessment. Our objective was to provide researchers with a set of criteria that will facilitate the grading of evidence for moderators in systematic reviews. Method: We developed a set of criteria from methodological manuscripts (n = 18) identified using a snowballing technique and electronic database searches. The criteria were reviewed by an international Delphi panel (n = 21) comprising authors who have published methodological papers in this area and researchers who have been active in the study of subgroup analysis in RCTs. We used the RAND/UCLA (Research ANd Development/University of California Los Angeles) appropriateness method to assess consensus on the quantitative data. Free responses were coded for consensus and disagreement. In a subsequent round, additional criteria were extracted from the Cochrane Reviewers’ Handbook, and the process was repeated. Results: The recommendations are that meta-analysts report both confirmatory and exploratory findings for subgroup analyses. Confirmatory findings must come only from studies in which a specific theory- or evidence-based a priori statement is made. Exploratory findings may be used to inform future or subsequent trials. However, for inclusion in the meta-analysis of moderators, the following additional criteria should be applied to each study: baseline factors should be measured prior to randomisation, measurement of baseline factors should be of adequate reliability and validity, and a specific test of the interaction between baseline factors and interventions must be presented. Conclusions: There is consensus from a group of 21 international experts that methodological criteria to assess moderators within systematic reviews of RCTs are both timely and necessary. The consensus from the experts resulted in five criteria, divided into two groups when synthesising evidence: confirmatory findings to support hypotheses about moderators and exploratory findings to inform future research. These recommendations are discussed in reference to previous recommendations for evaluating and reporting moderator studies.
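    As a rough illustration of how these study-level criteria might be operationalised when screening studies for a meta-analysis of moderators, the sketch below encodes the checks described above; the class and function names are ours and not part of the published criteria.

```python
from dataclasses import dataclass

@dataclass
class SubgroupReport:
    """Study-level information about one reported subgroup (moderator) analysis."""
    a_priori_hypothesis: bool         # theory/evidence-based statement made before analysis
    baseline_pre_randomisation: bool  # moderator measured prior to randomisation
    reliable_valid_measurement: bool  # adequate reliability and validity of the measure
    interaction_test_reported: bool   # a specific moderator-by-treatment interaction test

def classify(report: SubgroupReport) -> str:
    """Label a subgroup finding for synthesis, following the consensus criteria."""
    meets_core = (report.baseline_pre_randomisation
                  and report.reliable_valid_measurement
                  and report.interaction_test_reported)
    if not meets_core:
        return "exclude from moderator meta-analysis"
    return "confirmatory" if report.a_priori_hypothesis else "exploratory"

print(classify(SubgroupReport(True, True, True, True)))   # -> confirmatory
print(classify(SubgroupReport(False, True, True, True)))  # -> exploratory
```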

    How to spot a statistical problem: advice for a non-statistical reviewer

    Statistical analyses presented in general medical journals are becoming increasingly sophisticated. BMC Medicine relies on subject reviewers to indicate when a statistical review is required. We consider this policy and provide guidance on when to recommend a manuscript for statistical evaluation. Indicators for statistical review include insufficient detail in methods or results, common statistical issues, and interpretation that is not based on the presented evidence. Reviewers are required to ensure that the manuscript is methodologically sound and clearly written. Within that context, they are expected to provide constructive feedback and opinion on the statistical design, analysis, presentation and interpretation. If reviewers lack the appropriate background to confirm the appropriateness of any of the manuscript’s statistical aspects, they are encouraged to recommend it for expert statistical review.

    Prevention of haematoma progression by tranexamic acid in intracerebral haemorrhage patients with and without spot sign on admission scan: a statistical analysis plan of a pre-specified sub-study of the TICH-2 trial

    Objective: We present the statistical analysis plan of a prespecified Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (TICH)-2 sub-study aiming to investigate whether tranexamic acid has a different effect in intracerebral haemorrhage patients with the spot sign on admission compared with spot sign negative patients. The TICH-2 trial recruited over 2000 participants with intracerebral haemorrhage arriving in hospital within 8 h after symptom onset. They were included irrespective of radiological signs of ongoing haematoma expansion. Participants were randomised to tranexamic acid or matching placebo. In this subgroup analysis, we will include all participants in TICH-2 with a computed tomography angiography on admission allowing adjudication of the participants’ spot sign status. Results: The primary outcome will be the ability of tranexamic acid to limit absolute haematoma volume on computed tomography at 24 h (± 12 h) after randomisation among spot sign positive and spot sign negative participants, respectively. For all outcome measures, the effect of tranexamic acid in spot sign positive and spot sign negative participants will be compared using tests of interaction. This sub-study will investigate the important clinical hypothesis that spot sign positive patients might benefit more from administration of tranexamic acid than spot sign negative patients.
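    A test of interaction of the kind referred to here is typically obtained by adding a treatment-by-subgroup product term to the outcome model. The sketch below illustrates this with statsmodels on simulated stand-in data; the variable names and model are illustrative assumptions, not the sub-study's actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (the real analysis would use the trial dataset).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "tranexamic": rng.integers(0, 2, n),   # 1 = tranexamic acid, 0 = placebo
    "spot_sign": rng.integers(0, 2, n),    # 1 = spot sign positive on admission CTA
    "baseline_volume": rng.gamma(2.0, 10.0, n),
})
# Haematoma volume at 24 h, with a larger treatment effect in spot sign positive patients.
df["haematoma_volume_24h"] = (df["baseline_volume"]
                              + 5 * df["spot_sign"]
                              - 2 * df["tranexamic"]
                              - 4 * df["tranexamic"] * df["spot_sign"]
                              + rng.normal(0, 5, n))

# Linear model with a treatment-by-spot-sign product term; its coefficient
# (and p-value) is the test of interaction between treatment and spot sign status.
model = smf.ols("haematoma_volume_24h ~ tranexamic * spot_sign + baseline_volume",
                data=df).fit()
print("interaction estimate:", model.params["tranexamic:spot_sign"])
print("interaction p-value:", model.pvalues["tranexamic:spot_sign"])
```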

    Cumulative subgroup analysis to reduce waste in clinical research for individualised medicine

    Background: Although subgroup analyses in clinical trials may provide evidence for individualised medicine, their conduct and interpretation remain controversial. Methods: A subgroup effect can be defined as the difference in treatment effect across patient subgroups. Cumulative subgroup analysis refers to a series of repeated poolings of subgroup effects, adding data from each related trial chronologically, to investigate the accumulating evidence for subgroup effects. We illustrated the clinical relevance of cumulative subgroup analysis in two case studies using data from published individual patient data (IPD) meta-analyses. Computer simulations were also conducted to examine the statistical properties of cumulative subgroup analysis. Results: In case study 1, an IPD meta-analysis of 10 randomised controlled trials (RCTs) of beta blockers for heart failure reported a significant interaction of treatment effects with baseline rhythm. Cumulative subgroup analysis could have detected the subgroup effect 15 years earlier, with five fewer trials and 71% fewer patients, than the IPD meta-analysis that first reported it. Case study 2 involved an IPD meta-analysis of 11 RCTs of treatments for pulmonary arterial hypertension that reported a significant subgroup effect by aetiology. Cumulative subgroup analysis could have detected the subgroup effect 6 years earlier, with three fewer trials and 40% fewer patients, than the IPD meta-analysis. Computer simulations indicated that cumulative subgroup analysis increases statistical power and is not associated with inflated false positives. Conclusions: To reduce waste of research data, subgroup analyses in clinical trials should be more widely conducted and adequately reported, so that cumulative subgroup analyses can be performed in a timely manner to inform clinical practice and further research.
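    The procedure described above, repeatedly pooling trial-level subgroup (interaction) effects as each trial is added in chronological order, can be sketched as a cumulative inverse-variance fixed-effect meta-analysis. The code below is an illustrative sketch with invented trial data, not the authors' implementation.

```python
import numpy as np

# Trial-level subgroup effects: each entry is (year, estimated treatment-by-subgroup
# interaction, standard error), ordered chronologically. Values are invented.
trials = [
    (1999, -0.10, 0.30),
    (2001, -0.35, 0.25),
    (2004, -0.30, 0.20),
    (2007, -0.28, 0.15),
]

def cumulative_subgroup_analysis(trials):
    """Inverse-variance fixed-effect pooling, updated as each new trial is added."""
    results = []
    estimates, weights = [], []
    for year, effect, se in trials:
        estimates.append(effect)
        weights.append(1.0 / se**2)
        pooled = np.average(estimates, weights=weights)
        pooled_se = (1.0 / np.sum(weights)) ** 0.5
        results.append((year, pooled, pooled_se, pooled / pooled_se))
    return results

for year, pooled, se, z in cumulative_subgroup_analysis(trials):
    print(f"{year}: pooled interaction {pooled:+.2f} (SE {se:.2f}, z = {z:.2f})")
```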

    Estimating measures of interaction on an additive scale for preventive exposures

    Measures of interaction on an additive scale (relative excess risk due to interaction [RERI], attributable proportion [AP], and synergy index [S]) were developed for risk factors rather than preventive factors. It has been suggested that preventive factors should be recoded as risk factors before calculating these measures. We aimed to show that these measures are problematic with preventive factors prior to recoding, and to clarify the recoding method to be used to circumvent these problems. Recoding of preventive factors should be done such that the stratum with the lowest risk becomes the reference category when both factors are considered jointly (rather than one at a time). We used data from a case-control study on the interaction between ACE inhibitors and the ACE gene on incident diabetes. Use of ACE inhibitors was a preventive factor and the DD ACE genotype was a risk factor. Before recoding, the RERI, AP and S showed inconsistent results (RERI = 0.26 [95%CI: −0.30; 0.82], AP = 0.30 [95%CI: −0.28; 0.88], S = 0.35 [95%CI: 0.02; 7.38]), with the first two measures suggesting positive interaction and the third negative interaction. After recoding the use of ACE inhibitors, the measures showed consistent results (RERI = −0.37 [95%CI: −1.23; 0.49], AP = −0.29 [95%CI: −0.98; 0.40], S = 0.43 [95%CI: 0.07; 2.60]), all indicating negative interaction. Preventive factors should not be used to calculate measures of interaction on an additive scale without recoding.
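    For reference, the three additive-scale measures are defined from the joint relative risks RR11, RR10 and RR01 (with the doubly unexposed stratum as reference, RR00 = 1): RERI = RR11 − RR10 − RR01 + 1, AP = RERI / RR11, and S = (RR11 − 1) / [(RR10 − 1) + (RR01 − 1)]. The sketch below computes these measures and shows the recoding step the authors recommend (making the jointly lowest-risk stratum the reference); the relative risks are made up to mimic the qualitative pattern in the abstract and are not the study's data.

```python
def additive_interaction(rr11, rr10, rr01):
    """RERI, AP and S from relative risks, with RR00 = 1 as the reference stratum."""
    reri = rr11 - rr10 - rr01 + 1
    ap = reri / rr11
    s = (rr11 - 1) / ((rr10 - 1) + (rr01 - 1))
    return reri, ap, s

# Illustrative (made-up) relative risks: factor A is preventive (RR10 < 1),
# factor B is a risk factor (RR01 > 1); stratum (A, B) = (0, 0) is the reference.
rr = {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 1.3, (1, 1): 0.9}

print(additive_interaction(rr[1, 1], rr[1, 0], rr[0, 1]))
# -> RERI = 0.1, AP = 0.11, S = 0.5: the measures point in different directions.

# Recode so the jointly lowest-risk stratum, here (A, B) = (1, 0), becomes the
# reference: divide every relative risk by that stratum's value and flip A.
ref = min(rr, key=rr.get)
recoded = {key: value / rr[ref] for key, value in rr.items()}

# With the recoded exposure A* = 1 - A, the stratum (A*, B) = (1, 1) is the old
# (0, 1), (1, 0) is the old (0, 0), and (0, 1) is the old (1, 1).
print(additive_interaction(recoded[0, 1], recoded[0, 0], recoded[1, 1]))
# -> RERI = -0.2, AP = -0.08, S = 0.89: all three now indicate negative interaction.
```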

    Tamoxifen is not effective in good prognosis patients with hepatocellular carcinoma

    BACKGROUND: Large randomised clinical trials and systematic reviews substantiate that tamoxifen is ineffective in improving survival of patients with hepatocellular carcinoma (HCC). However, a recent report suggested that the drug might prolong survival among patients with well preserved liver function. The aim of this paper is to validate this hypothesis. METHODS: We used the updated database of the phase 3 randomised CLIP-1 trial, which compared tamoxifen with supportive therapy. The primary endpoint was overall survival. Treatment arms were compared within strata defined according to the Okuda stage and the CLIP score. Survival differences were tested by the log-rank test. RESULTS: Tamoxifen was not effective in prolonging survival in the Okuda I-II subgroup (p = 0.501). Median survival times were 16.8 (95%CI 12.7–18.5) months for tamoxifen and 16.8 (95%CI 13.5–22.4) months for the control arm; 1-year survival probabilities were 58.8% (95%CI 51.7–65.8) and 59.4% (95%CI 52.5–66.2), respectively. Similar results were observed in the better CLIP subgroup (score 0/1), with no evidence of a difference between the two treatment arms (p = 0.734). Median survival times were 29.2 (95%CI 20.1–36.4) months with tamoxifen and 29.0 (95%CI 23.3–35.2) months without; 1-year survival probabilities were 80.9% (95%CI 72.5–89.3) with tamoxifen and 77.1% (95%CI 68.6–85.7) for the control arm. CONCLUSION: The recent suggestion that tamoxifen might be effective in the subgroup of patients with better prognosis is not supported by a reanalysis of the CLIP-1 trial. Tamoxifen should no longer be considered for the treatment of HCC patients, and future trials of medical treatment should concentrate on different drugs.

    GEIRA: gene-environment and gene–gene interaction research application

    The GEIRA (Gene-Environment and Gene–Gene Interaction Research Application) algorithm and accompanying program are dedicated to genome-wide gene-environment and gene–gene interaction analysis. It implements concepts of both additive and multiplicative interaction, as well as calculations based on dominant, recessive and co-dominant genetic models. Estimates of interaction are combined in a single table to make the output easy to read. The algorithm is coded in both SAS and R. GEIRA is freely available to non-commercial users at http://www.epinet.se. Additional information, including a user’s manual and example datasets, is available online at http://www.epinet.se.
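    GEIRA itself is distributed as SAS and R code; the Python sketch below is only a rough illustration of the kind of analysis the abstract describes (a product term for multiplicative interaction and RERI for additive interaction, under a dominant genotype coding), not GEIRA's implementation or interface. All column names and parameter values are hypothetical, and the data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in case-control data; a real analysis would loop over genome-wide SNPs.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "exposure": rng.integers(0, 2, n),     # environmental exposure (0/1)
    "genotype": rng.binomial(2, 0.3, n),   # minor allele count (0, 1, 2)
})
df["g_dom"] = (df["genotype"] >= 1).astype(int)  # dominant genetic model: carrier vs non-carrier
logit_p = -1.5 + 0.4 * df["exposure"] + 0.3 * df["g_dom"] + 0.5 * df["exposure"] * df["g_dom"]
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Multiplicative interaction: the coefficient of the product term in logistic regression.
fit = smf.logit("case ~ exposure * g_dom", data=df).fit(disp=False)
b = fit.params

# Additive interaction via RERI, treating odds ratios as approximations of relative risks.
or10 = np.exp(b["exposure"])                                      # exposed non-carriers
or01 = np.exp(b["g_dom"])                                         # unexposed carriers
or11 = np.exp(b["exposure"] + b["g_dom"] + b["exposure:g_dom"])   # exposed carriers
print("multiplicative interaction OR:", np.exp(b["exposure:g_dom"]))
print("additive interaction (RERI):", or11 - or10 - or01 + 1)
```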