
    Comparison of Urine Output among Patients Treated with More Intensive Versus Less Intensive RRT: Results from the Acute Renal Failure Trial Network Study

    Intensive RRT may have adverse effects that account for the absence of benefit observed in randomized trials of more intensive versus less intensive RRT. We wished to determine the association of more intensive RRT with changes in urine output as a marker of worsening residual renal function in critically ill patients with severe AKI.

    Behavioral deficits, early gliosis, dysmyelination and synaptic dysfunction in a mouse model of mucolipidosis IV

    Mucolipidosis IV (MLIV) is caused by mutations in the gene MCOLN1. Patients with MLIV have severe neurologic deficits, and very little is known about the brain pathology in this lysosomal disease. Using an accurate mouse model of mucolipidosis IV, we observed early behavioral deficits that were accompanied by activation of microglia and astrocytes. The glial activation that persisted during the course of disease was not accompanied by neuronal loss, even at the late stage. In vivo [Ca2+] imaging revealed no changes in resting [Ca2+] levels in Mcoln1−/− cortical neurons, implying that they remain physiologically healthy. Despite the absence of neuron loss, we observed alterations in synaptic plasticity, as indicated by elevated paired-pulse facilitation and enhanced long-term potentiation. Myelination deficits and a severely dysmorphic corpus callosum were present early and resembled the white matter pathology seen in mucolipidosis IV patients. These results indicate the early involvement of glia and challenge the traditional view of mucolipidosis IV as an overtly neurodegenerative condition. Electronic supplementary material: The online version of this article (doi:10.1186/s40478-014-0133-7) contains supplementary material, which is available to authorized users.

    Designs for clinical trials with time-to-event outcomes based on stopping guidelines for lack of benefit

    Background: The pace of novel medical treatments and approaches to therapy has accelerated in recent years. Unfortunately, many potential therapeutic advances do not fulfil their promise when subjected to randomized controlled trials. It is therefore highly desirable to speed up the process of evaluating new treatment options, particularly in phase II and phase III trials. To help realize such an aim, in 2003, Royston and colleagues proposed a class of multi-arm, two-stage trial designs intended to eliminate poorly performing contenders at a first stage (point in time). Only treatments showing a predefined degree of advantage against a control treatment were allowed through to a second stage. Arms that survived the first-stage comparison on an intermediate outcome measure entered a second stage of patient accrual, culminating in comparisons against control on the definitive outcome measure. The intermediate outcome is typically on the causal pathway to the definitive outcome (i.e. the features that cause an intermediate event also tend to cause a definitive event), an example in cancer being progression-free and overall survival. Although the 2003 paper alluded to multi-arm trials, most of the essential design features concerned only two-arm trials. Here, we extend the two-arm designs to allow an arbitrary number of stages, thereby increasing flexibility by building in several 'looks' at the accumulating data. Such trials can terminate at any of the intermediate stages or the final stage. Methods: We describe the trial design and the mathematics required to obtain the timing of the 'looks' and the overall significance level and power of the design. We support our results by extensive simulation studies. As an example, we discuss the design of the STAMPEDE trial in prostate cancer. Results: The mathematical results on significance level and power are confirmed by the computer simulations. Our approach compares favourably with methodology based on beta spending functions and on monitoring only a primary outcome measure for lack of benefit of the new treatment. Conclusions: The new designs are practical and are supported by theory. They hold considerable promise for speeding up the evaluation of new treatments in phase II and III trials.
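    As a rough illustration of the kind of multi-stage, lack-of-benefit monitoring described above (this is not the published design, and every number in it, including stage sizes, hazard-ratio thresholds and event rates, is an invented assumption), the short Python sketch below runs a Monte Carlo experiment in which a two-arm comparison is 'looked at' after each stage and the new arm is dropped whenever its estimated hazard ratio fails to beat the stage's threshold.

        import numpy as np

        rng = np.random.default_rng(0)

        TRUE_HR = 0.75                       # assumed true hazard ratio (new vs control)
        N_PER_STAGE = 200                    # patients per arm accrued in each stage
        HR_THRESHOLDS = [1.00, 0.95, 0.90]   # pass a look only if estimated HR < threshold
        CONTROL_RATE = 0.05                  # assumed exponential event rate in the control arm

        def estimated_hr(control_times, treat_times):
            # Crude hazard-ratio estimate for uncensored exponential data:
            # ratio of maximum-likelihood event rates in the two arms.
            return (len(treat_times) / treat_times.sum()) / (len(control_times) / control_times.sum())

        def one_trial():
            control = np.array([])
            treat = np.array([])
            for threshold in HR_THRESHOLDS:
                control = np.concatenate([control, rng.exponential(1 / CONTROL_RATE, N_PER_STAGE)])
                treat = np.concatenate([treat, rng.exponential(1 / (CONTROL_RATE * TRUE_HR), N_PER_STAGE)])
                if estimated_hr(control, treat) >= threshold:
                    return False             # stopped early for lack of benefit
            return True                      # survived every look

        n_sim = 2000
        passes = sum(one_trial() for _ in range(n_sim))
        print(f"Estimated probability of passing all looks: {passes / n_sim:.2f}")

    Setting TRUE_HR to 1.0 instead gives a Monte Carlo estimate of the pass rate under no benefit, which is the kind of quantity the paper's significance-level calculations address analytically; a realistic version would also handle censoring and staggered entry.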

    Shipment Impairs Lymphocyte Proliferative Responses to Microbial Antigens

    Lymphocyte proliferation assays (LPAs) are widely used to assess T-lymphocyte function in patients with human immunodeficiency virus infection and other primary and secondary immunodeficiency disorders. Since these assays require expertise not readily available at all clinical sites, specimens may be shipped to central labs for testing. We conducted a large multicenter study to evaluate the effects of shipping on assay performance and found significant loss of LPA activity. This may lead to erroneous results for individual subjects and introduce bias into multicenter trials.

    Ten years of marketing approvals of anticancer drugs in Europe: regulatory policy and guidance documents need to find a balance between different pressures

    Despite important progress in understanding the molecular factors underlying the development of cancer and the improvement in response rates with new drugs, long-term survival is still disappointing for most common solid tumours. This might be because very little of the modest gain for patients is the result of the new compounds discovered and marketed recently. An assessment of the regulatory agencies' performance may suggest improvements. The present analysis summarizes and evaluates the types of studies and end points used by the EMEA to approve new anticancer drugs, and discusses the application of current regulations. This report is based on the information available on the EMEA web site. We identified the current regulatory requirements for anticancer drugs promulgated by the agency and retrieved them from the relevant directory; information about the empirical evidence supporting the approval of drugs for solid cancers through the centralised procedure was retrieved from the European Public Assessment Reports (EPARs). We surveyed documents for drug applications and later extensions from January 1995, when the EMEA was set up, to December 2004. We identified 14 anticancer drugs for 27 different indications (14 new applications and 13 extensions). Overall, 48 clinical studies were used as the basis for approval; the randomised comparative clinical trial (RCT) and response rate were the study design and end point most frequently adopted (25 out of 48 and 30 out of 48, respectively). In 13 cases, the EPAR explicitly reported differences between arms in terms of survival: the range was 0–3.7 months, and the mean and median differences were 1.5 and 1.2 months. With regard to the end points supporting the 27 indications, the largest group (13 out of 27, 48%) involved the evaluation of complete and/or partial tumour responses. Despite the recommendations of the current EMEA guidance documents, new anticancer agents are still often approved on the basis of small single-arm trials that do not allow any assessment of an 'acceptable and extensively documented toxicity profile', and on the basis of end points such as response rate, time to progression or progression-free survival, which at best can be considered indicators of anticancer activity and are not 'justified surrogate markers for clinical benefit'. Moving approval to an earlier than ideal point along the drug development path and relying on not fully validated surrogate end points in nonrandomised trials looks like a dangerous shortcut that might jeopardise consumers' health, leading to unsafe and ineffective drugs being marketed and prescribed. The present Note for Guidance on new anticancer agents needs revising. Drugs must be released rapidly for patients who need them, but not at the expense of adequate knowledge about their real benefit.

    Genome-wide association filtering using a highly locus-specific transmission/disequilibrium test

    Multimarker transmission/disequilibrium tests (TDTs) are powerful association and linkage tests used to perform genome-wide filtering in the search for disease susceptibility loci. In contrast to case/control studies, they have a low rate of false positives arising from population stratification and admixture. However, the length of a region found in association with a disease is usually very large because of linkage disequilibrium (LD). Here, we define a multimarker proportional TDT (mTDTP) designed to improve locus specificity in complex diseases while retaining good power compared with the most powerful multimarker TDTs. The test is a simple generalization of a multimarker TDT in which haplotype frequencies are used to weight the effect that each haplotype has on the whole measure. Two concepts underlie the features of the metric: the 'common disease, common variant' hypothesis and the decrease in LD with chromosomal distance. Because of this decrease, the frequency of haplotypes in strong LD with common disease variants decreases with increasing distance from the disease susceptibility locus. Thus, our haplotype-proportional test has higher locus specificity than common multimarker TDTs that assume a uniform distribution of haplotype probabilities. Because of the common variant hypothesis, risk haplotypes at a given locus are relatively frequent, and a metric that weights the partial result for each haplotype by its frequency will be as powerful as the most powerful multimarker TDTs. Simulations and real data sets demonstrate that the test has good power compared with the best tests but remarkably higher locus specificity, so that the association rate decreases more rapidly with distance from a disease susceptibility or disease protective locus.
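    The exact mTDTP statistic is not spelled out in the abstract, so the Python fragment below is only a hedged sketch of the frequency-weighting idea it describes: each haplotype contributes a classical TDT-style transmission term, and that term is weighted by the haplotype's frequency among the parental haplotypes. The haplotype labels and counts are invented for illustration.

        # Illustrative frequency-weighted multimarker TDT-style score.
        # NOT the published mTDTP definition; it only demonstrates weighting each
        # haplotype's transmission imbalance by the haplotype's frequency.

        # Hypothetical counts per haplotype, aggregated over heterozygous parents:
        # (times transmitted, times not transmitted).
        counts = {
            "ACGT": (60, 40),   # common haplotype, over-transmitted
            "ACGA": (25, 30),
            "TCGT": (10, 12),
            "TGCA": (5, 18),    # rarer haplotype, under-transmitted
        }

        total = sum(t + u for t, u in counts.values())

        def weighted_tdt_score(counts):
            score = 0.0
            for hap, (t, u) in counts.items():
                if t + u == 0:
                    continue
                freq = (t + u) / total              # haplotype frequency, used as the weight
                term = (t - u) ** 2 / (t + u)       # classical per-haplotype TDT term
                score += freq * term
            return score

        print(f"Frequency-weighted TDT-style score: {weighted_tdt_score(counts):.2f}")

    In this toy version, common haplotypes dominate the weighted sum while rare ones are damped, which mirrors the intuition given above for why frequency weighting should sharpen locus specificity.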

    Molecular characterisation of ERG, ETV1 and PTEN gene loci identifies patients at low and high risk of death from prostate cancer

    BACKGROUND: The discovery of ERG/ETV1 gene rearrangements and PTEN gene loss warrants investigation in a mechanism-based prognostic classification of prostate cancer (PCa). The study objective was to evaluate the potential clinical significance and natural history of different disease categories by combining ERG/ETV1 gene rearrangement and PTEN gene loss status. METHODS: We utilised fluorescence in situ hybridisation (FISH) assays to detect PTEN gene loss and ERG/ETV1 gene rearrangements in 308 conservatively managed PCa patients with survival outcome data. RESULTS: ERG/ETV1 gene rearrangements alone and PTEN gene loss alone both failed to show a link to survival in multivariate analyses. However, there was a strong interaction between ERG/ETV1 gene rearrangements and PTEN gene loss (P<0.001). The largest subgroup of patients (54%), lacking both PTEN gene loss and ERG/ETV1 gene rearrangements, comprised a 'good prognosis' population exhibiting favourable cancer-specific survival (85.5% alive at 11 years). The presence of PTEN gene loss in the absence of ERG/ETV1 gene rearrangements identified a patient population (6%) with significantly poorer cancer-specific survival (HR=4.87, P<0.001 in multivariate analysis; 13.7% alive at 11 years) when compared with the 'good prognosis' group. ERG/ETV1 gene rearrangement and PTEN gene loss status should now be incorporated prospectively into a predictive model to establish whether predictive performance is improved. CONCLUSIONS: Our data suggest that FISH studies of PTEN gene loss and ERG/ETV1 gene rearrangements could be pursued for patient stratification, selection and hypothesis-generating subgroup analyses in future PCa clinical trials and potentially in patient management.

    Statistical design of personalized medicine interventions: The Clarification of Optimal Anticoagulation through Genetics (COAG) trial

    Background: There is currently much interest in pharmacogenetics: determining variation in genes that regulate drug effects, with a particular emphasis on improving drug safety and efficacy. The ability to determine such variation motivates the application of personalized drug therapies that utilize a patient's genetic makeup to determine a safe and effective drug at the correct dose. To ascertain whether a genotype-guided drug therapy improves patient care, a personalized medicine intervention may be evaluated within the framework of a randomized controlled trial. The statistical design of this type of personalized medicine intervention requires special considerations: the distribution of relevant allelic variants in the study population; and whether the pharmacogenetic intervention is equally effective across subpopulations defined by allelic variants. Methods: The statistical design of the Clarification of Optimal Anticoagulation through Genetics (COAG) trial serves as an illustrative example of a personalized medicine intervention that uses each subject's genotype information. The COAG trial is a multicenter, double blind, randomized clinical trial that will compare two approaches to initiation of warfarin therapy: genotype-guided dosing, the initiation of warfarin therapy based on algorithms using clinical information and genotypes for polymorphisms in CYP2C9 and VKORC1; and clinical-guided dosing, the initiation of warfarin therapy based on algorithms using only clinical information. Results: We determine an absolute minimum detectable difference of 5.49% based on an assumed 60% population prevalence of zero or multiple genetic variants in either CYP2C9 or VKORC1 and an assumed 15% relative effectiveness of genotype-guided warfarin initiation for those with zero or multiple genetic variants. Thus we calculate a sample size of 1238 to achieve a power level of 80% for the primary outcome. We show that reasonable departures from these assumptions may decrease statistical power to 65%. Conclusions: In a personalized medicine intervention, the minimum detectable difference used in sample size calculations is not a known quantity, but rather an unknown quantity that depends on the genetic makeup of the subjects enrolled. Given the possible sensitivity of sample size and power calculations to these key assumptions, we recommend that they be monitored during the conduct of a personalized medicine intervention. Trial registration: clinicaltrials.gov NCT00839657.
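    The abstract gives the headline numbers (a 5.49% absolute minimum detectable difference and a total sample size of 1238 for 80% power) but not all of the inputs behind them, such as the assumed standard deviation of the primary outcome. The Python sketch below therefore only reproduces the generic two-arm, normal-approximation sample-size formula with a hypothetical standard deviation; it is meant to show why the required N is so sensitive to the detectable difference, not to recompute the COAG figures.

        # Generic two-arm sample-size sketch (normal approximation, continuous outcome).
        # SIGMA is a hypothetical placeholder, not the COAG trial's actual assumption.
        from math import ceil
        from statistics import NormalDist

        ALPHA = 0.05       # two-sided significance level
        POWER = 0.80
        DELTA = 5.49       # absolute minimum detectable difference (percentage points)
        SIGMA = 22.0       # hypothetical SD of the primary outcome

        def total_sample_size(delta, sigma, alpha=ALPHA, power=POWER):
            z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
            z_b = NormalDist().inv_cdf(power)           # quantile for the desired power
            n_per_arm = 2 * ((z_a + z_b) * sigma / delta) ** 2
            return 2 * ceil(n_per_arm)

        # N scales roughly with 1/delta**2, so a modest shrinkage of the detectable
        # difference (e.g. because fewer subjects than assumed carry informative
        # genotypes) inflates the required sample size quickly.
        print("Total N at delta = 5.49:", total_sample_size(DELTA, SIGMA))
        print("Total N at delta = 4.50:", total_sample_size(4.50, SIGMA))

    The inverse-square dependence on the detectable difference is the mechanism behind the paper's point that power can fall from 80% to 65% under reasonable departures from the genetic assumptions.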

    Sample Reproducibility of Genetic Association Using Different Multimarker TDTs in Genome-Wide Association Studies: Characterization and a New Approach

    Multimarker transmission/disequilibrium tests (TDTs) are association tests that are highly robust to population admixture and structure and may be used to identify susceptibility loci in genome-wide association studies. Multimarker TDTs using several markers may increase power by capturing high-degree associations. However, there is also a risk of spurious associations and of power reduction due to the increase in degrees of freedom. In this study we show that associations found by tests built on simple null hypotheses are highly reproducible in a second independent data set regardless of the number of markers. As a test exhibiting this feature to its maximum, we introduce the multimarker two-groups TDT, a test which, under the hypothesis of no linkage, asymptotically follows a chi-squared distribution with one degree of freedom regardless of the number of markers. The statistic requires the division of parental haplotypes into two groups: a disease susceptibility and a disease protective haplotype group. We assessed the test's behavior by performing an extensive simulation study as well as a real-data study using several data sets for two complex diseases. We show that the test is highly efficient and achieves the highest power among all the tests used, even when the null hypothesis is tested in a second independent data set. It therefore turns out to be a very promising multimarker TDT for performing genome-wide searches for disease susceptibility loci, and may be used as a preprocessing step in the construction of more accurate genetic models to predict individual susceptibility to complex diseases.
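    Once parental haplotypes are partitioned into a disease susceptibility group and a disease protective group, transmissions from informative parents can be collapsed into a single McNemar-style comparison, which is how a one-degree-of-freedom chi-squared statistic can arise whatever the number of markers. The Python fragment below is a hedged sketch of that idea with invented counts, not the authors' exact statistic.

        # Illustrative two-group TDT-style statistic (assumption-laden sketch, not the
        # exact published definition). Parents carrying one susceptibility-group (S)
        # haplotype and one protective-group (P) haplotype are the informative parents.

        def two_group_tdt(n_s_transmitted, n_p_transmitted):
            # McNemar-style statistic (b - c)^2 / (b + c): asymptotically chi-squared
            # with one degree of freedom under the null of no linkage/association,
            # whatever the number of markers used to build the haplotypes.
            b, c = n_s_transmitted, n_p_transmitted
            if b + c == 0:
                return 0.0
            return (b - c) ** 2 / (b + c)

        # Hypothetical counts from 150 informative parental transmissions:
        stat = two_group_tdt(n_s_transmitted=95, n_p_transmitted=55)
        print(f"Two-group TDT-style statistic: {stat:.2f} (chi-squared_1 critical value at 5%: 3.84)")

    Keeping the comparison to two pre-defined haplotype groups is what keeps the degrees of freedom fixed at one, in contrast to multimarker tests whose degrees of freedom grow with the number of haplotypes.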

    Shipping blood to a central laboratory in multicenter clinical trials: effect of ambient temperature on specimen temperature, and effects of temperature on mononuclear cell yield, viability and immunologic function

    Background: Clinical trials of immunologic therapies provide opportunities to study the cellular and molecular effects of those therapies and may permit identification of biomarkers of response. When the trials are performed at multiple centers, transport and storage of clinical specimens become important variables that may affect lymphocyte viability and function in blood and tissue specimens. The effect of temperature during storage and shipment of peripheral blood on subsequent processing, recovery, and function of lymphocytes is understudied and represents the focus of this study. Methods: Peripheral blood samples (n = 285) from patients enrolled in 2 clinical trials of a melanoma vaccine were shipped from clinical centers 250 or 1100 miles to a central laboratory at the sponsoring institution. The yield of peripheral blood mononuclear cells (PBMC) collected before and after cryostorage was correlated with temperatures encountered during shipment. Also, to simulate shipping of whole blood, heparinized blood from healthy donors was collected and stored at 15°C, 22°C, 30°C, or 40°C for varied intervals before isolation of PBMC. Specimen integrity was assessed by measures of yield, recovery, viability, and function of isolated lymphocytes. Several packaging systems were also evaluated during simulated shipping for their ability to maintain the internal temperature within an acceptable range under adverse ambient temperatures over time. Results: Blood specimen containers experienced temperatures during shipment ranging from -1 to 35°C. Exposure to temperatures above room temperature (22°C) resulted in greater yields of PBMC. Reduced cell recovery following cryopreservation, as well as decreased viability and immune function, were observed in specimens exposed to 15°C or 40°C for greater than 8 hours when compared to storage at 22°C. There was a trend toward improved preservation of blood specimen integrity stored at 30°C prior to processing for all time points tested. Internal temperatures of blood shipping containers were maintained longer in an acceptable range when warm packs were included. Conclusions: Blood packages shipped overnight by commercial carrier may encounter extreme seasonal temperatures. Therefore, considerations in the design of shipping containers should include protecting against extreme ambient temperature deviations and maintaining specimen temperature above 22°C, or preferably near 30°C.