
    Prediction intervals for all of M future observations based on linear random effects models

    Get PDF
    In many pharmaceutical and biomedical applications, such as assay validation, the assessment of historical control data, or the detection of anti-drug antibodies, the calculation and interpretation of prediction intervals (PI) is of interest. The present study provides two novel methods for the calculation of prediction intervals based on linear random effects models and restricted maximum likelihood (REML) estimation. Unlike other REML-based PI found in the literature, both intervals reflect the uncertainty related to the estimation of the prediction variance. The first PI is based on the Satterthwaite approximation. For the other PI, a bootstrap calibration approach that we call quantile calibration was used. Owing to the calibration process, this PI can easily be computed for more than one future observation, and for balanced as well as unbalanced data. To compare the coverage probabilities of the proposed PI with those of four intervals found in the literature, Monte Carlo simulations were run for two relatively complex random effects models and a broad range of parameter settings. The quantile-calibrated PI was implemented in the statistical software R and is available in the predint package.
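The quantile-calibration idea can be sketched outside the random effects setting: a calibration coefficient q replaces the Student t quantile in a plug-in interval, and q is tuned by parametric bootstrap until the simulated coverage matches the nominal level. The sketch below uses hypothetical data and a plain i.i.d. normal model as a stand-in for the REML-fitted random effects model of the paper:

```python
import random
from math import sqrt
from statistics import fmean, stdev

rng = random.Random(42)

def coverage_for_q(q, mu, sigma, n, n_boot=2000):
    """Estimate the coverage of the interval mean +/- q*s*sqrt(1+1/n) for one
    future observation by parametric bootstrap from the fitted model."""
    hits = 0
    for _ in range(n_boot):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        m, s = fmean(sample), stdev(sample)
        half = q * s * sqrt(1 + 1 / n)
        future = rng.gauss(mu, sigma)
        hits += (m - half <= future <= m + half)
    return hits / n_boot

def calibrate_q(mu_hat, sigma_hat, n, level=0.95, lo=1.0, hi=5.0):
    """Bisect on the calibration coefficient q until the bootstrap
    coverage matches the nominal level (coverage increases with q)."""
    for _ in range(20):
        mid = (lo + hi) / 2
        if coverage_for_q(mid, mu_hat, sigma_hat, n) < level:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical historical measurements from one assay
hist = [9.8, 11.2, 10.5, 8.9, 10.1, 11.6, 9.4, 10.8, 10.3, 9.7, 11.0, 10.2]
n = len(hist)
m, s = fmean(hist), stdev(hist)
q = calibrate_q(m, s, n)
half = q * s * sqrt(1 + 1 / n)
print(f"calibrated q = {q:.2f}, 95% PI = [{m - half:.2f}, {m + half:.2f}]")
```

For i.i.d. normal data the calibrated q should land near the exact Student quantile t(0.975, 11) ≈ 2.20, up to Monte Carlo noise; in the random effects case the bootstrap would instead resample from the fitted variance components.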

    Prediction intervals based on historical control data obtained from bioassays

    Get PDF
    The calculation of prediction intervals based on historical control data from bioassays is of interest in many areas of biological research. In pharmaceutical and preclinical applications, such as immunogenicity assays, the calculation of prediction intervals (or upper prediction limits) that discriminate between anti-drug antibody positive and anti-drug antibody negative patients is of interest. In (eco-)toxicology, various bioassays are applied to study the toxicological properties of a given chemical compound in model organisms (e.g. its carcinogenicity or its effects on aquatic food chains). In this field of research, it is of interest to verify whether the outcome of the current untreated control (or of the current study as a whole) is consistent with the historical information. For this purpose, prediction intervals based on historical control data can be computed. If the current observations lie within the prediction interval, they can be assumed to be consistent with the historical information. The first chapter of this thesis gives a detailed overview of the use of historical control data in the context of biological experiments. Furthermore, an overview is given of the data structures (dichotomous data, count data, continuous data) and of the models on which the proposed prediction intervals are based. In the context of dichotomous or count data, particular attention is paid to overdispersion, which is common in data with a biological background but is usually not accounted for in the prediction interval literature. Therefore, prediction intervals for one future observation based on overdispersed binomial data were proposed. The coverage probabilities of these intervals were evaluated by Monte Carlo simulation and were considerably closer to the nominal level than those of prediction intervals found in the literature that do not account for overdispersion (see Sections 2.1 and 2.2). In several applications, the dependent variable is continuous and assumed to be normally distributed. Nevertheless, the data can be influenced by several random factors (for example, different laboratories analysing samples from several patients). In this case, the data can be modelled by linear random effects models, with parameters estimated by restricted maximum likelihood. For this scenario, two prediction intervals are proposed in Section 2.3. One of these intervals is based on a bootstrap calibration procedure, which also makes it applicable in cases where a prediction interval for more than one future observation is needed. Section 2.4 describes the R package predint, in which the bootstrap-calibrated prediction interval described in Section 2.3 (as well as lower and upper prediction limits) is implemented. In addition, prediction intervals for at least one future observation are implemented for overdispersed binomial or count data. The core of this thesis is the calculation of prediction intervals for one or more future observations based on overdispersed binomial data, overdispersed count data, or linear random effects models. To the author's knowledge, this is the first time that prediction intervals accounting for overdispersion have been proposed. Furthermore, "predint" is the first R package available via CRAN that provides functions for applying prediction intervals for the models mentioned above. The methodology proposed in this thesis is thus publicly available and can easily be applied by other researchers.
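The overdispersion adjustment at the heart of this thesis can be made concrete with a small sketch. The code below (hypothetical data and a simplified asymptotic interval, not the thesis's exact method) estimates a Pearson-type dispersion factor from historical binomial control data and widens the prediction interval for a future control count accordingly:

```python
import random
from math import sqrt

rng = random.Random(7)

# Hypothetical historical control data: event counts out of 50 animals in
# each of 10 studies, with study-to-study variation in the true rate to
# mimic overdispersion (a beta-binomial data-generating process).
n_per_study, n_studies = 50, 10
n_hist = [n_per_study] * n_studies
x_hist = []
for _ in range(n_studies):
    p_study = rng.betavariate(8, 72)          # study-specific true rate
    x_hist.append(sum(rng.random() < p_study for _ in range(n_per_study)))

N = sum(n_hist)
p_hat = sum(x_hist) / N

# Pearson-type estimate of the dispersion factor phi (phi > 1: overdispersion)
pearson = sum((x - n * p_hat) ** 2 / (n * p_hat * (1 - p_hat))
              for x, n in zip(x_hist, n_hist))
phi = max(pearson / (n_studies - 1), 1.0)

# Asymptotic 95% PI for the count in one future study of n_new animals: the
# prediction variance combines the sampling variance of the future count and
# the estimation variance of p_hat, both inflated by phi.
n_new = 50
var_pred = phi * n_new * p_hat * (1 - p_hat) * (1 + n_new / N)
z = 1.96
lower = n_new * p_hat - z * sqrt(var_pred)
upper = n_new * p_hat + z * sqrt(var_pred)
print(f"phi = {phi:.2f}, 95% PI for the future count: [{lower:.1f}, {upper:.1f}]")
```

Ignoring phi (i.e. setting it to 1) reproduces the binomial PI that, per the simulations summarised above, undercovers when the historical data are overdispersed.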

    A STATISTICAL APPROACH BASED ON THE TOTAL ERROR CONCEPT FOR VALIDATION OF A BIOANALYTICAL METHOD: APPLICATION TO THE SPECTROPHOTOMETRIC DETERMINATION OF TRACE AMOUNTS OF ACETAMINOPHEN IN HUMAN PLASMA

    Get PDF
    Objective: The classical approach to analytical validation is common in practice and in the literature. However, statistical verification that considers the two errors (bias and precision) separately when making a decision carries the risk of declaring an analytical method valid when it is not, or conversely. To minimize this risk, a new approach based on the concept of total error was proposed. Methods: This approach calculates a two-sided tolerance interval that combines the two errors, bias and precision, in order to examine the validity of an analytical or bioanalytical method at each concentration level. In this paper, we aim to demonstrate the applicability and simplicity of both methods based on the total error approach: the accuracy profile and the uncertainty profile. The study is illustrated by the validation of a spectrophotometric method for the determination of trace amounts of acetaminophen in human plasma. Results: After the introduction of a correction coefficient of 1.16, the results obtained with the accuracy profile approach clearly show that the bioanalytical method is valid over the concentration range of [100.34-500] µg mL-1, since the upper and lower 90%-expectation tolerance limits fall within the two acceptance limits of ±20%. The same conclusion is reached with the uncertainty profile approach, because the two-sided 66.7%-content, 90%-confidence tolerance intervals fall within the ±20% acceptance limits over the range of [170-500] µg mL-1. Conclusion: The strength of the total error approach was shown, since it enables successful validation of the analytical procedure as well as the calculation of the measurement uncertainty at each concentration level.
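The expectation tolerance interval underlying the accuracy profile can be sketched for a single concentration level. The sketch below uses hypothetical back-calculated concentrations at a nominal 200 µg mL-1 level and pools all replicates into one i.i.d. sample for simplicity (a real accuracy profile estimates between- and within-series variance components separately):

```python
import statistics
from math import sqrt

# Hypothetical back-calculated concentrations at one validation level
# (nominal 200 µg/mL, nine pooled replicates).
x = [196.4, 203.1, 198.7, 201.5, 195.9, 204.2, 199.8, 202.6, 197.3]
n = len(x)
mean = statistics.fmean(x)
s = statistics.stdev(x)

# 90%-expectation tolerance interval: mean +/- t * s * sqrt(1 + 1/n), where
# t is the 95th percentile of Student's t with n-1 df (t_{0.95,8} ~= 1.860,
# hard-coded here to stay within the standard library).
t = 1.860
half = t * s * sqrt(1 + 1 / n)
lo_rel = (mean - half) / 200 * 100 - 100   # relative deviation of limit (%)
hi_rel = (mean + half) / 200 * 100 - 100
print(f"tolerance limits: [{mean - half:.1f}, {mean + half:.1f}] µg/mL "
      f"({lo_rel:+.1f}%, {hi_rel:+.1f}%) vs the ±20% acceptance limits")
```

The method is declared valid at this level if both relative limits stay inside the ±20% acceptance band; repeating the computation at each concentration level traces out the accuracy profile.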

    Confidence Interval Estimation for Continuous Outcomes in Cluster Randomization Trials

    Get PDF
    Cluster randomization trials are experiments in which intact social units (e.g. hospitals, schools, communities, and families) are randomized to the arms of the trial rather than individuals. The popularity of this design among health researchers is partially due to reduced contamination of treatment effects and to convenience. However, the advantages of cluster randomization trials come at a price: because of the dependence of individuals within a cluster, such trials suffer reduced statistical efficiency and often require a complex analysis of study outcomes. The primary purpose of this thesis is to propose new confidence intervals for effect measures commonly of interest for continuous outcomes arising from cluster randomization trials. Specifically, we construct new confidence intervals for the difference between two normal means, the difference between two lognormal means, and the exceedance probability. The proposed confidence intervals, which use the method of variance estimates recovery (MOVER), avoid certain assumptions that existing procedures impose on the data: symmetry is not forced when the sampling distribution of the parameter estimate is skewed, and homoscedasticity is not assumed. Furthermore, the MOVER yields simple confidence interval procedures rather than the complex simulation-based methods that currently exist. Simulation studies are used to investigate the small sample properties of the MOVER as compared with existing procedures. Unbalanced cluster sizes are simulated, with an average of 50 to 200 individuals per cluster and 6 to 24 clusters per arm. The effects of various degrees of dependence between individuals within the same cluster are also investigated. When comparing the empirical coverage, tail errors, and median widths of confidence interval procedures, the MOVER has coverage close to the nominal level, relatively balanced tail errors, and narrow widths compared with existing procedures for the majority of the parameter combinations investigated. Existing data from cluster randomization trials are then used to illustrate each of the methods.
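The MOVER recombination itself is simple enough to show directly: build a separate confidence interval for each mean, then recover the variance estimates from those per-arm limits. The sketch below applies it to the difference of two independent means with hypothetical data; for cluster randomization trials the per-arm limits would be replaced by cluster-adjusted ones:

```python
from math import sqrt
from statistics import fmean, stdev, NormalDist

def mover_diff(x1, x2, level=0.95):
    """MOVER (method of variance estimates recovery) CI for mu1 - mu2:
    construct a CI (l_i, u_i) for each mean separately, then recombine
    the distances from each estimate to its limits.  Plain normal-theory
    per-arm limits are used here for simplicity."""
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    m1, m2 = fmean(x1), fmean(x2)
    se1 = stdev(x1) / sqrt(len(x1))
    se2 = stdev(x2) / sqrt(len(x2))
    l1, u1 = m1 - z * se1, m1 + z * se1
    l2, u2 = m2 - z * se2, m2 + z * se2
    d = m1 - m2
    lower = d - sqrt((m1 - l1) ** 2 + (u2 - m2) ** 2)
    upper = d + sqrt((u1 - m1) ** 2 + (m2 - l2) ** 2)
    return lower, upper

# Hypothetical cluster-level summary outcomes, one value per cluster
arm1 = [12.1, 10.8, 13.5, 11.9, 12.7, 11.2]
arm2 = [10.2, 9.7, 11.1, 10.5, 9.9, 10.8]
lo, hi = mover_diff(arm1, arm2)
print(f"95% MOVER CI for the difference in means: [{lo:.2f}, {hi:.2f}]")
```

With symmetric per-arm limits, as here, MOVER reduces to the usual d ± z·sqrt(se1² + se2²); its advantage appears when the per-arm limits are asymmetric, for instance for lognormal means or skewed sampling distributions.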

    Measurement properties of the Dutch Unité Rhumatologique des Affections de la Main and its ability to measure change due to Dupuytren's disease progression compared with the Michigan Hand outcomes Questionnaire

    Get PDF
    Data from a prospective longitudinal cohort study including 233 Dupuytren’s patients were used to determine: (1) whether the Unité Rhumatologique des Affections de la Main scale and the Michigan Hand outcomes Questionnaire can detect change in hand function due to Dupuytren’s disease progression, and to compare their abilities; (2) the concurrent validity, reliability, responsiveness and interpretability of the Dutch Unité Rhumatologique des Affections de la Main. The Unité Rhumatologique des Affections de la Main and the Michigan Hand outcomes Questionnaire had comparable measurement properties, and both were able to distinguish participants with disease progression from those without progression (U = 1252.5, p = 0.008 and U = 1086.0, p < 0.001, respectively), but only at group level. Individual cases of progression could not be detected using these outcome measures, as indicated by the fact that the smallest detectable change was larger than the minimal important change, and by area under the receiver operating characteristic curve (AUC) values of 0.75 for the Michigan Hand outcomes Questionnaire and 0.67 for the Unité Rhumatologique des Affections de la Main. Level of evidence: I.
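The comparison between the smallest detectable change (SDC) and the minimal important change (MIC) mentioned above can be made concrete. The sketch below uses hypothetical summary values, not the study's estimates, and the standard formulas SEM = SD·sqrt(1 − ICC) and SDC = 1.96·sqrt(2)·SEM:

```python
from math import sqrt

# Hypothetical test-retest summary for a hand-function questionnaire score
# (0-100 scale): between-subject SD and test-retest reliability (ICC).
sd, icc = 15.0, 0.85
sem = sd * sqrt(1 - icc)          # standard error of measurement
sdc = 1.96 * sqrt(2) * sem        # smallest detectable change (individual)
mic = 8.0                         # hypothetical minimal important change

print(f"SEM = {sem:.1f}, SDC = {sdc:.1f}, MIC = {mic:.1f}")
# When SDC > MIC, a clinically relevant change in an individual patient
# cannot be distinguished from measurement error, which is exactly the
# limitation the study reports for detecting progression in single cases.
print("individual change detectable:", sdc <= mic)
```

With these illustrative numbers the SDC is roughly twice the MIC, so only group-level change, where the relevant error shrinks with sample size, can be detected reliably.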

    Multiple contrast tests with repeated and multiple endpoints: with biological applications

    Get PDF
    [no abstract]

    Statistical aspects of bioequivalence assessment in the pharmaceutical industry.

    Get PDF
    Since the early 1990s, average bioequivalence studies have served as the international standard for demonstrating that two formulations of a drug product will provide the same therapeutic benefit and safety profile when used in the marketplace. Population (PBE) and individual (IBE) bioequivalence have been the subject of intense international debate since methods for their assessment were proposed in the late 1980s. Guidance has been proposed by the Food and Drug Administration of the United States government for the implementation of these techniques in the pioneer and generic pharmaceutical industries. As of the present time, no consensus among regulators, academia, and industry has been established. The need for more stringent population and individual bioequivalence criteria has not been demonstrated, and it is known that the criteria proposed by the FDA are actually less stringent under certain conditions. The properties of method-of-moments and restricted maximum likelihood modelling in replicate designs will be explored in Chapter 2, and the application of these techniques to the assessment of average bioequivalence will be considered. Individual and population bioequivalence criteria in replicate cross-over designs will be explored in Chapters 3 and 4, respectively, and retrospective data analysis will be used to characterise the properties and behaviour of the metrics. Simulation experiments will be conducted in Chapter 5 to address questions arising from the retrospective data analyses in Chapters 2 through 4. Additionally, simulation will be used to explore a potential phenomenon known as 'bio-creep', that is, the transitivity of individual bioequivalence in practice. Another bioequivalence problem is then considered to conclude the thesis: that of comparing rate and extent of exposure between differing ethnic groups, as described in ICH-E5 (1998). The properties of the population bioequivalence metric and an alternative metric will be characterised in small and large samples from parallel group studies. Inference will be illustrated using data from a recent submission and from simulation studies.
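The average bioequivalence decision described above follows the standard two one-sided tests (TOST) logic: compute a 90% confidence interval for the geometric mean ratio on the log scale and check whether it lies within [0.80, 1.25]. A minimal sketch with hypothetical crossover data (a simplified paired analysis, not a full mixed-model fit):

```python
from math import sqrt, exp
from statistics import fmean, stdev

def abe_90ci(log_ratios, t_crit):
    """90% CI for the geometric mean ratio test/reference from
    within-subject log-scale differences (2x2 crossover, paired analysis).
    Average bioequivalence holds if the CI lies within [0.80, 1.25];
    t_crit is the 95th percentile of Student's t with n-1 df."""
    n = len(log_ratios)
    m = fmean(log_ratios)
    se = stdev(log_ratios) / sqrt(n)
    lo, hi = exp(m - t_crit * se), exp(m + t_crit * se)
    return lo, hi, 0.80 <= lo and hi <= 1.25

# Hypothetical log(AUC_test / AUC_ref) per subject, n = 12; t_{0.95,11} ~= 1.796
d = [0.05, -0.02, 0.08, 0.01, -0.04, 0.06,
     0.03, -0.01, 0.04, 0.02, 0.00, 0.05]
lo, hi, be = abe_90ci(d, 1.796)
print(f"90% CI for the geometric mean ratio: [{lo:.3f}, {hi:.3f}]  "
      f"bioequivalent: {be}")
```

The 90% interval (rather than 95%) reflects the TOST construction: containment in [0.80, 1.25] is equivalent to rejecting both one-sided non-equivalence hypotheses at the 5% level.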

    Vol. 2, No. 2 (Full Issue)

    Get PDF

    Improving The Diagnosis And Risk Stratification Of Prostate Cancer

    Get PDF
    The current diagnostic and stratification pathway for prostate cancer has led to over-diagnosis and over-treatment. This thesis aims to improve the prostate cancer diagnosis pathway by developing a minimally invasive blood test to inform diagnosis alongside mpMRI, and by understanding the true Gleason 4 burden, which will help better stratify disease and guide clinicians in treatment planning. To reduce the number of patients who have to undergo prostate biopsy after an indeterminate or false positive prostate mpMRI, we aimed to develop a new panel of mRNAs detectable in blood or urine that could improve the detection of clinically significant prostate cancer (Gleason 4+3 or ≥6 mm) in combination with prostate mpMRI. mRNA expression of 28 candidate genes was studied in four prostate cancer cell lines and, using publicly available datasets, a new seven-gene biomarker panel was developed using machine learning techniques. The signature was then validated in blood and urine samples from the ProMPT, PROMIS and INNOVATE trials. To redefine the classification of Gleason 4 disease in prostate cancer patients, digital pathology was used to contour and accurately assess the burden and spread of Gleason 4 in a cohort of PROMIS patients, compared with the gold standard of manual pathology. There was a significant difference between observed and objective Gleason 4 burden, which has implications for patient risk stratification and biomarker discovery. The work presented in this thesis makes a significant step toward improving the patient diagnostic and risk classification pathways by ensuring that only the right patients are biopsied when necessary, improving the current pathological reference standard.