
    Statistical considerations of noninferiority, bioequivalence and equivalence testing in biosimilars studies

    In recent years, the development of follow-on biological products (biosimilars) has received increasing attention. This dissertation covers statistical methods related to three topics in demonstrating biosimilarity: non-inferiority (NI), bioequivalence (BE) and equivalence. For NI, one of the key requirements is the constancy assumption, that is, the effect of the reference treatment is the same in the current NI trial as in the historical superiority trials. However, if a covariate interacts with the treatment arms, then changes in the distribution of this covariate will violate the constancy assumption. We propose a modified covariate-adjusted fixed-margin method and recommend it based on its performance characteristics in comparison with other methods. Topic two concerns BE inference for log-normally distributed data. Two drugs are bioequivalent if the difference in a pharmacokinetic (PK) parameter between the two products falls within prespecified margins. In the presence of unspecified variances, existing methods such as the two one-sided tests (TOST) procedure and Bayesian analysis in the BE setting limit our knowledge of the extent to which BE inference is affected by the variability of the PK parameter. We propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: an F-statistic function for the variances and a t-statistic function for the difference in the PK parameter. The advantage of the proposed method over existing methods is that it helps identify the range of variances over which BE is more likely to be achieved. In the third topic, we extend the proposed likelihood method to equivalence inference, where the data are often normally distributed. In this part, we demonstrate an additional advantage of the proposed method over current analysis methods such as the likelihood ratio test and Bayesian analysis in the equivalence setting. The proposed likelihood method produces results that are the same as or comparable to those of current analysis methods in the general case where model parameters are independent. However, it yields better results in special cases where model parameters are dependent, for example when the ratio of variances is directly proportional to the ratio of means. Our results suggest that the proposed likelihood method serves as a better alternative to the current analysis methods for addressing BE/equivalence inference.
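    As background for topic two above, the following is a minimal sketch of the standard two one-sided tests (TOST) procedure that the dissertation's likelihood approach is compared against, assuming paired log-transformed PK values (e.g. log AUC) and the conventional 80%-125% margins; the function and variable names are illustrative and are not taken from the dissertation.

```python
import numpy as np
from scipy import stats

def tost_bioequivalence(log_test, log_ref, lower=np.log(0.80), upper=np.log(1.25), alpha=0.05):
    """Two one-sided tests (TOST) for bioequivalence on log-transformed PK data.

    log_test, log_ref: paired log-scale PK values (e.g. log AUC) for the test
    and reference products measured on the same subjects. BE is concluded at
    level alpha if both one-sided nulls (true mean difference <= lower margin,
    or >= upper margin) are rejected.
    """
    diff = np.asarray(log_test, float) - np.asarray(log_ref, float)
    n = diff.size
    mean, se = diff.mean(), diff.std(ddof=1) / np.sqrt(n)

    t_lower = (mean - lower) / se             # test of H0: mean diff <= lower margin
    t_upper = (mean - upper) / se             # test of H0: mean diff >= upper margin
    p_lower = stats.t.sf(t_lower, df=n - 1)   # reject for large t_lower
    p_upper = stats.t.cdf(t_upper, df=n - 1)  # reject for small t_upper

    p_tost = max(p_lower, p_upper)            # intersection-union p-value
    return p_tost, p_tost < alpha
```

    Declaring BE when the TOST p-value is below 0.05 is equivalent to checking that the 90% confidence interval for the mean log difference lies entirely inside (log 0.80, log 1.25).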

    Testing for equivalence: an intersection-union permutation solution

    The notion of testing for equivalence of two treatments is widely used in clinical trials, pharmaceutical experiments, bioequivalence and quality control. It is essentially approached within the intersection-union (IU) principle. According to this principle, the null hypothesis is stated as the set of effects lying outside a suitably established interval and the alternative as the set of effects lying inside that interval. The solutions provided in the literature are mostly based on likelihood techniques, which in turn are rather difficult to handle, except for cases lying within the regular exponential family and the invariance principle. The main goal of the present paper is to go beyond most of the limitations of likelihood-based methods, i.e. to work in a nonparametric setting within the permutation framework. To obtain practical solutions, a new IU permutation test is presented and discussed. A simple simulation study evaluating its main properties and three application examples are also presented. Comment: 21 pages, 2 figures.
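    For illustration, here is a minimal sketch of one way the intersection-union logic can be carried out by permutation for two independent samples and an equivalence interval (-delta, delta) on the mean difference. It shows only the shift-to-the-boundary and max-p-value construction; the paper's own test statistic and permutation scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sided_perm_p(x, y, alternative, n_perm=5000):
    """Permutation p-value for the difference in means of two independent samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled, n_x = np.concatenate([x, y]), len(x)
    obs = x.mean() - y.mean()
    perm_stats = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(pooled)
        perm_stats[b] = perm[:n_x].mean() - perm[n_x:].mean()
    if alternative == "greater":                              # H1: mean(x) - mean(y) > 0
        return (np.sum(perm_stats >= obs) + 1) / (n_perm + 1)
    return (np.sum(perm_stats <= obs) + 1) / (n_perm + 1)     # H1: mean(x) - mean(y) < 0

def iu_equivalence_perm_test(x, y, delta, n_perm=5000):
    """IU equivalence test of H1: -delta < mean(x) - mean(y) < delta.

    Each one-sided null is tested after shifting x to its boundary value,
    and the IU p-value is the larger of the two one-sided p-values.
    """
    x = np.asarray(x, float)
    p_lower = one_sided_perm_p(x + delta, y, "greater", n_perm)  # vs H0: diff <= -delta
    p_upper = one_sided_perm_p(x - delta, y, "less", n_perm)     # vs H0: diff >=  delta
    return max(p_lower, p_upper)
```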

    Testing non-nested structural equation models

    In this paper, we apply Vuong's (1989) likelihood ratio tests of non-nested models to the comparison of non-nested structural equation models. Similar tests have been previously applied in SEM contexts (especially to mixture models), though the non-standard output required to conduct the tests has limited their previous use and study. We review the theory underlying the tests and show how they can be used to construct interval estimates for differences in non-nested information criteria. Through both simulation and application, we then study the tests' performance in non-mixture SEMs and describe their general implementation via free R packages. The tests offer researchers a useful tool for non-nested SEM comparison, with barriers to test implementation now removed. Comment: 24 pages, 6 figures.
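    For context, here is a minimal sketch of the core Vuong (1989) z statistic for strictly non-nested models, assuming casewise log-likelihood contributions from both fitted models are available; the SEM-specific bookkeeping and the preliminary distinguishability (variance) test handled by the R packages mentioned above are omitted.

```python
import numpy as np
from scipy import stats

def vuong_test(ll1, ll2, k1=0, k2=0):
    """Core z statistic of Vuong's (1989) test for strictly non-nested models.

    ll1, ll2 : casewise log-likelihood contributions of model 1 and model 2,
               evaluated on the same observations in the same order.
    k1, k2   : numbers of free parameters, used for a Schwarz (BIC-style)
               correction of the summed likelihood ratio.
    A significantly positive z favours model 1, a significantly negative z
    favours model 2.
    """
    ll1, ll2 = np.asarray(ll1, float), np.asarray(ll2, float)
    n = ll1.size
    diff = ll1 - ll2
    lr = diff.sum() - 0.5 * (k1 - k2) * np.log(n)  # corrected log-likelihood ratio
    omega = diff.std(ddof=0)                       # sd of the casewise differences
    z = lr / (np.sqrt(n) * omega)
    p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value
    return z, p
```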

    Trust and legitimacy across Europe: a FIDUCIA report on comparative public attitudes towards legal authority

    FIDUCIA (New European Crimes and Trust-based Policy) seeks to shed light on a number of distinctively ‘new European’ criminal behaviours which have emerged in the last decade as a consequence of both technological developments and the increased mobility of populations across Europe. A key objective of FIDUCIA is to propose and test a ‘trust-based’ policy model in relation to emerging forms of criminality, exploring the idea that public trust and institutional legitimacy are important for the social regulation of the trafficking of human beings, the trafficking of goods, the criminalisation of migration and ethnic minorities, and cybercrimes. In this paper we detail levels of trust and legitimacy across the 26 countries surveyed, drawing on data from Round 5 of the European Social Survey. We also conduct a sensitivity analysis that investigates the effect of a lack of measurement equivalence on national estimates.

    Statistical modelling of summary values leads to accurate Approximate Bayesian Computations

    Approximate Bayesian Computation (ABC) methods rely on asymptotic arguments, implying that parameter inference can be systematically biased even when sufficient statistics are available. We propose to construct the ABC accept/reject step from decision-theoretic arguments on a suitable auxiliary space. This framework, referred to as ABC*, fully specifies which test statistics to use, how to combine them, how to set the tolerances and how long to simulate in order to obtain accuracy properties on the auxiliary space. Akin to maximum-likelihood indirect inference, regularity conditions establish when the ABC* approximation to the posterior density is accurate on the original parameter space in terms of the Kullback-Leibler divergence and the maximum a posteriori point estimate. Fundamentally, escaping asymptotic arguments requires knowledge of the distribution of the test statistics, which we obtain by modelling the distribution of summary values, i.e. data points on a summary level. Synthetic examples and an application to time-series data of influenza A (H3N2) infections in the Netherlands illustrate ABC* in action. Comment: Videos can be played with Acrobat Reader. Manuscript under review and not accepted.
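    As a point of reference for the accept/reject step that ABC* refines, here is a minimal sketch of plain ABC rejection sampling with a toy normal-mean example; the summary statistic, distance and tolerance below are generic placeholders chosen for illustration, whereas ABC* derives these choices from the auxiliary-space decision problem described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed_summary, simulate, prior_sample, distance,
                  n_draws=20_000, tolerance=0.05):
    """Plain ABC rejection sampler.

    prior_sample() draws a parameter from the prior, simulate(theta) returns
    the summary statistic of a synthetic data set, and distance() compares
    simulated and observed summaries. Parameters whose simulated summaries
    land within `tolerance` of the observed one are kept as approximate
    posterior draws.
    """
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed_summary) <= tolerance:
            accepted.append(theta)
    return np.asarray(accepted)

# Toy usage: infer the mean of a normal model from the sample mean.
data = rng.normal(loc=2.0, scale=1.0, size=50)
posterior = abc_rejection(
    observed_summary=data.mean(),
    simulate=lambda mu: rng.normal(mu, 1.0, size=50).mean(),
    prior_sample=lambda: rng.uniform(-5, 5),
    distance=lambda s, t: abs(s - t),
)
```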