
    Multiple testing problems in classical clinical trial and adaptive designs

    Multiplicity issues arise in a variety of situations in clinical trials, and statistical methods for multiple testing have gained importance with the increasing number of complex clinical trial designs. In general, two types of multiple testing can be performed (Dmitrienko et al., 2009): union-intersection testing (UIT) and intersection-union testing (IUT). UIT is the focus of this dissertation; accordingly, the familywise error rate (FWER) is required to be controlled in the strong sense. A number of methods have been developed for controlling the FWER, including single-step and stepwise procedures. In single-step approaches, such as the simple Bonferroni method, the rejection decision for a hypothesis does not depend on the decision for any other hypothesis. Single-step approaches can be improved in terms of power by stepwise approaches while still controlling the desired error rate, and both can be improved further by parametric approaches. In the first project, we developed a new and powerful single-step progressive parametric multiple (SPPM) testing procedure for correlated normal test statistics. Through simulation studies, we demonstrate that SPPM improves power substantially when the correlation is moderate and/or the magnitudes of the effect sizes are similar.

    Group sequential designs (GSDs) are clinical trials allowing interim looks with the possibility of early termination due to efficacy, harm, or futility, which can reduce the overall costs and timelines for the development of a new drug. However, repeated looks at the data also raise multiplicity issues and can inflate the type I error rate. Proper treatments of this error inflation have been discussed widely (Pocock, 1977; O'Brien and Fleming, 1979; Wang and Tsiatis, 1987; Lan and DeMets, 1983). Most of the literature on GSDs focuses on a single endpoint; GSDs with multiple endpoints, however, have also received considerable attention. The main focus of our second project is a GSD with multiple primary endpoints, in which the trial evaluates whether at least one of the endpoints is statistically significant. In this study design, multiplicity issues arise from both the repeated interim analyses and the multiple endpoints, so appropriate adjustments must be made to control the type I error rate. Our second purpose here is to show that the combination of multiple endpoints and repeated interim analyses can lead to a more powerful design. Using the multivariate normal distribution, we propose a method that allows simultaneous consideration of the interim analyses and all clinical endpoints. The new approach is derived from the closure principle and therefore controls the type I error rate in the strong sense. We evaluate the power under different scenarios and show that it compares favorably to other methods when the correlation among endpoints is non-zero.

    Within the group sequential design framework, another interesting topic is the multiple-arm multiple-stage (MAMS) design, in which multiple arms are involved in the trial from the beginning, with flexibility for treatment selection or stopping decisions at the interim analyses. One of the major hurdles of MAMS is the computational cost, which grows with the number of arms and interim looks. Various designs have been proposed to overcome this difficulty (Thall et al., 1988; Schaid et al., 1990; Follmann et al., 1994; Stallard and Todd, 2003; Stallard and Friede, 2008; Magirr et al., 2012; Wason et al., 2017) while also controlling the FWER against the potential inflation from multiple arm comparisons and multiple interim tests. Here, we consider a more flexible drop-the-loser design that allows safety information to inform treatment selection without a pre-specified arm-dropping mechanism while still retaining reasonably high power. Two different types of stopping boundaries are proposed for such a design. The sample size is also adjustable if the winner arm is dropped due to safety considerations.
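
    As a hedged illustration of the single-step idea described above, and not the SPPM procedure developed in the dissertation, the sketch below simulates correlated normal test statistics under the global null hypothesis and checks that the simple Bonferroni correction keeps the familywise error rate at or below the nominal level. The number of hypotheses, the common correlation, and the significance level are assumptions chosen only for this example.

```python
# Minimal sketch: FWER of the single-step Bonferroni correction for
# correlated normal test statistics under the global null (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m = 5            # number of hypotheses (assumed for the example)
rho = 0.5        # assumed common correlation between test statistics
alpha = 0.05     # nominal familywise error rate
n_sim = 100_000  # number of simulated trials

# Equicorrelated covariance matrix for the test statistics
cov = np.full((m, m), rho)
np.fill_diagonal(cov, 1.0)

# Under the global null, all test statistics have mean zero
z = rng.multivariate_normal(np.zeros(m), cov, size=n_sim)
p = 2 * stats.norm.sf(np.abs(z))     # two-sided p-values

# Bonferroni rejects H_i if p_i <= alpha / m; FWER = P(at least one rejection)
fwer_bonf = np.mean((p <= alpha / m).any(axis=1))
print(f"Bonferroni FWER under the global null: {fwer_bonf:.4f} (target {alpha})")
```

    Because Bonferroni ignores the correlation, the simulated FWER falls below the nominal level here, which is the conservatism that parametric procedures for correlated statistics aim to recover.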

    Ground state phase diagram of the repulsive fermionic $t-t^{\prime}$ Hubbard model on the square lattice from weak-coupling

    We obtain a complete and, in the weak-coupling limit ($U \rightarrow 0$), exact ground state phase diagram of the repulsive fermionic Hubbard model on the square lattice for filling factors $0 < n < 2$ and next-nearest-neighbour hopping amplitudes $0 \le t^{\prime} \le 0.5$. Phases are distinguished by the symmetry and the number of nodes of the superfluid order parameter. The phase diagram is richer than might be expected and typically features states with a number of nodes higher than that of the fundamental mode of the corresponding irreducible representation. The effective coupling strength in the Cooper channel, $\lambda$, which determines the critical temperature $T_c$ of the superfluid transition, is calculated in the whole parameter space, and regions with high values of $\lambda$ are identified. It is shown that, besides the expected increase of $\lambda$ near the Van Hove singularity line joining the ferromagnetic and antiferromagnetic points, another region with high values of $\lambda$ is found at quarter filling and $t^{\prime}=0.5$, due to the presence of a line of nesting at $t^{\prime} \ge 0.5$. The results can serve as benchmarks for controlled non-perturbative methods and guide the ongoing search for high-$T_c$ superconductivity in the Hubbard model. Comment: 11 pages, 9 figures
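
    For reference, the repulsive $t-t^{\prime}$ Hubbard model studied in this abstract is conventionally written with nearest-neighbour hopping $t$, next-nearest-neighbour hopping $t^{\prime}$, and on-site repulsion $U$; a standard form of the Hamiltonian is

```latex
H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    - t^{\prime} \sum_{\langle\langle i,j \rangle\rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

    where $\langle i,j \rangle$ and $\langle\langle i,j \rangle\rangle$ denote nearest- and next-nearest-neighbour pairs on the square lattice and $n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}$.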

    Optimal strategy for selling on group-buying website

    Purpose: The purpose of this paper is to help business marketers with offline channels decide whether to sell through group-buying (GB) websites and how to set the online price in coordination with the maximum deal size on GB websites. Design/methodology/approach: Considering the deal structure of GB websites, in particular the service fee and the minimum deal size they require, the advertising effect of selling on GB websites, and the interaction between online and offline markets, an analytical model is built to derive the optimal online price and maximum deal size for sellers selling through a GB website. This paper aims to answer four research questions: (1) How should a seller decide on the maximum deal size in coordination with the deal price? (2) Is selling on GB websites always better than staying with the offline channel only? (3) What kinds of products are more appropriate to sell on GB websites? (4) How can a GB website operator induce sellers to offer deep discounts in GB deals? Findings and Originality/value: This paper obtains optimal strategies for sellers selling on a GB website and finds the following: even if a seller has sufficient capacity, he/she may still set a maximum deal size on the GB deal to take advantage of the Advertisement with Limited Availability (ALA) effect; selling through a GB website may not bring a higher profit than selling only through the offline channel when the GB site has only a small consumer base and/or there is a large overlap between the online and offline markets; low-margin products are more suitable for being sold online with ALA strategies (LP-ALA or HP-ALA) than high-margin ones; and a GB site operator can set a small minimum deal size to induce deep discounts from sellers selling through GB deals. Research limitations/implications: The present study assumed that the demand function is deterministic and linear. It will be interesting to study how stochastic demand and a more general demand function affect the optimal strategies. Practical implications: This paper provides a useful model framework and optimal strategies for sellers selling on GB websites. It uses the analytical model to explain typical practical phenomena in e-commerce, such as free sales with limited availability. It also helps GB website operators induce deep discounts from sellers. Originality/value: This paper is a first attempt to examine a seller's GB sale decision problem regarding price and bounds on deal size. It analyses how the minimum deal size set by the GB website affects sellers' optimal decisions. Moreover, it also discusses the impact of the interactions between online and offline markets on sellers' decisions. A toy numerical sketch of this kind of trade-off is given below.
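
    The sketch below is a hypothetical toy, not the paper's analytical model: it assumes a linear online demand, a proportional service fee, a minimum deal size imposed by the GB site, and a simple spillover in which consumers who miss the capped deal may buy offline at the regular price (a crude stand-in for the ALA effect). All parameter values, the demand form, and the spillover mechanism are assumptions made only for illustration.

```python
# Hypothetical toy (not the paper's formulation): grid-search the GB deal price
# and maximum deal size that maximize a seller's total (online + offline) profit.
import numpy as np

a, b = 200.0, 10.0      # assumed linear online demand: a - b * deal_price
offline_price = 15.0    # regular offline price
cost = 4.0              # unit cost
fee_rate = 0.10         # revenue share taken by the GB site
min_deal = 20           # minimum deal size required by the GB site
spillover = 0.5         # assumed fraction of unmet online demand that buys offline

best = (float("-inf"), None, None)
for deal_price in np.linspace(cost, offline_price, 120):
    online_demand = max(a - b * deal_price, 0.0)
    for cap in range(min_deal, int(online_demand) + 1):   # candidate maximum deal sizes
        sold_online = min(online_demand, cap)
        sold_offline = spillover * (online_demand - sold_online)
        profit = (sold_online * (deal_price * (1 - fee_rate) - cost)
                  + sold_offline * (offline_price - cost))
        if profit > best[0]:
            best = (profit, deal_price, cap)

print(f"toy optimum: profit={best[0]:.1f}, deal price={best[1]:.2f}, max deal size={best[2]}")
```

    In this particular parameterization the toy seller prices the deal at cost and caps it at the minimum deal size, relying on offline spillover sales, which loosely mirrors the "free sale with limited availability" phenomenon mentioned above; with other parameter choices the cap or the discount may not bind at all.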

    Learning Gaussian Mixture Representations for Tensor Time Series Forecasting

    Tensor time series (TTS) data, a generalization of one-dimensional time series to a high-dimensional space, are ubiquitous in real-world scenarios, especially in monitoring systems involving multi-source spatio-temporal data (e.g., transportation demands and air pollutants). Compared to modeling time series or multivariate time series, which has received much attention and achieved tremendous progress in recent years, tensor time series has received far less attention. Properly coping with tensor time series is a much more challenging task, due to its high-dimensional and complex inner structure. In this paper, we develop a novel TTS forecasting framework that seeks to individually model each heterogeneity component implied in the time, the location, and the source variables. We name this framework GMRL, short for Gaussian Mixture Representation Learning. Experimental results on two real-world TTS datasets verify the superiority of our approach compared with state-of-the-art baselines. Comment: 9 pages, 5 figures, published to IJCAI 202
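
    The GMRL architecture itself is not described in this abstract. As a loose, hedged illustration of the underlying idea of mixture-based representations per variable, and not the authors' model, the sketch below fits a separate Gaussian mixture along each mode (time, location, source) of a synthetic tensor and uses the mixture posteriors as simple per-mode representations; the tensor sizes, the number of components, and the flattening scheme are assumptions made only for the example.

```python
# Loose illustration (not GMRL): per-mode Gaussian mixture representations
# of a synthetic tensor time series with (time, location, source) modes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
T, L, S = 48, 10, 6                      # toy sizes: time steps, locations, sources
tts = rng.normal(size=(T, L, S))         # synthetic tensor time series

representations = {}
for axis, name in enumerate(["time", "location", "source"]):
    # Flatten all other modes so each slice along `axis` becomes one feature vector
    slices = np.moveaxis(tts, axis, 0).reshape(tts.shape[axis], -1)
    gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(slices)
    representations[name] = gmm.predict_proba(slices)   # soft cluster memberships

for name, rep in representations.items():
    print(name, rep.shape)   # e.g. time (48, 3), location (10, 3), source (6, 3)
```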