
    Higher Dimensional Lattice Chains and Delannoy Numbers

    Fix nonnegative integers n1, …, nd, and let L denote the lattice of points (a1, …, ad) ∈ ℤ^d that satisfy 0 ≤ ai ≤ ni for 1 ≤ i ≤ d. Let L be partially ordered by the usual dominance ordering. In this paper we use elementary combinatorial arguments to derive new expressions for the number of chains and the number of Delannoy paths in L. Setting ni = n (for all i) in these expressions yields a new proof of a recent result of Duchi and Sulanke [9] relating the total number of chains to the central Delannoy numbers. We also give a novel derivation of the generating functions for these numbers in arbitrary dimension.

    Type I error control in biomarker-stratified clinical trials

    Biomarker-stratified clinical trials assess the biomarker signature of subjects and split them into subgroups so that treatment is of benefit to those who are likely to respond. Since multiple hypotheses are tested, it becomes important to control the type I error. Current methods control the false positive rate, i.e. the probability of rejecting a null hypothesis that is in fact true. For two subgroups, the false positive rate is controlled across the two hypotheses as a Family Wise Error Rate (FWER) at an overall predetermined significance level.
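
    The FWER control described above can be sketched with a minimal, illustrative calculation (not taken from the paper): a Bonferroni split of the overall significance level across the two subgroup hypotheses, together with the resulting FWER when the two tests happen to be independent.

```python
def bonferroni_levels(alpha, m):
    """Split an overall level alpha equally across m hypotheses; by the
    union bound this controls the FWER at alpha regardless of dependence."""
    return [alpha / m] * m

def fwer_independent(per_test_alpha, m):
    """FWER (probability of at least one false rejection) when the m
    tests are independent and each is run at per_test_alpha."""
    return 1 - (1 - per_test_alpha) ** m

# Two subgroup hypotheses at an overall 5% level:
levels = bonferroni_levels(0.05, 2)
print(levels)                                    # [0.025, 0.025]
print(round(fwer_independent(levels[0], 2), 6))  # 0.049375, just under 0.05
```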

    Adaptive enrichment in biomarker-stratified clinical trial design

    In Phase II oncology trials, targeted therapies are constantly being evaluated for their efficacy in specific populations of interest. Such trials require designs that allow for stratification based on the participants' biomarker signature. One disadvantage of a targeted design (defined as enrichment in the biomarker-positive sub-population) is that if the drug has at least some activity in biomarker-negative subjects, its effect in the biomarker-negative population may never be known. Jones and Holmgren (JH) have proposed a design to determine whether a drug has activity only in the target population or in the general population. Their design is an enrichment adaptation based on two parallel Simon two-stage designs. Unfortunately, the JH design has several pitfalls: hypothesis testing is not properly addressed, and the type I error, power calculations, and expected sample size formulae are also incorrect. We study the JH design in detail and appropriately control the type I and type II error probabilities, yielding novel optimal designs. We also discuss various alternative Family Wise Error Rate (FWER) and Individual Hypothesis (IH) error-rate controls, in the weak sense as well as the strong sense. For each error-control option, we search over a space of 10 trillion designs and obtain optimal designs that minimise the expected sample size. For the particular example trial that JH consider, our optimal design requires 38% fewer subjects than the two parallel Simon two-stage designs, thereby offering substantial efficiency in terms of the expected sample size. In conclusion, our rectified design provides a robust framework for adaptive enrichment in biomarker-stratified Phase II trial design.
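
    As background for the parallel Simon two-stage designs mentioned above, here is a minimal sketch (not the paper's code) of how the operating characteristics of a single Simon two-stage design are computed from binomial probabilities: attained type I error, early-stopping probability, and expected sample size. The design parameters used here (r1/n1 = 1/10, r/n = 5/29 for p0 = 0.1 vs p1 = 0.3) are an illustrative example, not values from the paper.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def simon_characteristics(r1, n1, r, n, p):
    """Rejection probability, early-stopping probability, and expected sample
    size of a Simon two-stage design: stop after stage 1 if at most r1 of n1
    respond; otherwise treat n - n1 more and reject H0 if more than r of n respond."""
    pet = sum(binom_pmf(x, n1, p) for x in range(r1 + 1))   # early-stop probability
    reject = sum(binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1)
                 for x2 in range(n - n1 + 1)
                 if x1 + x2 > r)
    return reject, pet, n1 + (1 - pet) * (n - n1)

alpha_attained, pet0, en0 = simon_characteristics(1, 10, 5, 29, 0.10)
power, _, _ = simon_characteristics(1, 10, 5, 29, 0.30)
print(round(pet0, 3), round(en0, 2))   # 0.736 15.01
```

    The expected sample size under H0 is well below the maximum of 29 because the trial stops early with probability pet0.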

    Counting lattice chains and Delannoy paths in higher dimensions

    Abstract: Lattice chains and Delannoy paths represent two different ways to progress through a lattice. We use elementary combinatorial arguments to derive new expressions for the number of chains and the number of Delannoy paths in a lattice of arbitrary finite dimension. Specifically, fix nonnegative integers $n_1, \ldots, n_d$, and let $L$ denote the lattice of points $(a_1, \ldots, a_d) \in \mathbb{Z}^d$ that satisfy $0 \le a_i \le n_i$ for $1 \le i \le d$. We prove that the number of chains in $L$ is given by

    $$2^{n_d+1} \sum_{k=1}^{k'_{\max}} \sum_{i=1}^{k} (-1)^{i+k} \binom{k-1}{i-1} \binom{n_d+k-1}{n_d} \prod_{j=1}^{d-1} \binom{n_j+i-1}{n_j},$$

    where $k'_{\max} = n_1 + \cdots + n_{d-1} + 1$. We also show that the number of Delannoy paths in $L$ equals

    $$\sum_{k=1}^{k'_{\max}} \sum_{i=1}^{k} (-1)^{i+k} \binom{k-1}{i-1} \binom{n_d+k-1}{n_d} \prod_{j=1}^{d-1} \binom{n_j+i-1}{n_j}.$$

    Setting $n_i = n$ (for all $i$) in these expressions yields a new proof of a recent result of Duchi and Sulanke [9] relating the total number of chains to the central Delannoy numbers. We also give a novel derivation of the generating functions for these numbers in arbitrary dimension.
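
    The double sum in the abstract can be checked numerically. The sketch below (not from the paper; the binomial notation is read off the abstract) implements it directly and verifies it against the known small central Delannoy numbers 1, 3, 13, 63 and a hand count of chains in the 2 x 2 grid.

```python
from math import comb

def delannoy_paths(ns):
    """Number of Delannoy paths in the lattice with side lengths n1, ..., nd,
    via the double-sum formula in the abstract (nd is the last coordinate)."""
    *head, nd = ns
    k_max = sum(head) + 1          # k'_max = n1 + ... + n_{d-1} + 1
    total = 0
    for k in range(1, k_max + 1):
        for i in range(1, k + 1):
            term = (-1) ** (i + k) * comb(k - 1, i - 1) * comb(nd + k - 1, nd)
            for nj in head:
                term *= comb(nj + i - 1, nj)
            total += term
    return total

def chain_count(ns):
    """Number of chains in the lattice: 2^(nd + 1) times the same double sum."""
    return 2 ** (ns[-1] + 1) * delannoy_paths(ns)

print([delannoy_paths([n, n]) for n in range(4)])  # central Delannoy: [1, 3, 13, 63]
print(chain_count([1, 1]))                         # 12 chains in the 2 x 2 grid
```

    For d = 1 the sum collapses to 1, so chain_count([n]) = 2^(n+1), the number of subsets of an (n+1)-element chain, as expected.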

    MDI-GPU: accelerating integrative modelling for genomic-scale data using GP-GPU computing.

    The integration of multi-dimensional datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct, but often complementary, information. However, the large amount of data adds a burden to any inference task. Flexible Bayesian methods may reduce the necessity for strong modelling assumptions, but can also increase the computational burden. We present an improved implementation of a Bayesian correlated clustering algorithm that permits integrated clustering to be routinely performed across multiple datasets, each with tens of thousands of items. By exploiting GPU-based computation, we are able to improve the runtime performance of the algorithm by almost four orders of magnitude. This permits analysis across genomic-scale datasets, greatly expanding the range of applications over those originally possible. MDI is available here: http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/

    Area, perimeter, height, and width of rectangle visibility graphs

    A rectangle visibility graph (RVG) is represented by assigning to each vertex a rectangle in the plane with horizontal and vertical sides, in such a way that edges in the graph correspond to unobstructed horizontal and vertical lines of sight between the corresponding rectangles. To discretize, we consider only rectangles whose corners have integer coordinates. For any given RVG, we seek a representation with the smallest bounding box as measured by its area, perimeter, height, or width (height is assumed not to exceed width).
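
    As a small illustration of the four bounding-box measures involved (not code from the paper), the following computes area, perimeter, height, and width for a given integer-coordinate rectangle representation; finding the representation that minimises these measures is the hard part the paper addresses.

```python
def bounding_box_measures(rects):
    """Measures of the bounding box of a set of axis-aligned rectangles,
    each given as (x1, y1, x2, y2) with integer corners, x1 < x2, y1 < y2."""
    xs = [x for (x1, _, x2, _) in rects for x in (x1, x2)]
    ys = [y for (_, y1, _, y2) in rects for y in (y1, y2)]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    if h > w:            # orient so that height does not exceed width
        w, h = h, w
    return {"area": w * h, "perimeter": 2 * (w + h), "height": h, "width": w}

# Two rectangles with an unobstructed horizontal line of sight between them:
print(bounding_box_measures([(0, 0, 2, 1), (3, 0, 5, 2)]))
# {'area': 10, 'perimeter': 14, 'height': 2, 'width': 5}
```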

    An optimal stratified Simon two-stage design

    In Phase II oncology trials, therapies are increasingly being evaluated for their effectiveness in specific populations of interest. Such targeted trials require designs that allow for stratification based on the participants' molecular characterisation. A targeted design proposed by Jones and Holmgren (JH) [Jones CL, Holmgren E: 'An adaptive Simon two-stage design for phase 2 studies of targeted therapies', Contemporary Clinical Trials 28 (2007) 654-661] determines whether a drug has activity only in a disease sub-population or in the wider disease population. Their adaptive design uses results from a single interim analysis to decide whether or not to enrich the study population with a subgroup; it is based on two parallel Simon two-stage designs. We study the JH design in detail and extend it by providing a few alternative ways to control the familywise error rate, in the weak sense as well as the strong sense. We also introduce a novel optimal design by minimising the expected sample size. Our extended design contributes to the much-needed framework for conducting Phase II trials in stratified medicine.
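
    The optimal-design idea, minimising the expected sample size under H0 subject to the error constraints, can be sketched by brute force for a single Simon two-stage design. This is an illustration, not the authors' search: the response rates, error levels, and sample-size cap below are assumptions chosen to keep the search small.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def pmf(n, p):
    return tuple(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

@lru_cache(maxsize=None)
def sf(n, p):
    """sf(n, p)[k] = P(X >= k) for X ~ Binomial(n, p)."""
    out = [0.0] * (n + 2)
    for k in range(n, -1, -1):
        out[k] = out[k + 1] + pmf(n, p)[k]
    return tuple(out)

def reject_prob(r1, n1, r, n, p):
    """P(continue past stage 1 and see more than r responses in total)."""
    n2 = n - n1
    return sum(pmf(n1, p)[x1] * sf(n2, p)[min(max(r - x1 + 1, 0), n2 + 1)]
               for x1 in range(r1 + 1, n1 + 1))

def optimal_design(p0, p1, alpha, beta, n_max):
    """Exhaustive search for the design (r1, n1, r, n) minimising E[N | p0]."""
    best, best_en = None, float("inf")
    for n in range(2, n_max + 1):
        for n1 in range(1, n):
            pet0 = 0.0
            for r1 in range(n1 + 1):
                pet0 += pmf(n1, p0)[r1]           # P(X1 <= r1 | p0)
                en = n1 + (1 - pet0) * (n - n1)   # expected sample size under H0
                if en >= best_en:
                    continue                       # cannot beat the incumbent
                for r in range(r1, n):
                    if reject_prob(r1, n1, r, n, p0) > alpha:
                        continue                   # r too small: alpha violated
                    if reject_prob(r1, n1, r, n, p1) >= 1 - beta:
                        best, best_en = (r1, n1, r, n), en
                    break                          # larger r only loses power
    return best, best_en

design, en0 = optimal_design(0.05, 0.25, 0.10, 0.10, 28)
```

    The JH setting needs two such designs searched jointly under a familywise error constraint, which is what blows the search space up to the scale the authors describe.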

    A 1D microphysical cloud model for Earth, and Earth-like exoplanets. Liquid water and water ice clouds in the convective troposphere

    One significant difference between the atmospheres of stars and exoplanets is the presence of condensed particles (clouds or hazes) in the atmospheres of the latter. The main goal of this paper is to develop a self-consistent microphysical cloud model for 1D atmospheric codes that can reproduce some observed properties of Earth, such as the average albedo, surface temperature, and global energy budget. The cloud model is designed to be computationally efficient, simple to implement, and applicable over a wide range of atmospheric parameters for planets in the habitable zone. We use a 1D, cloud-free, radiative-convective, photochemical equilibrium code originally developed by Kasting, Pavlov, Segura, and collaborators as the basis for our cloudy atmosphere model. The cloud model is based on models used by the meteorology community for Earth's clouds. The free parameters of the model are the relative humidity, the number density of condensation nuclei, and the precipitation efficiency. In a 1D model, the cloud coverage cannot be self-consistently determined, so we treat it as a free parameter. We apply this model to Earth (aerosol number density 100 cm^-3, relative humidity 77%, liquid cloud fraction 40%, and ice cloud fraction 25%) and find that a precipitation efficiency of 0.8 is needed to reproduce the albedo, average surface temperature, and global energy budget of Earth. We perform simulations to determine how the albedo and the climate of a planet are influenced by the free parameters of the cloud model. We find that the planetary climate is most sensitive to changes in the liquid water cloud fraction and the precipitation efficiency. The advantage of our cloud model is that the cloud height and the droplet sizes are self-consistently calculated, both of which influence the climate and albedo of exoplanets. (Comment: to appear in Icarus.)
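
    For context on the albedo and energy-budget targets mentioned above, a zero-dimensional energy-balance estimate (a sketch, not the paper's 1D model; the solar constant and Bond albedo are standard textbook values) shows how the planetary albedo fixes the effective emission temperature that any such model must match.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant at 1 au, W m^-2

def effective_temperature(bond_albedo, s=S0):
    """Equilibrium emission temperature of a rapidly rotating planet:
    absorbed flux S(1 - A)/4 balances emitted flux sigma * T^4."""
    return (s * (1 - bond_albedo) / (4 * SIGMA)) ** 0.25

t_eff = effective_temperature(0.3)   # Earth's Bond albedo is roughly 0.3
print(round(t_eff, 1))               # about 255 K; the gap to the ~288 K mean
                                     # surface temperature is the greenhouse effect
```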
