Theory of general balance applied to step wedge designs.
A standard idealized step-wedge design satisfies the requirements, in terms of the structure of the observation units, to be considered a balanced design and can be labeled as a criss-cross design (time crossed with cluster) with replication. As such, Nelder's theory of general balance can be used to decompose the analysis of variance into independent strata (grand mean, cluster, time, cluster:time, residuals). If time is considered as a fixed effect, then the treatment effect of interest is estimated solely within the cluster and cluster:time strata, while the time effects are estimated solely within the time stratum. This separation leads directly to scalar, rather than matrix, algebraic manipulations that provide closed-form expressions for the standard error of the treatment effect estimate. We use the tools provided by the theory of general balance to obtain an expression for the standard error of the estimated treatment effect in a general case where the assumed covariance structure includes random effects at the time and cluster:time levels. This provides insights that are helpful for experimental design regarding the assumed correlation within clusters over time, the sample size in terms of the number of clusters and the replication within each cluster, and the components of the standard error of the estimated treatment effect.
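For orientation, the sketch below computes a closed-form standard error for the treatment effect in a standard stepped-wedge design, using the simpler Hussey and Hughes (2007) variance formula with cluster-level random effects only rather than the general-balance decomposition described above; the design dimensions, variance components, and function names are illustrative assumptions.

```r
## Closed-form Var(theta_hat) for a stepped-wedge design under the
## Hussey & Hughes (2007) model (cluster random effects only) -- a simpler
## stand-in for the more general expression discussed above.
sw_design <- function(I, Tp) {
  ## I clusters, Tp periods; one cluster crosses to treatment at each step
  X <- matrix(0, I, Tp)
  steps <- rep(2:Tp, length.out = I)
  for (i in 1:I) X[i, steps[i]:Tp] <- 1
  X
}

sw_se <- function(X, sigma2_e, tau2, n) {
  I <- nrow(X); Tp <- ncol(X)
  s2 <- sigma2_e / n                      # variance of a cluster-period mean
  U <- sum(X)                             # total treated cluster-periods
  W <- sum(colSums(X)^2)
  V <- sum(rowSums(X)^2)
  num <- I * s2 * (s2 + Tp * tau2)
  den <- (I * U - W) * s2 + (U^2 + I * Tp * U - Tp * W - I * V) * tau2
  sqrt(num / den)                         # SE of the estimated treatment effect
}

X <- sw_design(I = 6, Tp = 7)             # 6 clusters, 1 baseline period + 6 steps
sw_se(X, sigma2_e = 1, tau2 = 0.05, n = 20)
```

Varying tau2, I, and n in this sketch shows how the assumed within-cluster correlation, the number of clusters, and the replication per cluster-period each feed into the standard error, which are the design trade-offs the abstract refers to.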
CHaPD: a response to the flooding in Morpeth
The presentation looks at the initial response to a flooding incident from the point of view of a CHaPD unit that had to assess the risk of chemical losses to the general environment and the potential impact on public health. The Morpeth flooding event of September 2008 tested the risk assessment methodology piloted during the South Yorkshire floods, highlighted where some critical information for the risk assessment was missing, and identified where further attention was needed.
Aspects of competing risks survival analysis
This thesis is focused on the topic of competing risks survival analysis. The first chapter provides an introduction and motivation with a brief literature review. Chapter 2 considers the fundamental functional of all competing risks data, the crude incidence function, within the counting process framework, which provides the mathematical machinery to obtain confidence bands in analytical form rather than by bootstrapping or simulation (a small estimator sketch follows this abstract). Chapter 3 takes the Peterson bounds and considers what happens when covariate information is available; these bounds do become tighter in some cases. Chapter 4 considers what can be inferred about the effect of covariates in the competing risks setting, concluding that bounds exist on any covariate-time transformation. The two preceding chapters are illustrated with a data set in Chapter 5. Chapter 6 considers the result of Heckman and Honoré (1989) and investigates its generalisation, reaching the conclusion that the simple assumption of a univariate covariate-time transformation is not enough to provide identifiability. The more practical question of modelling dependent competing risks data through frailty models that induce dependence is considered in Chapter 7, where a practical and implementable model is illustrated. Chapter 8 takes a diversion into more abstract probability theory and considers a Bayesian non-parametric tool: Pólya trees. The novel framework of this tool is explained, and some results are obtained concerning the limiting random density function and the issues that arise when integrating with respect to a realised Pólya distribution as the integrating measure. Chapter 9 applies the theory of Chapters 7 and 8 to a competing risks data set from a prostate cancer clinical trial; this has several continuous baseline covariates and gives the opportunity to use the frailty model of Chapter 7 with the unknown frailty distribution modelled by a Pólya tree as in Chapter 8. Chapter 10 provides an overview of the thesis and considers directions for future research.
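As a small illustration of the crude incidence function discussed in Chapter 2, the sketch below estimates it nonparametrically in counting-process terms; the simulated data, variable names, and the crude_incidence helper are assumptions for illustration, not material from the thesis.

```r
## Crude (cumulative) incidence for cause k in counting-process form:
## CIF_k(t) = sum over event times s <= t of S(s-) * dN_k(s) / Y(s),
## where N_k counts cause-k events, Y is the number at risk just before s,
## and S is the all-cause Kaplan-Meier estimate. Simulated data only.
set.seed(1)
n      <- 200
t_ev   <- rexp(n, rate = 0.10)                    # latent event time
cause  <- sample(1:2, n, replace = TRUE, prob = c(0.6, 0.4))
t_cens <- rexp(n, rate = 0.05)                    # censoring time
obs    <- pmin(t_ev, t_cens)
status <- ifelse(t_ev <= t_cens, cause, 0)        # 0 = censored

crude_incidence <- function(obs, status, k) {
  tt  <- sort(unique(obs[status != 0]))           # observed event times, any cause
  Y   <- sapply(tt, function(s) sum(obs >= s))    # number at risk just before s
  dN  <- sapply(tt, function(s) sum(obs == s & status != 0))  # all-cause events
  dNk <- sapply(tt, function(s) sum(obs == s & status == k))  # cause-k events
  S   <- cumprod(1 - dN / Y)                      # all-cause Kaplan-Meier
  Sminus <- c(1, head(S, -1))                     # survival just before s
  data.frame(time = tt, cif = cumsum(Sminus * dNk / Y))
}

head(crude_incidence(obs, status, k = 1))
```

The analytical confidence bands described in Chapter 2 follow from the martingale structure of N_k and Y; this sketch shows the point estimate only.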
rpsftm: An R Package for Rank Preserving Structural Failure Time Models.
Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and was developed further by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counterfactual event times (those that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0; this is usually the test statistic used in the intention-to-treat analysis, for example the log-rank test statistic. We present the rpsftm R package, which implements this method.
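As a conceptual illustration of the g-estimation step described above (not the rpsftm package's own code), the sketch below reconstructs counterfactual, treatment-free event times as U(ψ) = T_off + exp(ψ) T_on and searches for the ψ at which the log-rank statistic comparing U(ψ) between randomised arms is zero; the simulated data, variable names, and the omission of censoring and recensoring are simplifying assumptions.

```r
## G-estimation of psi for an RPSFTM-style analysis on toy data.
## Counterfactual (treatment-free) times: U(psi) = T_off + exp(psi) * T_on.
library(survival)

set.seed(2)
n        <- 300
psi_true <- -0.5                          # true log acceleration factor
arm      <- rbinom(n, 1, 0.5)             # randomised arm (1 = experimental)
u_true   <- rexp(n, 1)                    # treatment-free event times
time     <- ifelse(arm == 1, u_true * exp(-psi_true), u_true)  # observed times
t_on     <- ifelse(arm == 1, time, 0)     # time on treatment (arm 1 fully treated)
t_off    <- time - t_on
event    <- rep(1, n)                     # no censoring, so no recensoring needed

z_psi <- function(psi) {
  u   <- t_off + exp(psi) * t_on          # counterfactual event times
  fit <- survdiff(Surv(u, event) ~ arm)   # log-rank test between arms
  sign(fit$obs[2] - fit$exp[2]) * sqrt(fit$chisq)   # signed Z(psi)
}

## g-estimate: the psi at which Z(psi) crosses zero (should be near psi_true)
uniroot(z_psi, interval = c(-2, 1))$root
```

In a real switching trial, T_on and T_off come from recorded treatment histories and counterfactual times must be recensored; this sketch omits both.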
The Role of Glyceraldehyde-3-Phosphate Dehydrogenase Acetylation in Liver Metabolism
Post-translational modifications to metabolic enzymes regulate metabolism.
Model selection and sample size adaptation: HYPAZ trial
The Voice of the People: Reasons Laypersons Support K-12 Art Education.
In the midst of generating theory for classrooms, philosophizing on the meaning and relevance of art education, determining policy and practices for classrooms, and even producing advocacy literature in favor of art education, few references have been made to laypersons' ideas about art education. Lest the layperson (the non-art-education professional), the "human element" (Pinar, Reynolds, Slattery, & Taubman, 1995, p. 102), be overlooked, disregarded, or marginalized from considerations about art education, their ideas must be solicited, honored, and respected. Since the mission of art education is to educate future laypersons and not (necessarily) future artists, members of the profession as well as education policymakers should be informed of what laypersons think about art in schools. The uniqueness of this study, as well as its purpose, was the solicitation of laypersons' ideas about art education in the everyday, informal language of the layperson. In addition to considering literature, histories, and other related sources, this study randomly surveyed 337 persons in six cities throughout the United States. Of the respondents to the questionnaire/survey, 88% indicated that they support art education as a required subject in the K-12 public school curriculum. In the open-ended sections of the questionnaire/survey, these laypersons communicated highly sophisticated, post-modern ideas which included not only a strong sense of self in society (Dewey, 1934), but also support for art education (a) for self-expression and social expression and (b) as a vehicle for understanding and honoring the societies and cultures of self and others.