
    Biplot and Singular Value Decomposition Macros for Excel©

    Get PDF
    The biplot display is a graph of row and column markers obtained from data that forms a two-way table. The markers are calculated from the singular value decomposition of the data matrix. The biplot display may be used with many multivariate methods to display relationships between variables and objects. It is commonly used in ecological applications to plot relationships between species and sites. This paper describes a set of Excel© macros that may be used to draw a biplot display based on results from principal components analysis, correspondence analysis, canonical discriminant analysis, metric multidimensional scaling, redundancy analysis, canonical correlation analysis or canonical correspondence analysis. The macros allow for a variety of transformations of the data prior to the singular value decomposition and scaling of the markers following the decomposition.
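
    As an illustration of the computation such macros perform, here is a minimal sketch in Python/NumPy rather than Excel; the function name, the choice to column-center the data, and the alpha scaling parameter are illustrative assumptions, not details taken from the paper.

```python
# Minimal biplot-marker computation via the singular value decomposition.
import numpy as np

def biplot_markers(X, alpha=1.0):
    """Row and column markers from the SVD of a column-centered two-way table.

    alpha splits the singular values between row and column markers
    (alpha=1: row-principal scaling; alpha=0: column-principal scaling).
    """
    Xc = X - X.mean(axis=0)                     # one possible pre-transformation
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U[:, :2] * s[:2] ** alpha            # row (object) markers, 2 dims
    cols = Vt[:2].T * s[:2] ** (1 - alpha)      # column (variable) markers
    return rows, cols

X = np.random.default_rng(0).normal(size=(30, 5))   # toy 30x5 two-way table
rows, cols = biplot_markers(X)
print(rows.shape, cols.shape)                       # (30, 2) (5, 2)
```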

    INFRISK : a computer simulation approach to risk management in infrastructure project finance transactions

    Get PDF
    Few issues in modern finance have inspired the interest of both practitioners and theoreticians more than risk evaluation and management. The basic principle governing risk management in an infrastructure project finance deal is intuitive and well-articulated: allocate project-specific risks to the parties best able to bear them (taking into account each party's appetite for, and aversion to, risk); control performance risk through incentives; and use market hedging instruments (derivatives) to cover market-wide risks arising from fluctuations in, for instance, interest and exchange rates. In practice, however, governments have been asked to provide guarantees for various kinds of projects, often at no charge, because of problems associated with market imperfections: a) derivative markets (swaps, forwards) for currency and interest-rate risk hedging either do not exist or are inadequately developed in most developing countries; b) contracting possibilities are limited (because of problems with credibility of enforcement); and c) methods for risk measurement and evaluation differ. Two factors distinguish the financing of infrastructure projects from corporate and traditional limited-recourse project finance: 1) a high concentration of project risk early in the project life cycle (pre-completion), and 2) a risk profile that changes as the project comes to fruition, with a relatively stable cash flow subject to market and regulatory risk once the project is completed. The authors introduce INFRISK, a computer-based risk-management approach to infrastructure project transactions that involve the private sector. Developed in-house in the Economic Development Institute of the World Bank, INFRISK is a guide to practitioners in the field and a training tool for raising awareness and improving expertise in the application of modern risk management techniques. INFRISK can analyze a project's exposure to a variety of market, credit, and performance risks from the perspective of key contracting parties (project promoter, creditor, and government). The model is driven by the concept of the project's economic viability. Drawing on recent developments in the literature on project evaluation under uncertainty, INFRISK generates probability distributions for key decision variables, such as a project's net present value, internal rate of return, or capacity to service its debt on time during the life of the project. Computationally, INFRISK works in conjunction with Microsoft Excel and supports both the construction and the operation phases of a capital investment project. For a particular risk variable of interest (such as the revenue stream, operations and maintenance costs, or construction costs), the program first generates a probability distribution for each year of a project's life through Monte Carlo simulation. One of the key contributions of INFRISK is to enable the use of a broader set of probability distributions (uniform, normal, beta, and lognormal) in conducting Monte Carlo simulations, rather than relying only on the commonly used normal distribution. A user's guide provides instruction on the use of the package.
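
    The core Monte Carlo step can be sketched as follows. This is a simplified, hypothetical illustration of the kind of simulation INFRISK runs, not the tool's actual implementation; the distributions, parameter values, and cash-flow structure below are invented for the example.

```python
# Hypothetical sketch: draw yearly revenues and costs from user-chosen
# distributions (normal, uniform, beta, lognormal) and build the resulting
# NPV distribution. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_sims, years, rate = 10_000, 15, 0.10   # simulations, project life, discount rate

# Yearly revenue ~ lognormal, O&M cost ~ normal, construction cost ~ beta-scaled
revenue = rng.lognormal(mean=np.log(100), sigma=0.25, size=(n_sims, years))
om_cost = rng.normal(loc=40, scale=5, size=(n_sims, years))
capex   = 300 * (0.8 + 0.4 * rng.beta(2, 2, size=n_sims))  # spent at year 0

discount = (1 + rate) ** -np.arange(1, years + 1)
npv = (revenue - om_cost) @ discount - capex    # one NPV per simulated path

print(f"P(NPV < 0) = {(npv < 0).mean():.3f}")
print(f"median NPV = {np.median(npv):.1f}")
```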

    Assessing the commonly used assumptions in estimating the principal causal effect in clinical trials

    Full text link
    In addition to the average treatment effect (ATE) for all randomized patients, it is sometimes important to understand the ATE for a principal stratum, a subset of patients defined by one or more post-baseline variables. For example, what is the ATE for those patients who could be compliant with the experimental treatment? Commonly used assumptions include monotonicity, principal ignorability, and the cross-world assumptions of principal ignorability and principal strata independence. Most of these assumptions cannot be evaluated in clinical trials with parallel treatment arms. In this article, we evaluate these assumptions through a 2×2 cross-over study in which the potential outcomes under both treatments can be observed, provided there are no carry-over or study period effects. In this example, the monotonicity assumption and the within-treatment principal ignorability assumptions did not appear to hold well. On the other hand, the assumptions of cross-world principal ignorability and cross-world principal stratum independence conditional on baseline covariates seemed to hold well. With the latter assumptions, we estimated the ATE for principal strata, defined by whether the blood glucose standard deviation increased in each treatment period, without relying on the cross-over feature. These estimates were very close to the ATE estimate obtained when exploiting the cross-over feature of the trial. To the best of our knowledge, this article is the first attempt to evaluate the plausibility of commonly used assumptions for estimating the ATE for principal strata using the setting of a cross-over trial.
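
    A toy simulation (not the authors' data or code) makes the key point concrete: in a cross-over design without carry-over or period effects, both potential outcomes are observed for each patient, so stratum membership and assumptions such as monotonicity can be checked directly. The threshold defining the post-baseline event below is an arbitrary assumption for illustration.

```python
# Toy sketch: with both potential outcomes observed per patient, principal
# stratum membership is known rather than assumed.
import numpy as np

rng = np.random.default_rng(1)
n = 500
baseline = rng.normal(size=n)                      # baseline covariate
y_control = baseline + rng.normal(size=n)          # outcome under control
y_treat   = baseline + 0.5 + rng.normal(size=n)    # outcome under treatment

# Define the stratum by a post-baseline event under each treatment,
# e.g. "glucose SD increased" ~ outcome exceeding a threshold (illustrative).
s_control = y_control > 0
s_treat   = y_treat > 0

# Monotonicity would claim the event under control implies it under
# treatment; here we can check it directly.
violations = np.mean(s_control & ~s_treat)
print(f"monotonicity violations: {violations:.2%}")

# ATE within the stratum of patients with the event under both treatments:
stratum = s_control & s_treat
print(f"stratum ATE: {(y_treat - y_control)[stratum].mean():.3f}")
```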

    Modern approaches for evaluating treatment effect heterogeneity from clinical trials and observational data

    Full text link
    In this paper we review recent advances in statistical methods for evaluating the heterogeneity of treatment effects (HTE), including subgroup identification and estimation of individualized treatment regimens, from randomized clinical trials and observational studies. We identify several types of approaches using the features introduced in Lipkovich, Dmitrienko and D'Agostino (2017) that distinguish the recommended principled methods from basic methods for HTE evaluation, which typically rely on rules of thumb and general guidelines (such methods are often referred to as common practices). We discuss the advantages and disadvantages of various principled methods, as well as common measures for evaluating their performance. We use simulated data and a case study based on a historical clinical trial to illustrate several new approaches to HTE evaluation.
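
    As one concrete example of a principled method in this space, the sketch below implements a simple "T-learner" on simulated data: fit separate outcome models per arm and difference the predictions to estimate individualized effects. It is purely illustrative and is not drawn from the paper's case study; the model choice and data-generating process are assumptions.

```python
# T-learner sketch for HTE: per-arm outcome models, differenced predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))
treat = rng.integers(0, 2, size=n)                     # randomized assignment
effect = 1.0 * (X[:, 0] > 0)                           # true effect only if x0 > 0
y = X[:, 1] + treat * effect + rng.normal(size=n)

m1 = RandomForestRegressor(random_state=0).fit(X[treat == 1], y[treat == 1])
m0 = RandomForestRegressor(random_state=0).fit(X[treat == 0], y[treat == 0])
cate = m1.predict(X) - m0.predict(X)                   # individualized estimates

print(f"estimated effect | x0 > 0:  {cate[X[:, 0] > 0].mean():.2f}")
print(f"estimated effect | x0 <= 0: {cate[X[:, 0] <= 0].mean():.2f}")
```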

    A multiple-imputation-based approach to sensitivity analyses and effectiveness assessments in longitudinal clinical trials.

    No full text
    It is important to understand the effects of a drug as actually taken (effectiveness) and when taken as directed (efficacy). The primary objective of this investigation was to assess the statistical performance of a method referred to as placebo multiple imputation (pMI) as an estimator of effectiveness and as a worst-reasonable-case sensitivity analysis for assessing efficacy. The pMI method assumes that the statistical behavior of placebo- and drug-treated patients after dropout is that of placebo-treated patients. Thus, in the effectiveness context, pMI assumes no pharmacological benefit of the drug after dropout. In the efficacy context, pMI is a specific form of missing-not-at-random analysis expected to yield a conservative estimate of efficacy. In a simulation study with 18 scenarios, the pMI approach generally provided unbiased estimates of effectiveness and conservative estimates of efficacy. However, the confidence interval coverage was consistently greater than the nominal coverage rate. In contrast, last and baseline observation carried forward (LOCF and BOCF) were conservative in some scenarios and anti-conservative in others with respect to both efficacy and effectiveness. As expected, direct likelihood (DL) and standard multiple imputation (MI) yielded unbiased estimates of efficacy and tended to overestimate effectiveness in those scenarios where a drug effect existed. However, in scenarios with no drug effect, where the true values for both efficacy and effectiveness were zero, DL and MI yielded unbiased estimates of efficacy and effectiveness.
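
    A simplified sketch of the pMI idea follows: outcomes after dropout in both arms are imputed from a model fit to observed placebo data, then the treatment contrast is averaged over imputations. All specifics (dropout rate, effect size, a naive normal imputation model, and the omission of Rubin's-rules variance combination) are assumptions made for the illustration, not the paper's implementation.

```python
# pMI sketch: dropouts in BOTH arms are imputed as if they behaved like
# placebo patients after dropout.
import numpy as np

rng = np.random.default_rng(3)
n = 400
arm = rng.integers(0, 2, size=n)                 # 0 = placebo, 1 = drug
y = 1.0 * arm + rng.normal(size=n)               # endpoint with true effect 1.0
dropout = rng.random(size=n) < 0.3               # 30% dropout, outcome missing
y_obs = np.where(dropout, np.nan, y)

placebo_obs = y_obs[(arm == 0) & ~dropout]       # observed placebo outcomes
mu, sd = placebo_obs.mean(), placebo_obs.std(ddof=1)

effects = []
for _ in range(20):                              # 20 imputed datasets
    y_imp = y_obs.copy()
    y_imp[dropout] = rng.normal(mu, sd, size=dropout.sum())  # placebo-based draw
    effects.append(y_imp[arm == 1].mean() - y_imp[arm == 0].mean())

# Attenuated relative to efficacy, since dropouts get no drug benefit
print(f"pMI effectiveness estimate: {np.mean(effects):.2f} (true efficacy 1.0)")
```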

    Early evaluation of patient risk for substantial weight gain during olanzapine treatment for schizophrenia, schizophreniform, or schizoaffective disorder

    Get PDF
    BACKGROUND: To make well-informed treatment decisions for their patients, clinicians need credible information about potential risk for substantial weight gain. We therefore conducted a post-hoc analysis of clinical trial data, examining early weight gain as a predictor of later substantial weight gain. METHODS: Data from 669 (Study 1) and 102 (Study 2) olanzapine-treated patients diagnosed with schizophrenia, schizophreniform, or schizoaffective disorder were analyzed to identify and validate weight gain cut-offs at Weeks 1–4 that were predictive of substantial weight gain (defined as an increase of ≥5, 7, or 10 kg, or 7% of baseline weight) after approximately 30 weeks of treatment. Baseline characteristics alone, baseline characteristics plus weight change from baseline to Week 1, 2, 3, or 4, and weight change from baseline to Week 1, 2, 3, or 4 alone were evaluated as predictors of substantial weight gain. Similar analyses were performed to determine BMI increase cut-offs at Weeks 1–4 of treatment that were predictive of a substantial increase in BMI (1, 2, or 3 kg/m² increase from baseline). RESULTS: At Weeks 1 and 2, predictions based on early weight gain plus baseline characteristics were more robust than those based on early weight gain alone. However, by Weeks 3 and 4, there was little difference between the operating characteristics associated with these two sets of predictors. The positive predictive values ranged from 30.1% to 73.5%, while the negative predictive values ranged from 58.1% to 89.0%. Predictions based on early BMI increase plus baseline characteristics were not uniformly more robust at any time compared to those based on early BMI increase alone. The positive predictive values ranged from 38.3% to 83.5%, while negative predictive values ranged from 42.1% to 84.7%. For analyses of both early weight gain and early BMI increase, results for the validation dataset were similar to those observed in the primary dataset. CONCLUSION: Results from these analyses can be used by clinicians to evaluate the risk of substantial weight gain or BMI increase for individual patients. For instance, negative predictive values based on data from these studies suggest approximately 88% of patients who gain less than 2 kg by Week 3 will gain less than 10 kg after 26–34 weeks of olanzapine treatment. Analysis of changes in BMI suggests that approximately 84% of patients who gain less than 0.64 kg/m² in BMI by Week 3 will gain less than 3 kg/m² in BMI after 26–34 weeks of olanzapine treatment. Further research in larger patient populations over longer periods is necessary to confirm these results.
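
    For concreteness, the positive and negative predictive values reported above are computed from a 2x2 table of early weight gain (above/below a cut-off) against later substantial weight gain. The counts in the sketch below are hypothetical; only the formulas mirror the standard definitions used.

```python
# Worked example: PPV and NPV from a 2x2 table of early gain vs later gain.
tp, fp, fn, tn = 60, 40, 20, 180   # hypothetical counts

ppv = tp / (tp + fp)   # P(substantial gain | early gain above cut-off)
npv = tn / (tn + fn)   # P(no substantial gain | early gain below cut-off)

print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```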

    Using principal stratification in analysis of clinical trials

    Full text link
    The ICH E9(R1) addendum (2019) proposed principal stratification (PS) as one of five strategies for dealing with intercurrent events. Understanding the strengths, limitations, and assumptions of PS is therefore important for the broad community of clinical trialists. Many approaches have been developed under the general framework of PS in different areas of research, including experimental and observational studies, and these applications have utilized a diverse set of tools and assumptions. Thus, a need exists to present these approaches in a unifying manner. The goal of this tutorial is threefold. First, we provide a coherent and unifying description of PS. Second, we emphasize that estimation of effects within PS relies on strong assumptions, and we thoroughly examine the consequences of these assumptions in order to understand the situations in which certain assumptions are reasonable. Finally, we provide an overview of a variety of key methods for PS analysis and use a real clinical trial example to illustrate them. Examples of code for implementing some of these approaches are given in the supplemental materials.
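
    One classical principal-stratum estimand covered by this kind of framework is the complier average causal effect (CACE): under monotonicity and the exclusion restriction, it can be estimated as the intention-to-treat effect divided by the compliance proportion. The sketch below simulates this on invented data and is not taken from the paper's clinical trial example.

```python
# CACE sketch: ITT effect / compliance rate, under monotonicity (no
# always-takers here) and the exclusion restriction (y depends only on d).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
complier = rng.random(n) < 0.6          # 60% compliers (the principal stratum)
z = rng.integers(0, 2, size=n)          # randomized assignment
d = z & complier                        # treatment actually received
y = 2.0 * d + rng.normal(size=n)        # true effect 2.0 among compliers

itt = y[z == 1].mean() - y[z == 0].mean()
compliance = d[z == 1].mean()
print(f"CACE estimate: {itt / compliance:.2f} (true 2.0)")
```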

    Integrating Randomized Placebo-Controlled Trial Data with External Controls: A Semiparametric Approach with Selective Borrowing

    Full text link
    In recent years, real-world external controls (ECs) have grown in popularity as a tool to empower randomized placebo-controlled trials (RPCTs), particularly in rare diseases or cases where balanced randomization is unethical or impractical. However, as ECs are not always comparable to the RPCTs, directly borrowing ECs without scrutiny may heavily bias the treatment effect estimator. Our paper proposes a data-adaptive integrative framework capable of preventing unknown biases of the ECs. The adaptive nature is achieved by dynamically sorting out a set of comparable ECs via bias penalization. Our proposed method can simultaneously achieve (a) the semiparametric efficiency bound when the ECs are comparable and (b) selective borrowing that mitigates the impact of incomparable ECs. Furthermore, we establish statistical guarantees, including consistency, asymptotic distribution, and inference, providing type-I error control and good power. Extensive simulations and two real-data applications show that the proposed method leads to improved performance over the RPCT-only estimator across various bias-generating scenarios.
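
    The following is a deliberately crude sketch of the selective-borrowing intuition, not the authors' semiparametric estimator: score each EC's discrepancy against the randomized controls and pool only those below a hard threshold. The bias scores, threshold rule, and data are all invented for illustration.

```python
# Crude selective-borrowing sketch: keep only ECs with small estimated bias.
import numpy as np

rng = np.random.default_rng(5)
y_rct_trt = rng.normal(1.0, 1, 100)              # randomized, treated
y_rct_ctl = rng.normal(0.0, 1, 50)               # randomized, control (small arm)
y_ec = np.concatenate([rng.normal(0.0, 1, 80),   # comparable ECs
                       rng.normal(3.0, 1, 40)])  # biased ECs

ctl_mean = y_rct_ctl.mean()
bias = y_ec - ctl_mean                           # crude per-EC bias estimate
threshold = 2 * y_rct_ctl.std(ddof=1)            # penalization via hard threshold
keep = np.abs(bias) < threshold

pooled_ctl = np.concatenate([y_rct_ctl, y_ec[keep]]).mean()
print(f"RCT-only effect: {y_rct_trt.mean() - ctl_mean:.2f}")
print(f"borrowed effect: {y_rct_trt.mean() - pooled_ctl:.2f} "
      f"({keep.sum()} ECs kept)")
```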

    Typology of patients with fibromyalgia: cluster analysis of duloxetine study patients

    Get PDF
    Background: To identify distinct groups of patients with fibromyalgia (FM) with respect to multiple outcome measures. Methods: Data from 631 duloxetine-treated women in 4 randomized, placebo-controlled trials were included in a cluster analysis based on outcomes after up to 12 weeks of treatment. Corresponding classification rules were constructed using a classification tree method. Probabilities for transitioning from baseline to the Week 12 category were estimated for placebo and duloxetine patients (total N = 1188) using logistic regression. Results: Five clusters were identified, ranging from “worst” (high pain levels and severe mental/physical impairment) to “best” (low pain levels and nearly normal mental/physical function). For patients with moderate overall severity, mental and physical symptoms were less correlated, resulting in 2 distinct clusters based on these 2 symptom domains. Three key variables with threshold values, including the Brief Pain Inventory (BPI) pain interference overall score, were identified for classification of patients; 80% of patients were in the 3 worst categories. Duloxetine patients were significantly more likely to improve after 12 weeks than placebo patients, and a sustained effect was seen with continued duloxetine treatment. Conclusions: FM patients are heterogeneous and can be classified into distinct subgroups by simple descriptive rules derived from only 3 variables, which may guide individual patient management. Duloxetine showed higher improvement rates than placebo and had a sustained effect beyond 12 weeks.
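
    Schematically, the analysis pipeline described above can be sketched as k-means clustering on outcome measures followed by a shallow classification tree that converts the clusters into simple threshold rules. The simulated variables and parameters below are stand-ins, not the trial data or the authors' exact methods.

```python
# Cluster-then-classify sketch: k-means clusters turned into simple rules.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
n = 631
X = np.column_stack([
    rng.normal(5, 2, n),    # pain severity (stand-in)
    rng.normal(4, 2, n),    # pain interference (stand-in)
    rng.normal(50, 10, n),  # mental functioning score (stand-in)
])

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# A shallow tree yields simple threshold rules on a few key variables
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, clusters)
print(export_text(tree, feature_names=["pain", "interference", "mental"]))
```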