248 research outputs found

    Metaโ€analysis for Surrogacy: Accelerated Failure Time Models and Semicompeting Risks Modeling

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/90527/1/j.1541-0420.2011.01633.x.pd

    The Effect of Preview Scale and Timing on Movie Box Office Performance

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Business Administration, College of Business Administration, February 2018. Advisor: Kim Sang-hoon.

    This study develops and tests a model of both the initial and final performance of a movie, focusing on the preview-related decisions that distributors must make to maximize box office revenue. The aim is to show that the decisions distributors currently make are suboptimal in most cases, and a more systematic approach is therefore proposed. In line with previous studies, this research indicates that inviting a large audience to the preview improves the movie's final box office performance. The results further suggest a quadratic relationship between the time lag from the initial preview to the opening and the final audience numbers during high season. Specifically, the model reveals that the optimal average time lag in high season is 36.5 days, compared with the actual average of 15.4 days. The model can improve managerial decision making by estimating the optimal time lag for specific movies to maximize movie sales.

    Contents: Chapter 1. Introduction; Chapter 2. Relevant Literature; Chapter 3. Conceptual Framework and Hypotheses (3.1 Preview Scale and Box Office Performance; 3.2 Preview Timing and Box Office Performance; 3.3 Dependence between Initial and Final Performance); Chapter 4. Data and Measurements; Chapter 5. Model (5.1 Modeling Dependence through Copulas; 5.2 Modeling the Performance of Movies); Chapter 6. Results (6.1 Copula Selection and Hypotheses Testing; 6.2 Determining Optimal Preview Timing Policies); Chapter 7. Discussion; References; Abstract in Korean.
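
    The reported optimum follows directly from the quadratic specification: if the expected final audience is modeled as A(t) = b0 + b1·t + b2·t² in the preview-to-opening lag t, with b2 < 0, the revenue-maximizing lag is t* = -b1 / (2·b2). A minimal Python sketch; the coefficients below are hypothetical placeholders chosen only so the vertex reproduces the reported 36.5-day figure, not the thesis's estimates:

    ```python
    # Vertex of a concave quadratic: where an optimal preview-to-opening
    # lag comes from in a model with a squared time-lag term.
    beta1 = 0.73   # hypothetical linear coefficient on the time lag
    beta2 = -0.01  # hypothetical quadratic coefficient (negative: concave)

    t_star = -beta1 / (2 * beta2)  # first-order condition: b1 + 2*b2*t = 0
    print(f"optimal time lag: {t_star:.1f} days")  # 36.5 with these placeholders
    ```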

    Statistical Methods with a Focus on Joint Outcome Modeling and on Methods for Fire Science

    Get PDF
    Understanding the dynamics of wildfires contributes significantly to the development of fire science. Challenges in the analysis of historical fire data include defining fire dynamics within existing statistical frameworks, modeling the duration and size of fires as joint outcomes, identifying how fires are grouped into clusters of subpopulations, and assessing the effect of environmental variables in different modeling frameworks. We develop novel statistical methods to consider outcomes related to fire science jointly. These methods address the challenges by linking univariate models for separate outcomes through shared random effects, an approach referred to as joint modeling. Comparisons with existing approaches demonstrate the flexibility of the joint models developed and the advantages of their interpretations. Models used to quantify fire behaviour may also be useful in other applications, and here we consider modeling disease spread: the wildfire methodology can be used, for example, to understand the progression of Covid-19 in Ontario, Canada. The key contributions of this thesis are: 1) developing frameworks for jointly modelling fire duration and fire size in British Columbia, Canada, both through shared random effects and through copulas; 2) illustrating the robustness of joint models when the true models are copulas; 3) extending the framework into a finite joint mixture to classify fires into components and identify the subpopulation to which each fire belongs; 4) incorporating longitudinal environmental variables into the models; and 5) extending the method to the analysis of public health data by linking the daily numbers of Covid-19 hospitalizations and deaths as time series processes through a shared random effect. A key aspect of the research is its focus on extensions of the joint modeling framework.
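
    The shared-random-effect mechanism can be made concrete with a small simulation: each fire carries a latent effect that enters both the duration and the size submodels, so the two outcomes are marginally univariate but jointly dependent. A minimal sketch under assumed lognormal submodels with placeholder parameters, not the thesis's fitted model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Latent per-fire random effect; its presence in both linear
    # predictors below is what induces duration-size dependence.
    b = rng.normal(0.0, 0.8, size=n)

    # Hypothetical lognormal submodels (placeholder coefficients):
    log_duration = 1.0 + 1.0 * b + rng.normal(0.0, 0.5, size=n)  # log fire duration
    log_size     = 2.0 + 1.5 * b + rng.normal(0.0, 0.7, size=n)  # log burned area

    # The shared effect yields a strong positive association (~0.73 here).
    print(np.corrcoef(log_duration, log_size)[0, 1])
    ```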

    Improving PLS-SEM use for business marketing research

    Get PDF
    A review of studies published in Industrial Marketing Management over more than two decades shows that these studies not only used partial least squares structural equation modeling (PLS-SEM) widely to estimate and empirically substantiate theoretically established models with constructs, but did so increasingly. In line with their study goals, researchers provided reasons for using PLS-SEM (e.g., model complexity, limited sample size, and prediction). These reasons are frequently not fully convincing and require further clarification. Our review also reveals that researchers' assessment and reporting of their measurement and structural models are insufficient, and that some of the tests and thresholds they use are inappropriate. Finally, researchers seldom apply more advanced PLS-SEM analytic techniques, although these can support the robustness of results and may create new insights. This paper addresses these issues by reviewing business marketing studies to clarify PLS-SEM's appropriate use. Furthermore, it provides researchers and practitioners in the business marketing field with a best-practice orientation and describes new opportunities for using PLS-SEM. To this end, the paper offers guidelines and checklists to support future PLS-SEM applications.
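
    One concrete instance of the measurement-model reporting such guidelines cover is the assessment of a reflective construct via composite reliability (common threshold >= 0.70) and average variance extracted (AVE, common threshold >= 0.50). A minimal sketch; the formulas are the standard PLS-SEM ones, while the loadings are hypothetical, illustrative values:

    ```python
    import numpy as np

    # Hypothetical standardized outer loadings for one reflective construct.
    loadings = np.array([0.82, 0.76, 0.88, 0.71])

    # AVE: mean squared loading (rule of thumb: >= 0.50).
    ave = np.mean(loadings ** 2)

    # Composite reliability rho_c = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)).
    s = loadings.sum()
    rho_c = s ** 2 / (s ** 2 + np.sum(1 - loadings ** 2))

    print(f"AVE = {ave:.3f}, composite reliability = {rho_c:.3f}")
    # AVE = 0.632, rho_c = 0.872 -> both clear the conventional thresholds.
    ```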

    Parametric G-computation for Compatible Indirect Treatment Comparisons with Limited Individual Patient Data

    Get PDF
    Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap and cannot extrapolate beyond the observed covariate space. Current outcome regression-based alternatives can extrapolate but target a conditional treatment effect that is incompatible in the indirect comparison. When adjusting for covariates, one must integrate or average the conditional estimate over the relevant population to recover a compatible marginal treatment effect. We propose a marginalization method based on parametric G-computation that can easily be applied where the outcome regression is a generalized linear model or a Cox model. The approach views the covariate-adjustment regression as a nuisance model and separates its estimation from the evaluation of the marginal treatment effect of interest. The method can accommodate a Bayesian statistical framework, which naturally places the entire analysis on a probabilistic footing. A simulation study provides proof of principle and benchmarks the method's performance against MAIC and the conventional outcome regression. Parametric G-computation achieves more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor, and yields unbiased marginal treatment effect estimates when its assumptions hold. Furthermore, the marginalized regression-adjusted estimates are more precise and accurate than the conditional estimates produced by the conventional outcome regression, which are systematically biased because the measure of effect is non-collapsible.
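
    The G-computation step itself is standardization: fit the outcome regression on the available individual patient data, predict outcomes under each treatment over the target population's covariate distribution, average, and contrast the marginal means. A minimal sketch for a binary outcome with a logistic outcome model; the simulated data, variable names, and parameter values are illustrative, not the paper's implementation:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000

    # Simulated IPD trial: covariate x (an effect modifier), randomized
    # treatment a, binary outcome y with a treatment-covariate interaction.
    x = rng.normal(0.0, 1.0, n)
    a = rng.integers(0, 2, n)
    logit = -0.5 + 1.0 * a + 0.8 * x - 0.6 * a * x
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    # Outcome regression (treated as a nuisance model): logistic GLM.
    X = np.column_stack([np.ones(n), a, x, a * x])
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

    # Covariates of the target population (here simply a shifted normal;
    # in practice simulated from the comparator trial's published summaries).
    xt = rng.normal(0.5, 1.0, 10_000)

    def marginal_mean(treat):
        # Average the model-predicted risk over the target covariate distribution.
        Xt = np.column_stack([np.ones_like(xt), np.full_like(xt, treat),
                              xt, treat * xt])
        return fit.predict(Xt).mean()

    p1, p0 = marginal_mean(1), marginal_mean(0)
    # Marginal (population-averaged) log-odds ratio in the target population.
    print(np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0)))
    ```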

    Essays on well-being: a UK analysis

    Get PDF