19 research outputs found

    Shape Constrained Nonparametric Estimation in the Cox Model

    The events of interest in any survival analysis study are regularly subject to censoring. There are various censoring schemes, including right censoring, left censoring, and interval censoring. The most frequent scheme is right censoring, which occurs when subjects drop out of the study or when not all events of interest take place before the end of the study. Moreover, for each subject, additional information referred to as covariates, such as age, sex, or treatment received, is recorded at the beginning of or throughout the study. The classical model for studying the distribution of the events of interest while accounting for this additional information is the Cox model. The Cox model expresses the hazard function of a subject with a given set of covariates as the product of a baseline hazard, corresponding to all covariates being zero, and an exponential function of the covariates and the corresponding regression parameters. The baseline hazard can be left completely unspecified while estimating the regression parameters. Nonetheless, in practice, there are numerous studies in which the baseline hazard appears to be monotone. Time to death or to the onset of a disease is often observed to have a nondecreasing baseline hazard, while the survival or recovery time after a successful medical treatment usually exhibits a nonincreasing baseline hazard.

    The aim of this thesis is to study the behavior of nonparametric baseline hazard and baseline density estimators in the Cox model under monotonicity constraints. The event times are assumed to be right censored, and the censoring mechanism is assumed to be independent of the event of interest and non-informative. The covariates are assumed to be time-independent, usually recorded at the beginning of the study. In addition to point estimates, interval estimates of a monotone baseline hazard will be provided, based on a likelihood ratio method, along with tests at a fixed point. Furthermore, kernel smoothed estimates of a monotone baseline hazard will be defined and their behavior will be investigated.

    In Chapter 2, we propose several nonparametric monotone estimators of a baseline hazard or a baseline density within the Cox model. We derive the nonparametric maximum likelihood estimator of a nondecreasing baseline hazard and we consider a Grenander-type estimator, defined as the left-hand slope of the greatest convex minorant of the Breslow estimator. The two estimators are then shown to be strongly consistent and asymptotically equivalent. Moreover, we derive their common limit distribution at a fixed point. The two equivalent estimators of a nonincreasing baseline hazard and their asymptotic properties are obtained similarly. Furthermore, we introduce a Grenander-type estimator of a nonincreasing baseline density, defined as the left-hand slope of the least concave majorant of an estimator of the baseline cumulative distribution function derived from the Breslow estimator. This estimator is proven to be strongly consistent and its asymptotic distribution at a fixed point is derived.

    Chapter 3 provides an asymptotic linear representation of the Breslow estimator of the baseline cumulative hazard function in the Cox model. This representation can be used to derive the asymptotic distribution of the Grenander-type estimator of a monotone baseline hazard. The representation consists of an average of independent random variables and a term involving the difference between the maximum partial likelihood estimator and the underlying regression parameter.
    The order of the remainder term is arbitrarily close to n^-1.

    Chapter 4 focuses on interval estimation and on testing, via a likelihood ratio method, whether a monotone baseline hazard function in the Cox model has a particular value at a fixed point. Nonparametric maximum likelihood estimators under the null hypothesis are defined for both nondecreasing and nonincreasing baseline hazard functions. Their characterizations, along with those of the monotone nonparametric maximum likelihood estimators, provide the asymptotic distribution of the likelihood ratio test. This asymptotic distribution enables, via inversion, the construction of pointwise confidence intervals. This method of constructing confidence intervals avoids the issue of estimating nuisance parameters, which arises for confidence intervals based on the asymptotic distribution of the estimators. Simulations indicate that the two methods yield confidence intervals with comparable coverage probabilities. Nonetheless, the confidence intervals based on the likelihood ratio are, on average, smaller.

    Finally, in Chapter 5 we consider smooth baseline hazard estimators, obtained by kernel smoothing the maximum likelihood and Grenander-type estimators of a monotone baseline hazard function. Three different estimators are proposed for a nondecreasing baseline hazard, corresponding to interchanging the smoothing and the isotonization steps. In this respect, we define a smoothed maximum likelihood estimator (SMLE), as well as a smoothed Grenander-type (SG) estimator and a Grenander-type smoothed (GS) estimator. All estimators are shown to be strongly pointwise or uniformly consistent.
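    The Grenander-type construction used throughout the thesis can be sketched numerically. The function below is a minimal numpy illustration under assumed names, not the code used in the thesis: it takes the jump points of a cumulative baseline hazard estimate, such as the Breslow estimator evaluated at the event times, and returns the left-hand slopes of its greatest convex minorant, i.e. a nondecreasing baseline hazard estimate of the kind studied in Chapter 2.

```python
import numpy as np

def grenander_type_hazard(t, cum_hazard):
    """Left-hand slopes of the greatest convex minorant (GCM) of the points
    (t[i], cum_hazard[i]); t is assumed sorted in increasing order."""
    n = len(t)
    hull = [0]  # indices of the GCM vertices (lower convex hull of the points)
    for i in range(1, n):
        hull.append(i)
        while len(hull) >= 3:
            a, b, c = hull[-3], hull[-2], hull[-1]
            # drop b if it lies on or above the chord from a to c (keeps the hull convex)
            if (cum_hazard[b] - cum_hazard[a]) * (t[c] - t[b]) >= \
               (cum_hazard[c] - cum_hazard[b]) * (t[b] - t[a]):
                del hull[-2]
            else:
                break
    slopes = np.empty(n)
    for k in range(1, len(hull)):
        a, b = hull[k - 1], hull[k]
        # left-hand slope at every point covered by the hull segment from a to b
        slopes[a + 1:b + 1] = (cum_hazard[b] - cum_hazard[a]) / (t[b] - t[a])
    slopes[0] = slopes[1] if n > 1 else 0.0  # no left-hand slope exists at the first point
    return slopes
```

    A nonincreasing baseline hazard or density is handled analogously, with the least concave majorant taking the place of the greatest convex minorant.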

    Calibrating experts’ probabilistic assessments for improved probabilistic predictions

    Expert judgement is routinely required to inform critically important decisions. While expert judgement can be remarkably useful when data are absent, it can easily be influenced by contextual biases, which can lead to poor judgements and, subsequently, poor decisions. Structured elicitation protocols aim to: (1) guard against biases and provide better (aggregated) judgements, and (2) subject expert judgements to the same level of scrutiny as is expected for empirical data. The latter ensures that if judgements are to be used as data, they are subject to the scientific principles of review, critical appraisal, and repeatability. Objectively evaluating the quality of expert data and validating expert judgements are other essential elements. Considerable research suggests that the performance of experts should be evaluated by scoring them on questions related to the elicitation questions whose answers are known a priori. Experts who can provide accurate, well-calibrated and informative judgements should receive more weight in the final aggregation of judgements. This is referred to as performance weighting in the mathematical aggregation of multiple judgements. The weights depend on the chosen measures of performance. We do not yet fully understand which aggregation methods work best, how well such aggregations perform out of sample, or the costs and benefits of the various approaches. In this paper we propose and explore a new measure of experts' calibration. A sizeable data set containing predictions for outcomes of geopolitical events is used to investigate the properties of this calibration measure when compared to other, well-established measures.
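    The performance-weighting idea referred to above can be illustrated with a small sketch. The Brier-based score below is only a stand-in for a performance measure (the paper proposes and studies a different calibration measure), and all names are ours: experts are scored on calibration questions with known outcomes, and their forecasts for a new event are then pooled linearly with the resulting weights.

```python
import numpy as np

def brier_weight(probs, outcomes):
    """Toy performance score: one minus the mean Brier score of an expert's
    probability forecasts for binary outcomes (higher is better)."""
    return 1.0 - np.mean((np.asarray(probs, float) - np.asarray(outcomes, float)) ** 2)

def performance_weighted_pool(expert_probs, calib_probs, calib_outcomes):
    """Weight each expert by a score earned on calibration questions with known
    answers, then linearly pool the experts' forecasts for a new event."""
    scores = np.array([brier_weight(p, calib_outcomes) for p in calib_probs])
    weights = np.clip(scores, 0.0, None)
    weights = weights / weights.sum()
    return float(np.dot(weights, expert_probs))

# Three experts, scored on four past events with known binary outcomes:
calib_outcomes = [1, 0, 1, 1]
calib_probs = [[0.9, 0.2, 0.8, 0.7],   # expert A
               [0.6, 0.5, 0.5, 0.6],   # expert B
               [0.2, 0.9, 0.3, 0.4]]   # expert C
print(performance_weighted_pool([0.8, 0.6, 0.3], calib_probs, calib_outcomes))
```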

    An In-Depth Perspective on the Classical Model

    The Classical Model (CM), or Cooke's method, for performing Structured Expert Judgement (SEJ) is the best-known method that promotes expert performance evaluation when aggregating experts' assessments of uncertain quantities. Assessing experts' performance in quantifying uncertainty involves two scores in CM: the calibration score (or statistical accuracy) and the information score. The two scores combine into overall scores, which, in turn, yield weights for a performance-based aggregation of experts' opinions. The method is fairly demanding, and therefore carrying out an SEJ elicitation with CM requires careful consideration. This chapter brings the methodological and practical aspects of CM together into a comprehensive overview of the CM elicitation process. It complements the chapter "Elicitation in the Classical Model" in the book Elicitation (Quigley et al. 2018). Nonetheless, we regard this chapter as stand-alone material, so some concepts and definitions are repeated for the sake of completeness.
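    For the common 5%-50%-95% elicitation format, the two CM scores can be sketched as follows. This is a simplified illustration under function names of our choosing: it omits the intrinsic-range overshoot, item weights, and the optimised significance-level cut-off of the full method.

```python
import numpy as np
from scipy.stats import chi2

# Theoretical probabilities of the four inter-quantile bins for 5%, 50%, 95% assessments.
P_BINS = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """quantiles: (N, 3) array of an expert's 5%, 50%, 95% values for N seed questions;
    realizations: length-N array of true values. Returns the calibration score, i.e. the
    p-value of a chi-squared test on the empirical inter-quantile bin frequencies."""
    q = np.asarray(quantiles, float)
    r = np.asarray(realizations, float)
    bins = np.sum(r[:, None] > q, axis=1)            # bin index (0..3) of each realization
    s = np.bincount(bins, minlength=4) / len(r)      # empirical bin proportions
    mask = s > 0
    rel_entropy = np.sum(s[mask] * np.log(s[mask] / P_BINS[mask]))
    return 1.0 - chi2.cdf(2 * len(r) * rel_entropy, df=3)

def information_score(quantiles, lower, upper):
    """Relative information, for one question, of the expert's piecewise-uniform
    distribution with respect to a uniform background measure on [lower, upper]."""
    edges = np.concatenate(([lower], np.asarray(quantiles, float), [upper]))
    widths = np.diff(edges)
    return float(np.sum(P_BINS * np.log(P_BINS * (upper - lower) / widths)))
```

    Roughly, the combined CM score is the product of the calibration score and the mean information score, set to zero for experts whose calibration falls below a cut-off, and normalised across experts to give the aggregation weights.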

    Uncertainty Quantification with Experts: Present Status and Research Needs

    Expert elicitation is deployed when data are absent or uninformative and critical decisions must be made. In designing an expert elicitation, most practitioners seek to achieve best practice while balancing practical constraints. The choices made influence the required time and effort investment, the quality of the elicited data, experts' engagement, the defensibility of results, and the acceptability of resulting decisions. This piece outlines some of the common choices practitioners encounter when designing and conducting an elicitation. We discuss the evidence supporting these decisions and identify research gaps. This will hopefully allow practitioners to better navigate the literature and inspire the expert judgment research community to conduct well-powered, replicable experiments that properly address the identified research gaps.

    Using the Classical Model for Source Attribution of Pathogen-Caused Illnesses: Lessons from Conducting an Ample Structured Expert Judgment Study

    A recent ample Structured Expert Judgment (SEJ) study quantified the source attribution of 33 distinct pathogens in the United States. Source attribution was considered for five transmission pathways: food, water, animal contact, person-to-person, and the environment. This chapter details how SEJ has been applied to answer the questions of interest by discussing the process used, the strengths identified, and the lessons learned from designing a large SEJ study. The focus is on the steps undertaken to prepare the expert elicitation.

    Shrinking the Variance in Experts’ “Classical” Weights Used in Expert Judgment Aggregation

    Mathematical aggregation of probabilistic expert judgments often involves weighted linear combinations of experts' elicited probability distributions of uncertain quantities. Experts' weights are commonly derived from calibration experiments based on the experts' performance scores, where performance is evaluated in terms of the calibration and the informativeness of the elicited distributions. This is referred to as Cooke's method, or the classical model (CM), for aggregating probabilistic expert judgments. The performance scores are derived from experiments, so they are uncertain and, therefore, can be represented by random variables. As a consequence, the experts' weights are also random variables. We focus on addressing the underlying uncertainty when calculating experts' weights to be used in a mathematical aggregation of expert elicited distributions. This paper investigates the potential of applying an empirical Bayes development of the James–Stein shrinkage estimation technique on the CM's weights to derive shrinkage weights with reduced mean squared errors. We analyze 51 professional CM expert elicitation studies. We investigate the differences between the classical and the (new) shrinkage CM weights and the benefits of using the new weights. In theory, the outcome of a probabilistic model using the shrinkage weights should be better than that obtained when using the classical weights because shrinkage estimation techniques reduce the mean squared errors of estimators in general. In particular, the empirical Bayes shrinkage method used here reduces the assigned weights for those experts with larger variances in the corresponding sampling distributions of weights in the experiment. We measure improvement of the aggregated judgments in a cross-validation setting using two studies that can afford such an approach. Contrary to expectations, the results are inconclusive. However, in practice, we can use the proposed shrinkage weights to increase the reliability of derived weights when only small-sized experiments are available. We demonstrate the latter on 49 post-2006 professional CM expert elicitation studies.
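    As a rough illustration of the shrinkage idea (not the empirical Bayes estimator developed in the paper), the sketch below applies positive-part James–Stein shrinkage to noisy weight estimates under the simplifying assumption of a single common sampling variance, pulling each weight toward the grand mean before renormalisation. All names and numbers are ours.

```python
import numpy as np

def shrink_weights(w, sampling_var):
    """Positive-part James-Stein shrinkage of noisy weight estimates `w` (assumed to
    share one sampling variance) toward their grand mean, then renormalised so the
    weights again sum to one. Requires more than three experts for the classic factor."""
    w = np.asarray(w, dtype=float)
    k = len(w)
    spread = np.sum((w - w.mean()) ** 2)
    factor = 0.0 if spread == 0 else max(0.0, 1.0 - (k - 3) * sampling_var / spread)
    shrunk = np.clip(w.mean() + factor * (w - w.mean()), 0.0, None)
    return shrunk / shrunk.sum()

# Example: five experts whose estimated CM weights are noisy (illustrative values).
print(shrink_weights([0.55, 0.20, 0.15, 0.07, 0.03], sampling_var=0.01))
```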

    Bayesian networks for identifying incorrect probabilistic intuitions in a climate trend uncertainty quantification context

    Probabilistic thinking can often be unintuitive. This is the case even for simple problems, let alone the more complex ones arising in climate modelling, where disparate information sources need to be combined. The physical models, the natural variability of systems, the measurement errors and their dependence upon the observational period length should be modelled together in order to understand the intricacies of the underlying processes. We use Bayesian networks (BNs) to connect all the above-mentioned pieces in a climate trend uncertainty quantification framework. Inference in such models allows us to observe some seemingly nonsensical outcomes. We argue that they must be pondered rather than discarded until we understand how they arise. We would like to stress that the main focus of this paper is the use of BNs in complex probabilistic settings rather than the application itself.
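    As a toy illustration of how BN inference can defy intuition, the sketch below (hypothetical variable names, invented numbers, and not the climate-trend network of the paper) shows the classic explaining-away pattern with pgmpy: observing a change raises the probability of a trend, but additionally learning that natural variability was high lowers it again.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two causes (Trend, Variability) of one observed effect (ObservedChange); all numbers invented.
model = BayesianNetwork([("Trend", "ObservedChange"), ("Variability", "ObservedChange")])
cpd_trend = TabularCPD("Trend", 2, [[0.7], [0.3]])        # P(no trend)=0.7, P(trend)=0.3
cpd_var = TabularCPD("Variability", 2, [[0.6], [0.4]])    # P(low)=0.6, P(high)=0.4
cpd_obs = TabularCPD(
    "ObservedChange", 2,
    # columns ordered as (Trend, Variability) = (0,0), (0,1), (1,0), (1,1)
    [[0.95, 0.40, 0.30, 0.10],   # P(no observed change | parents)
     [0.05, 0.60, 0.70, 0.90]],  # P(observed change | parents)
    evidence=["Trend", "Variability"], evidence_card=[2, 2],
)
model.add_cpds(cpd_trend, cpd_var, cpd_obs)

infer = VariableElimination(model)
# Observing a change raises P(trend) above its prior of 0.3; additionally learning that
# variability was high "explains away" part of that increase.
print(infer.query(["Trend"], evidence={"ObservedChange": 1}))
print(infer.query(["Trend"], evidence={"ObservedChange": 1, "Variability": 1}))
```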

    Statistical models for improving significant wave height predictions in offshore operations

    Installation and maintenance strategies for offshore wind farm operations involve extensive logistics. The main focus is the right temporal and spatial placement of personnel and equipment, while taking into account forecasted meteorological and ocean conditions. For these operations to be successful, weather windows characterized by sufficiently permissive wave conditions are of enormous importance, whereas unforeseen events result in high costs and safety risks. Numerical modelling of waves, water levels and current-related variables has been used extensively to forecast ocean conditions. To account for the inherent model uncertainty, several error-modelling techniques can be implemented to correct the numerical model forecasts. In this study, various Bayesian Network (BN) models are employed to enhance the accuracy of significant wave height predictions and are compared with other techniques under conditions resembling the real-time nature of the application. The implemented BN models differ in terms of training and structure, and overall provide the most satisfactory performance. In addition, the BN models are shown to offer significant advantages as both quantitative and conceptual tools, since they produce estimates of the underlying uncertainty of the phenomena while conveying, through their structure, information about the dependence relationships among the incorporated variables.
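    A minimal baseline for the error-modelling step might look like the sketch below (assumed names; a simple linear residual correction, not one of the BN models of the paper): learn the numerical model's forecast error from the forecast itself and the most recent observation, then add the predicted error back to new forecasts. A BN model goes further by also providing a full predictive distribution and an explicit dependence structure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_residual_model(hs_forecast, hs_observed):
    """Fit the forecast error (observed minus forecast significant wave height, in metres)
    as a linear function of the numerical forecast and the previous observation."""
    hs_forecast = np.asarray(hs_forecast, dtype=float)
    hs_observed = np.asarray(hs_observed, dtype=float)
    X = np.column_stack([hs_forecast[1:], hs_observed[:-1]])  # forecast + lag-1 observation
    y = hs_observed[1:] - hs_forecast[1:]                      # error the model must explain
    return LinearRegression().fit(X, y)

def correct_forecast(model, hs_forecast_new, hs_observed_last):
    """Add the predicted error back to new numerical forecasts, using the last
    available observation as the lagged predictor for every lead time."""
    hs_forecast_new = np.asarray(hs_forecast_new, dtype=float)
    X_new = np.column_stack([hs_forecast_new,
                             np.full_like(hs_forecast_new, hs_observed_last)])
    return hs_forecast_new + model.predict(X_new)
```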

    Quantitative risk analysis of a hazardous jet fire event for hydrogen transport in natural gas transmission pipelines

    With the advent of large-scale application of hydrogen, transportation becomes crucial. Reusing the existing natural gas transmission system could serve as a catalyst for the future hydrogen economy. However, a risk analysis of hydrogen transmission in existing pipelines is essential for the deployment of the new energy carrier. This paper focuses on the individual risk (IR) associated with a hazardous hydrogen jet fire and compares it with the natural gas case. The risk analysis adopts a detailed flame model and state-of-the-art computational software to provide an enhanced physical description of flame characteristics. The analysis concludes that hydrogen jet fires yield lower lethality levels, which decrease faster with distance than those of natural gas jet fires. Consequently, for large pipelines, hydrogen transmission is accompanied by significantly lower IR. However, ignition effects increasingly dominate the IR as pipeline diameters decrease, causing hydrogen transmission to yield a higher IR in the vicinity of the pipeline than natural gas.
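    The individual-risk comparison rests on a simple composition of frequencies and probabilities. The sketch below is purely illustrative: names and numbers are ours, the lethality curve is a placeholder standing in for a probit evaluated on the heat-flux field of the detailed flame model, and the interacting pipeline length is lumped into a single factor rather than integrated over release locations.

```python
import numpy as np

def individual_risk(distance_m, failure_rate_per_km_yr, interacting_length_km,
                    ignition_prob, lethality_fn):
    """Location-specific individual risk [1/yr] at a given distance from the pipeline:
    failure frequency on the pipe length whose jet fire can reach the location, times
    the ignition probability, times the probability of death at that distance."""
    return (failure_rate_per_km_yr * interacting_length_km
            * ignition_prob * lethality_fn(distance_m))

# Placeholder lethality curve that decays with distance (illustrative only).
toy_lethality = lambda x: np.exp(-np.asarray(x, dtype=float) / 50.0)

# Purely illustrative parameter values, not taken from the paper:
print(individual_risk([25.0, 100.0, 250.0],
                      failure_rate_per_km_yr=3.0e-4,
                      interacting_length_km=0.5,
                      ignition_prob=0.1,
                      lethality_fn=toy_lethality))
```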