
    Optimal randomized and non-randomized procedures for multinomial selection problems

    Multinomial selection problem procedures are ranking and selection techniques that aim to select the best (most probable) alternative based upon a sequence of multinomial observations. The classical formulation of the procedure design problem is to find a decision rule for terminating sampling. The decision rule should minimize the expected number of observations taken while achieving a specified indifference zone requirement on the prior probability of making a correct selection when the alternative configurations are in a particular subset of the probability space called the preference zone. We study the constrained version of the design problem in which there is a given maximum number of allowed observations. Numerous procedures have been proposed over the past 50 years, all of them suboptimal. In this thesis, we find via linear programming the optimal selection procedure for any given probability configuration. The optimal procedure turns out to be necessarily randomized in many cases. We also find via mixed integer programming the optimal non-randomized procedure. We demonstrate the performance of the methodology on a number of examples. We then reformulate the mathematical programs to make them more efficient to implement, thereby significantly expanding the range of computationally feasible problems. We prove that there exists an optimal policy which has at most one randomized decision point and we develop a procedure for finding such a policy. We also extend our formulation to replicate existing procedures. Next, we show that there is very little difference between the relative performances of the optimal randomized and non-randomized procedures. Additionally, we compare existing procedures using the optimal procedure as a benchmark, and produce updated tables for a number of those procedures. 
Then, we develop a methodology that guarantees the optimal randomized and non-randomized procedures for a broad class of variable observation cost functions -- the first of its kind. We examine procedure performance under a variety of cost functions, demonstrating that incorrect assumptions regarding marginal observation costs may lead to increased total costs. Finally, we investigate and challenge key assumptions concerning the indifference zone parameter and the conditional probability of correct selection, revealing some interesting implications.
Ph.D. thesis. Committee Co-Chair: Goldsman, David; Committee Co-Chair: Tovey, Craig; Committee Member: Alexopoulos, Christos; Committee Member: Kleywegt, Anton; Committee Member: Sanchez, Susa
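As a rough point of reference for the procedures discussed above, the probability of correct selection achieved by the classical single-stage rule (take n observations and select the most frequently observed cell, breaking ties at random) can be estimated by simulation. The sketch below is only an illustration under a hypothetical slippage configuration; it is not the LP/MIP methodology of the thesis.

```python
import random
from collections import Counter

def prob_correct_selection(p, n, reps=20000, seed=1):
    """Monte Carlo estimate of P(correct selection) for the single-stage
    rule: take n multinomial observations and select the most frequent
    cell, breaking ties at random. Assumes p[0] is the true best cell."""
    rng = random.Random(seed)
    cells = range(len(p))
    correct = 0
    for _ in range(reps):
        counts = Counter(rng.choices(cells, weights=p, k=n))
        top = max(counts.values())
        winners = [c for c, v in counts.items() if v == top]
        if rng.choice(winners) == 0:
            correct += 1
    return correct / reps

# Hypothetical slippage configuration: best cell twice as likely as the rest
print(prob_correct_selection([0.5, 0.25, 0.25], n=20))
```

Increasing n drives the estimate towards 1, which is exactly the trade-off against expected sample size that the optimal procedures balance.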

    A study of sequential inference for the risk ratio and measure of reduction of two binomials

    The binomial distribution is one of the most commonly occurring probabilistic phenomena in our lives. Since independent Bernoulli trials yield dichotomous observations, the distribution of such sequences provides the basis for statistical formulations of a wide variety of problems. Often, the core of a biomedical study is the comparison and evaluation of the risks of events or outcomes of interest across the populations under study. For instance, one may wish to compare two groups of subjects drawn from two independent populations; the two sample proportions then play central roles in the comparison. One of the most useful ways to compare the relative risk is to take the ratio of the two proportions, referred to as the risk ratio. In addition, a measure of reduction of the two proportions is considered. In this thesis, we consider sequential methods of inference for the risk ratio, the ratio of two independent binomial probabilities in the two populations under comparison. We obtain approximate confidence intervals and optimal sample sizes for the risk ratio and the measure of reduction, respectively. Since there is no unbiased estimator of the risk ratio, the procedure is developed based on a slightly modified maximum likelihood estimator. We then explore properties of the proposed estimator using standard criteria such as unbiasedness, asymptotic variance, and normality. For further investigation, we study first-order asymptotic expansions and large-sample properties using the asymptotic results. The finite-sample behavior is examined through numerical studies; a Monte Carlo experiment is performed for various scenarios of the parameters of the two populations. Through illustrations, we compare the performance of the proposed methods, namely Wald-based confidence intervals, with likelihood-ratio confidence intervals in terms of length, sample size, and invariance.
We then extend the proposed sequential procedure to a two-stage sampling design, which has a pilot sampling stage followed by a stage that gathers all remaining observations if needed. The two-stage procedure is naturally somewhat more versatile and practical than the purely sequential one in terms of sample size and stopping time in many situations. Again, through numerical studies, we examine the advantages and usefulness of the two-stage method. Consequently, by providing a more comprehensive study of dynamic sampling plans for the risk ratio, we hope to contribute a variety of inferential methods for the risk ratio and related problems.
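For concreteness, here is a minimal sketch of the standard large-sample Wald interval for the risk ratio, built on the log scale (the Katz method). Note that the thesis builds its sequential procedure on a slightly modified maximum likelihood estimator, which this naive sketch does not implement.

```python
import math
from statistics import NormalDist

def risk_ratio_wald_ci(x1, n1, x2, n2, alpha=0.05):
    """Large-sample Wald confidence interval for the risk ratio p1/p2,
    computed on the log scale. Assumes 0 < x1 <= n1 and 0 < x2 <= n2."""
    p1, p2 = x1 / n1, x2 / n2
    rr = p1 / p2
    # Delta-method standard error of log(RR)
    se = math.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

rr, lo, hi = risk_ratio_wald_ci(30, 100, 20, 100)
print(rr, lo, hi)  # here the interval contains 1: risks not clearly different
```

Working on the log scale keeps the interval within the positive half-line, one of the invariance considerations mentioned above.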

    A Sequential Perspective on Searching for Static Targets

    We present a sequential approach to detect static targets with imperfect sensors, which range from tower-mounted cameras to satellites. The scenario is operationally relevant to many military, homeland security, search and rescue, environmental engineering, counter-narcotics, and law enforcement applications. The idea is to stop the search as soon as there is enough probabilistic evidence about the targets' locations, given an operator-prescribed error tolerance, knowledge of the sensors' parameters, and a sequence of detection signals from the sensors. By stopping the search as soon as possible, we promote efficiency by freeing up sensors and operators to perform other tasks. The model we develop has the added benefits of decreasing operator workload and providing negative information as a search progresses. Published by Elsevier B.V.
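The stopping idea can be illustrated with a deliberately minimal single-target Bayesian model. The code below is a simplified sketch under assumed sensor parameters (detection probability p_d, false-alarm probability p_fa), not the paper's full formulation.

```python
def sequential_search(prior, signals, p_d, p_fa, tol=0.05):
    """Sequentially update the posterior over candidate cells for one
    static target, stopping as soon as some cell's posterior exceeds
    1 - tol. signals is a list of (looked_cell, detected) pairs from an
    imperfect sensor with detection probability p_d and false-alarm
    probability p_fa."""
    post = list(prior)
    for t, (cell, detected) in enumerate(signals, 1):
        for i in range(len(post)):
            if detected:
                post[i] *= p_d if i == cell else p_fa
            else:
                post[i] *= (1 - p_d) if i == cell else (1 - p_fa)
        total = sum(post)
        post = [q / total for q in post]
        if max(post) > 1 - tol:
            return post, t  # enough evidence: stop and free the sensor
    return post, len(signals)

# Sensor stares at cell 0 and keeps reporting detections
post, t = sequential_search([1/3, 1/3, 1/3], [(0, True)] * 5, p_d=0.8, p_fa=0.1)
print(t, post)  # stops after 2 observations
```

A non-detection at a cell likewise lowers that cell's posterior, which is the "negative information" the abstract refers to.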

    Three Essays on Experimental Economics and Individual Decision Making Under Risk

    The dissertation looks at three topics that involve experimental economics methods or individual decision making under risk: how people make educational decisions when facing the risk of dropout; which models best characterize individuals' decision processes under risk; and how physicians can improve discharge decisions to reduce the risk of unplanned readmissions. In the first chapter, I introduce the risk of dropout into Spence's job market signaling model and test the modified model in the laboratory. I examine equilibria in the labor market and discuss refinement based on the Cho-Kreps Intuitive Criterion. I derive the condition under which a separating equilibrium is the only perfect Bayesian equilibrium that survives the refinement and discuss the effects of workers' risk preferences on these equilibrium predictions. The data from lab experiments show that the market reaches the separating equilibrium more often when it is the only intuitive equilibrium. I also observe that, when the share of the low-ability type in the worker population decreases, or the cost of pursuing a degree increases, the size of the wage premium for holding the degree generally decreases. In the experiments, I use binary lottery tasks to elicit subjects' risk preferences to explain their strategies in the signaling games, and the analyses partially confirm the prediction that more risk-averse individuals pursue a higher degree less frequently in the presence of dropout risks. In the second chapter, as part of a joint project with Dr. Glenn W. Harrison and Dr. Rusty Tchernis, we apply Bayesian econometric methods to the estimation of individual preferences under risk. We estimate a mixture model of Expected Utility Theory and Cumulative Prospect Theory using both simulated and observed binary lottery choices.
We develop Markov Chain Monte Carlo algorithms to sample from the posterior distribution of the parameters in the mixture model and compare the performance of different algorithms. The algorithms generally recover the true parameters used in the simulation, although some outperform others in terms of efficiency. We also apply the algorithms to estimation using actual choice data. We find that 56.5% of the subjects can be characterized as consistent with Expected Utility Theory and 43.5% with Cumulative Prospect Theory. We find modest risk aversion among Expected Utility maximizers, and overweighting of the probabilities of extreme outcomes with very mild loss aversion among Cumulative Prospect Theory maximizers. In the third chapter, coauthored with Dr. Ira L. Leeds, Dr. Vjollca Sadiraj, Dr. James C. Cox, Dr. Timothy M. Pawlik, Dr. Kurt E. Schnier, and Dr. John F. Sweeney, we sought to define the association between the information used for hospital discharge and patients' subsequent risk of unplanned readmission. De-identified data for patients from a tertiary academic medical center's surgical services were analyzed using a time-to-event model to identify criteria that statistically explained the timing of discharges. The data were subsequently used to develop a prediction model of unplanned hospital readmissions. Comparison of discharge behaviors against the predictive readmission model suggested important discordance, with certain clinical measures not being accounted for in optimizing discharges. We suggest that decision-support tools for discharge may utilize variables that are not routinely considered by healthcare providers.
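MCMC estimation of risk preferences can be illustrated, in a heavily simplified single-model form, by a random-walk Metropolis sampler for a CRRA risk-aversion parameter under Expected Utility with logit choice on simulated binary lottery data. Everything here is an assumption for illustration: the CRRA form, the precision parameter lam, the flat prior, and the lottery menus; the chapter's actual model is a mixture of EUT and CPT with additional parameters.

```python
import math
import random

def crra(x, r):
    """CRRA utility; assumes payoff x > 0."""
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def eu(lottery, r):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * crra(x, r) for p, x in lottery)

def choice_prob(lot_a, lot_b, r, lam=5.0):
    """Logit probability of choosing lottery A; lam is an assumed precision."""
    return 1.0 / (1.0 + math.exp(-lam * (eu(lot_a, r) - eu(lot_b, r))))

def log_lik(r, data):
    ll = 0.0
    for lot_a, lot_b, chose_a in data:
        p = choice_prob(lot_a, lot_b, r)
        ll += math.log(p if chose_a else 1.0 - p)
    return ll

def metropolis(data, n_iter=3000, step=0.1, seed=0):
    """Random-walk Metropolis for r with a flat prior on (0, 0.99)."""
    rng = random.Random(seed)
    r, ll = 0.5, log_lik(0.5, data)
    draws = []
    for _ in range(n_iter):
        prop = r + rng.gauss(0.0, step)
        if 0.0 < prop < 0.99:  # reject proposals outside the prior support
            ll_prop = log_lik(prop, data)
            if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
                r, ll = prop, ll_prop
        draws.append(r)
    return draws

# Simulate 200 binary lottery choices from a subject with true r = 0.6,
# then summarize the posterior (first 1000 draws discarded as burn-in).
rng = random.Random(7)
true_r, data = 0.6, []
for _ in range(200):
    safe = [(1.0, rng.uniform(2.0, 8.0))]
    risky = [(0.5, rng.uniform(1.0, 4.0)), (0.5, rng.uniform(6.0, 12.0))]
    data.append((safe, risky, rng.random() < choice_prob(safe, risky, true_r)))
draws = metropolis(data)
print(sum(draws[1000:]) / len(draws[1000:]))  # posterior mean of r
```

Extending this sketch to a mixture would add a per-subject model indicator sampled alongside the preference parameters, which is the comparison the chapter carries out across algorithms.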

    Data-based approaches for modern clinical risk prediction (Datenbasierte Ansätze für moderne klinische Risikovorhersagen)

    In this thesis the use of data-scientific approaches in the life sciences is illustrated by means of contemporary prostate cancer risk models. Validation techniques are introduced and analytical confidence intervals for selected methods are derived. In addition, diverse regression approaches for incorporating heterogeneous cohorts, an update of an online available risk calculator, and machine learning methods are analyzed and compared.

    Robustness of Multiple Objective Decision Analysis Preference Functions

    This research investigated value and utility functions in multiobjective decision analysis to examine the relationship between them in a military decision-making context. The impact of the differences between these functions was examined to improve implementation efficiency, and the robustness of the decision model with respect to the preference functions was examined to reduce the time burden imposed on the decision maker. Data for decision making in a military context support the distinction between value and utility functions. The relationships between value and utility functions and risk attitudes were found to be complex. Elicitation error was significantly smaller than the difference between value and utility functions. Risk attitudes were generally neither constant across the domain of an evaluation measure nor consistent between evaluation measures. An improved measure of differences between preference functions, the weighted root mean square, is introduced and a goodness-of-fit criterion established. An improved measure of risk attitudes employing utility functions is developed. Response Surface Methodology was applied to improve the efficiency of decision-analysis utility model applications by establishing the robustness of decision models to the preference functions. An algorithm was developed that employs this information to provide a hybrid value-utility model offering increased elicitation efficiency.
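The weighted root mean square measure of the difference between two preference functions can be sketched as follows; the evaluation points and weights shown are assumptions for illustration, since the abstract does not specify the weighting scheme.

```python
import math

def weighted_rms(v, u, xs, w):
    """Weighted root-mean-square difference between two preference
    functions v and u at points xs, with weights w summing to 1.
    A small value suggests the decision model is robust to swapping
    one preference function for the other."""
    return math.sqrt(sum(wi * (v(x) - u(x)) ** 2 for wi, x in zip(w, xs)))

# Example: linear value function vs. an exponential (risk-averse) utility,
# both normalized to [0, 1] over the evaluation measure's domain
v = lambda x: x
u = lambda x: (1.0 - math.exp(-2.0 * x)) / (1.0 - math.exp(-2.0))
xs = [i / 10 for i in range(11)]
w = [1 / 11] * 11
print(weighted_rms(v, u, xs, w))
```

Weighting by, say, the likelihood of each outcome level would emphasize differences where they matter most to the decision.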

    Optimising outcomes for potentially resectable pancreatic cancer through personalised predictive medicine: the application of complexity theory to probabilistic statistical modeling

    Survival outcomes for pancreatic cancer remain poor. Surgical resection with adjuvant therapy is the only potentially curative treatment, but for many people surgery is of limited benefit. Neoadjuvant therapy has emerged as an alternative treatment pathway; however, the evidence base surrounding the treatment of potentially resectable pancreatic cancer is highly heterogeneous and fraught with uncertainty and controversy. This research seeks to engage with conjunctive theorising by avoiding simplification and abstraction, drawing on different kinds of data from multiple sources to move research towards a theory that can build a rich picture of pancreatic cancer management pathways as a complex system. The overall aim is to move research towards personalised realistic medicine by using personalised predictive modeling to facilitate better decision making and achieve the optimisation of outcomes. This research is theory driven and empirically focused from a complexity perspective. Combining operational and healthcare research methodology, and drawing on influences from the complementary paradigms of critical realism and systems theory, then enhancing their impact by using Cilliers' complexity theory 'lean ontology', an open-world ontology is held and both epistemic reality and judgmental relativity are accepted. The use of imperfect data within statistical simulation models is explored to expand our capabilities for handling emergence and uncertainty and to find other ways of relating to complexity within the field of pancreatic cancer research. Markov and discrete-event simulation modelling uncovered new insights and added a further dimension to the current debate by demonstrating that superior treatment pathway selection depended on individual patient and tumour factors.
A Bayesian Belief Network was developed that modelled the dynamic nature of this complex system to make personalised prognostic predictions across competing treatment pathways throughout the patient journey, facilitating better shared clinical decision making with an accuracy exceeding that of existing predictive models.
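The kind of prediction such a network makes can be illustrated with a deliberately tiny example: a two-level belief network in which the chosen pathway influences the chance of resection, and resection together with tumour stage influences survival. All probabilities below are hypothetical placeholders for illustration, not values from the thesis model.

```python
# Hypothetical conditional probability tables (illustrative only)
p_resect = {"surgery_first": 0.75, "neoadjuvant": 0.60}
p_survive = {(True, "early"): 0.45, (True, "late"): 0.25,
             (False, "early"): 0.10, (False, "late"): 0.05}

def survival_prob(pathway, stage):
    """Marginal P(survival | pathway, stage), computed by enumerating
    over the latent resection node of the tiny belief network."""
    pr = p_resect[pathway]
    return pr * p_survive[(True, stage)] + (1 - pr) * p_survive[(False, stage)]

for pathway in p_resect:
    for stage in ("early", "late"):
        print(pathway, stage, round(survival_prob(pathway, stage), 4))
```

Comparing the marginals across pathways for a given patient is the personalised, pathway-level prediction the thesis scales up to a full dynamic network.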

    Gambling and Decision-Making Among Primates: The Primate Gambling Task

    Humans have a tendency to engage in economically irrational behaviors such as gambling, which typically leads to long-term financial losses. While there has been much research on human gambling behavior, relatively little work has been done to explore the evolutionary origins of this behavior. To examine the adaptive pressures that may have led to this seemingly irrational behavior in humans, nonhuman primates were tested to explore their reactions to gambling-type scenarios. Several experiments based on traditional human economic experiments were adapted for use with a wider variety of primate species, including chimpanzees and capuchin monkeys. This allowed multiple species to be tested with similar methodologies in order to make more accurate comparisons of species' abilities. This series of tasks helps to elucidate risky decision-making behavior in three primate species.

    Learner's dictionary for students and professionals: English for European public health

    The idea of developing a Public Health Dictionary has accompanied me since the beginning of my adventure with Public Health in 1992, when the first School of Public Health was established in Krakow, Poland. It was not until we received funding from the Leonardo da Vinci Program that the idea could materialize. I gratefully record my obligation to all those who have contributed to this edition of the Public Health Learners' Dictionary with suggestions, criticism, and information. Special thanks go to the people who contributed to the collection of words in this dictionary and who generously gave their time to writing definitions, suggesting terms, and locating existing glossaries. Particular appreciation is due to the staff of the Jagiellonian University Medical Library. All of our collaborators are included in the list of authors, contributors, consultants, and assistants.