Bounds on survival probability given mean probability of failure per demand; And the paradoxical advantages of uncertainty
When deciding whether to accept into service a new safety-critical system, or choosing between alternative systems, uncertainty about the parameters that affect future failure probability may be a major problem. This uncertainty can be extreme if there is the possibility of unknown design errors (e.g. in software), or wide variation between nominally equivalent components.
We study the effect of parameter uncertainty on future reliability (survival probability), for systems required to have low risk of even only one failure or accident over the long term (e.g. their whole operational lifetime) and characterised by a single reliability parameter (e.g. probability of failure per demand - pfd). A complete mathematical treatment requires stating a probability distribution for any parameter with uncertain value. This is hard, so calculations are often performed using point estimates, like the expected value.
We investigate conditions under which such simplified descriptions yield reliability values that are sure to be pessimistic (or optimistic) bounds for a prediction based on the true distribution. Two important observations are: (i) using the expected value of the reliability parameter as its true value guarantees a pessimistic estimate of reliability, a useful property in most safety-related decisions; (ii) with a given expected pfd, broader distributions (in a formally defined meaning of "broader"), that is, systems that are a priori "less predictable", lower the risk of failures or accidents.
Result (i) justifies the simplification of using a mean in reliability modelling; we discuss the scope within which this justification applies, and explore related scenarios, e.g. how things improve if we can test the system before operation. Result (ii) offers more flexible ways of bounding reliability predictions, but also has important, often counter-intuitive implications for decision making in various areas, like selection of components, project management, and product acceptance or licensing. For instance, in regulatory decision making, dilemmas may arise in which the goal of minimising risk runs counter to other commonly held priorities, like predictability of risk; in safety assessment using expert opinion, the commonly recognised risk of experts being "overconfident" may be less dangerous than their being underconfident.
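Both observations can be checked with a minimal numerical sketch, assuming (hypothetically) a Beta-distributed pfd and independent demands; the distributions and mission length below are illustrative choices, not taken from the paper:

```python
import random
from statistics import mean

random.seed(0)

N_DEMANDS = 1_000   # hypothetical mission length (number of demands)
MEAN_PFD = 1e-3     # common expected probability of failure per demand

def survival(p, n=N_DEMANDS):
    # Probability of surviving n independent demands with no failure.
    return (1.0 - p) ** n

# Point-estimate prediction: plug the mean pfd into the survival formula.
point_estimate = survival(MEAN_PFD)

# Predictions averaging over an uncertain pfd: two Beta distributions with
# the same mean 1e-3, one narrow and one broad.
narrow = [random.betavariate(100, 99_900) for _ in range(100_000)]
broad = [random.betavariate(0.5, 499.5) for _ in range(100_000)]
surv_narrow = mean(survival(p) for p in narrow)
surv_broad = mean(survival(p) for p in broad)

# (i) Using the mean pfd is pessimistic: E[(1-p)^n] >= (1 - E[p])^n,
#     since (1-p)^n is convex in p (Jensen's inequality).
assert point_estimate <= surv_narrow
# (ii) With the same mean pfd, the broader distribution gives the higher
#     survival probability ("less predictable" systems are safer here).
assert surv_narrow < surv_broad
print(point_estimate, surv_narrow, surv_broad)
```

With these illustrative numbers the point estimate is about 0.37 while the broad-distribution prediction is roughly 0.58, showing how far a point estimate can understate long-run survival probability.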
Loss-size and Reliability Trade-offs Amongst Diverse Redundant Binary Classifiers
Many applications involve the use of binary classifiers, including applications where safety and security are critical. The quantitative assessment of such classifiers typically involves receiver operating characteristic (ROC) methods and the estimation of sensitivity/specificity. But such techniques have their limitations. For safety/security-critical applications, more relevant measures of reliability and risk should be estimated. Moreover, ROC techniques do not explicitly account for: 1) the inherent uncertainties one faces during assessments, 2) reliability evidence other than the observed failure behaviour of the classifier, and 3) how this observed failure behaviour alters one's uncertainty about classifier reliability. We address these limitations using conservative Bayesian inference (CBI) methods, producing statistically principled, conservative values for risk/reliability measures of interest. Our analyses reveal trade-offs amongst all binary classifiers with the same expected loss: the most reliable classifiers are those most likely to experience high-impact failures. This trade-off is harnessed by using diverse redundant binary classifiers.
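The loss-size/reliability trade-off described here can be made concrete with a toy calculation (hypothetical numbers, not the paper's CBI analysis): if the expected loss per demand is held fixed, a classifier's loss-given-failure must scale inversely with its probability of failure.

```python
# Toy illustration (not the paper's CBI method): fix the expected loss per
# demand and see what each classifier must suffer when it does fail.
EXPECTED_LOSS = 0.01  # common expected loss per demand (hypothetical units)

classifiers = {
    # name: probability of failure per demand (hypothetical)
    "A (least reliable)": 0.10,
    "B": 0.01,
    "C (most reliable)": 0.001,
}

# expected loss = pfd * loss_given_failure  =>  loss_given_failure = E[loss]/pfd
loss_given_failure = {name: EXPECTED_LOSS / pfd for name, pfd in classifiers.items()}

for name, pfd in classifiers.items():
    print(f"{name}: pfd={pfd}, loss per failure={loss_given_failure[name]:g}")

# The most reliable classifier carries the largest per-failure impact: this
# is the trade-off that diverse redundant classifiers are used to mitigate.
```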
Interval reliability inference for multi-component systems
This thesis is a collection of investigations into applications of imprecise probability theory to system reliability engineering, with emphasis on using survival signatures for modelling complex systems. Survival signatures provide an efficient representation of system structure and facilitate several reliability assessments by separating the computationally expensive combinatorial part from the subsequent evaluations, which are of only polynomial complexity. This proves useful in situations that also involve statistical inference on system component lifetime distributions, where Bayesian methods require repeated numerical propagation of samples from the posterior distribution. Similarly, statistical methods involving imprecise probabilistic models, composed of sets of precise probability distributions, also benefit from the simplification afforded by the signature representation. We argue for the pragmatic benefits of using statistical models based on imprecise probability in reliability engineering, from the perspective of inferential validity and the provision of objective guarantees for the statistical procedures. Imprecise probability methods generally require solving an optimization problem to obtain bounds on the assessments of interest, but monotone system structures simplify these problems without much additional complexity. This simplification extends to survival signature models, so many reliability assessments with imprecise (interval) component lifetime models remain tractable, as is demonstrated on several examples.
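A minimal sketch of the survival-signature machinery (a hypothetical 3-component system; the bound-at-endpoints step relies on the monotonicity mentioned above):

```python
from itertools import combinations
from math import comb

M = 3  # number of exchangeable components

def works(up):
    # Hypothetical monotone structure: component 1 in series with the
    # parallel pair {2, 3}.
    return 1 in up and (2 in up or 3 in up)

# Survival signature: Phi(l) = P(system works | exactly l components work),
# averaging over all equally likely subsets of l working components.
phi = {
    l: sum(works(set(s)) for s in combinations(range(1, M + 1), l)) / comb(M, l)
    for l in range(M + 1)
}

def system_reliability(p):
    # P(system works) when components work independently with probability p;
    # the expensive combinatorics live in phi and are computed only once.
    return sum(phi[l] * comb(M, l) * p**l * (1 - p) ** (M - l) for l in range(M + 1))

# Interval (imprecise) component reliability: for a monotone system the
# bounds on system reliability are attained at the interval endpoints.
p_lo, p_hi = 0.85, 0.95
lower, upper = system_reliability(p_lo), system_reliability(p_hi)
print(phi)
print(lower, upper)
```

For this structure the signature is Phi = (0, 0, 2/3, 1), so the system reliability reduces to 2p² − p³, and the interval bounds follow by plugging in the endpoint reliabilities.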
Hospital performance including quality: creating economic incentives consistent with evidence-based medicine
This thesis addresses questions of how to incorporate quality of care, represented by disutility-bearing effects such as mortality, morbidity and re-admission, in measuring the relative performance of public hospitals. Currently, case-mix funding and performance, measured as cost per case-mix adjusted separation, hold hospitals accountable for the costs, but not the effects, of care, creating economic incentives to minimise cost per admission at the expense of quality of care.
To allow an appropriate trade-off between the value and cost of quality of care, a correspondence is demonstrated between maximising net benefit and minimising costs plus decision makers' value of disutility events, where effects of care can be represented by disutility events and hospitals face a common comparator. Applying this correspondence to performance measurement, frontier methods specifying disutility events as inputs are illustrated to have distinct advantages over output specifications, allowing estimation of:
1. economic efficiency conditional on the value of avoiding disutility events;
2. technical, scale and congestion sources of net benefit efficiency;
3. best-practice peers over potential decision makers' value of quality; and
4. the industry shadow price of avoiding disutility events.
The accountability this performance measurement framework provides for the effects and cost of quality of care is also illustrated as the basis for moving from case-mix funding towards a funding mechanism based on maximising net benefit. Links to evidence-based medicine in health technology assessment are emphasised in illustrating application of the correspondence to comparison of multiple strategies in the cost-disutility plane, where radial properties are shown to provide distinct advantages over comparison in the cost-effectiveness plane.
The identified performance measurement and funding framework allows policy makers to create economic incentives consistent with evidence-based medicine in practice, while avoiding incentives for cream-skimming and cost-shifting. The linear nature of the net benefit correspondence theorem allows simple inclusion of multiple effects of quality, whether expressed as not meeting a standard, functional limitation, or disutility directly.
In applying the net benefit correspondence theorem to hospitals, a clinical activity level is suggested, to allow correspondence conditions to be robustly satisfied in identification of effects with decision-analytic methods, adjustment for within-DRG risk factors, and data linkage to effects beyond separation.
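The correspondence can be sketched with a toy calculation (hypothetical hospitals and values; v denotes the decision maker's monetary value per disutility event avoided): with a common comparator, maximising net benefit reduces to minimising cost plus v times disutility events, so the identified best-practice peer can change with v.

```python
# Hypothetical hospitals: (cost per case-mix adjusted separation,
# disutility events per separation). Illustrative numbers only.
hospitals = {
    "H1": (4000.0, 0.050),
    "H2": (4300.0, 0.030),
    "H3": (4600.0, 0.020),
}

def cost_plus_valued_events(cost, events, v):
    # With a common comparator, maximising net benefit is equivalent to
    # minimising cost + v * disutility events.
    return cost + v * events

best_peer = {}
for v in (0.0, 20_000.0, 100_000.0):
    best_peer[v] = min(hospitals, key=lambda h: cost_plus_valued_events(*hospitals[h], v))
    print(f"v = {v:>9,.0f}: best-practice peer = {best_peer[v]}")
```

As v rises from 0 through 20,000 to 100,000, the identified peer moves from the cheapest hospital (H1) through H2 to the lowest-disutility hospital (H3), which is the sense in which best-practice peers vary over decision makers' value of quality.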
Social Welfare
"Social Welfare" offers, for the first time, a wide-ranging, internationally-focused selection of cutting-edge work from leading academics. Its interdisciplinary approach and comparative perspective promote examination of the most pressing social welfare issues of the day. The book aims to clarify some of the ambiguity around the term, discuss the pros and cons of privatization, present a range of social welfare paradoxes and innovations, and establish a clear set of economic frameworks with which to understand the conditions under which the change in social welfare can be obtained
Entrepreneurial Action and Entrepreneurial Rents
This dissertation comprises three independently standing research papers (chapters 2, 3 and 4) that come together in the common theme of investigating the relationship between entrepreneurial action and performance. The introduction chapter argues that this theme is the main agenda of an entrepreneurial approach to strategy, and provides some background and context for the core chapters. The entrepreneurial approach to strategy falls in line with an array of action-based theories of strategy that trace their economic foundations to the Austrian school of economics. Chapters 2 and 3 take a game-theoretical modeling and computer simulation approach and represent one of the first attempts at formal analysis of the Austrian economic foundations of action-based strategy theory. These chapters attempt to demonstrate ways in which formal analysis can begin to approach compatibility with the central tenets of Austrian economics (i.e., subjectivism, dynamism, and methodological individualism). The simulation results shed light on our understanding of the relationship between opportunity creation and discovery, and economic rents in the process of moving towards and away from equilibrium. Chapter 4 operationalizes creation and discovery as exploration and exploitation in an empirical study using data from the Kauffman Firm Survey and highlights the trade-offs faced by start-ups in linking action to different dimensions of performance (i.e., survival, profitability, and getting acquired). Using multinomial logistic regression for competing risks analysis and random effects panel data regression, we find that high-technology start-ups face a trade-off between acquisition likelihood and profitability-given-survival, while low- and medium-technology start-ups face a trade-off between survival and profitability-given-survival. Chapter 5 concludes the dissertation by highlighting some of the overall contributions and suggesting avenues for future research.
Statistical Inference on Unobserved Random Effects
Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Statistics, College of Natural Sciences, August 2023. This thesis is composed of six topics related to statistical inference on unobserved random effects, each centered around the concept of extended likelihood, which incorporates information about the random unknowns. The first two topics focus on the theoretical properties of the confidence distribution, whose density can be interpreted as an extended likelihood. The latter four topics reformulate the hierarchical likelihood as an extended likelihood at a specific scale, and investigate its theoretical properties as well as its applications to deep learning.
In the first topic, an epistemic confidence for observed confidence intervals is introduced. The associated relevant subset problem is explained by incorporating the existence of betting markets into Ramsey and De Finetti's Dutch book argument, and it is demonstrated that the epistemic confidence is free from such issues. In the second topic, it is revealed that the existence of a point mass in the confidence distribution plays an important role in maintaining its essential properties. The point mass has been considered paradoxical in Stein's paradox and satellite collision problems, but in fact it gives an advantage to the confidence distribution. The proposed confidence distribution is free from false confidence for the parameter of interest, and it maintains the confidence feature in both Stein's problem and the satellite conjunction problem.
The third topic introduces a reformulation of the hierarchical likelihood (h-likelihood) and establishes the theoretical properties of h-likelihood inference. This novel hierarchical likelihood can provide maximum likelihood estimators for fixed parameters and asymptotically best unbiased estimators for random parameters, resolving the ambiguity of Lee and Nelder's (1996) original hierarchical likelihood. The last three topics deal with applications of the h-likelihood approach to deep learning models. While most deep learning models implicitly assume independence of the data, real-world large-scale data are often clustered, with temporal-spatial correlations. In such cases, the prediction performance of deep learning models can be improved by introducing random effects via the h-likelihood. The fourth topic deals with deep learning models for continuous data with temporal-spatial correlations, and the fifth topic focuses on deep learning models for count data with non-Gaussian random effects. The sixth topic proposes an h-likelihood approach to semi-parametric deep neural networks with gamma frailty for analyzing clustered censored data. In all three topics, the proposed methods improve prediction performance by introducing random effects into the existing deep learning models.
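A minimal sketch of the h-likelihood idea behind these topics, for a one-way normal-normal model with known variance components (hypothetical data on a classical scale, not the thesis's reformulation): jointly maximising h(v) = log f(y | v) + log f(v) over the random effects yields the familiar shrinkage predictor.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-way random effects model: y_ij = mu + v_i + e_ij,
# v_i ~ N(0, tau2), e_ij ~ N(0, sigma2); variance components known here.
mu, tau2, sigma2, n_groups, n_per = 10.0, 4.0, 1.0, 50, 8
v_true = rng.normal(0.0, np.sqrt(tau2), n_groups)
y = mu + v_true[:, None] + rng.normal(0.0, np.sqrt(sigma2), (n_groups, n_per))

# Setting dh/dv_i = 0 in h(v) = log f(y|v) + log f(v) gives the shrinkage
# predictor v_i = w * (ybar_i - mu), w = n*tau2 / (n*tau2 + sigma2), which
# equals E[v_i | y] (the BLUP) in this Gaussian model.
ybar = y.mean(axis=1)
w = n_per * tau2 / (n_per * tau2 + sigma2)
v_hat = w * (ybar - mu)

# Check that the gradient of h vanishes at the closed-form maximiser.
grad_h = (y - mu - v_hat[:, None]).sum(axis=1) / sigma2 - v_hat / tau2
assert np.allclose(grad_h, 0.0)
print("shrinkage weight:", w)
print("corr(v_true, v_hat):", np.corrcoef(v_true, v_hat)[0, 1])
```

The same joint-maximisation recipe is what allows random effects to be attached to deep learning models: the network supplies the fixed-effect part, and the h-likelihood supplies principled predictors for the random part.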
Abstract
List of Figures
List of Tables
1 Introduction
2 Epistemic confidence in the observed confidence interval
2.1 Introduction
2.2 Main theory
2.2.1 Relevant subsets
2.2.2 Confidence distribution
2.2.3 Main theorem
2.2.4 Ancillary statistics
2.2.5 Computation of epistemic confidence
2.3 Examples
2.3.1 Simple model
2.3.2 Location family model
2.3.3 Exponential family model
2.4 Discussion
2.5 Appendix
2.5.1 Curved exponential model: N(θ, θ²)
2.5.2 Discrete case
2.5.3 When maximal ancillaries are not unique
3 Point mass in confidence distributions
3.1 Introduction
3.2 Ambiguity in confidence of an observed CI
3.3 GFD and probability dilution
3.4 CD and related methods
3.4.1 CD and confidence level of an observed CI
3.4.2 CD versus GFD
3.4.3 CD versus RP
3.5 On false confidence
3.5.1 False confidence and probability dilution
3.6 Hypothesis testing
3.7 Concluding remarks
3.8 Appendix
3.8.1 Proof of Theorem 3
4 Foundations of h-likelihood inference for random unknowns
4.1 Introduction
4.2 Hierarchical likelihood
4.2.1 Reformulation of h-likelihood
4.2.2 Bartlizable scale of random effects
4.3 Main results
4.4 H-likelihood theory for irregular cases
4.4.1 Missing data problem when v̂ − v = Op(1)
4.4.2 Exponential-exponential HGLM when θ̂ − θ = Op(1)
4.5 When the h-likelihood is not explicit
4.6 Appendix
5 DNN with temporal-spatial random effects via h-likelihood
5.1 Introduction
5.2 Integrated likelihood approach for LMMs
5.3 H-likelihood approach for LMMs
5.4 Learning algorithm with the h-likelihood
5.4.1 REML procedure
5.4.2 Adjustments for random effects
5.5 Comparison with existing methods
5.6 Numerical studies
5.7 Real data analysis
5.8 Concluding remarks
5.9 Appendix
5.9.1 The computation of h-likelihood when q1 is large
5.9.2 Method-of-moments estimators
5.9.3 Technical details
6 DNN for clustered count data via h-likelihood
6.1 Introduction
6.2 Model descriptions
6.2.1 Poisson DNN
6.2.2 Poisson-gamma DNN
6.3 Construction of h-likelihood
6.4 Learning algorithm with the h-likelihood
6.4.1 Loss function for online learning
6.4.2 The local minima problem
6.4.3 Pretraining variance components
6.5 Experimental studies
6.6 Real data analysis
6.7 Concluding remarks
6.8 Appendix
6.8.1 Convergence of the method-of-moments estimator
6.8.2 Technical details
7 DNN for semi-parametric frailty models via h-likelihood
7.1 Introduction
7.2 A review of the DNN-Cox model
7.2.1 DNN-Cox model
7.2.2 Prediction measures
7.3 Proposed DNN for frailty model
7.3.1 DNN-FM
7.3.2 Construction of h-likelihood
7.4 Learning algorithm using the profiled h-likelihood
7.4.1 Local minima problem
7.4.2 ML learning algorithm
7.5 Experimental studies
7.5.1 Experimental design
7.5.2 Experimental results
7.6 Multi-center bladder cancer data
7.7 Concluding remarks
7.8 Appendix
7.8.1 Derivation for the predictive likelihood
7.8.2 Evaluation measures for DNN-FM
7.8.3 Online learning for the DNN-FM
7.8.4 Proofs
Bibliography
Abstract in Korean
Futures Studies in the Interactive Society
This book consists of papers prepared within the framework of the research project (No. T 048539) entitled Futures Studies in the Interactive Society (project leader: Éva Hideg) and funded by the Hungarian Scientific Research Fund (OTKA) between 2005 and 2009. Some discuss the theoretical and methodological questions of futures studies and foresight; others present new approaches to, or procedures for, questions that are important and topical from the perspective of forecasting and foresight practice. Each study was conducted in pursuit of improvement in futures fields.