
    What does decision making with intervals really assume? The relationship between the Hurwicz decision rule and prescriptive decision analysis

    Decision analysis can be defined as a discipline where a decision maker chooses the best alternative by considering the decision maker’s values and preferences and by breaking down a complex decision problem into simple or constituent ones. Decision analysis helps an individual make better decisions by structuring the problem. Non-probabilistic approaches to decision making have been proposed for situations in which an individual does not have enough information to assess probabilities over an uncertainty. One non-probabilistic method is to use intervals, in which an uncertainty has a minimum and a maximum but nothing is assumed about the relative likelihood of any value within the interval. The Hurwicz decision rule, in which a parameter trades off between pessimism and optimism, generalizes the current rules for making decisions with intervals. This thesis analyzes the relationship between intervals based on the Hurwicz rule and traditional decision analysis using probabilities and utility functions. This thesis shows that the Hurwicz decision rule for an interval is logically equivalent to: (i) an expected value decision with a triangle distribution over the interval; (ii) an expected value decision with a beta distribution; and (iii) an expected utility decision with a uniform distribution. The results call into question whether decision making based on intervals really assumes less information than subjective expected utility decision making. If an individual is using intervals to select an alternative, and the interval decision rule can be described with the Hurwicz equation, then the individual is implicitly assuming either a probability distribution, such as a triangle or beta distribution, or a utility function expressing risk preference.
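    As an illustration of equivalence (i) above, the following minimal Python sketch checks numerically that the Hurwicz value alpha*max + (1-alpha)*min of an interval [a, b] coincides with the mean of a triangular distribution on that interval whose mode is m = (2 - 3*alpha)*a + (3*alpha - 1)*b. The mode formula and the payoff interval are assumptions made for this sketch, valid for alpha between 1/3 and 2/3, and are not necessarily the parametrization used in the thesis.

```python
# Numerical check (illustrative): Hurwicz value vs. mean of a triangular
# distribution over the same interval. The mode choice below is an assumption
# for this sketch, not a statement of the thesis' exact parametrization.
import numpy as np

def hurwicz_value(a, b, alpha):
    """Hurwicz criterion: optimism-weighted combination of best and worst outcome."""
    return alpha * b + (1.0 - alpha) * a

def triangular_mean(a, b, mode):
    """Mean of a triangular distribution on [a, b] with the given mode."""
    return (a + b + mode) / 3.0

a, b = 10.0, 40.0          # hypothetical payoff interval
for alpha in (0.35, 0.5, 0.65):
    mode = (2 - 3 * alpha) * a + (3 * alpha - 1) * b   # assumed mode choice
    h = hurwicz_value(a, b, alpha)
    m = triangular_mean(a, b, mode)
    mc = np.random.default_rng(0).triangular(a, mode, b, size=200_000).mean()
    print(f"alpha={alpha:.2f}  Hurwicz={h:.3f}  triangle mean={m:.3f}  MC mean={mc:.3f}")
```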

    The Hurwicz decision rule’s relationship to decision making with the triangle and beta distributions and exponential utility

    Non-probabilistic approaches to decision making have been proposed for situations in which an individual does not have enough information to assess probabilities over an uncertainty. One non-probabilistic method is to use intervals, in which an uncertainty has a minimum and a maximum but nothing is assumed about the relative likelihood of any value within the interval. The Hurwicz decision rule, in which a parameter trades off between pessimism and optimism, generalizes the current rules for making decisions with intervals. This article analyzes the relationship between intervals based on the Hurwicz rule and traditional decision analysis using a few probability distributions and an exponential utility function. This article shows that the Hurwicz decision rule for an interval is logically equivalent to: (i) an expected value decision with a triangle distribution over the interval; (ii) an expected value decision with a beta distribution; and (iii) an expected utility decision with constant absolute risk aversion with a uniform distribution. These probability distributions are not exhaustive. There are likely other distributions and utility functions for which equivalence with the Hurwicz decision rule can also be established. Since a frequent reason for the use of intervals is that they assume less information than a probability distribution, the results in this article call into question whether decision making based on intervals really assumes less information than subjective expected utility decision making.
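    The constant-absolute-risk-aversion case in (iii) can be illustrated in the same spirit: under exponential (CARA) utility and a uniform distribution on the interval, the certainty equivalent lies strictly inside the interval, and an implied Hurwicz optimism weight can be read off by rescaling. The risk-aversion coefficients and the interval in the sketch below are illustrative choices, not values from the article.

```python
# Illustrative sketch: certainty equivalent of a uniform distribution on [a, b]
# under exponential (CARA) utility u(x) = (1 - exp(-g*x)) / g, and the Hurwicz
# optimism parameter alpha implied by that certainty equivalent.
import numpy as np

def cara_certainty_equivalent(a, b, g):
    """Closed-form CE of Uniform(a, b) under CARA utility with coefficient g > 0."""
    # E[exp(-g X)] for X ~ Uniform(a, b)
    mgf = (np.exp(-g * a) - np.exp(-g * b)) / (g * (b - a))
    return -np.log(mgf) / g

a, b = 10.0, 40.0                      # hypothetical payoff interval
for g in (0.01, 0.05, 0.2):            # assumed risk-aversion coefficients
    ce = cara_certainty_equivalent(a, b, g)
    alpha = (ce - a) / (b - a)         # Hurwicz weight giving the same ranking value
    print(f"g={g:<5} CE={ce:7.3f}  implied alpha={alpha:.3f}")
```

    As the risk-aversion coefficient approaches zero, the certainty equivalent approaches the interval midpoint and the implied weight approaches 0.5; larger coefficients pull it toward the pessimistic end.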

    Uncertainty modelling in power spectrum estimation of environmental processes

    For efficient reliability analysis of buildings and structures, robust load models are required in stochastic dynamics, which can be estimated in particular from environmental processes, such as earthquakes or wind loads. To determine the response behaviour of a dynamic system under such loads, the power spectral density (PSD) function is a widely used tool for identifying the frequency components and corresponding amplitudes of environmental processes. Since the real data records required for this purpose are often subject to aleatory and epistemic uncertainties, and the PSD estimation process itself can induce further uncertainties, a rigorous quantification of these is essential, as otherwise a highly inaccurate load model could be generated, which may yield misleading simulation results. A system behaviour that is actually catastrophic can thus be shifted into an acceptable range, classifying the system as safe even though it is exposed to a high risk of damage or collapse. To address these issues, alternative loading models are proposed using probabilistic and non-deterministic models that are able to efficiently account for these uncertainties and to model the loadings accordingly. Various methods are used in the generation of these load models, which are selected in particular according to the characteristics of the data and the number of available records. If multiple data records are available, reliable statistical information can be extracted from a set of similar PSD functions that differ, for instance, only slightly in shape and peak frequency. Based on these statistics, a PSD function model is derived utilising subjective probabilities to capture the epistemic uncertainties and represent this information effectively. The spectral densities are characterised as random variables instead of employing discrete values, and thus the PSD function itself represents a non-stationary random process comprising a range of possible valid PSD functions for a given data set. If only a limited number of data records is available, it is not possible to derive such reliable statistical information. Therefore, an interval-based approach is proposed that determines only an upper and lower bound and does not rely on any distribution within these bounds. A set of discrete-valued PSD functions is transformed into an interval-valued PSD function by optimising the weights of pre-derived basis functions from a Radial Basis Function Network such that they compose an upper and lower bound that encompasses the data set. In this way, a range of possible values and system responses is identified rather than discrete values, which makes it possible to quantify the epistemic uncertainties. When generating such a load model using real data records, the problem can arise that the individual records exhibit a high spectral variance in the frequency domain and therefore differ too much from each other, although they appear to be similar in the time domain. A load model derived from these data may not cover the entire spectral range and is therefore not representative. The data are therefore grouped according to their similarity using the Bhattacharyya distance and the k-means algorithm, which may generate two or more load models from the entire data set. These can be applied separately to the structure under investigation, leading to more accurate simulation results.
This approach can also be used to estimate the spectral similarity of individual data sets in the frequency domain, which is particularly relevant for the load models mentioned above. If the uncertainties are modelled directly in the time signal, it can be a challenging task to transform them efficiently into the frequency domain. Such a signal may consist only of reliable bounds in which the actual signal lies. A method is presented that can automatically propagate this interval uncertainty through the discrete Fourier transform, obtaining the exact bounds on the Fourier amplitude and an estimate of the PSD function. The method allows such an interval signal to be propagated without making assumptions about the dependence and distribution of the error over the time steps. These novel representations of load models are able to quantify epistemic uncertainties inherent in real data records and induced by the PSD estimation process. The strengths and advantages of these approaches in practice are demonstrated by means of several numerical examples concentrated in the field of stochastic dynamics.
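    To make part of the workflow above more concrete, the sketch below estimates PSD functions for several records with Welch's method, forms a crude interval-valued PSD as the pointwise envelope of the ensemble (a simplification standing in for the Radial Basis Function Network bound optimisation described in the thesis), and computes pairwise Bhattacharyya distances between the normalised spectra as a similarity measure for grouping. The sampling rate, record length and synthetic signals are assumptions made for illustration.

```python
# Sketch: Welch PSD estimates for several records, a pointwise interval envelope
# (a crude stand-in for the RBF-network bound optimisation), and pairwise
# Bhattacharyya distances between normalised spectra. All signals are synthetic.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs, n = 100.0, 4096                      # assumed sampling rate and record length
t = np.arange(n) / fs

# Toy ensemble: narrow-band processes with slightly different peak frequencies.
records = [np.sin(2 * np.pi * (5.0 + 0.2 * k) * t) + 0.5 * rng.standard_normal(n)
           for k in range(6)]

psds = []
for x in records:
    f, pxx = welch(x, fs=fs, nperseg=512)
    psds.append(pxx)
psds = np.array(psds)                    # shape: (records, frequencies)

# Interval-valued PSD: pointwise lower and upper bounds over the ensemble.
psd_lower, psd_upper = psds.min(axis=0), psds.max(axis=0)

# Bhattacharyya distance between PSDs treated as discrete distributions over frequency.
def bhattacharyya_distance(p, q):
    p, q = p / p.sum(), q / q.sum()
    return -np.log(np.sum(np.sqrt(p * q)))

m = len(psds)
dist = np.array([[bhattacharyya_distance(psds[i], psds[j]) for j in range(m)]
                 for i in range(m)])
print("envelope width (mean over frequency):", (psd_upper - psd_lower).mean())
print("pairwise Bhattacharyya distances:\n", np.round(dist, 3))
```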

    Contributions to modeling with set-valued data: benefitting from undecided respondents

    This dissertation develops a methodological framework and approaches to benefit from undecided survey participants, particularly undecided voters in pre-election polls. As choices can be seen as processes that, in stages, exclude alternatives until arriving at one final element, we argue that in pre-election polls undecided participants can most suitably be represented by the set of their viable options. This consideration set sampling, in contrast to the conventional neglect of the undecided, could reduce nonresponse and collect new and valuable information. We embed the resulting set-valued data in the framework of random sets, which allows for two different interpretations, and develop modeling methods for either one. The first interpretation is called ontic and views the set of options as an entity of its own that most accurately represents the position at the time of the poll, thus as a precise representation of something naturally imprecise. With this, new ways of structural analysis emerge, as individuals pondering between particular parties can now be examined. We show how the underlying categorical data structure can be preserved in this formalization process for specific models and how popular methods for categorical data analysis can be broadly transferred. As the set contains the eventual choice, under the second interpretation the set is seen as a coarse version of an underlying truth, which is called the epistemic view. This imprecise information about something actually precise can then be used to improve predictions or election forecasting. We develop several approaches and a factorized-likelihood framework to utilize the set-valued information for forecasting. Among others, we develop methods addressing the complex uncertainty induced by the undecided, weighing the justifiability of assumptions against the conciseness of the results. To evaluate and apply our approaches, we conducted a pre-election poll for the German federal election of 2021 in cooperation with the polling institute Civey, for the first time treating undecided voters in a set-valued manner. This provides us with the unique opportunity to demonstrate the advantages of the new approaches based on a state-of-the-art survey.
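    Under the epistemic reading sketched above, one immediate use of set-valued responses is to bound a party's vote share: respondents decided on the party give a lower bound, and adding every undecided respondent whose consideration set contains the party gives an upper bound. The toy responses and party labels in the sketch below are invented for illustration and are not data from the Civey survey.

```python
# Sketch: lower/upper bounds (belief/plausibility style) on a party's vote share
# from set-valued responses. Decided respondents report a singleton set; undecided
# respondents report their consideration set. Data below are purely illustrative.
responses = [
    {"A"}, {"B"}, {"A", "B"}, {"C"}, {"A", "C"}, {"B"},
    {"A"}, {"B", "C"}, {"A", "B", "C"}, {"C"},
]
parties = sorted(set().union(*responses))
n = len(responses)

for party in parties:
    lower = sum(1 for r in responses if r == {party}) / n   # will certainly vote for party
    upper = sum(1 for r in responses if party in r) / n     # could still vote for party
    print(f"party {party}: share in [{lower:.2f}, {upper:.2f}]")
```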

    Monte Carlo and fuzzy interval propagation of hybrid uncertainties on a risk model for the design of a flood protection dike

    A risk model may contain some uncertainties that are best represented by probability distributions and others that are best represented by possibility distributions. In this paper, a computational framework that jointly propagates probabilistic and possibilistic uncertainties is compared with a pure probabilistic uncertainty propagation. The comparison is carried out with reference to a risk model concerning the design of a flood protection dike.
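    Joint propagation of this kind can be illustrated with the common two-loop scheme: an outer Monte Carlo loop samples the probabilistic variables, and for each sample an inner loop ranges over alpha-cuts of the possibilistic variable, taking the minimum and maximum of the model output over each cut. The model function, distributions and fuzzy number in the sketch below are placeholders, not the dike risk model of the paper.

```python
# Sketch of hybrid uncertainty propagation: Monte Carlo over a probabilistic
# variable combined with alpha-cut interval analysis over a possibilistic
# (triangular fuzzy) variable. The function g() is a placeholder model.
import numpy as np

rng = np.random.default_rng(0)

def g(x, y):
    """Placeholder performance function of a probabilistic x and possibilistic y."""
    return x + 2.0 * y

def alpha_cut_triangular(low, mode, high, alpha):
    """Interval obtained by cutting a triangular possibility distribution at level alpha."""
    return low + alpha * (mode - low), high - alpha * (high - mode)

n_mc, alphas = 1000, np.linspace(0.0, 1.0, 11)
low_curves, high_curves = [], []
for _ in range(n_mc):
    x = rng.normal(10.0, 2.0)                     # probabilistic input (assumed)
    lows, highs = [], []
    for a in alphas:
        y_lo, y_hi = alpha_cut_triangular(1.0, 3.0, 6.0, a)
        # g is monotone in y here, so the extremes sit at the cut endpoints;
        # in general an optimisation over the cut would be required.
        lows.append(g(x, y_lo))
        highs.append(g(x, y_hi))
    low_curves.append(lows)
    high_curves.append(highs)

# Summarise the random fuzzy output, e.g. bounds on the mean response per alpha level.
print("mean lower bound per alpha:", np.round(np.mean(low_curves, axis=0), 2))
print("mean upper bound per alpha:", np.round(np.mean(high_curves, axis=0), 2))
```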

    Genetic algorithms for condition-based maintenance optimization under uncertainty

    This paper proposes and compares different techniques for maintenance optimization based on Genetic Algorithms (GAs) when the parameters of the maintenance model are affected by uncertainty and the fitness values are represented by Cumulative Distribution Functions (CDFs). The main issues addressed to tackle this problem are the development of a method to rank the uncertain fitness values and the definition of a novel Pareto dominance concept. The GA-based methods are applied to a practical case study concerning the setting of a condition-based maintenance policy on the degrading nozzles of a gas turbine operated in an energy production plant.
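    One simple way to compare two fitness values that are themselves distributions, shown in the sketch below, is an empirical first-order stochastic dominance test on samples of each; this is only an illustrative ranking device and not necessarily the ranking method or the Pareto dominance definition introduced in the paper. The cost distributions are invented for the example.

```python
# Sketch: comparing two uncertain (CDF-valued) fitness values by empirical
# first-order stochastic dominance. Illustrative only; not the paper's exact rule.
import numpy as np

def dominates_fosd(samples_a, samples_b, grid_size=200):
    """True if A first-order stochastically dominates B for a minimisation problem,
    i.e. A's empirical CDF lies everywhere at or above B's (A tends to be smaller)."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    grid = np.linspace(lo, hi, grid_size)
    cdf_a = np.searchsorted(np.sort(samples_a), grid, side="right") / len(samples_a)
    cdf_b = np.searchsorted(np.sort(samples_b), grid, side="right") / len(samples_b)
    return bool(np.all(cdf_a >= cdf_b)) and bool(np.any(cdf_a > cdf_b))

rng = np.random.default_rng(3)
cost_policy_1 = rng.normal(100.0, 10.0, size=5000)   # hypothetical maintenance costs
cost_policy_2 = rng.normal(110.0, 10.0, size=5000)
print("policy 1 dominates policy 2:", dominates_fosd(cost_policy_1, cost_policy_2))
print("policy 2 dominates policy 1:", dominates_fosd(cost_policy_2, cost_policy_1))
```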

    Distribution-free stochastic simulation methodology for model updating under hybrid uncertainties

    In the real world, a significant challenge faced in the safe operation and maintenance of infrastructures is the lack of available information or data. This results in a large degree of uncertainty and the requirement for robust and efficient uncertainty quantification (UQ) tools in order to derive the most realistic estimates of the behavior of structures. While the probabilistic approach has long been utilized as an essential tool for the quantitative mathematical representation of uncertainty, a common criticism is that the approach often involves unsubstantiated subjective assumptions because of the scarcity or imprecision of available information. To avoid the inclusion of subjectivity, the concepts of imprecise probabilities have been developed, and the distributional probability-box (p-box) has gained the most attention among various types of imprecise probability models since it can straightforwardly provide a clear separation between aleatory and epistemic uncertainty. This thesis concerns the realistic consideration and numerically efficient calibration and propagation of aleatory and epistemic uncertainties (hybrid uncertainties) based on the distributional p-box. Recent developments, including the Bhattacharyya distance-based approximate Bayesian computation (ABC) and non-intrusive imprecise stochastic simulation (NISS) methods, have strengthened the subjective assumption-free approach for uncertainty calibration and propagation. However, these methods based on the distributional p-box rely on the availability of prior knowledge determining a specific distribution family for the p-box. The target of this thesis is hence to develop a distribution-free approach for the calibration and propagation of hybrid uncertainties, strengthening the subjective assumption-free UQ approach. To achieve the above target, this thesis presents five main developments to improve the Bhattacharyya distance-based ABC and NISS frameworks. The first development improves the scope of application and efficiency of the Bhattacharyya distance-based ABC. A dimension reduction procedure is proposed to evaluate the Bhattacharyya distance when the system under investigation is described by time-domain sequences. Moreover, an efficient Bayesian inference method within the Bayesian updating with structural reliability methods (BUS) framework is developed by combining BUS with the adaptive Kriging-based reliability method AK-MCMC. The second development of the distribution-free stochastic model updating framework is based on the combined application of the staircase density functions and the Bhattacharyya distance. The staircase density functions can approximate a wide range of distributions arbitrarily closely; this development therefore allows the Bhattacharyya distance-based ABC to be performed without limiting hypotheses on the distribution families of the parameters to be updated. The aforementioned two developments are then integrated in the third development to provide a solution to the latest edition (2019) of the NASA UQ challenge problem. The model updating tasks under very challenging conditions, where prior information on the aleatory parameters is extremely limited apart from a common boundary, are successfully addressed based on the above distribution-free stochastic model updating framework.
Moreover, the NISS approach, which simplifies the high-dimensional optimization to a set of one-dimensional searches via a first-order high-dimensional model representation (HDMR) decomposition with respect to each design parameter, is developed to efficiently solve the reliability-based design optimization tasks. This challenge, at the same time, elucidates the limitations of the current developments; hence the fourth development addresses the limitation that the staircase density functions are designed for univariate random variables and cannot account for parameter dependencies. In order to calibrate the joint distribution of correlated parameters, the distribution-free stochastic model updating framework is extended by characterizing the aleatory parameters using Gaussian copula functions with staircase density functions as marginal distributions. This further strengthens the assumption-free approach for uncertainty calibration, in which no prior information on the parameter dependencies is required. Finally, the fifth development of the distribution-free uncertainty propagation framework is based on another application of the staircase density functions to the NISS class of methods, and it is applied to efficiently solve the reliability analysis subproblem of the NASA UQ challenge 2019. The above five developments have successfully strengthened the assumption-free approach for both uncertainty calibration and propagation thanks to the ability of the staircase density functions to approximate arbitrary distributions. The efficiency and effectiveness of these developments are demonstrated on real-world applications, including the NASA UQ challenge 2019.
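    The Bhattacharyya distance at the core of the ABC framework above can be computed, in its simplest binned form, as in the following sketch. The bin count and the synthetic "observed" and "simulated" samples are assumptions made for illustration; the actual framework additionally involves staircase densities, AK-MCMC and the NISS decomposition, which are not reproduced here.

```python
# Sketch: binned Bhattacharyya distance between observed and model-simulated data,
# as could be used as an ABC discrepancy measure. Samples and bins are illustrative.
import numpy as np

def bhattacharyya_distance(x, y, bins=30):
    """Distance between two samples via histograms on a common support."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient in (0, 1]
    return -np.log(max(bc, 1e-12))        # guard against empty overlap

rng = np.random.default_rng(42)
observed = rng.normal(0.0, 1.0, size=2000)
simulated_good = rng.normal(0.1, 1.0, size=2000)    # close to the observation
simulated_poor = rng.normal(2.0, 0.5, size=2000)    # far from the observation
print("distance (good model):", round(bhattacharyya_distance(observed, simulated_good), 4))
print("distance (poor model):", round(bhattacharyya_distance(observed, simulated_poor), 4))
```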

    A resource allocation model for deep uncertainty (RAM-DU) with application to the Deepwater Horizon oil spill

    Deep uncertainty usually refers to problems with epistemic uncertainty in which the analyst or decision maker has very little information about the system, data are severely lacking, and different mathematical models to describe the system may be possible. Since little information is available to forecast the future, selecting probability distributions to represent this uncertainty is very challenging. Traditional methods of decision making with uncertainty may not be appropriate for deep uncertainty problems. This paper introduces a novel approach to allocate resources within complex and very uncertain situations. The resource allocation model for deep uncertainty (RAM-DU) incorporates different types of uncertainty (e.g., parameter, structural, model uncertainty) and can consider every possible model, different probability distributions, and possible futures. Instead of identifying a single optimal alternative as in most resource allocation models, RAM-DU recommends an interval of allocation amounts. The RAM-DU solution generates an interval for one or multiple decision variables so that the decision maker can allocate any amount within that interval and still ensure that the objective function is within a predefined level of optimality for all the different parameters, models, and futures under consideration. RAM-DU is applied to allocating resources to prepare for and respond to a Deepwater Horizon-type oil spill. The application identifies allocation intervals for how much should be spent to prepare for this type of oil spill and how much should be spent to help industries recover from the spill.
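    The idea of recommending an interval of allocation amounts rather than a single optimum can be illustrated with a small grid search: for each scenario (model or future), compute the scenario-optimal objective value, then keep the allocation amounts whose objective stays within a chosen fraction of that optimum in every scenario; the range of surviving amounts is the recommended interval. The scenario objective functions, the 95% optimality level and the budget range below are illustrative assumptions, not the Deepwater Horizon application.

```python
# Sketch: allocation interval that keeps the objective within a chosen fraction of
# the scenario-wise optimum under every scenario considered. Objective functions,
# budget range and the 0.95 level are illustrative, not the paper's oil-spill model.
import numpy as np

def make_scenario(effectiveness, saturation):
    """Benefit of spending x under one scenario (diminishing returns, minus cost)."""
    return lambda x: effectiveness * (1.0 - np.exp(-x / saturation)) - x

scenarios = [make_scenario(300.0, 60.0), make_scenario(350.0, 80.0),
             make_scenario(280.0, 50.0)]
budgets = np.linspace(0.0, 400.0, 4001)          # candidate allocation amounts
level = 0.95                                     # required fraction of each scenario's optimum

acceptable = np.ones_like(budgets, dtype=bool)
for f in scenarios:
    values = f(budgets)
    best = values.max()
    # keep allocations whose regret relative to this scenario's optimum is small
    acceptable &= values >= best - (1.0 - level) * abs(best)

idx = np.flatnonzero(acceptable)
if idx.size:
    print(f"recommended allocation interval: [{budgets[idx[0]]:.1f}, {budgets[idx[-1]]:.1f}]")
else:
    print("no allocation satisfies the optimality level in all scenarios")
```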