Using graphical models and multi-attribute utility theory for probabilistic uncertainty handling in large systems, with application to nuclear emergency management
Although many decision-making problems involve uncertainty, uncertainty handling within large decision support systems (DSSs) is challenging. One domain where uncertainty handling is critical is emergency response management, in particular nuclear emergency response, where decision making takes place in an uncertain, dynamically changing environment. Assimilation and analysis of data can help to reduce these uncertainties, but it is critical to do this in an efficient and defensible way. After briefly introducing the structure of a typical DSS for nuclear emergencies, the paper sets up a theoretical structure that enables a formal Bayesian decision analysis to be performed for environments like this within a DSS architecture. In such probabilistic DSSs many input conditional probability distributions are provided by different sets of experts overseeing different aspects of the emergency. These probabilities are then used by the decision maker (DM) to find her optimal decision. We demonstrate in this paper that unless due care is taken in such a composite framework, coherence and rationality may be compromised in a sense made explicit below. The technology we describe here builds a framework around which Bayesian data updating can be performed in a modular way, ensuring both coherence and efficiency, and provides sufficient unambiguous information to enable the DM to discover her expected utility maximizing policy
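The core computation this abstract describes, finding the decision maker's expected-utility-maximizing policy from expert-supplied conditional probabilities, can be sketched as follows. The decisions, outcomes, probabilities, and utilities below are hypothetical placeholders, not values from the paper:

```python
# Minimal sketch of expected-utility maximization, assuming hypothetical
# inputs: each expert panel supplies P(outcome | decision), and the DM
# picks the decision with the highest expected utility.

def expected_utility(p_outcome_given_decision, utility):
    """Compute sum over outcomes o of P(o | d) * U(o) for one decision d."""
    return sum(p * utility[o] for o, p in p_outcome_given_decision.items())

# Hypothetical inputs: two candidate countermeasures, three dose outcomes.
p = {
    "evacuate": {"low_dose": 0.80, "medium_dose": 0.15, "high_dose": 0.05},
    "shelter":  {"low_dose": 0.60, "medium_dose": 0.30, "high_dose": 0.10},
}
u = {"low_dose": 1.0, "medium_dose": 0.4, "high_dose": 0.0}

# The optimal policy is the decision maximizing expected utility.
best = max(p, key=lambda d: expected_utility(p[d], u))
```

In the modular framework the paper describes, each conditional distribution would come from a different expert group; the sketch above only shows the final aggregation step.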
Decision-Making with Belief Functions: a Review
Approaches to decision-making under uncertainty in the belief function
framework are reviewed. Most methods are shown to blend criteria for decision
under ignorance with the maximum expected utility principle of Bayesian
decision theory. A distinction is made between methods that construct a
complete preference relation among acts, and those that allow incomparability
of some acts due to lack of information. Methods developed in the imprecise
probability framework are applicable in the Dempster-Shafer context and are
also reviewed. Shafer's constructive decision theory, which substitutes the
notion of goal for that of utility, is described and contrasted with other
approaches. The paper ends by pointing out the need to carry out deeper
investigation of fundamental issues related to decision-making with belief
functions and to assess the descriptive, normative and prescriptive values of
the different approaches.
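Most of the criteria the review surveys are built from lower and upper expected utilities induced by a Dempster-Shafer mass function. A minimal sketch of those two quantities, with an illustrative mass assignment and utilities not taken from the paper:

```python
# Lower/upper expected utility under a Dempster-Shafer mass function:
# for each focal set A, the lower EU weights min_{o in A} U(o) by m(A),
# the upper EU weights max_{o in A} U(o). Inputs here are illustrative.

def lower_upper_eu(mass, utility):
    """Return (lower EU, upper EU) for one act given mass over focal sets."""
    low = sum(m * min(utility[o] for o in A) for A, m in mass.items())
    up = sum(m * max(utility[o] for o in A) for A, m in mass.items())
    return low, up

# Focal sets are frozensets of outcomes; the masses sum to 1.
m = {frozenset({"good"}): 0.5,
     frozenset({"good", "bad"}): 0.5}   # half the mass expresses ignorance
u = {"good": 1.0, "bad": 0.0}

low, up = lower_upper_eu(m, u)
```

The gap between `low` and `up` is what permits incomparability of acts under lack of information, as the abstract notes; criteria that force a complete preference relation collapse this interval to a single number.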
Intertemporal Choice of Fuzzy Soft Sets
This paper first merges two noteworthy aspects of choice. On the one hand, soft sets and fuzzy soft sets are popular models that have been widely applied to decision-making problems, such as real estate valuation, medical diagnosis (glaucoma, prostate cancer, etc.), data mining, or international trade. They provide crisp or fuzzy parameterized descriptions of the universe of alternatives. On the other hand, in many decisions, costs and benefits occur at different points in time. This brings about intertemporal choices, which may involve an indefinitely large number of periods. However, the literature does not provide a model, let alone a solution, to the intertemporal problem when the alternatives are described by (fuzzy) parameterizations. In this paper, we propose a novel soft set inspired model that applies to the intertemporal framework, hence filling an important gap in the development of fuzzy soft set theory. An algorithm allows the selection of the optimal option in intertemporal choice problems with an infinite time horizon. We illustrate its application with a numerical example involving alternative portfolios of projects that a public administration may undertake. This allows us to establish a pioneering intertemporal model of choice in the framework of extended fuzzy set theories.
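One common way to make an infinite-horizon comparison tractable, which the paper's algorithm presumably exploits in its own form, is to discount each period's score and collapse an eventually-constant tail into a geometric series. The streams, discount factor, and portfolio names below are hypothetical, not the paper's example:

```python
# Hedged sketch of discounted intertemporal aggregation: each alternative
# has a per-period (fuzzy) membership score that becomes constant from
# some period on, so the infinite tail sums in closed form.

def discounted_score(head, tail_value, delta):
    """sum_{t < len(head)} delta^t * head[t]
       + delta^len(head) * tail_value / (1 - delta)."""
    s = sum((delta ** t) * v for t, v in enumerate(head))
    return s + (delta ** len(head)) * tail_value / (1 - delta)

# Hypothetical project portfolios: scores for t = 0, 1, then constant.
alternatives = {
    "portfolio_A": ([0.9, 0.7], 0.5),
    "portfolio_B": ([0.4, 0.8], 0.6),
}
delta = 0.9  # discount factor, assumed

best = max(alternatives,
           key=lambda a: discounted_score(*alternatives[a], delta))
```

Note how a higher long-run tail can outweigh a better early stream once the horizon is infinite; the choice of `delta` governs that trade-off.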
Interpretable multiclass classification by MDL-based rule lists
Interpretable classifiers have recently witnessed an increase in attention
from the data mining community because they are inherently easier to understand
and explain than their more complex counterparts. Examples of interpretable
classification models include decision trees, rule sets, and rule lists.
Learning such models often involves optimizing hyperparameters, which typically
requires substantial amounts of data and may result in relatively large models.
In this paper, we consider the problem of learning compact yet accurate
probabilistic rule lists for multiclass classification. Specifically, we
propose a novel formalization based on probabilistic rule lists and the minimum
description length (MDL) principle. This results in virtually parameter-free
model selection that naturally allows to trade-off model complexity with
goodness of fit, by which overfitting and the need for hyperparameter tuning
are effectively avoided. Finally, we introduce the Classy algorithm, which
greedily finds rule lists according to the proposed criterion. We empirically
demonstrate that Classy selects small probabilistic rule lists that outperform
state-of-the-art classifiers when it comes to the combination of predictive
performance and interpretability. We show that Classy is insensitive to its
only parameter, i.e., the candidate set, and that compression on the training
set correlates with classification performance, validating our MDL-based
selection criterion.
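The two-part MDL principle the abstract invokes scores a model by the bits needed to describe the rule list plus the bits needed to encode the labels given the rules. A simplified stand-in for that score (not Classy's exact encoding; the per-condition model cost and the plain log-likelihood data code here are placeholder choices):

```python
import math

# Illustrative two-part MDL score for a rule list:
# total length = L(model) + L(data | model). Longer rule lists pay a
# model cost; rules with impure class distributions pay a data cost.

def data_length_bits(counts):
    """Negative log2-likelihood of the labels covered by one rule under
    its empirical class distribution (a simplified, non-refined code)."""
    n = sum(counts)
    return -sum(c * math.log2(c / n) for c in counts if c > 0)

def mdl_score(rules):
    """rules: list of (n_conditions, class_counts) pairs, default rule
    last with 0 conditions. Model cost: a crude 1 bit per condition."""
    model = sum(n_cond for n_cond, _ in rules)
    data = sum(data_length_bits(counts) for _, counts in rules)
    return model + data

# Hypothetical 3-class-count rule list: two rules plus a default rule.
score = mdl_score([(2, [40, 2]), (1, [5, 30]), (0, [10, 13])])
```

A greedy search in this spirit would repeatedly add the candidate rule that most reduces the total score, stopping when no rule improves it, which is how compression acts as virtually parameter-free model selection.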
Rigorously assessing software reliability and safety
This paper summarises the state of the art in the assessment of software reliability and safety ("dependability"), and describes some promising developments. A sound demonstration of very high dependability is still impossible before operation of the software; but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.
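A textbook illustration of why very high dependability claims are hard to demonstrate before operation: Bayesian inference on a per-demand failure probability after failure-free testing. This is a standard example in the reliability literature, not the paper's specific method, and the uniform prior is an assumption:

```python
# Bayesian reliability from failure-free operation: with a uniform
# Beta(1, 1) prior on the per-demand failure probability p, observing
# n failure-free demands gives a Beta(1, n + 1) posterior on p.

def posterior_mean_failure_prob(n_failure_free):
    """Posterior mean of p under Beta(1, n + 1): 1 / (n + 2)."""
    return 1.0 / (n_failure_free + 2)

def prob_p_below(bound, n_failure_free):
    """P(p < bound) under Beta(1, n + 1): 1 - (1 - bound) ** (n + 1)."""
    return 1.0 - (1.0 - bound) ** (n_failure_free + 1)

# Even 10,000 failure-free demands support only a modest claim:
mean = posterior_mean_failure_prob(10_000)   # roughly 1e-4
conf = prob_p_below(1e-4, 10_000)            # only about 0.63, not 0.99
```

The pattern generalizes: to claim p below some bound with high confidence, the number of failure-free demands must be a sizable multiple of 1/bound, which is what makes pre-operational demonstration of very high dependability infeasible.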
Multicriteria decision-making method for sustainable site location of post-disaster temporary housing in urban areas
Many people lose their homes around the world every year because of natural disasters, such as earthquakes, tsunamis, and hurricanes. In the aftermath of a natural disaster, the displaced people (DP) have to move to temporary housing (TH) and do not have the ability to choose the settlement dimensions, distributions, neighborhood, or other characteristics of their TH. Additionally, post-disaster settlement construction causes neighborhood changes, environmental degradation, and large-scale public expenditures. This paper presents a new model to support decision makers in choosing site locations for TH. The model is capable of determining the optimal site location based on the integration of economic, social, and environmental aspects into the whole life cycle of these houses. The integrated value model for sustainable assessment (MIVES), a multicriteria decision-making (MCDM) model, is used to assess the sustainability of the aforementioned aspects, and MIVES includes the value function concept, which permits indicator homogenization by taking into account the satisfaction of the involved stakeholders.
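The homogenization step the abstract describes, mapping each raw indicator onto a common [0, 1] value scale before weighting, can be sketched as below. The linear value functions, indicators, weights, and ranges are placeholders; MIVES typically uses richer (e.g. S-shaped) value curves elicited from stakeholders:

```python
# MCDM aggregation in the MIVES spirit: map each raw indicator onto
# [0, 1] with a value function, then take a weighted sum per site.
# All numbers below are hypothetical, for illustration only.

def linear_value(x, worst, best):
    """Map a raw indicator onto [0, 1] between its worst and best levels
    (works whether higher or lower raw values are better)."""
    v = (x - worst) / (best - worst)
    return max(0.0, min(1.0, v))

def site_score(indicators, weights, ranges):
    """Weighted sum of homogenized indicator values for one site."""
    return sum(weights[k] * linear_value(indicators[k], *ranges[k])
               for k in indicators)

# Hypothetical (worst, best) levels: note lower cost/distance/loss is better.
ranges = {"cost_per_unit": (50_000, 20_000),
          "dist_to_services_km": (10, 1),
          "green_area_loss_pct": (30, 0)}
weights = {"cost_per_unit": 0.40, "dist_to_services_km": 0.35,
           "green_area_loss_pct": 0.25}

site = {"cost_per_unit": 30_000, "dist_to_services_km": 3,
        "green_area_loss_pct": 10}
score = site_score(site, weights, ranges)
```

Because every indicator is first homogenized to [0, 1], economic, social, and environmental criteria measured in different units become directly comparable, which is the point of the value function concept.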