    Rationality and dynamic consistency under risk and uncertainty

    For choice with deterministic consequences, the standard rationality hypothesis is ordinality, i.e., maximization of a weak preference ordering. For choice under risk (resp. uncertainty), preferences are assumed to be represented by the objectively (resp. subjectively) expected value of a von Neumann-Morgenstern utility function. For choice under risk, this implies a key independence axiom; under uncertainty, it implies some version of Savage's sure-thing principle. This chapter investigates the extent to which ordinality, independence, and the sure-thing principle can be derived from more fundamental axioms concerning behaviour in decision trees. Following Cubitt (1996), these principles include dynamic consistency, separability, and reduction of sequential choice, which can be derived in turn from one consequentialist hypothesis applied to continuation subtrees as well as entire decision trees. Examples of behaviour violating these principles are also reviewed, as are possible explanations of why such violations are often observed in experiments.
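    To make the independence axiom concrete, here is a minimal sketch; the lotteries, the mixing lottery C, and the square-root utility function are illustrative assumptions, not taken from the chapter. It checks that expected-utility maximization preserves a ranking under probability mixing, which is exactly what Allais-type choices fail to do:

```python
# Minimal sketch: the independence axiom under expected utility (EU).
# All numbers (lotteries, the utility function u) are illustrative.

def expected_utility(lottery, u):
    """Lottery = list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

def mix(a, b, alpha):
    """Compound lottery alpha*A + (1-alpha)*B, reduced to a simple lottery."""
    return [(alpha * p, x) for p, x in a] + [((1 - alpha) * p, x) for p, x in b]

u = lambda x: x ** 0.5          # a concave vNM utility (risk aversion)
A = [(1.0, 100)]                # 100 for sure
B = [(0.8, 150), (0.2, 0)]      # risky alternative
C = [(1.0, 0)]                  # a "dilution" lottery

# Independence: the ranking of A vs. B must survive mixing both with C.
for alpha in (1.0, 0.25):
    eu_a = expected_utility(mix(A, C, alpha), u)
    eu_b = expected_utility(mix(B, C, alpha), u)
    print(f"alpha={alpha}: EU(mix A)={eu_a:.2f}, EU(mix B)={eu_b:.2f}")
# EU preserves the ranking at every alpha; Allais-type behaviour does not.
```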

    Quantitative information flow under generic leakage functions and adaptive adversaries

    We put forward a model of action-based randomization mechanisms to analyse quantitative information flow (QIF) under generic leakage functions and under possibly adaptive adversaries. This model subsumes many of the QIF models proposed so far. Our main contributions include the following: (1) we identify mild general conditions on the leakage function under which it is possible to derive general and significant results on adaptive QIF; (2) we contrast the efficiency of adaptive and non-adaptive strategies, showing that the latter are as efficient as the former in terms of length, up to an expansion factor bounded by the number of available actions; (3) we show that the maximum information leakage over strategies, given a finite time horizon, can be expressed in terms of a Bellman equation. This can be used to compute an optimal finite strategy recursively, by resorting to standard methods like backward induction.
    Comment: revised and extended version of a conference paper with the same title that appeared in Proc. of FORTE 2014, LNCS.
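    The Bellman formulation can be made concrete with a toy recursion. In the sketch below, the four-element secret space, the two bit-revealing actions, and the choice of expected posterior Bayes vulnerability as the leakage measure are all assumptions for illustration, not the paper's model:

```python
# Hedged sketch of a Bellman-style recursion for maximum leakage over
# adaptive strategies with a finite horizon.

from fractions import Fraction

SECRETS = ["s0", "s1", "s2", "s3"]
# Each action deterministically reveals one bit of the secret's index.
ACTIONS = {"bit0": lambda s: int(s[1]) & 1,
           "bit1": lambda s: (int(s[1]) >> 1) & 1}

def vulnerability(prior):
    return max(prior.values())  # Bayes vulnerability: prob. of best single guess

def max_leakage(prior, horizon):
    """V(prior, t): vulnerability after an optimal length-t adaptive strategy."""
    if horizon == 0:
        return vulnerability(prior)
    best = vulnerability(prior)          # stopping early is always allowed
    for act in ACTIONS.values():
        # Split the prior by observation, recurse on each posterior.
        value = Fraction(0)
        for obs in (0, 1):
            mass = sum(p for s, p in prior.items() if act(s) == obs)
            if mass == 0:
                continue
            posterior = {s: p / mass for s, p in prior.items() if act(s) == obs}
            value += mass * max_leakage(posterior, horizon - 1)
        best = max(best, value)
    return best

uniform = {s: Fraction(1, len(SECRETS)) for s in SECRETS}
print(max_leakage(uniform, 2))  # 1: two bits pin down a 4-element secret
```

    Backward induction appears here as the recursion on the horizon: the value of a length-t strategy is the best action's expected value over the length-(t-1) values of its posteriors.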

    On the usage of the probability integral transform to reduce the complexity of multi-way fuzzy decision trees in Big Data classification problems

    We present a new distributed fuzzy partitioning method to reduce the complexity of multi-way fuzzy decision trees in Big Data classification problems. The proposed algorithm builds a fixed number of fuzzy sets for all variables and adjusts their shape and position to the real distribution of the training data. A two-step process is applied: 1) transformation of the original distribution into a standard uniform distribution by means of the probability integral transform; since the original distribution is generally unknown, the cumulative distribution function is approximated by computing the q-quantiles of the training set; 2) construction of a Ruspini strong fuzzy partition in the transformed attribute space using a fixed number of equally distributed triangular membership functions. Despite the aforementioned transformation, the definition of every fuzzy set in the original space can be recovered by applying the inverse cumulative distribution function (also known as the quantile function). The experimental results reveal that the proposed methodology allows the state-of-the-art multi-way fuzzy decision tree (FMDT) induction algorithm to maintain classification accuracy with up to 6 million fewer leaves.
    Comment: appeared in 2018 IEEE International Congress on Big Data (BigData Congress). arXiv admin note: text overlap with arXiv:1902.0935
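    A minimal sketch of the two-step idea follows. The quantile count q, the number of fuzzy sets, and the exponential sample standing in for an unknown skewed distribution are all assumptions for illustration:

```python
# Step 1: probability integral transform via q-quantiles of a sample.
# Step 2: Ruspini strong partition with triangular sets on [0, 1].

import numpy as np

def empirical_pit(x, sample, q=100):
    """Approximate F(x) with q-quantiles of the training sample."""
    quantiles = np.quantile(sample, np.linspace(0, 1, q + 1))
    return np.interp(x, quantiles, np.linspace(0, 1, q + 1))

def triangular_partition(k):
    """k triangular fuzzy sets with peaks equally spaced on [0, 1].
    Memberships sum to 1 everywhere (Ruspini strong partition)."""
    peaks = np.linspace(0, 1, k)
    def membership(u):
        return np.clip(1 - abs(u - peaks) * (k - 1), 0, 1)
    return peaks, membership

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=10_000)   # skewed "unknown" distribution

peaks, mu = triangular_partition(k=5)
u = empirical_pit(1.7, sample)      # step 1: map x into uniform space
print(mu(u), mu(u).sum())           # step 2: memberships; the sum is 1.0

# The fuzzy set cores in the original space are the quantiles at the peaks:
print(np.quantile(sample, peaks))   # inverse CDF recovers original-space shape
```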

    Decision making with decision event graphs

    We introduce a new modelling representation, the Decision Event Graph (DEG), for asymmetric multistage decision problems. The DEG explicitly encodes conditional independences and has additional significant advantages over other representations of asymmetric decision problems. The colouring of edges makes it possible to identify conditional independences on decision trees, and these coloured trees serve as a basis for the construction of the DEG. We provide an efficient backward-induction algorithm for finding optimal decision rules on DEGs, and work through an example showing the efficacy of these graphs. Simplifications of the topology of a DEG admit analogues to the sufficiency principle and barren-node deletion steps used with influence diagrams.
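    The DEG-specific algorithm is developed in the paper itself; the sketch below shows only the generic backward-induction (rollback) step it builds on, for a small asymmetric tree whose structure, action sets, and utilities are made up for illustration:

```python
# Generic rollback on an asymmetric decision tree: decision nodes pick the
# best action, chance nodes take expectations, leaves carry utilities.

def rollback(node):
    """Return (value, policy) by backward induction."""
    kind = node[0]
    if kind == "leaf":                        # node = ("leaf", utility)
        return node[1], {}
    if kind == "chance":                      # node = ("chance", [(prob, child), ...])
        value, policy = 0.0, {}
        for p, child in node[1]:
            v, pol = rollback(child)
            value += p * v
            policy.update(pol)
        return value, policy
    # decision node: node = ("decision", name, {action: child})
    _, name, children = node
    best_act, best_val, best_pol = None, float("-inf"), {}
    for act, child in children.items():      # asymmetry: actions differ per node
        v, pol = rollback(child)
        if v > best_val:
            best_act, best_val, best_pol = act, v, pol
    best_pol = dict(best_pol)
    best_pol[name] = best_act
    return best_val, best_pol

tree = ("decision", "d1", {
    "safe": ("leaf", 5.0),
    "risky": ("chance", [(0.4, ("leaf", 12.0)),
                         (0.6, ("decision", "d2", {"stop": ("leaf", 2.0),
                                                   "go":   ("leaf", 4.0)}))]),
})
print(rollback(tree))   # (7.2, {'d2': 'go', 'd1': 'risky'})
```

    Note how the second decision node "d2" exists only on the risky branch; this is the kind of asymmetry that the DEG is designed to represent compactly.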

    A Consensus-ADMM Approach for Strategic Generation Investment in Electricity Markets

    This paper addresses a multi-stage generation investment problem for a strategic (price-maker) power producer in electricity markets. This problem is exposed to different sources of uncertainty, including short-term operational (e.g., rivals' offering strategies) and long-term macro (e.g., demand growth) uncertainties. The problem is formulated as a stochastic bilevel optimization problem, which is eventually recast as a large-scale stochastic mixed-integer linear programming (MILP) problem with limited computational tractability. To cope with these computational issues, we propose a consensus version of the alternating direction method of multipliers (ADMM), which decomposes the original problem by both short- and long-term scenarios. Although the convergence of ADMM to the global solution cannot generally be guaranteed for MILP problems, we introduce two bounds on the optimal solution, allowing the solution quality to be evaluated over iterations. Our numerical findings show that there is a trade-off between computational time and solution quality.
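    The consensus decomposition can be sketched as follows. Each scenario gets its own copy of the investment decision, and ADMM drives the copies to agreement. The quadratic per-scenario costs below are toy stand-ins for the paper's MILP subproblems (where, as the abstract notes, convergence is not guaranteed):

```python
# Consensus ADMM sketch: minimize sum_s p_s * f_s(z) by splitting z into
# per-scenario copies x_s with the constraint x_s = z.

import numpy as np

# Toy scenario costs f_s(x) = 0.5 * a_s * (x - b_s)^2 with probabilities p_s.
a = np.array([1.0, 2.0, 4.0])
b = np.array([10.0, 4.0, 7.0])
p = np.array([0.5, 0.3, 0.2])
rho = 1.0                  # ADMM penalty parameter

x = np.zeros(3)            # local copies, one per scenario
z = 0.0                    # consensus (here-and-now) investment decision
u = np.zeros(3)            # scaled dual variables

for it in range(100):
    # x-update: closed form of argmin p_s*f_s(x) + (rho/2)*(x - z + u_s)^2
    x = (p * a * b + rho * (z - u)) / (p * a + rho)
    # z-update: average of local copies plus duals
    z = np.mean(x + u)
    # dual update: penalize remaining consensus violation
    u = u + x - z

print(z, np.max(np.abs(x - z)))  # consensus value, residual (near 0 at convergence)
```

    In the paper's setting the x-update would be a per-scenario MILP solve rather than a closed form, which is exactly why bounds on solution quality are needed in place of a convergence guarantee.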

    On the automated extraction of regression knowledge from databases

    The advent of inexpensive, powerful computing systems, together with the increasing amount of available data, constitutes one of the greatest challenges for next-century information science. Since it is apparent that much future analysis will be done automatically, a good deal of attention has been paid recently to the implementation of ideas and/or the adaptation of systems originally developed in machine learning and other computer science areas. This interest seems to stem from both the suspicion that traditional techniques are not well suited for large-scale automation and the success of new algorithmic concepts in difficult optimization problems. In this paper, I discuss a number of issues concerning the automated extraction of regression knowledge from databases. By regression knowledge is meant quantitative knowledge about the relationship between a vector of predictors or independent variables (x) and a scalar response or dependent variable (y). A number of difficulties found in some well-known tools are pointed out, and a flexible framework avoiding many such difficulties is described and advocated. Basic features of a new tool pursuing this direction are reviewed.