
    First steps towards an imprecise Poisson process

    The Poisson process is the most elementary continuous-time stochastic process that models a stream of repeating events. It is uniquely characterised by a single parameter called the rate. Instead of a single value for this rate, we here consider a rate interval and let it characterise two nested sets of stochastic processes. We call these two sets of stochastic processes imprecise Poisson processes, explain why this is justified, and study the corresponding lower and upper (conditional) expectations. Besides a general theoretical framework, we also provide practical methods to compute lower and upper (conditional) expectations of functions that depend on the number of events at a single point in time.
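
    To make the interval-valued bounds concrete, here is a minimal numerical sketch, not the computational method from the paper: it computes the range of the ordinary Poisson expectation of a function of the event count at a single time point as a constant rate sweeps over the rate interval, which is only a crude inner approximation of the lower and upper expectations studied in the paper. The function f, the interval endpoints and the time horizon are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

def poisson_expectation(f, rate, t, n_max=200):
    """Expectation of f(N_t) under an ordinary Poisson process with the given rate."""
    n = np.arange(n_max + 1)
    return np.sum(f(n) * poisson.pmf(n, rate * t))

def interval_bounds(f, rate_low, rate_high, t, grid=101):
    """Crude bounds on E[f(N_t)] obtained by sweeping constant rates
    over the interval [rate_low, rate_high]."""
    rates = np.linspace(rate_low, rate_high, grid)
    values = [poisson_expectation(f, r, t) for r in rates]
    return min(values), max(values)

# Example: probability of seeing at most 3 events by time t = 2.
f = lambda n: (n <= 3).astype(float)
lower, upper = interval_bounds(f, rate_low=0.5, rate_high=1.5, t=2.0)
print(f"lower = {lower:.4f}, upper = {upper:.4f}")
```

    For a monotone function such as this indicator, the sweep is not even needed: by stochastic ordering of Poisson distributions, the extreme values over constant rates occur at the endpoints of the rate interval.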

    Attitude toward imprecise information

    This paper presents an axiomatic model of decision making under uncertainty which incorporates objective but imprecise information. Information is assumed to take the form of a probability-possibility set, that is, a set P of probability measures on the state space. The decision maker is told that the true probability law lies in P and is assumed to rank pairs of the form (P, f), where f is an act mapping states into outcomes. The key representation result delivers maxmin expected utility where the min operator ranges over a set of priors, just as in the maxmin expected utility (MEU) representation result of Gilboa and Schmeidler (1989). However, unlike the MEU representation, the representation here also delivers a mapping φ which links the probability-possibility set, describing the available information, to the set of revealed priors. The mapping φ is shown to represent the decision maker's attitude to imprecise information: under our axioms, the set of revealed priors is a selection from the probability-possibility set. This allows both expected utility, when the selected set is a singleton, and extreme pessimism, when the selected set is the same as the probability-possibility set, i.e., when φ is the identity mapping. We define a notion of comparative imprecision aversion and show it is characterized by inclusion of the sets of revealed probability distributions, irrespective of the utility functions that capture risk attitude. We also identify an explicit attitude toward imprecision that underlies usual hedging axioms. Finally, we characterize, under extra axioms, a more specific functional form, in which the set of selected probability distributions is obtained by (i) solving for the "mean value" of the probability-possibility set, and (ii) shrinking the probability-possibility set toward the mean value to a degree determined by preferences.
    Keywords: imprecise information; imprecision aversion; multiple priors; Steiner point
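
    As a rough illustration of the contraction idea in the last sentence (and not the paper's axiomatisation), the sketch below shrinks a finite set of priors toward their barycentre, used here as a simple stand-in for the mean value / Steiner point, and then evaluates an act by maxmin expected utility over the shrunk set. The states, utilities, priors and the contraction degree eps are all made up.

```python
import numpy as np

def contracted_set(priors, eps):
    """Shrink a finite set of priors toward their barycentre; eps = 0 keeps only
    the mean prior, eps = 1 keeps the whole set (illustrative stand-in for the
    Steiner-point construction described in the abstract)."""
    centre = np.mean(priors, axis=0)
    return [(1 - eps) * centre + eps * p for p in priors]

def maxmin_eu(utilities, priors):
    """Maxmin expected utility of an act over a set of priors."""
    return min(float(np.dot(p, utilities)) for p in priors)

# Three states, an act with state-contingent utilities, and two extreme priors.
u = np.array([10.0, 4.0, 1.0])
priors = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.3, 0.5])]

for eps in (0.0, 0.5, 1.0):
    print(eps, maxmin_eu(u, contracted_set(priors, eps)))
```

    With eps = 0 this collapses to expected utility at the mean prior; with eps = 1 it is full maxmin over the original set, matching the two extremes mentioned in the abstract.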

    Bounding inferences for large-scale continuous-time Markov chains: a new approach based on lumping and imprecise Markov chains

    If the state space of a homogeneous continuous-time Markov chain is too large, making inferences becomes computationally infeasible. Fortunately, the state space of such a chain is usually too detailed for the inferences we are interested in, in the sense that a less detailed—smaller—state space suffices to unambiguously formalise the inference. However, in general, this so-called lumped state space inhibits computing exact inferences because the corresponding dynamics are unknown and/or intractable to obtain. We address this issue by considering an imprecise continuous-time Markov chain. In this way, we are able to provide guaranteed lower and upper bounds for the inferences of interest, without suffering from the curse of dimensionality.
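
    The sketch below illustrates, generically and without the paper's lumping construction, how interval-valued transition rates on a small (lumped) state space yield guaranteed lower bounds: a lower transition rate operator picks the worst-case rate for each transition and is propagated backwards in time with small Euler steps, a standard tool in the imprecise continuous-time Markov chain literature. The rate intervals, the time horizon and the step count are invented.

```python
import numpy as np

def lower_rate_operator(g, low, high):
    """Apply the lower transition rate operator to a function g on the (lumped)
    state space; low[x, y] and high[x, y] bound the rate of jumping from x to y."""
    n = len(g)
    out = np.zeros(n)
    for x in range(n):
        total = 0.0
        for y in range(n):
            if y == x:
                continue
            diff = g[y] - g[x]
            # The operator is linear in each rate, so the minimum sits at a bound.
            rate = low[x, y] if diff >= 0 else high[x, y]
            total += rate * diff
        out[x] = total
    return out

def lower_expectation(f, low, high, t, steps=2000):
    """Lower expectation of f(X_t), propagated backwards with small Euler steps."""
    g = np.array(f, dtype=float)
    dt = t / steps
    for _ in range(steps):
        g = g + dt * lower_rate_operator(g, low, high)
    return g  # g[x] is the lower expectation of f(X_t) given the initial state x

# Two lumped states with interval-valued rates between them.
low = np.array([[0.0, 0.5], [0.2, 0.0]])
high = np.array([[0.0, 1.0], [0.6, 0.0]])
print(lower_expectation([1.0, 0.0], low, high, t=2.0))
```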

    An introduction to DSmT

    The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of prime importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain and conflicting sources of information. We focus our presentation on the foundations of DSmT and on its most important rules of combination, rather than on browsing specific applications of DSmT available in the literature. Several simple examples are given throughout this presentation to show the efficiency and the generality of this new approach.
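
    For readers unfamiliar with how DSmT handles conflict differently from Dempster's rule, the following sketch shows the classic DSm rule of combination on the free model for a two-element frame: conflicting mass is assigned to the intersection rather than renormalised away. The labels t1, t2 and the numeric masses are made up for illustration.

```python
# Elements of the hyper-power set for a two-element frame {t1, t2}
# (free DSm model): 't1', 't2', 't1&t2' (intersection), 't1|t2' (union).
INTERSECT = {
    ('t1', 't1'): 't1', ('t2', 't2'): 't2',
    ('t1', 't2'): 't1&t2', ('t2', 't1'): 't1&t2',
    ('t1', 't1|t2'): 't1', ('t1|t2', 't1'): 't1',
    ('t2', 't1|t2'): 't2', ('t1|t2', 't2'): 't2',
    ('t1|t2', 't1|t2'): 't1|t2',
    # anything intersected with t1&t2 stays t1&t2
    ('t1&t2', 't1'): 't1&t2', ('t1', 't1&t2'): 't1&t2',
    ('t1&t2', 't2'): 't1&t2', ('t2', 't1&t2'): 't1&t2',
    ('t1&t2', 't1|t2'): 't1&t2', ('t1|t2', 't1&t2'): 't1&t2',
    ('t1&t2', 't1&t2'): 't1&t2',
}

def dsm_classic(m1, m2):
    """Classic DSm rule: conflicting mass is kept on intersections, no renormalisation."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = INTERSECT[(a, b)]
            out[c] = out.get(c, 0.0) + wa * wb
    return out

m1 = {'t1': 0.6, 't2': 0.3, 't1|t2': 0.1}
m2 = {'t1': 0.2, 't2': 0.7, 't1|t2': 0.1}
print(dsm_classic(m1, m2))  # masses still sum to 1; conflict sits on 't1&t2'
```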

    The Combination of Paradoxical, Uncertain, and Imprecise Sources of Information based on DSmT and Neutro-Fuzzy Inference

    The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of prime importance for the development of reliable modern information systems involving artificial reasoning. In this chapter, we present a survey of our recent theory of plausible and paradoxical reasoning, known in the literature as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation here on the foundations of DSmT and on its two important new rules of combination, rather than on browsing specific applications of DSmT available in the literature. Several simple examples are given throughout the presentation to show the efficiency and the generality of this new approach. The last part of this chapter concerns the presentation of the neutrosophic logic, the neutro-fuzzy inference and its connection with DSmT. Fuzzy logic and neutrosophic logic are useful tools in decision making after fusing the information using the DSm hybrid rule of combination of masses.
    Comment: 20 page

    Accept & Reject Statement-Based Uncertainty Models

    We develop a framework for modelling and reasoning with uncertainty based on accept and reject statements about gambles. It generalises the frameworks found in the literature based on statements of acceptability, desirability, or favourability, and clarifies their relative position. Next to the statement-based formulation, we also provide a translation in terms of preference relations, discuss a number of simplified variants as a bridge to existing frameworks, and show the relationship with prevision-based uncertainty models. We furthermore provide an application to modelling symmetry judgements.
    Comment: 35 pages, 17 figure
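
    To give a flavour of the kind of consistency requirement such statement-based models impose, here is a small sketch of one standard check from the desirability literature (not the accept/reject framework of the paper itself): a finite set of accepted gambles incurs sure loss exactly when some non-negative combination of them is uniformly negative, which can be tested with a linear programme. The gambles in the example are made up.

```python
import numpy as np
from scipy.optimize import linprog

def incurs_sure_loss(gambles):
    """Check whether a finite set of accepted gambles (rows; columns are outcomes)
    incurs sure loss: is there a non-negative combination that is everywhere <= -1?"""
    gambles = np.asarray(gambles, dtype=float)
    k, n = gambles.shape
    res = linprog(
        c=np.zeros(k),             # pure feasibility problem
        A_ub=gambles.T,            # sum_i lambda_i * g_i(omega) <= -1 for every omega
        b_ub=-np.ones(n),
        bounds=[(0, None)] * k,    # lambda_i >= 0
    )
    return res.success

# Two gambles on three outcomes; accepting both happens to be safe here.
print(incurs_sure_loss([[1.0, -1.0, 0.5],
                        [-0.5, 1.0, -0.25]]))  # False: no sure loss
```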

    A robust Bayesian analysis of the impact of policy decisions on crop rotations.

    We analyse the impact of a policy decision on crop rotations, using the imprecise land use model that was developed by the authors in earlier work. A specific challenge in crop rotation models is that farmers' crop choices are driven by both policy changes and external non-stationary factors, such as rainfall, temperature and agricultural input and output prices. Such dynamics can be modelled by a non-stationary stochastic process, where crop transition probabilities are multinomial logistic functions of such external factors. We use a robust Bayesian approach to estimate the parameters of our model, and validate it by comparing the model response with a non-parametric estimate, as well as by cross-validation. Finally, we use the resulting predictions to solve a hypothetical yet realistic policy problem.
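
    To illustrate the multinomial logistic form mentioned above (the robust Bayesian estimation itself is not shown), the sketch below computes transition probabilities to each possible next crop from a covariate vector; the covariates, weights and number of crops are hypothetical.

```python
import numpy as np

def transition_probabilities(x, weights):
    """Multinomial-logistic transition probabilities to each next crop, given a
    covariate vector x (e.g. rainfall, prices, a policy indicator) and one weight
    vector per destination crop."""
    scores = weights @ x                 # one linear score per destination crop
    scores -= scores.max()               # subtract the max for numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()

# Hypothetical numbers: 3 destination crops, covariates = [intercept, rainfall, policy].
weights = np.array([[0.2, 0.5, -0.3],
                    [0.1, -0.2, 0.8],
                    [0.0, 0.0, 0.0]])    # reference category
x = np.array([1.0, 0.6, 1.0])
print(transition_probabilities(x, weights))
```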