93 research outputs found

    Evidence functions: a compositional approach to information

    The discrete case of Bayes' formula is considered the paradigm of information acquisition. Prior and posterior probability functions, as well as likelihood functions, called evidence functions, are compositions following the Aitchison geometry of the simplex, and thus have vector character. Bayes' formula becomes a vector addition. The Aitchison norm of an evidence function is introduced as a scalar measurement of information. A fictitious fire scenario serves as illustration. Two different inspections of affected houses are considered. Two questions are addressed: (a) what information is provided by the outcomes of the inspections, and (b) which inspection is the most informative.
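The Bayes-update-as-vector-addition idea from this abstract can be sketched numerically. This is a minimal illustration of the standard Aitchison operations, not code from the paper; the prior and likelihood values are invented for the example.

```python
import math

def closure(x):
    """Rescale a vector of positive parts so it sums to 1 (a composition)."""
    s = sum(x)
    return [xi / s for xi in x]

def perturb(p, q):
    """Aitchison perturbation: componentwise product followed by closure.
    With p = prior and q = likelihood (evidence function), this is exactly
    the discrete Bayes' formula, i.e. 'vector addition' in the simplex."""
    return closure([pi * qi for pi, qi in zip(p, q)])

def aitchison_norm(x):
    """Aitchison norm of a composition: Euclidean norm of its clr coordinates
    (centred log-ratios). Used here as a scalar amount of information."""
    logs = [math.log(xi) for xi in x]
    m = sum(logs) / len(logs)
    return math.sqrt(sum((li - m) ** 2 for li in logs))

prior = closure([1.0, 1.0, 1.0])        # uniform prior over 3 hypotheses
likelihood = closure([0.7, 0.2, 0.1])   # illustrative evidence function
posterior = perturb(prior, likelihood)  # Bayes' formula as perturbation
info = aitchison_norm(likelihood)       # scalar information of the evidence
```

Note that a uniform composition has Aitchison norm zero, which matches the interpretation of the norm as an information measure: a flat likelihood carries no information.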

    Evidence information in Bayesian updating

    Bayes' theorem (discrete case) is taken as a paradigm of information acquisition. As mentioned by Aitchison, Bayes' formula can be identified with the perturbation of a prior probability vector by a discrete likelihood function, both vectors being compositional. Considering prior, posterior and likelihood as elements of the simplex, a natural choice of distance between them is the Aitchison distance. Other geometrical features can also be considered using the Aitchison geometry. For instance, orthogonality in the simplex allows one to think of orthogonal information, and the perturbation-difference to think of opposite information. The Aitchison norm provides a size for compositional vectors, and is thus a natural scalar measure of the information conveyed by the likelihood or captured by a prior or a posterior. It is called evidence information, or e-information for short. To support such an e-information theory, some principles of e-information are discussed. They essentially coincide with those of compositional data analysis. A comparison of these principles of e-information with the axiomatic Shannon information theory is also performed. Shannon information and developments thereof do not satisfy scale invariance and also violate subcompositional coherence. In general, Shannon information theory follows the philosophy of amalgamation when relating the information given by an evidence vector and some sub-vector, while the dimension reduction for the proposed e-information corresponds to orthogonal projections in the simplex. The result of this preliminary study is a set of properties of e-information that may constitute the basis of an axiomatic theory. A synthetic example is used to motivate the ideas and the subsequent discussion.
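The two geometric notions this abstract mentions, opposite information (perturbation-difference) and orthogonal information (Aitchison inner product), can be written down directly. A minimal sketch, with illustrative vectors that are not taken from the paper:

```python
import math

def clr(x):
    """Centred log-ratio coordinates of a composition."""
    logs = [math.log(xi) for xi in x]
    m = sum(logs) / len(logs)
    return [li - m for li in logs]

def aitchison_inner(x, y):
    """Aitchison inner product: Euclidean inner product of clr coordinates.
    A zero value means the two evidence vectors are orthogonal."""
    return sum(a * b for a, b in zip(clr(x), clr(y)))

def perturb_diff(p, q):
    """Perturbation-difference p (-) q: componentwise ratio, then closure.
    Subtracting an evidence vector from itself yields the neutral (uniform)
    composition, i.e. the evidence cancels out ('opposite information')."""
    r = [pi / qi for pi, qi in zip(p, q)]
    s = sum(r)
    return [ri / s for ri in r]
```

Since the uniform composition has all clr coordinates equal to zero, it is orthogonal to every other composition, consistent with its role as the zero vector of the simplex.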

    Wave-height hazard analysis in Eastern Coast of Spain : Bayesian approach using generalized Pareto distribution

    Standard practice in wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean Sea and experience with these phenomena. The posterior distribution of the parameters yields posterior distributions of derived parameters such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.
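The Poisson/GPD model described here makes return periods easy to compute once parameters are fixed. The sketch below uses illustrative parameter values, not estimates from the paper, and fixes the parameters rather than integrating over their Bayesian posterior as the study does:

```python
import math

def gpd_exceed_prob(h, u, sigma, xi):
    """P(wave height > h | height > u) under a GPD tail with threshold u,
    scale sigma and shape xi."""
    if h <= u:
        return 1.0
    z = 1.0 + xi * (h - u) / sigma
    if z <= 0.0:
        return 0.0  # beyond the finite upper bound that arises when xi < 0
    return z ** (-1.0 / xi) if xi != 0 else math.exp(-(h - u) / sigma)

def return_period(h, rate, u, sigma, xi):
    """Mean time (in years) between events exceeding height h, when
    exceedances of the threshold u arrive as a Poisson process with
    `rate` events per year."""
    p = gpd_exceed_prob(h, u, sigma, xi)
    return math.inf if p == 0.0 else 1.0 / (rate * p)

# Illustrative values: 4 storms/year above a 3 m threshold,
# sigma = 1.2 m, xi = 0.1 (heavy upper tail).
T = return_period(7.0, rate=4.0, u=3.0, sigma=1.2, xi=0.1)
```

In the full Bayesian analysis, `rate`, `sigma` and `xi` carry posterior uncertainty, so the return period itself has a posterior distribution rather than a single value.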

    The normal distribution in some constrained sample spaces

    Phenomena with a constrained sample space appear frequently in practice. This is the case, for example, with strictly positive data, or with compositional data, such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models that better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his 1879 introduction of the lognormal distribution are recovered and updated.
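For the positive real line, the idea amounts to measuring differences between positive quantities by ratios, i.e. working in log coordinates, where the distribution is an ordinary normal. A hypothetical illustration, not the paper's own code:

```python
import math

def density_positive_normal(x, mu, sigma):
    """Density of the 'normal on the positive real line' with respect to
    the multiplicative reference measure dx/x: simply a normal density
    evaluated at log(x). This recovers McAlister's lognormal, up to the
    change of reference measure."""
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```

With respect to dx/x the density is symmetric in the log scale: points at exp(mu + d) and exp(mu - d) have equal density, and the mode sits at exp(mu).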

    Principal balances

    Principal balances are defined as a sequence of orthonormal balances which successively maximize the explained variance in a data set. Apparently, computing principal balances requires an exhaustive search over all possible sets of orthogonal balances. This is unaffordable for even a small number of parts. Three suboptimal, but feasible, alternatives are explored. The approach is illustrated using a data set of the geochemical composition of glacial sediments.
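A single balance, one element of such a sequence, follows the standard ilr balance formula. A minimal sketch in which the grouping of parts is an illustrative assumption, not one of the paper's selection strategies:

```python
import math

def balance(comp, num_idx, den_idx):
    """Orthonormal balance between two disjoint groups of parts of a
    composition `comp`:
        b = sqrt(r*s / (r+s)) * ln( gm(numerator) / gm(denominator) )
    where r, s are the group sizes and gm is the geometric mean."""
    r, s = len(num_idx), len(den_idx)
    gm_num = math.exp(sum(math.log(comp[i]) for i in num_idx) / r)
    gm_den = math.exp(sum(math.log(comp[i]) for i in den_idx) / s)
    return math.sqrt(r * s / (r + s)) * math.log(gm_num / gm_den)

# Illustrative 4-part composition: balance of parts {0,1} against {2,3}.
b = balance([0.4, 0.3, 0.2, 0.1], [0, 1], [2, 3])
```

The exhaustive search the abstract calls unaffordable would score every sign-partition of the parts by the variance of such balances, which is why the paper turns to suboptimal but feasible alternatives.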
