    Symmetry of evidence without evidence of symmetry

    The de Finetti Theorem is a cornerstone of the Bayesian approach. Bernardo (1996) writes that its "message is very clear: if a sequence of observations is judged to be exchangeable, then any subset of them must be regarded as a random sample from some model, and there exists a prior distribution on the parameter of such model, hence requiring a Bayesian approach." We argue that while exchangeability, interpreted as symmetry of evidence, is a weak assumption, when combined with subjective expected utility theory it also implies complete confidence that experiments are identical. When evidence is sparse, and there is little evidence of symmetry, this implication of de Finetti's hypotheses is not intuitive. This motivates our adoption of multiple-priors utility as the benchmark model of preference. We provide two alternative generalizations of the de Finetti Theorem for this framework. A model of updating is also provided.
    Keywords: ambiguity, exchangeability, symmetry, updating, learning, multiple-priors
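
    For context, the classical de Finetti representation invoked above can be stated, in the binary case, as follows: an infinite sequence of 0-1 random variables is exchangeable if and only if there is a probability measure \mu on [0,1] such that, for every n and every (x_1, \dots, x_n) in \{0,1\}^n,

        P(X_1 = x_1, \dots, X_n = x_n) = \int_0^1 \theta^{\sum_i x_i} (1 - \theta)^{\,n - \sum_i x_i} \, d\mu(\theta).

    Roughly speaking, the generalizations described in the abstract relax the single mixing measure \mu to a set of such measures, reflecting less-than-complete confidence that the experiments are identical.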

    The Target-Based Utility Model. The role of Copulas and of Non-Additive Measures

    My studies and my Ph.D. thesis deal with topics that have recently emerged in the field of decisions under risk and uncertainty. In particular, I deal with the "target-based approach" to utility theory. A rich literature has been devoted in the last decade to this approach to economic decisions: originally, interest focused on the "single-attribute" case and, more recently, extensions to the "multi-attribute" case have been studied. This literature is still growing, with a main focus on applied aspects. I focus instead on theoretical aspects related to the multi-attribute case. Various mathematical concepts, such as non-additive measures, aggregation functions, multivariate probability distributions, and notions of stochastic dependence, emerge in the formulation and analysis of target-based models. Notions from the fields of non-additive measures and aggregation functions are quite common in the modern economic literature: they have been used to go beyond the classical principle of maximization of expected utility in decision theory, and they are also used in game theory and multi-criteria decision aid. In this work, by contrast, I show how non-additive measures and aggregation functions emerge naturally within the target-based approach to classical utility theory when the multi-attribute case is considered. Furthermore, they combine with the analysis of multivariate probability distributions and with concepts of stochastic dependence. The concept of copula is also a very important tool for this work, for two main purposes. The first is the analysis of target-based utilities; the second is the comparison between the classical stochastic order and the concept of "stochastic precedence". The latter topic finds applications in statistics as well as in the study of Markov models related to waiting times for occurrences of words in random sampling of letters from an alphabet. I give a generalization of the concept of stochastic precedence and discuss its properties in terms of the connecting copulas of the variables. I also trace connections to reliability theory, which studies the lifetime of a system through the lifetimes of its components. The target-based model finds an application in representing the behavior of the whole system by means of the interaction of its components.
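
    As background for the abstract above: in the single-attribute target-based construction, a bounded, non-decreasing utility is identified with the distribution function of a random target T, assumed independent of the prospect X, so that, in standard notation,

        U(x) = P(T \le x), \qquad \mathbb{E}[U(X)] = P(T \le X).

    Stochastic precedence, also mentioned above, compares two random variables through the event \{X \le Y\} rather than through their marginal laws: X stochastically precedes Y when P(X \le Y) \ge 1/2, a condition whose behavior depends on the connecting copula of (X, Y).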

    Covariance Adjustment in Randomized Experiments and Observational Studies

    By slightly reframing the concept of covariance adjustment in randomized experiments, a method of exact permutation inference is derived that is entirely free of distributional assumptions and uses the random assignment of treatments as the "reasoned basis for inference." This method of exact permutation inference may be used with many forms of covariance adjustment, including robust regression and locally weighted smoothers. The method is then generalized to observational studies where treatments were not randomly assigned, so that sensitivity to hidden biases must be examined. Adjustments using an instrumental variable are also discussed. The methods are illustrated using data from two observational studies.
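
    The adjust-then-permute idea described above can be illustrated with a minimal sketch (an assumed illustration, not the paper's exact procedure): fit a robust regression of the outcome on the covariates while ignoring treatment, then compare the adjusted residuals across treatment groups using the permutation distribution induced by the random assignment. The function name and the choice of HuberRegressor below are illustrative.

        import numpy as np
        from sklearn.linear_model import HuberRegressor

        def adjusted_permutation_test(y, X, z, n_perm=10000, seed=0):
            """Covariance-adjusted permutation test (illustrative sketch).

            y: outcomes; X: covariate matrix; z: 0/1 treatment indicator.
            """
            rng = np.random.default_rng(seed)
            y, z = np.asarray(y, dtype=float), np.asarray(z)
            # Robust covariance adjustment that ignores the treatment indicator.
            residuals = y - HuberRegressor().fit(X, y).predict(X)
            # Observed difference in mean adjusted outcomes, treated minus control.
            observed = residuals[z == 1].mean() - residuals[z == 0].mean()
            perm_stats = np.empty(n_perm)
            for b in range(n_perm):
                zb = rng.permutation(z)  # re-randomize labels as in the actual assignment
                perm_stats[b] = residuals[zb == 1].mean() - residuals[zb == 0].mean()
            # Two-sided p-value from the permutation distribution.
            return np.mean(np.abs(perm_stats) >= np.abs(observed))

    Because the adjustment never uses the treatment indicator, the permutation distribution of the statistic is justified by the randomization itself rather than by a distributional model for the outcomes.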

    Conditional Quantile Processes based on Series or Many Regressors

    Quantile regression (QR) is a principal regression method for analyzing the impact of covariates on outcomes. The impact is described by the conditional quantile function and its functionals. In this paper we develop the nonparametric QR-series framework, covering many regressors as a special case, for performing inference on the entire conditional quantile function and its linear functionals. In this framework, we approximate the entire conditional quantile function by a linear combination of series terms with quantile-specific coefficients and estimate the function-valued coefficients from the data. We develop large sample theory for the QR-series coefficient process, namely we obtain uniform strong approximations to the QR-series coefficient process by conditionally pivotal and Gaussian processes. Based on these strong approximations, or couplings, we develop four resampling methods (pivotal, gradient bootstrap, Gaussian, and weighted bootstrap) that can be used for inference on the entire QR-series coefficient function. We apply these results to obtain estimation and inference methods for linear functionals of the conditional quantile function, such as the conditional quantile function itself, its partial derivatives, average partial derivatives, and conditional average partial derivatives. Specifically, we obtain uniform rates of convergence and show how to use the four resampling methods mentioned above for inference on the functionals. All of the above results are for function-valued parameters, holding uniformly in both the quantile index and the covariate value, and covering the pointwise case as a by-product. We demonstrate the practical utility of these results with an example, where we estimate the price elasticity function and test the Slutsky condition of the individual demand for gasoline, as indexed by the individual unobserved propensity for gasoline consumption.
    Comment: 131 pages, 2 tables, 4 figures
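
    In the notation commonly used for this framework (the symbols below are generic rather than the paper's), the object of interest is the conditional quantile function, approximated over a range of quantile indices u by series terms Z(x) with quantile-specific coefficients:

        Q_{Y|X}(u \mid x) \approx Z(x)'\beta(u), \qquad \hat{\beta}(u) \in \arg\min_{b} \sum_{i=1}^{n} \rho_u\big(Y_i - Z(X_i)'b\big), \qquad \rho_u(t) = \big(u - 1\{t < 0\}\big)\, t.

    The resampling schemes listed in the abstract are then used to approximate the law of the coefficient process u \mapsto \hat{\beta}(u), and hence of linear functionals of the estimated conditional quantile function such as derivatives and averages.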

    Semiparametric and Nonparametric Methods in Econometrics

    The main objective of this workshop was to bring together mathematical statisticians and econometricians who work in the field of nonparametric and semiparametric statistical methods. Nonparametric and semiparametric methods are active fields of research in econometric theory and are becoming increasingly important in applied econometrics, because the flexibility of non- and semiparametric modelling provides important new ways to investigate problems in substantive economics. Moreover, the development of non- and semiparametric methods suited to the needs of economics presents a variety of mathematical challenges. Topics addressed in the workshop included nonparametric methods in finance, identification and estimation of nonseparable models, nonparametric estimation under the constraints of economic theory, statistical inverse problems, long-memory time series, and nonparametric cointegration.

    Generalization Bounds: Perspectives from Information Theory and PAC-Bayes

    A fundamental question in theoretical machine learning is generalization. Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to address the generalization capabilities of machine learning algorithms, and design new ones. Recently, it has garnered increased interest due to its potential applicability for a variety of learning algorithms, including deep neural networks. In parallel, an information-theoretic view of generalization has developed, wherein the relation between generalization and various information measures has been established. This framework is intimately connected to the PAC-Bayesian approach, and a number of results have been independently discovered in both strands. In this monograph, we highlight this strong connection and present a unified treatment of generalization. We present techniques and results that the two perspectives have in common, and discuss the approaches and interpretations that differ. In particular, we demonstrate how many proofs in the area share a modular structure, through which the underlying ideas can be intuited. We pay special attention to the conditional mutual information (CMI) framework; analytical studies of the information complexity of learning algorithms; and the application of the proposed methods to deep learning. This monograph is intended to provide a comprehensive introduction to information-theoretic generalization bounds and their connection to PAC-Bayes, serving as a foundation from which the most recent developments are accessible. It is aimed broadly towards researchers with an interest in generalization and theoretical machine learning.
    Comment: 222 pages
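
    As a concrete point of contact between the two perspectives, consider two classical bounds on the generalization gap of a hypothesis W learned from an i.i.d. sample S of size n (standard forms are quoted here; constants and conditions vary across statements, and these need not match the monograph's). If the loss is \sigma-subgaussian, the information-theoretic bound of Xu and Raginsky reads

        \big| \mathbb{E}[L(W) - \hat{L}_S(W)] \big| \le \sqrt{\frac{2\sigma^2 I(W;S)}{n}},

    while, for a loss bounded in [0,1], a McAllester-style PAC-Bayesian bound states that with probability at least 1 - \delta over S, simultaneously for all posteriors Q,

        L(Q) \le \hat{L}_S(Q) + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln(2\sqrt{n}/\delta)}{2n}}.

    In both cases, a measure of dependence between the learned object and the training data (mutual information, or a KL divergence to a data-free prior P) controls the gap between empirical and population risk.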

    The Econometrics of Unobservables
