    Towards Machine Wald

    The past century has seen a steady increase in the need for estimating and predicting complex systems and making (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information Based Complexity.
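
    As a concrete, hedged illustration of the Wald-style criterion the paper builds on (the Bernoulli setting and numbers are ours, not the paper's): an optimal estimator in this sense minimizes the worst-case risk over the unknown parameter. For X ~ Binomial(n, p) under squared-error loss, the classical minimax rule (X + √n/2)/(n + √n) has constant risk, strictly below the worst-case risk of the MLE X/n:

```python
import numpy as np

# Sketch: compare worst-case squared-error risk of the MLE X/n with the
# classical minimax estimator (X + sqrt(n)/2)/(n + sqrt(n)) for
# X ~ Binomial(n, p). The setting is illustrative, not from the paper.
n = 25
p_grid = np.linspace(0.0, 1.0, 1001)

# The MLE is unbiased, so its risk is its variance p(1 - p)/n.
risk_mle = p_grid * (1.0 - p_grid) / n

# A standard calculation shows the minimax rule's risk is constant in p:
# R = 1 / (4 * (sqrt(n) + 1)^2).
c = np.sqrt(n)
risk_minimax = np.full_like(p_grid, 1.0 / (4.0 * (c + 1.0) ** 2))

print(f"worst-case risk, MLE:     {risk_mle.max():.5f}")    # 1/(4n) = 0.01000
print(f"worst-case risk, minimax: {risk_minimax[0]:.5f}")   # approx. 0.00694
```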

    The LP/POMDP marriage: Optimization with imperfect information


    From Wald to Savage: homo economicus becomes a Bayesian statistician

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes’s rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald’s behaviorist approach to statistics and culminates with Leonard J. Savage’s elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter’s acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, that raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are also offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.
    Keywords: Savage, Wald, rational behavior, Bayesian decision theory, subjective probability, minimax rule, statistical decision functions, neoclassical economics
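
    To make concrete the paradigm the paper traces, here is a minimal sketch (all numbers hypothetical, not from the paper) of the Savage-style Bayesian agent: revise beliefs over states by Bayes’s rule, then choose the action with the highest subjective expected utility.

```python
# Hypothetical Bayesian-rational agent: Bayes update, then expected-utility
# maximization. States, signals, and payoffs are invented for illustration.
priors = {"boom": 0.5, "bust": 0.5}
likelihood = {("good_news", "boom"): 0.8, ("good_news", "bust"): 0.3}

def posterior(signal, priors, likelihood):
    """Bayes's rule: P(state | signal) is proportional to
    P(signal | state) * P(state)."""
    joint = {s: likelihood[(signal, s)] * priors[s] for s in priors}
    z = sum(joint.values())
    return {s: v / z for s, v in joint.items()}

utility = {("invest", "boom"): 10.0, ("invest", "bust"): -5.0,
           ("hold", "boom"): 1.0, ("hold", "bust"): 1.0}

beliefs = posterior("good_news", priors, likelihood)
expected_utility = {a: sum(utility[(a, s)] * beliefs[s] for s in beliefs)
                    for a in ("invest", "hold")}
best = max(expected_utility, key=expected_utility.get)
print(beliefs)   # {'boom': 0.727..., 'bust': 0.272...}
print(best)      # 'invest' maximizes subjective expected utility here
```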

    Alternation bias and reduction in St. Petersburg gambles:an experimental investigation

    Reduction of compound lotteries is implicit both in the statement of the St. Petersburg Paradox and in its resolution by Expected Utility (EU). We report three real-money choice experiments between truncated compound-form St. Petersburg gambles and their reduced-form equivalents. The first tests for differences in elicited Certainty Equivalents. The second develops the distinction between ‘weak-form’ and ‘strong-form’ rejection of Reduction, as well as a novel experimental task that verifiably implements Vernon Smith’s dominance precept. The third experiment checks for robustness against range and increment manipulation. In all three experiments the null hypothesis of Reduction is rejected, with systematic deprecation of the compound form in favor of the reduced form. This is consistent with the predictions of alternation bias. Together these experiments offer evidence that the Reduction assumption may have limited descriptive validity in modelling St. Petersburg gambles, whether by EU or non-EU theories.
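
    For a hedged illustration of the objects under study (the truncation rule and payoffs below are assumptions, not the authors’ experimental design): a St. Petersburg gamble truncated at n tosses pays 2^k if the first head arrives on toss k. Reduction replaces the staged coin flips with the equivalent single-stage lottery, which an EU agent must value identically.

```python
import math

# Illustrative truncated St. Petersburg gamble in reduced (single-stage)
# form; the truncation convention (pay 2^n if no head in n tosses) is an
# assumption made for this sketch.
n = 8
lottery = [(2 ** k, 2 ** -k) for k in range(1, n + 1)]
lottery.append((2 ** n, 2 ** -n))      # residual event: no head in n tosses

assert abs(sum(p for _, p in lottery) - 1.0) < 1e-12  # probabilities sum to 1

ev = sum(x * p for x, p in lottery)                  # risk-neutral value
eu_log = sum(math.log(x) * p for x, p in lottery)    # log-utility agent
ce = math.exp(eu_log)                                # certainty equivalent

print(f"expected value:       {ev:.2f}")   # n + 1 = 9.00 under this rule
print(f"certainty equivalent: {ce:.2f}")   # about 3.98, far below the EV
```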

    Applications of Expert Systems in Transport

    BACKGROUND Experienced judgement and specialist knowledge are essential to the proper specification, understanding and interpretation of data and computer analyses. The human expert has traditionally supplied this knowledge and judgement, with the computer doing the necessary number-crunching. However, artificial intelligence (AI) research provides ways of embodying this knowledge and judgement within computer programs. Despite an early lead in the field, UK research and development into AI techniques was held back in the 1970s when the then Science Research Council took the view that the 'combinatorial explosion' of possibilities would be an insurmountable obstacle to AI development. But in America and Japan research continued, and the surge of interest in the 1980s has been a consequence of the 'Fifth Generation Computer' research programme initiated by Japan (Feigenbaum and McCorduck, 1984). This led in Europe to the ESPRIT programme of advanced technology research, and in the UK to the Alvey programme (Department of Industry, 1982). As a result, all sectors of industry have been encouraged to consider how such advanced technology can be applied, and the transport industry is no exception. This paper sets out to explain some of the relevant techniques in simple terms, and to describe a number of situations in which transport planning and operations might be helped through their use, illustrating this by reference to the pioneering work going on in transport applications in the USA, Britain and Australia.
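
    To make "embodying knowledge and judgement within computer programs" concrete, here is a toy forward-chaining rule engine in the spirit of the expert systems the paper surveys; the rules and the traffic scenario are invented for illustration, not taken from the paper.

```python
# Toy expert system: expert judgement encoded as IF-THEN rules, chained
# forward until no new conclusions follow. Purely illustrative.
rules = [
    ({"peak_hour", "high_flow"}, "congestion_likely"),
    ({"congestion_likely", "incident_reported"}, "divert_traffic"),
]

def infer(facts, rules):
    """Forward chaining: fire any rule whose conditions are all known."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"peak_hour", "high_flow", "incident_reported"}, rules))
# Derives 'congestion_likely' and then 'divert_traffic'.
```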

    How Large are the Classification Errors in the Social Security Disability Award Process?

    This paper presents an 'audit' of the multistage application and appeal process that the U.S. Social Security Administration (SSA) uses to determine eligibility for disability benefits from the Disability Insurance (DI) and Supplemental Security Income (SSI) programs. We use a subset of individuals from the Health and Retirement Study who applied for DI or SSI benefits between 1992 and 1996 to estimate classification error rates under the hypothesis that applicants' self-reported disability status and the SSA's ultimate award decision are noisy but unbiased indicators of a latent 'true disability status' indicator. We find that approximately 20% of SSI/DI applicants who are ultimately awarded benefits are not disabled, and that 60% of applicants who were denied benefits are disabled. We also construct an optimal statistical screening rule that results in significantly lower classification error rates than does SSA's current award process.
    Keywords: Social Security Disability Insurance, Supplemental Security Income, Health and Retirement Study, Classification Errors

    How Large are the Classification Errors in the Social Security Disability Award Process?

    This paper presents an 'audit' of the multistage application and appeal process that the U.S. Social Security Administration (SSA) uses to determine eligibility for disability benefits from the Disability Insurance (DI) and Supplemental Security Income (SSI) programs. We study a subset of individuals from the Health and Retirement Study (HRS) who applied for DI or SSI benefits between 1992 and 1996. We compare the SSA's ultimate award decision (i.e. after allowing for appeals) to the applicant's self-reported disability status. We use these data to estimate classification error rates under the hypothesis that applicants' self-reported disability status and the SSA's ultimate award decision are noisy but unbiased indicators of a latent 'true disability status' indicator. We find that approximately 20% of SSI/DI applicants who are ultimately awarded benefits are not disabled, and that 60% of applicants who were denied benefits are disabled. Our analysis also yields insights into the patterns of self-selection induced by varying delays and award probabilities at various levels of the application and appeal process. We construct an optimal statistical screening rule, using a subset of objective health indicators that the SSA uses in making award decisions, that results in significantly lower classification error rates than does SSA's current award process.
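
    A toy simulation of the latent-variable setup both versions of the paper describe (all rates below are hypothetical, not the paper's estimates): the self-report S and the award decision A are noisy indicators of a latent true status D, and the two headline error rates can then be read off directly.

```python
import numpy as np

# Hypothetical noisy-indicator model: D is latent true disability status;
# A (award) and S (self-report) each reflect D with assumed error rates.
rng = np.random.default_rng(0)
N = 1_000_000
D = rng.random(N) < 0.5                # assumed share of truly disabled

A = np.where(D, rng.random(N) < 0.60, rng.random(N) < 0.25)  # award decision
S = np.where(D, rng.random(N) < 0.90, rng.random(N) < 0.15)  # self-report

print(f"P(not disabled | awarded) = {np.mean(~D[A]):.3f}")
print(f"P(disabled | denied)      = {np.mean(D[~A]):.3f}")
print(f"P(S and A disagree)       = {np.mean(S != A):.3f}")
```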

    Bayesian Ideas in Survey Sampling: The Legacy of Basu

    Survey sampling and, more generally, Official Statistics are experiencing an important period of renewal. On one hand, there is the need to exploit the huge information potential that the digital revolution has made available in terms of data. On the other hand, this process has occurred simultaneously with a progressive deterioration in the quality of classical sample surveys, due to a decreasing willingness to participate and an increasing rate of missing responses. The switch from survey-based inference to a hybrid system involving register-based information has sharpened the controversy between the design-based and model-based approaches and the debate over its possible resolution. In this new framework, the use of statistical models seems unavoidable and is today a relevant part of the official statistician's toolkit. Models are important in several different contexts, from small area estimation to non-sampling error adjustment, but they are also crucial for correcting bias due to over- and undercoverage of administrative data, in order to prevent potential selection bias, and for dealing with different definitions and/or errors in the measurement process of the administrative sources. The progressive shift from a design-based to a model-based approach in terms of superpopulation is a matter of fact in the practice of National Statistical Institutes. However, the introduction of Bayesian ideas in official statistics still encounters difficulties and resistance. In this work, we attempt a non-systematic review of Bayesian developments in this area and try to highlight the extra benefit that a Bayesian approach might provide. Our general conclusion is that, while the general picture is today clear and most of the basic topics of survey sampling can easily be rephrased and tackled from a Bayesian perspective, much work is still necessary before a ready-to-use platform of Bayesian survey sampling is available in the presence of complex sampling designs, non-ignorable missing data patterns, and large datasets.
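
    As a minimal sketch of the model-based (superpopulation) estimation the review discusses, under strong simplifying assumptions of our own (known variance, conjugate normal prior, ignorable design): the non-sampled units are predicted from the posterior and combined with the observed sample to estimate the finite-population mean.

```python
import numpy as np

# Bayesian finite-population mean under a toy superpopulation model:
# y_i ~ N(mu, sigma^2) with a conjugate N(m0, tau^2) prior on mu.
# All numbers are illustrative.
rng = np.random.default_rng(1)
N, n = 10_000, 200                     # population size, sample size
sigma, m0, tau = 2.0, 0.0, 10.0        # known sigma; weak prior on mu

y_sample = rng.normal(5.0, sigma, n)   # the observed sample

# Conjugate posterior mean for mu given the sample.
precision = 1 / tau**2 + n / sigma**2
mu_post = (m0 / tau**2 + y_sample.sum() / sigma**2) / precision

# Finite-population mean: known sampled part plus predicted non-sampled part.
ybar_post = (y_sample.sum() + (N - n) * mu_post) / N
print(f"sample mean:                  {y_sample.mean():.3f}")
print(f"posterior estimate of Y-bar:  {ybar_post:.3f}")
```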