    The Search for Certainty: a critical assessment

    The Search for Certainty, by Krzysztof Burdzy, was published in 2009. It examines the "philosophical duopoly" of von Mises and de Finetti at the foundations of probability and statistics and finds this duopoly wanting. This review exposes the weaknesses of the arguments presented in the book, questions the relevance of introducing a new set of probability axioms from a methodological perspective, and concludes that the book is unlikely to have an impact on statistical foundations and practice.
    Comment: This is a revision of a book review of K. Burdzy's "The Search for Certainty", submitted to Bayesian Analysis.

    Evaluation of e-learning web sites using fuzzy axiomatic design based approach

    A high-quality web site is generally recognized as a critical enabler of online business, and numerous studies in the literature measure business performance in relation to web site quality. In this paper, an axiomatic-design-based approach to fuzzy group decision making is adopted to evaluate the quality of e-learning web sites. Another multi-criteria decision-making technique, fuzzy TOPSIS, is applied in order to validate the outcome. Compared to fuzzy TOPSIS, the proposed methodology has the advantage of incorporating requirements and enabling reductions in problem size. A case study focusing on Turkish e-learning web sites is presented, and based on the empirical findings, managerial implications and recommendations for future research are offered.
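
    As background to the validation step, the following is a minimal sketch of the classic (crisp) TOPSIS ranking that the paper's fuzzy variant generalizes; the decision matrix, criteria weights, and site scores are illustrative assumptions, not data from the study.

        import numpy as np

        def topsis(X, w):
            """Closeness of each alternative to the ideal solution.
            X: (m alternatives x n criteria) score matrix, all benefit criteria.
            w: criteria weights summing to 1."""
            R = X / np.sqrt((X ** 2).sum(axis=0))  # vector-normalize each criterion
            V = R * w                              # weighted normalized matrix
            ideal, anti = V.max(axis=0), V.min(axis=0)
            d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # distance to ideal
            d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to anti-ideal
            return d_minus / (d_plus + d_minus)    # closeness coefficient in [0, 1]

        # Hypothetical scores for three e-learning sites on four quality criteria
        X = np.array([[7.0, 8.0, 6.0, 9.0],
                      [8.0, 6.0, 7.0, 7.0],
                      [6.0, 9.0, 8.0, 6.0]])
        w = np.array([0.30, 0.30, 0.20, 0.20])
        print(topsis(X, w))  # higher closeness = better-ranked site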

    Combinatorial Information Theory: I. Philosophical Basis of Cross-Entropy and Entropy

    This study critically analyses the information-theoretic, axiomatic, and combinatorial philosophical bases of the entropy and cross-entropy concepts. The combinatorial basis is shown to be the most fundamental (most primitive) of the three, since it gives (i) a derivation of the Kullback-Leibler cross-entropy and Shannon entropy functions as simplified forms of the multinomial distribution under the Stirling approximation; (ii) an explanation of the need to maximize entropy (or minimize cross-entropy) to find the most probable realization; and (iii) new, generalized definitions of entropy and cross-entropy - supersets of the Boltzmann principle - applicable to non-multinomial systems. The combinatorial basis is therefore of much broader scope, with far greater power of application, than the information-theoretic and axiomatic bases. The generalized definitions underpin a new discipline of "combinatorial information theory" for the analysis of probabilistic systems of any type. Jaynes' generic formulation of statistical mechanics for multinomial systems is re-examined in light of the combinatorial approach. (abbreviated abstract)
    Comment: 45 pp; 1 figure; REVTeX; updated version 5 (incremental changes).
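
    The combinatorial derivation referred to in (i) can be sketched in a few lines (standard notation assumed here): for N trials over s categories with source probabilities p_i and observed counts n_i, the multinomial probability is

    \[ \mathbb{P} \;=\; \frac{N!}{\prod_{i=1}^{s} n_i!} \prod_{i=1}^{s} p_i^{\,n_i}, \qquad f_i = n_i/N, \]

    and applying Stirling's approximation \ln N! \approx N \ln N - N gives

    \[ \frac{1}{N} \ln \mathbb{P} \;\approx\; -\sum_{i=1}^{s} f_i \ln \frac{f_i}{p_i} \;=\; -D_{\mathrm{KL}}(f \,\|\, p), \]

    so the most probable realization (maximal \mathbb{P}) minimizes the Kullback-Leibler cross-entropy, and for uniform p_i = 1/s this reduces, up to a constant, to maximizing the Shannon entropy -\sum_i f_i \ln f_i.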

    Probabilities in Economic Modeling

    Economic modeling assumes, for the most part, that agents are Bayesian, that is, that they entertain probabilistic beliefs, objective or subjective, regarding any event in question. We argue that the formation of such beliefs calls for a deeper examination and for explicit modeling. Models of belief formation may enhance our understanding of probabilistic beliefs where these exist, and may also help us characterize situations in which entertaining such beliefs is neither realistic nor necessarily rational.
    Keywords: decision making, Bayesian, behavioral economics.

    Mean-Variance and Expected Utility: The Borch Paradox

    The model of rational decision-making in most of economics and statistics is expected utility theory (EU), axiomatised by von Neumann and Morgenstern, Savage, and others. This is less the case, however, in financial economics and mathematical finance, where investment decisions are commonly based on the mean-variance (MV) methods introduced in the 1950s by Markowitz. Under the MV framework, each available investment opportunity ("asset") or portfolio is represented in just two dimensions by the ex ante mean and standard deviation (\mu, \sigma) of the financial return anticipated from that investment. Utility adherents consider that MV methods are in general logically incoherent. Most famously, the Norwegian insurance theorist Borch presented a proof suggesting that two-dimensional MV indifference curves cannot represent the preferences of a rational investor (he claimed that MV indifference curves "do not exist"). This is known as Borch's paradox, and it gave rise to an important but generally little-known philosophical literature relating MV to EU. We examine the main early contributions to this literature, focussing on Borch's logic and the arguments by which it has been set aside.
    Comment: Published at http://dx.doi.org/10.1214/12-STS408 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
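
    For context (a standard textbook case, not taken from the abstract): MV and EU do coincide under quadratic utility, since with u(w) = w - (b/2) w^2 expected utility depends on the return W only through its first two moments,

    \[ \mathbb{E}[u(W)] \;=\; \mu - \tfrac{b}{2}\left(\mu^{2} + \sigma^{2}\right), \]

    using \mathbb{E}[W^{2}] = \mu^{2} + \sigma^{2}. Borch's objection concerns the converse claim that two-dimensional (\mu, \sigma) indifference curves can represent rational preferences in general.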

    The shape of incomplete preferences

    Incomplete preferences provide the epistemic foundation for models of imprecise subjective probabilities and utilities that are used in robust Bayesian analysis and in theories of bounded rationality. This paper presents a simple axiomatization of incomplete preferences and characterizes the shape of their representing sets of probabilities and utilities. Deletion of the completeness assumption from the axiom system of Anscombe and Aumann yields preferences represented by a convex set of state-dependent expected utilities, of which at least one must be a probability/utility pair. A strengthening of the state-independence axiom is needed to obtain a representation purely in terms of a set of probability/utility pairs.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000740 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
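
    Schematically, the representation in terms of a set of probability/utility pairs runs as follows (notation assumed here, not quoted from the paper): an act f is weakly preferred to an act g exactly when f yields at least as much expected utility as g under every pair in the representing convex set \mathcal{C},

    \[ f \succsim g \quad\Longleftrightarrow\quad \sum_{s \in S} p(s)\, u\big(f(s)\big) \;\ge\; \sum_{s \in S} p(s)\, u\big(g(s)\big) \quad \text{for all } (p, u) \in \mathcal{C}, \]

    with incompleteness arising precisely when two acts are ranked oppositely by different members of \mathcal{C}.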

    From Wald to Savage: homo economicus becomes a Bayesian statistician

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when, and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter's acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, that raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are also offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.
    Keywords: Savage, Wald, rational behavior, Bayesian decision theory, subjective probability, minimax rule, statistical decision functions, neoclassical economics.

    Objective probability and quantum fuzziness

    This paper offers a critique of the Bayesian interpretation of quantum mechanics, with particular focus on a paper by Caves, Fuchs, and Schack containing a critique of the "objective preparations view" (OPV). It also aims to carry the discussion beyond the hardened positions of Bayesians and proponents of the OPV. Several claims made by Caves et al. are rebutted, including the claim that different pure states may legitimately be assigned to the same system at the same time, and the claim that the quantum nature of a preparation device cannot legitimately be ignored. Both Bayesians and proponents of the OPV regard the time dependence of a quantum state as the continuous dependence on time of an evolving state of some kind. This leads to a false dilemma: quantum states are either objective states of nature or subjective states of belief. In reality they are neither. The present paper views the aforesaid dependence as a dependence on the time of the measurement to whose possible outcomes the quantum state serves to assign probabilities. This makes it possible to recognize the full implications of the only testable feature of the theory, viz., the probabilities it assigns to measurement outcomes...
    Comment: 21 pages, no graphics, inspired by "Subjective probability and quantum certainty" (quant-ph/0608190 v2).
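
    The probability assignment invoked in the final sentence is the textbook Born rule, stated here for context (notation assumed): for a measurement performed at time t with possible outcome a, represented by the projector P_a,

    \[ \Pr(a \mid t) \;=\; \operatorname{Tr}\!\big[\rho(t)\, P_a\big], \]

    where \rho(t) is the system's density operator; on the paper's reading, t is the time of the measurement rather than a parameter of a continuously evolving state.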