54 research outputs found

    The Epistemic Value of Conceptualizing the Possible


    Mislearning from Censored Data: The Gambler's Fallacy in Optimal-Stopping Problems

    I study endogenous learning dynamics for people who expect systematic reversals from random sequences - the "gambler's fallacy." Biased agents face an optimal-stopping problem. They are uncertain about the underlying distribution and learn its parameters from predecessors. Agents stop when early draws are "good enough," so predecessors' experiences contain negative streaks but not positive streaks. Since biased agents understate the likelihood of consecutive below-average draws, society converges to over-pessimistic beliefs about the distribution's mean and stops too early. Agents uncertain about the distribution's variance overestimate it to an extent that depends on predecessors' stopping thresholds. Subsidizing search partially mitigates long-run belief distortions.
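    A minimal simulation sketch of the censoring mechanism described above, not the paper's learning model: searchers who stop at the first "good enough" draw leave behind histories whose pre-stopping draws are all below the threshold. The distribution, threshold, and number of agents are illustrative assumptions.

```python
import random

# Illustrative assumptions (not the paper's parameters): i.i.d. standard
# normal draws, a fixed stopping threshold, and many independent searchers.
random.seed(0)
TRUE_MEAN, THRESHOLD, N_AGENTS = 0.0, 0.5, 10_000

pre_stop_draws = []   # the censored record: draws observed before stopping
all_draws = []        # uncensored benchmark
for _ in range(N_AGENTS):
    while True:
        x = random.gauss(TRUE_MEAN, 1.0)
        all_draws.append(x)
        if x >= THRESHOLD:      # draw is "good enough", so the searcher stops
            break
        pre_stop_draws.append(x)

print("true mean:              %.3f" % TRUE_MEAN)
print("mean of all draws:      %.3f" % (sum(all_draws) / len(all_draws)))
print("mean of pre-stop draws: %.3f" % (sum(pre_stop_draws) / len(pre_stop_draws)))
```

    Under these assumptions the pooled pre-stopping draws average roughly half a standard deviation below the true mean even though every draw comes from the same distribution - the kind of censored record that, per the abstract, biased learners misread as evidence of a low mean.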

    Validating the predictions of case-based decision theory

    Real-life decision-makers typically do not know all possible outcomes arising from alternative courses of action. Instead, when people face a problem, they may rely on the recollection of their past personal experience: the situation, the action taken, and the accompanying consequence. In addition, the applicability of a past experience in decision-making may depend on how similar the current problem is to situations encountered previously. Case-based decision theory (CBDT), proposed by Itzhak Gilboa and David Schmeidler (1995), formalises this type of analogical reasoning. While CBDT is intuitively appealing, only a few experimental and empirical studies have attempted to validate its predictions. This thesis reports two laboratory experiments and an empirical study that test the predictive power of CBDT vis-à-vis Bayesian reasoning.
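    As a rough illustration of the analogical reasoning CBDT formalises, the sketch below scores an act by a similarity-weighted sum of remembered payoffs from cases in which that act was taken, in the spirit of Gilboa and Schmeidler's valuation rule. The toy memory, similarity function, and utilities are assumptions for illustration, not the experimental design reported in the thesis.

```python
from typing import Callable, List, Tuple

Case = Tuple[str, str, float]  # (problem faced, act taken, result obtained)

def cbdt_value(problem: str, act: str, memory: List[Case],
               similarity: Callable[[str, str], float],
               utility: Callable[[float], float]) -> float:
    """Similarity-weighted sum of utilities over remembered cases using this act."""
    return sum(similarity(problem, past_problem) * utility(result)
               for past_problem, past_act, result in memory if past_act == act)

# Toy memory of past cases (hypothetical labels and payoffs).
memory: List[Case] = [("rainy", "umbrella", 1.0),
                      ("rainy", "no_umbrella", -2.0),
                      ("sunny", "no_umbrella", 1.0)]
sim = lambda p, q: 1.0 if p == q else 0.3   # crude label-matching similarity
u = lambda r: r                             # utility taken as the raw result

for act in ("umbrella", "no_umbrella"):
    print(act, cbdt_value("rainy", act, memory, sim, u))
```

    On the rainy problem the decision-maker prefers "umbrella" because the remembered rainy case without one ended badly, while similarity discounts the influence of the sunny case.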

    Popper's Severity of Test


    Homeostatic epistemology: reliability, coherence and coordination in a Bayesian virtue epistemology

    How do agents with limited cognitive capacities flourish in informationally impoverished or unexpected circumstances? Aristotle argued that human flourishing emerged from knowing about the world and our place within it. If he is right, then the virtuous processes that produce knowledge best explain flourishing. Influenced by Aristotle, virtue epistemology defends an analysis of knowledge on which beliefs are evaluated for their truth and for the intellectual virtues or competences relied on in their creation. However, human flourishing may instead emerge from how degrees of ignorance are managed in an uncertain world. Perhaps decision-making in the shadow of knowledge best explains human wellbeing—a Bayesian approach? In this dissertation I argue that a hybrid of virtue and Bayesian epistemologies explains human flourishing—what I term homeostatic epistemology.

    Homeostatic epistemology supposes that an agent has a rational credence p when p is the product of reliable processes aligned with the norms of probability theory, whereas an agent knows that p when a rational credence p is the product of reliable processes such that: 1) p meets some relevant threshold for belief (such that the agent acts as though p were true and indeed p is true), 2) p coheres with a satisficing set of relevant beliefs, and 3) the relevant set of beliefs is coordinated appropriately to meet the integrated aims of the agent.

    Homeostatic epistemology recognizes that justificatory relationships between beliefs are constantly changing to combat uncertainties and to take advantage of predictable circumstances. Contrary to holism, justification is built up and broken down across limited sets, like the anabolic and catabolic processes that maintain homeostasis in the cells, organs and systems of the body. It is the coordination of choristic sets of reliably produced beliefs that creates the greatest flourishing given the limitations inherent in the situated agent.

    A critical analysis of the role of statistical significance testing in education research: With special attention to mathematics education

    This study analyzes the role of statistical significance testing (SST) in education. Although the basic logic underlying SST (a hypothesis is rejected because the observed data would be very unlikely if the hypothesis were true) appears so obvious that many people are tempted to accept it, it is in fact fallacious. In the light of its historical background and conceptual development, discussed in Chapter 2, Fisher's significance testing, Neyman-Pearson hypothesis testing and their hybrids are clearly distinguished. We argue that the probability of obtaining the observed or more extreme outcomes (the p value) can hardly act as a measure of the strength of evidence against the null hypothesis. After discussing the five major interpretations of probability, we conclude that if we do not accept the subjective theory of probability, talking about the probability of a hypothesis that is not the outcome of a chance process is unintelligible. But the subjective theory itself has many intractable difficulties that can hardly be resolved. If we insist on assigning a probability value to a hypothesis in the same way as we assign one to a chance event, we have to accept that it is the hypothesis with low probability, rather than high probability, that we should aim at when conducting scientific research. More importantly, the inferences behind SST are shown to be fallacious from three different perspectives. The attempt to defend the use of the p value as a measure of the strength of evidence against the null hypothesis by invoking the likelihood ratio based on the observed or more extreme data, instead of the probability of a hypothesis, is also shown to be misleading: using the tail region to represent a result that is actually on the border overstates the evidence against the null hypothesis.

    Although Neyman-Pearson hypothesis testing does not involve the concept of the probability of a hypothesis, it has other serious problems that can hardly be resolved, and we show that it cannot address researchers' genuine concerns. By explaining why the level of significance must be specified or fixed prior to the analysis of data, and why blurring the distinction between the p value and the significance level would lead to undesirable consequences, we conclude that Neyman-Pearson hypothesis testing cannot provide an effective means for rejecting false hypotheses. After a thorough discussion of common misconceptions associated with SST and the major arguments for and against it, we conclude that SST has insurmountable problems that could misguide the research paradigm, although some other criticisms of SST are not as well justified. We also analyze various proposed alternatives to SST and conclude that confidence intervals (CIs) are no better than SST for the purpose of testing hypotheses, and that it is unreasonable to expect a statistical test that could provide researchers with algorithms or rigid rules by following which all problems of hypothesis testing could be solved. Finally, we argue that falsificationism could eschew the disadvantages of SST and other similar statistical inductive inferences, and we discuss how it could bring education research into a more fruitful situation. Although we pay special attention to mathematics education, the core of the discussion might apply equally to other educational contexts.
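    The claim that a tail-area p value overstates the evidence carried by a borderline result can be illustrated with a quick calculation; the numbers and the choice of alternative hypothesis below are assumptions for illustration, not an example taken from the thesis.

```python
from statistics import NormalDist

# A result sitting exactly on the conventional 5% border (two-sided z test).
z = 1.96
std_normal = NormalDist(0.0, 1.0)

p_value = 2 * (1 - std_normal.cdf(z))   # tail area: observed or more extreme
lik_h0 = std_normal.pdf(z)              # likelihood of the actual result under H0 (mean 0)
lik_h1 = NormalDist(z, 1.0).pdf(z)      # likelihood under the best-supported alternative (mean z)

print("two-sided p value:          %.3f" % p_value)            # about 0.050
print("max likelihood ratio H1/H0: %.2f" % (lik_h1 / lik_h0))   # about 6.8
```

    A p value of about 0.05 sounds decisive, yet even the most favourable simple alternative makes the observed result only about seven times more likely than the null does, which is the sense in which replacing the actual border result by its whole tail region exaggerates the evidence.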

    General Course Catalog [July-December 2020]

    Undergraduate Course Catalog, July-December 2020