
    Typicality, graded membership, and vagueness

    This paper addresses theoretical problems arising from the vagueness of language terms and from intuitions about the vagueness of the concepts to which they refer. It is argued that the central intuitions of prototype theory are sufficient to account for both typicality phenomena and psychological intuitions about degrees of membership in vaguely defined classes. The first section explains the importance of the relation between degrees of membership and typicality (or goodness of example) in conceptual categorization. The second and third sections address arguments advanced by Osherson and Smith (1997) and by Kamp and Partee (1995) that the two notions of degree of membership and typicality must relate to fundamentally different aspects of conceptual representations. A version of prototype theory, the Threshold Model, is proposed to counter these arguments, and three possible solutions to the problems of logical self-contradiction and tautology for vague categorizations are outlined. In the final section, graded membership is related to the social construction of conceptual boundaries maintained through language use.
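
    The abstract does not spell out the Threshold Model formally; the Python sketch below is only a rough illustration, under assumed feature weights and threshold values (none of which come from the paper), of how a single prototype-similarity scale can yield both a typicality ordering and graded, vague membership once thresholds are imposed on it.

```python
# Rough illustration only (assumed features, weights, and thresholds; not the
# paper's formalism): one prototype-similarity scale yields both a typicality
# ordering and graded membership once thresholds are imposed on it.

def typicality(item, prototype, weights):
    """Weighted proportion of prototype features the item shares (0 to 1)."""
    total = sum(weights.values())
    shared = sum(w for f, w in weights.items() if item.get(f) == prototype.get(f))
    return shared / total

def graded_membership(t, lower=0.35, upper=0.65):
    """Full member above `upper`, non-member below `lower`, graded in between."""
    if t >= upper:
        return 1.0
    if t <= lower:
        return 0.0
    return (t - lower) / (upper - lower)

bird_prototype = {"feathers": True, "flies": True, "sings": True, "lays_eggs": True}
weights = {"feathers": 0.4, "flies": 0.2, "sings": 0.1, "lays_eggs": 0.3}

items = {
    "robin":   {"feathers": True,  "flies": True,  "sings": True,  "lays_eggs": True},
    "ostrich": {"feathers": True,  "flies": False, "sings": False, "lays_eggs": True},
    "bat":     {"feathers": False, "flies": True,  "sings": False, "lays_eggs": False},
}

# Robin and ostrich differ in typicality yet both clear the upper threshold
# (full members); the bat falls below the lower threshold (non-member).
for name, item in items.items():
    t = typicality(item, bird_prototype, weights)
    print(f"{name}: typicality={t:.2f}, membership={graded_membership(t):.2f}")

# An item scoring between the thresholds would receive an intermediate,
# vague degree of membership:
print(f"borderline score 0.50 -> membership={graded_membership(0.50):.2f}")
```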

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since under general conditions it converges in sample-space probability to 1 if the hypothesis is true and to 0 otherwise.
    Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
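
    As a minimal numerical sketch (my own, not the paper's construction), consider the confidence measure induced by the usual interval estimator for a normal mean with known variance; the Python below, with assumed numbers, illustrates the convergence property stated at the end of the abstract.

```python
# Minimal sketch, not from the paper: for a normal mean with known sigma, the
# confidence measure induced by the standard interval estimator assigns the
# interval hypothesis H: mu in [a, b] the level
#   C(H) = Phi((b - xbar)/(sigma/sqrt(n))) - Phi((a - xbar)/(sigma/sqrt(n))).
# As n grows, C(H) approaches 1 when the true mean lies in [a, b] and 0
# otherwise, unlike a p-value.
import math
import random

def confidence_level(xbar, sigma, n, a, b):
    se = sigma / math.sqrt(n)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return Phi((b - xbar) / se) - Phi((a - xbar) / se)

random.seed(0)
sigma, a, b = 1.0, 0.0, 1.0
for true_mu in (0.3, 1.5):          # H is true when mu = 0.3, false when mu = 1.5
    for n in (10, 100, 1000, 10000):
        xbar = sum(random.gauss(true_mu, sigma) for _ in range(n)) / n
        print(f"mu={true_mu}, n={n}: C(H)={confidence_level(xbar, sigma, n, a, b):.4f}")
```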

    Scientific Models of Human Health Risk Analysis in Legal and Policy Decisions

    The quality of scientific predictions of risk in the courtroom and policy arena rests in large measure on how the differences between the normal practice and the legal/policy practice of science are reconciled. This article considers a variety of issues that arise in reconciling these differences, and the problems that remain with scientific estimates of risk when these are used in decisions.

    There Is No Pure Empirical Reasoning

    The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
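
    The regress argument turns on the familiar point that Bayes' theorem only transmits, and never creates, constraints on degrees of belief. The toy Python calculation below (with made-up likelihoods, not taken from the paper) shows how the posterior in a hypothesis swings across nearly the whole unit interval as the unconstrained prior varies.

```python
# Toy illustration with assumed numbers: holding the likelihoods fixed, the
# posterior P(H | E) tracks the prior, so without a substantive constraint on
# the prior the evidence fixes no particular degree of belief in H.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

for prior in (0.001, 0.1, 0.5, 0.9):
    print(f"P(H)={prior}: P(H|E)={posterior(prior, 0.9, 0.05):.3f}")
```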

    Information-Based Physics: An Observer-Centric Foundation

    It is generally believed that physical laws, reflecting an inherent order in the universe, are ordained by nature. However, in modern physics the observer plays a central role, raising questions about how an observer-centric physics can result in laws apparently worthy of a universal nature-centric physics. Over the last decade, we have found that the consistent apt quantification of algebraic and order-theoretic structures results in calculi that possess constraint equations taking the form of what are often considered to be physical laws. I review recent derivations of the formal relations among relevant variables central to special relativity, probability theory and quantum mechanics in this context by considering a problem where two observers form consistent descriptions of, and make optimal inferences about, a free particle that simply influences them. I show that this approach to describing such a particle based only on available information leads to the mathematics of relativistic quantum mechanics as well as a description of a free particle that reproduces many of the basic properties of a fermion. The result is an approach to foundational physics where laws derive from both consistent descriptions and optimal information-based inferences made by embedded observers.
    Comment: To be published in Contemporary Physics. The manuscript consists of 43 pages and 9 figures.

    Anticipation and Risk – From the inverse problem to reverse computation

    Risk assessment is relevant only if it has predictive relevance. In this sense, the anticipatory perspective has yet to contribute to more adequate predictions. For purely physics-based phenomena, predictions are as good as the science describing such phenomena. For the dynamics of the living, the physics of the matter making up the living is only a partial description of their change over time. The space of possibilities is the missing component, complementary to physics and its associated predictions based on probabilistic methods. The inverse modeling problem, and moreover the reverse computation model, guide anticipation-based predictive methodologies. An experimental setting for the quantification of anticipation is advanced, and structural measurement is suggested as a possible mathematics for anticipation-based risk assessment.

    Frequentist statistics as a theory of inductive inference

    After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for "acceptance" and "rejection." A number of types of null hypothesis are described, and a principle for evidential interpretation is set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A variety of more complicated situations are discussed in which modification of the simple p-value may be essential.
    Comment: Published at http://dx.doi.org/10.1214/074921706000000400 in the IMS Lecture Notes--Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
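
    For concreteness, here is a small Python sketch of a significance test reported as a p-value rather than as an accept/reject decision; the one-sample z-test and the numbers are illustrative assumptions, not drawn from the paper.

```python
# Illustrative sketch (assumed test and numbers): a two-sided one-sample
# z-test reported as a p-value, the evidential quantity the abstract treats
# as the output of a significance test.
import math

def two_sided_p_value(xbar, mu0, sigma, n):
    """p-value for H0: mu = mu0 when the data are N(mu, sigma^2), sigma known."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return 2.0 * (1.0 - Phi(abs(z)))

# e.g. sample mean 0.3 from n = 50 observations with sigma = 1 under H0: mu = 0
print(f"p = {two_sided_p_value(0.3, 0.0, 1.0, 50):.4f}")   # about 0.034
```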