
    Feature integration in natural language concepts

    Two experiments measured the joint influence of three key sets of semantic features on the frequency with which artifacts (Experiment 1) or plants and creatures (Experiment 2) were categorized in familiar categories. For artifacts, current function outweighed both originally intended function and current appearance. For biological kinds, appearance and behavior, an inner biological function, and appearance and behavior of offspring all had similarly strong effects on categorization. The data were analyzed to determine whether an independent cue model or an interactive model best accounted for how the effects of the three feature sets combined. Feature integration was found to be additive for artifacts but interactive for biological kinds. In keeping with this, membership in contrasting artifact categories tended to be superadditive, indicating overlapping categories, whereas for biological kinds, it was subadditive, indicating conceptual gaps between categories. It is argued that the results underline a key domain difference between artifact and biological concepts.
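The contrast between an independent-cue (additive) model and an interactive model of feature integration can be sketched as follows. The feature values, weights, and interaction term are illustrative assumptions, not the experiments' data.

```python
# Sketch of additive vs. interactive feature integration.
# Feature values and weights are hypothetical, not taken from the paper.

def additive_score(features, weights):
    """Independent-cue model: each feature set contributes separately."""
    return sum(w * f for w, f in zip(weights, features))

def interactive_score(features, weights, interaction_weight):
    """Interactive model: adds a multiplicative term so the effect of one
    feature set depends on the others (as found for biological kinds)."""
    base = sum(w * f for w, f in zip(weights, features))
    product = 1.0
    for f in features:
        product *= f
    return base + interaction_weight * product

# Three feature sets scored on [0, 1], e.g. current function,
# intended function, and current appearance for an artifact.
features = [0.9, 0.3, 0.6]
weights = [0.6, 0.2, 0.2]

print(additive_score(features, weights))          # 0.72
print(interactive_score(features, weights, 0.5))  # 0.801
```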

    Policymaking under scientific uncertainty

    Policymakers who seek to make scientifically informed decisions are constantly confronted by scientific uncertainty and expert disagreement. This thesis asks: how can policymakers rationally respond to expert disagreement and scientific uncertainty? It is a work of nonideal theory, which applies formal philosophical tools developed by ideal theorists to more realistic cases of policymaking under scientific uncertainty. I start with Bayesian approaches to expert testimony and the problem of expert disagreement, arguing that two popular approaches -- supra-Bayesianism and the standard model of expert deference -- are insufficient. I develop a novel model of expert deference and show how it deals with many of the problems raised for them. I then turn to opinion pooling, a popular method for dealing with disagreement. I show that various theoretical motivations for pooling functions are irrelevant to realistic policymaking cases. This leads to a cautious recommendation of linear pooling. However, I then show that any pooling method relies on value judgements that are hidden in the selection of the scoring rule. My focus then narrows to a more specific case of scientific uncertainty: multiple models of the same system. I introduce a case study involving hurricane models developed to support insurance decision-making. I recapitulate my analysis of opinion pooling in the context of model ensembles, confirming that my hesitations apply. This motivates a shift of perspective, to viewing the problem as a decision-theoretic one. I rework a recently developed ambiguity theory, called the confidence approach, to take input from model ensembles. I show how it facilitates the resolution of the policymaker's problem in a way that avoids the issues encountered in previous chapters. This concludes my main study of the problem of expert disagreement. In the final chapter, I turn to methodological reflection. I argue that philosophers who employ the mathematical methods of the prior chapters are modelling. Employing results from the philosophy of scientific models, I develop a theory of normative modelling. I argue that it has important methodological consequences for the practice of formal epistemology, ruling out popular moves such as searching for counterexamples.
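Linear pooling, the method cautiously recommended above, aggregates expert probabilities as a weighted average. A minimal sketch, with invented expert opinions and weights:

```python
# Minimal sketch of linear opinion pooling: the aggregate probability is a
# credence-weighted average of the experts' probabilities for one event.
# The expert probabilities and weights below are hypothetical.

def linear_pool(probabilities, weights):
    """Weighted average of expert probabilities for a single event."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, probabilities))

# Three experts' probabilities that, say, a hurricane loss exceeds a threshold.
experts = [0.2, 0.5, 0.35]
weights = [0.5, 0.3, 0.2]
print(linear_pool(experts, weights))  # 0.32
```

Note that the pooled value always lies between the most extreme expert opinions, which is one of the standard motivations for (and criticisms of) linear pooling.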

    Proceedings of the Third International Workshop on Management of Uncertain Data (MUD2009)


    Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories

    Decision-making under normative uncertainty requires an agent to aggregate the assessments of options given by rival normative theories into a single assessment that tells her what to do in light of her uncertainty. But what if the assessments of rival theories differ not just in their content but in their structure -- e.g., some are merely ordinal while others are cardinal? This paper describes and evaluates three general approaches to this "problem of structural diversity": structural enrichment, structural depletion, and multi-stage aggregation. All three approaches have notable drawbacks, but I tentatively defend multi-stage aggregation as the least bad of the three.
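One way to picture the structural-diversity problem is a two-stage aggregation: first map each theory's assessment (ordinal ranking or cardinal values) onto a common scale, then take a credence-weighted average. The normalization choices below are illustrative assumptions, not the paper's proposal.

```python
# Hedged sketch of two-stage aggregation over structurally diverse theories.
# Stage 1: map each theory onto [0, 1] (ordinal -> normalized rank,
# cardinal -> min-max rescaling). Stage 2: credence-weighted average.

def normalize_ordinal(ranking):
    """Map a best-first ranking of options to scores in [0, 1]."""
    n = len(ranking)
    return {opt: (n - 1 - i) / (n - 1) for i, opt in enumerate(ranking)}

def normalize_cardinal(values):
    """Min-max rescale cardinal values to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    return {opt: (v - lo) / (hi - lo) for opt, v in values.items()}

def aggregate(normalized_theories, credences):
    options = normalized_theories[0].keys()
    return {opt: sum(c * t[opt] for c, t in zip(credences, normalized_theories))
            for opt in options}

ordinal = normalize_ordinal(["A", "B", "C"])           # A=1.0, B=0.5, C=0.0
cardinal = normalize_cardinal({"A": 0, "B": 10, "C": 4})
result = aggregate([ordinal, cardinal], credences=[0.6, 0.4])
print(max(result, key=result.get))  # "B"
```

The example also shows the approach's fragility: the verdict depends on how ordinal information is coerced into a cardinal scale, which is exactly the kind of drawback the paper weighs.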

    Some contributions to decision making in complex information settings with imprecise probabilities and incomplete preferences


    Scientific uncertainty and decision making

    It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. For instance, flood defenses must be designed before we know the future distribution of flood events. It is standardly assumed that probability theory offers the best model of uncertain information. I think there are reasons to be sceptical of this claim. I criticise some arguments for the claim that probability theory is the only adequate model of uncertainty. In particular I critique Dutch book arguments, representation theorems, and accuracy-based arguments. Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation. I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature, including Levi's E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult. I finish with a case study of scientific uncertainty. Climate modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times. Indeed, scientific uncertainty is multi-dimensional, and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas of science. I claim that this requires that we look for a better framework for modelling uncertainty.
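The imprecise-probability framework and Levi's E-admissibility rule can be sketched concretely: represent a credal set as a finite list of probability measures, and call an option E-admissible if it maximizes expected utility under at least one member of the set. The utilities and probabilities below are invented for illustration.

```python
# Minimal sketch of E-admissibility with a finitely generated credal set.
# States, utilities, and probabilities are hypothetical.

def expected_utility(utilities, p):
    return sum(u * pi for u, pi in zip(utilities, p))

def e_admissible(options, credal_set):
    """Return the options that are optimal under some p in the credal set."""
    admissible = set()
    for p in credal_set:
        best = max(options, key=lambda o: expected_utility(options[o], p))
        admissible.add(best)
    return admissible

# Two states (e.g. severe vs. mild climate outcome), utilities per option.
options = {
    "build_defenses": [80, 60],   # robust if severe, costly if mild
    "do_nothing":     [0, 100],   # disastrous if severe, cheap if mild
}
credal_set = [[0.2, 0.8], [0.6, 0.4]]  # two probability measures in the set
print(e_admissible(options, credal_set))
```

Because different members of the credal set favour different options, both options come out E-admissible here, which illustrates why decision making under severe uncertainty is, as the abstract puts it, inevitably difficult: the rule often fails to single out a unique act.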

    The study of probability model for compound similarity searching

    An Information Retrieval (IR) system's main task is to retrieve documents relevant to the user's query. One of the most popular IR retrieval models is the Vector Space Model. This model assumes relevance based on similarity, which is defined as the distance between query and document in the concept space. All currently existing chemical compound database systems have adopted the vector space model to calculate the similarity of a database entry to a query compound. However, it assumes that the fragments represented by the bits are independent of one another, which is not necessarily true. Hence, the possibility of applying another IR model, the Probabilistic Model, to chemical compound searching is explored. This model estimates the probability that a chemical structure has the same bioactivity as a target compound. It is envisioned that by ranking chemical structures in decreasing order of their probability of relevance to the query structure, the effectiveness of a molecular similarity searching system can be increased. Both fragment dependence and independence assumptions are taken into consideration in seeking improvements to compound similarity searching. After conducting a series of simulated similarity searches, it is concluded that the probabilistic model approaches did perform better than existing similarity searching, giving better results on all evaluation criteria. In terms of which probability model performs better, the BD model showed improvement over the BIR model.
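The two families of models can be contrasted on binary fingerprints: a vector-space-style similarity (the Tanimoto coefficient commonly used in compound search) versus a binary-independence-style score, where each bit carries a log-odds weight estimated from active and inactive compounds. The fingerprints and per-bit probabilities below are invented.

```python
# Sketch: vector-space similarity (Tanimoto) vs. a binary-independence-style
# probabilistic score over fingerprint bits. All numbers are hypothetical.
import math

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

def bir_score(fingerprint, p_active, p_inactive):
    """Sum of log-odds weights over the bits set in the fingerprint."""
    score = 0.0
    for bit, p, q in zip(fingerprint, p_active, p_inactive):
        if bit:
            score += math.log((p * (1 - q)) / (q * (1 - p)))
    return score

query = [1, 1, 0, 1]
target = [1, 0, 0, 1]
print(tanimoto(query, target))  # 0.666...

p_active = [0.8, 0.5, 0.1, 0.9]    # hypothetical P(bit set | active)
p_inactive = [0.2, 0.5, 0.3, 0.4]  # hypothetical P(bit set | inactive)
print(bir_score(target, p_active, p_inactive))
```

The dependence-aware (BD) variant discussed in the abstract would additionally condition each bit's weight on other bits, which is precisely what the independence assumption rules out.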

    Fuzzy Human Reliability Analysis: Applications and Contributions Review

    The applications and contributions of fuzzy set theory to human reliability analysis (HRA) are reassessed. The main contribution of fuzzy mathematics lies in its ability to represent vague information. Many HRA authors have contributed by developing new models that introduce fuzzy quantification methodologies, while others have drawn on fuzzy techniques or methodologies to quantify already existing models. Fuzzy contributions improve HRA in five main aspects: (1) uncertainty treatment, (2) expert judgment data treatment, (3) fuzzy fault trees, (4) performance shaping factors, and (5) human behaviour models. Finally, recent fuzzy applications and new trends in fuzzy HRA are discussed.
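The core idea of representing vague information fuzzily can be shown with a membership function for a performance shaping factor such as operator stress. The triangular shape and its parameters are illustrative assumptions.

```python
# Minimal sketch of a fuzzy representation of a vague performance shaping
# factor ("high stress"). The membership function parameters are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a stress level of 7 (on a 0-10 scale) counts as "high":
print(triangular(7, a=5, b=8, c=10))  # 0.666...
```

Unlike a crisp threshold ("stress above 6 is high"), the membership degree captures partial truth, which is what makes fuzzy fault trees and fuzzy expert-judgment treatment possible.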