
    How do medical researchers make causal inferences?

    Bradford Hill (1965) highlighted nine aspects of the complex evidential situation a medical researcher faces when determining whether a causal relation exists between a disease and various conditions associated with it. These aspects are widely cited in the literature on epidemiological inference as justifying an inference to a causal claim, but the epistemological basis of the Hill aspects is not well understood. We offer an explanatory coherentist interpretation, explicated by Thagard's ECHO model of explanatory coherence. The ECHO model captures the complexity of epidemiological inference and provides a tractable model for inferring disease causation. We apply this model to three cases: the inference of a causal connection between the Zika virus and birth defects, the classic inference that smoking causes cancer, and John Snow's inference about the cause of cholera.
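    The ECHO model mentioned in the abstract treats explanatory coherence as constraint satisfaction in a connectionist network: hypotheses and evidence are units, explanation creates excitatory links, contradiction creates inhibitory links, and activations are updated until the network settles. The following is a minimal illustrative sketch of that idea only; the weights, decay rate, and update rule shown here are assumptions for demonstration, not Thagard's published implementation.

    ```python
    # Minimal sketch of an ECHO-style explanatory coherence network.
    # All parameter values (weights, decay, step count) are illustrative
    # assumptions, not taken from Thagard's implementation.

    def run_echo(units, excit, inhib, evidence, steps=200,
                 decay=0.05, excit_w=0.04, inhib_w=-0.06, data_w=0.05):
        """Settle the network by iterative activation updates.

        units:    proposition names (hypotheses and evidence)
        excit:    symmetric excitatory pairs (explains / is explained by)
        inhib:    symmetric inhibitory pairs (contradictory hypotheses)
        evidence: units that receive direct support from observation
        """
        act = {u: 0.01 for u in units}          # small initial activation
        links = {u: [] for u in units}
        for a, b in excit:
            links[a].append((b, excit_w))
            links[b].append((a, excit_w))
        for a, b in inhib:
            links[a].append((b, inhib_w))
            links[b].append((a, inhib_w))
        for _ in range(steps):
            new = {}
            for u in units:
                net = sum(w * act[v] for v, w in links[u])
                if u in evidence:
                    net += data_w                # evidence gets direct support
                # push toward +1 on positive net input, toward -1 on negative
                delta = net * (1.0 - act[u]) if net > 0 else net * (act[u] + 1.0)
                new[u] = max(-1.0, min(1.0, act[u] * (1 - decay) + delta))
            act = new
        return act

    # Toy case: H1 explains both pieces of evidence, H2 explains only one,
    # and the two hypotheses contradict each other. After settling, H1
    # ends with higher activation, i.e. it is judged more coherent.
    acts = run_echo(
        units=["H1", "H2", "E1", "E2"],
        excit=[("H1", "E1"), ("H1", "E2"), ("H2", "E1")],
        inhib=[("H1", "H2")],
        evidence={"E1", "E2"},
    )
    ```

    The settled activations play the role of acceptance and rejection: the hypothesis that explains more of the evidence suppresses its rival through the inhibitory link, which is the mechanism the abstract claims makes epidemiological inference tractable.
    
    
    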

    Eighty phenomena about the self: representation, evaluation, regulation, and change

    We propose a new approach for examining self-related aspects and phenomena. The approach includes (1) a taxonomy and (2) an emphasis on multiple levels of mechanisms. The taxonomy categorizes approximately eighty self-related phenomena according to three primary functions involving the self: representing, effecting, and changing. The representing self encompasses the ways in which people depict themselves, either to themselves or to others (e.g., self-concepts, self-presentation). The effecting self concerns ways in which people facilitate or limit their own traits and behaviors (e.g., self-enhancement, self-regulation). The changing self is less time-limited than the effecting self; it concerns phenomena that involve lasting alterations in how people represent and control themselves (e.g., self-expansion, self-development). Each self-related phenomenon within these three categories may be examined at four levels of interacting mechanisms (social, individual, neural, and molecular). We illustrate our approach by focusing on seven self-related phenomena.

    Darwin and the Golden Rule: How To Distinguish Differences of Degree from Differences of Kind Using Mechanisms

    Darwin claimed that human and animal minds differ in degree but not in kind, and that ethical principles such as the Golden Rule are just an extension of thinking found in animals. Both claims are false. The best way to distinguish differences in degree from differences in kind is by identifying mechanisms that have emergent properties. Recursive thinking is an emergent capability found in humans but not in other animals. The Golden Rule and some other ethical principles, such as Kant's categorical imperative, require recursion, so they constitute ethical thinking that is restricted to humans. Changes in kind have tipping points resulting from mechanisms with emergent properties.

    AI Extenders

    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.
    Leverhulme Centre for the Future of Intelligence, Leverhulme Trust, under Grant RC-2015-06