
    Models Don't Decompose That Way: A Holistic View of Idealized Models

    Many (if not most) accounts of scientific modelling assume that models can be decomposed into the contributions made by their accurate and inaccurate parts. These accounts then argue that the inaccurate parts of the model can be justified by distorting only what is irrelevant. In this article, I argue that this decompositional strategy requires three assumptions that are not typically met by our best scientific models. In response, I propose an alternative view in which idealized models are characterized as holistically distorted representations that are justified by allowing for the application of various (mathematical) modelling techniques.

    Universality and Modeling Limiting Behaviors

    Most attempts to justify the use of idealized models to explain appeal to the accuracy of the model with respect to difference-making causes. In this paper, I argue for an alternative way to justify using idealized models to explain, one that appeals to universality classes. In support of this view, I show that scientific modelers seeking to explain stable limiting behaviors often explicitly appeal to universality classes in order to justify their use of idealized models to explain.

    Understanding realism

    Catherine Elgin has recently argued that a nonfactive conception of understanding is required to accommodate the epistemic successes of science that make essential use of idealizations and models. In this paper, I argue that the fact that our best scientific models and theories are pervasively inaccurate representations can be made compatible with a more nuanced form of scientific realism that I call Understanding Realism. According to this view, science aims at (and often achieves) factive scientific understanding of natural phenomena. I contend that this factive scientific understanding is provided by grasping a set of true modal information about the phenomenon of interest. Furthermore, contrary to Elgin’s view, I argue that the facticity of this kind of scientific understanding can be separated from the inaccuracy of the models and theories used to produce it.

    Critical Review of Leveraging Distortion, by Collin Rice.

    A critical review of Collin Rice's book Leveraging Distortion. Philosophical Review, Summer 2023.

    Model Explanation versus Model-Induced Explanation

    Scientists appeal to models when explaining phenomena. Such explanations are often dubbed model explanations or model-based explanations (short: ME). But what are the precise conditions for ME? Are ME special explanations? In our paper, we first rebut two definitions of ME and specify a more promising one. Based on this analysis, we single out a related conception that is concerned with explanations that are induced from working with a model. We call them ‘model-induced explanations’ (MIE). Second, we study three paradigmatic cases of alleged ME. We argue that all of them are MIE, upon closer examination. Third, we argue that this undermines the emerging consensus that model explanations are special explanations that, e.g., challenge the factivity of explanation. Instead, it suggests that what is special about models in science is the epistemology behind how models induce explanations.

    Universality caused: the case of renormalization group explanation

    Recently, many have argued that there are certain kinds of abstract mathematical explanations that are noncausal. In particular, the irrelevancy approach suggests that abstracting away irrelevant causal details can leave us with a noncausal explanation. In this paper, I argue that the common example of Renormalization Group explanations of universality used to motivate the irrelevancy approach deserves more critical attention. I argue that the reasons given by those who hold up RG as noncausal do not stand up to critical scrutiny. As a result, the irrelevancy approach and the line between causal and noncausal explanation deserve more scrutiny.

    The puzzle of model-based explanation

    Among the many functions of models, explanation is central to the functioning and aims of science. However, the discussions surrounding modeling and explanation in philosophy have largely remained separate from each other. This chapter seeks to bridge the gap by focusing on the puzzle of model-based explanation, asking how different philosophical accounts answer the following question: if idealizations and fictions introduce falsehoods into models, how can idealized and fictional models provide true explanations? The chapter provides a selective and critical overview of the available strategies for solving this puzzle, mainly focusing on idealized models and how they explain.

    Causal and Non-Causal Explanations of Artificial Intelligence

    Deep neural networks (DNNs), a particularly effective type of artificial intelligence, currently lack a scientific explanation. The philosophy of science is uniquely equipped to handle this problem. Computer science has attempted, unsuccessfully, to explain DNNs. I review these contributions, then identify shortcomings in their approaches. The complexity of DNNs prohibits the articulation of relevant causal relationships between their parts, and as a result causal explanations fail. I show that many non-causal accounts, though more promising, also fail to explain AI. This highlights a problem with existing accounts of scientific explanation rather than with AI or DNNs.

    Factive inferentialism and the puzzle of model-based explanation

    Highly idealized models may serve various epistemic functions, notably explanation, in virtue of representing the world. Inferentialism provides a prima facie compelling characterization of what constitutes the representation relation. In this paper, I argue that what I call factive inferentialism does not provide a satisfactory solution to the puzzle of model-based — factive — explanation. In particular, I show that making explanatory counterfactual inferences is not a sufficient guide for accurate representation, factivity, or realism. I conclude by calling for a more explicit specification of model-world mismatches and property imputation.