
    Gauging Fine-Tuning

    We introduce a mathematical framework for quantifying fine-tuning in general physical settings. In particular, we identify two distinct perspectives on fine-tuning --- a local and a global perspective --- and develop corresponding measures. These measures apply broadly to settings characterized by an arbitrary number of observables whose values depend on an arbitrary number of parameters. We illustrate our formalism by quantifying fine-tuning as it arises in two pertinent astrophysical settings: (i) in models where a significant fraction of the dark matter in the universe is in the form of primordial black holes, and (ii) in scenarios that derive the fraction of protons in habitable dark-matter halos from underlying models of cosmic inflation.
    Comment: 13 pages, 6 figures
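
    The paper's own measures are not reproduced here, but the flavor of the local perspective can be conveyed with a standard logarithmic-sensitivity measure (a minimal sketch; the observable and parameter values below are hypothetical, not drawn from the paper):

```python
import numpy as np

def local_fine_tuning(observable, params, eps=1e-6):
    """Local sensitivity of an observable O(p) at a parameter point:
    Delta_i = |d ln O / d ln p_i|, estimated by central differences.
    Returns per-parameter sensitivities and their maximum."""
    params = np.asarray(params, dtype=float)
    base = observable(params)          # assumed nonzero at this point
    deltas = np.empty_like(params)
    for i, p in enumerate(params):
        h = eps * abs(p) if p != 0 else eps
        up, down = params.copy(), params.copy()
        up[i] += h
        down[i] -= h
        dO = (observable(up) - observable(down)) / (2 * h)
        deltas[i] = abs(dO * p / base)
    return deltas, deltas.max()

# Hypothetical observable that relies on a near-cancellation of two parameters.
obs = lambda p: p[0] - 0.999 * p[1]
deltas, worst = local_fine_tuning(obs, [1.0, 1.0])
print(deltas, worst)  # values ~10^3 signal strong local fine-tuning here
```

    A global measure would instead aggregate such sensitivities over the whole parameter space rather than at a single point; the sketch above captures only the local perspective.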

    Scientific Realism and Primordial Cosmology

    We discuss scientific realism from the perspective of modern cosmology, especially primordial cosmology: i.e. the cosmological investigation of the very early universe. We first (Section 2) state our allegiance to scientific realism, and discuss what insights about it cosmology might yield, as against "just" supplying scientific claims that philosophers can then evaluate. In particular, we discuss: the idea of laws of cosmology, and limitations on ascertaining the global structure of spacetime. Then we review some of what is now known about the early universe (Section 3): meaning, roughly, from a thousandth of a second after the Big Bang onwards(!). The rest of the paper takes up two issues about primordial cosmology, i.e. the very early universe, where "very early" means, roughly, much earlier (logarithmically) than one second after the Big Bang: say, less than 10^{-11} seconds. Both issues illustrate that familiar philosophical threat to scientific realism, the under-determination of theory by data---on a cosmic scale. The first issue (Section 4) concerns the difficulty of observationally probing the very early universe. More specifically, the difficulty is to ascertain details of the putative inflationary epoch. The second issue (Section 5) concerns difficulties about confirming a cosmological theory that postulates a multiverse, i.e. a set of domains (universes) each of whose inhabitants (if any) cannot directly observe, or otherwise causally interact with, other domains. This again concerns inflation, since many inflationary models postulate a multiverse. For all these issues, it will be clear that much remains unsettled, as regards both physics and philosophy. But we will maintain that these remaining controversies do not threaten scientific realism.
    Comment: 52 pages. An abridged version will appear in "The Routledge Handbook of Scientific Realism", ed. Juha Saatsi

    Three Aspects of Typicality in Multiverse Cosmology

    Extracting predictions from cosmological theories that describe a multiverse, for what we are likely to observe in our domain, is crucial to establishing the validity of these theories. One way to extract such predictions is from theory-generated probability distributions that allow for selection effects---generally expressed in terms of assumptions about anthropic conditionalization and how typical we are. In this paper, I urge three lessons about typicality in multiverse settings. (i) Because it is difficult to characterize our observational situation in the multiverse, we cannot assume that we are typical (as in the 'principle of mediocrity'); nor can we ignore the issue of typicality, for it has a measurable impact on predictions for our observations. (ii) There are spectra of assumptions about both conditionalization and typicality which lead to coincident predictions for our observations, thereby raising problems of confirmation in multiverse cosmology. Moreover, (iii) when one has the freedom to consider competing theories of the multiverse, the assumption of typicality may not lead to the highest likelihoods for our observations. These three entwined aspects of typicality imply that positive assertions about our typicality, such as the 'principle of mediocrity', are more questionable than has recently been claimed.
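
    As a toy numerical illustration of lesson (i) (not drawn from the paper; all numbers are invented), the likelihood a multiverse theory assigns to our observation shifts measurably with the typicality assumption used to weight observers across domains:

```python
import numpy as np

# Hypothetical toy multiverse: each domain realizes one value of an observable
# and hosts some number of observers.
obs_values = np.array([0.1, 0.5, 0.9])       # observable value in each domain
n_observers = np.array([1.0, 100.0, 10.0])   # observers per domain
sigma = 0.05                                  # measurement uncertainty
ours = 0.5                                    # our measured value

def likelihood(weights):
    """P(our observation | theory, a given weighting over domains)."""
    w = weights / weights.sum()
    p = np.exp(-0.5 * ((ours - obs_values) / sigma) ** 2)  # Gaussian model
    return float((w * p).sum())

print(likelihood(n_observers))                # 'typical observer' weighting
print(likelihood(np.ones_like(obs_values)))   # uniform-over-domains weighting
```

    The two weightings give likelihoods that differ by roughly a factor of three here, so the typicality assumption is doing real predictive work; it cannot simply be ignored.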

    Finely tuned models sacrifice explanatory depth

    It is commonly argued that an undesirable feature of a theoretical or phenomenological model is that salient observables are sensitive to values of parameters in the model. But in what sense is it undesirable to have such 'fine-tuning' of observables (and hence of the underlying model)? In this paper, we argue that fine-tuning can be interpreted as a shortcoming of the explanatory capacity of the model: in particular, it signals a lack of explanatory depth. In support of this argument, we develop a schema -- for (a certain class of) models that arise broadly in physical settings -- that quantitatively relates fine-tuning of observables to a lack of depth of explanations based on these models. We apply our schema in two different settings in which, within each setting, we compare the depth of two competing explanations. The first setting involves explanations for the Euclidean nature of spatial slices of the universe today: in particular, we compare an explanation provided by the big-bang model of the early 1970s (where no inflationary period is included) with an explanation provided by a general model of cosmic inflation. The second setting has a more phenomenological character, where the goal is to infer from a limited sequence of data points, using maximum entropy techniques, the underlying probability distribution from which these data are drawn. In both of these settings we find that our analysis favors the model that intuitively provides the deeper explanation of the observable(s) of interest. We thus provide an account that relates two 'theoretical virtues' of models used broadly in physical settings -- namely, a lack of fine-tuning and explanatory depth -- and argue that finely tuned models sacrifice explanatory depth.
    Comment: 21 pages, 1 figure. Further discussion of some background issues; quantitative results unchanged; updated references
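
    The first setting trades on a standard textbook contrast, sketched below in conventional notation (this is background, not the paper's schema): spatial curvature grows in a decelerating big-bang universe but is diluted exponentially during inflation.

```latex
% Density parameter in an FRW universe:
|\Omega - 1| \;=\; \frac{|k|}{a^2 H^2}
\quad\Longrightarrow\quad
\begin{cases}
|\Omega - 1| \propto t       & \text{(radiation era, no inflation)}\\[4pt]
|\Omega - 1| \propto e^{-2N} & \text{(inflation, $N$ e-folds)}
\end{cases}
```

    Without inflation, near-flatness today requires an initial curvature tuned to roughly one part in $10^{60}$ at the Planck time; with $N \gtrsim 60$ e-folds, $\Omega \to 1$ from generic initial values, which is why the inflationary explanation is intuitively the deeper one.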

    Finely tuned models sacrifice explanatory depth

    It is commonly argued that an undesirable feature of a theoretical or phenomenological model is that salient observables are sensitive to values of parameters in the model. But in what sense is it undesirable to have such 'fine-tuning' of observables (and hence of the underlying model)? In this paper, we argue that fine-tuning can be interpreted as a shortcoming of the explanatory capacity of the model: in particular, it signals a lack of explanatory depth. In support of this argument, we develop a scheme---for models that arise broadly in the sciences---that quantitatively relates fine-tuning of observables described by these models to a lack of depth of explanations based on these models. A significant aspect of our scheme is that, broadly speaking, the inclusion of larger numbers of parameters in a model will decrease the depth of the corresponding explanation. To illustrate our scheme, we apply it in two different settings in which, within each setting, we compare the depth of two competing explanations. The first setting involves explanations for the Euclidean nature of spatial slices of the universe today: in particular, we compare an explanation provided by the big-bang model of the early 1970s (namely, a cosmological model that traces the evolution of the universe back to a singularity without encountering an inflationary period) with an explanation provided by a general model of cosmic inflation. The second setting has a more phenomenological character, where the goal is to infer from a limited sequence of data points, using maximum entropy techniques, the underlying probability distribution from which these data are drawn. In both of these settings we find that our analysis favors the model that intuitively provides the deeper explanation of the observable(s) of interest. We thus provide an account that unifies two 'theoretical virtues' of models used broadly in the sciences---namely, a lack of fine-tuning and explanatory depth---to show that, indeed, finely tuned models sacrifice explanatory depth.
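
    The maximum-entropy setting can be made concrete with a generic sketch (not the paper's computation; the data, support grid, and moment constraints below are hypothetical): given sample moments of a limited data set, minimize the convex dual to recover the distribution of maximal entropy consistent with them.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=20)            # hypothetical limited data set
target = np.array([data.mean(), (data**2).mean()])  # first two sample moments

x = np.linspace(-3.0, 5.0, 800)                 # discretized support
dx = x[1] - x[0]
feats = np.stack([x, x**2])                     # feature functions f(x)

def dual(lam):
    """Convex dual of max entropy: log Z(lam) - lam . target."""
    logits = lam @ feats
    m = logits.max()                            # shift for numerical stability
    logZ = m + np.log(np.exp(logits - m).sum() * dx)
    return logZ - lam @ target

lam = minimize(dual, np.zeros(2), method="Nelder-Mead").x
p = np.exp(lam @ feats)
p /= p.sum() * dx                               # normalized max-entropy density
# With mean and second-moment constraints, p comes out Gaussian-shaped.
print("multipliers:", lam)
print("fitted mean:", (x * p).sum() * dx, "vs target:", target[0])
```

    Adding more constraints (more multipliers in lam) always fits the data at least as well, but on the scheme described above that extra flexibility is bought at the price of explanatory depth.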

    Effective field theories as a novel probe of fine-tuning of cosmic inflation

    The leading account of several salient observable features of our universe today is provided by the theory of cosmic inflation. But an important and thus far intractable question is whether inflation is generic, or whether it is finely tuned---requiring very precisely specified initial conditions. In this paper I argue that a recent, model-independent characterization of inflation, known as the 'effective field theory (EFT) of inflation', promises to address this question in a thoroughly modern and significantly more comprehensive way than in the existing literature. To motivate and provide context for this claim, I distill three core problems with the theory of inflation, which I dub the permissiveness problem, the initial conditions problem, and the multiverse problem. I argue that the initial conditions problem lies within the scope of EFTs of inflation as they are currently conceived, whereas the other two problems remain largely intractable: their solution must await a more complete description of the very early universe. I highlight recent work that addresses the initial conditions problem within the context of a dynamical systems analysis of a specific (state-of-the-art) EFT of inflation, and conclude with a roadmap for how such work might be extended to realize the promise claimed above.
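
    The dynamical-systems work mentioned at the end can be caricatured with a toy single-field model (a quadratic-potential sketch in reduced Planck units; this is not the state-of-the-art EFT the paper discusses): integrate the inflaton equations over a grid of initial conditions and count what fraction yields at least 60 e-folds.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1e-6  # inflaton mass for V(phi) = m^2 phi^2 / 2, in reduced Planck units

def rhs(t, y):
    phi, dphi, N = y
    H = np.sqrt((0.5 * dphi**2 + 0.5 * m**2 * phi**2) / 3.0)
    return [dphi, -3.0 * H * dphi - m**2 * phi, H]   # field eqn + dN/dt = H

def efolds(phi0, dphi0, t_max=1e9):
    """E-folds accumulated before slow roll ends (epsilon_H crosses 1)."""
    def end(t, y):
        phi, dphi, N = y
        H2 = (0.5 * dphi**2 + 0.5 * m**2 * phi**2) / 3.0
        return 0.5 * dphi**2 / H2 - 1.0              # epsilon_H - 1
    end.terminal, end.direction = True, 1.0
    sol = solve_ivp(rhs, (0.0, t_max), [phi0, dphi0, 0.0],
                    events=end, rtol=1e-6, atol=1e-12)
    return sol.y[2, -1]

# Grid of initial conditions (field value, field velocity).
phis = np.linspace(5.0, 20.0, 8)
dphis = np.linspace(-5e-6, 5e-6, 8)
good = sum(efolds(p, dp) >= 60.0 for p in phis for dp in dphis)
print(f"{good}/{phis.size * dphis.size} initial conditions inflate enough")
```

    A measure-aware version of this exercise, run on the phase space of a genuine EFT of inflation rather than a toy potential, is the kind of analysis the roadmap above envisages.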