Model Verification and the Likelihood Principle
The likelihood principle (LP) is typically understood as a constraint on any measure of evidence arising from a statistical experiment. It is not sufficiently often noted, however, that the LP assumes that the probability model giving rise to a particular concrete data set must be statistically adequate—it must "fit" the data sufficiently well. In practice, though, scientists must make modeling assumptions whose adequacy can nevertheless be verified using statistical tests. My present concern is whether the LP applies to these techniques of model verification. If one views model verification as part of the inferential procedures that the LP is intended to constrain, then there are certain crucial tests of model verification that no known method satisfying the LP can perform. But if one does not, the degree to which these assumptions have been verified is bracketed from the evidential evaluation under the LP. Although I conclude from this that the LP cannot be a universal constraint on any measure of evidence, proponents of the LP may hold out for a restricted version thereof, either as a kind of "ideal" or as defining one among many different forms of evidence
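For orientation, a standard textbook formulation of the LP (my gloss, with illustrative notation not taken from the paper): if two experiments bearing on the same parameter \(\theta\) produce outcomes \(x_1\) and \(x_2\) whose likelihood functions are proportional,
\[
L_1(\theta \mid x_1) = c\, L_2(\theta \mid x_2) \quad \text{for some constant } c > 0 \text{ and all } \theta,
\]
then \(x_1\) and \(x_2\) carry the same evidential import about \(\theta\). Both likelihoods presuppose that their probability models are statistically adequate for the data, which is the assumption at issue here.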
The Principle of Stability
How can inferences from models to the phenomena they represent be justified when those models represent only imperfectly? Pierre Duhem considered just this problem, arguing that inferences from mathematical models of phenomena to real physical applications must also be demonstrated to be approximately correct when the assumptions of the model are only approximately true. Despite being little discussed among philosophers, this challenge was taken up (if only sometimes implicitly) by mathematicians and physicists both contemporaneous with and subsequent to Duhem, yielding a novel and rich mathematical theory of stability with epistemological consequences
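One standard way to make the relevant notion of stability precise (a minimal sketch, not necessarily the formulation developed in the paper): for a model whose solutions \(u(t; p)\) depend on assumptions or parameters \(p\), inferences from the idealized case \(p_0\) to nearby cases are underwritten by continuous dependence,
\[
\forall\, \varepsilon > 0\ \exists\, \delta > 0:\ \|p - p_0\| < \delta \implies \sup_{t \in [0,T]} \|u(t; p) - u(t; p_0)\| < \varepsilon,
\]
so that approximately true assumptions yield approximately correct predictions over the relevant interval \([0, T]\).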
Relativistic Spacetime Structure
I survey from a modern perspective what spacetime structure there is according to the general theory of relativity, and what of it determines what else. I describe in some detail both the "standard" and various alternative answers to these questions. Besides bringing many underexplored topics to the attention of philosophers of physics and of science, metaphysicians of science, and foundationally minded physicists, I also aim to cast other, more familiar ones in a new light
Of War or Peace? Essay Review of Statistical Inference as Severe Testing
This is an essay review of Deborah G. Mayo's "Statistical Inference as Severe Testing," Cambridge University Press, 2018
On Surplus Structure Arguments
Surplus structure arguments famously identify elements of a theory regarded as excess or superfluous. If there is an otherwise analogous theory that does without such elements, a surplus structure argument prompts adopting it over the one with those elements. Despite their prominence, the form, justification, and range of applicability of such arguments are disputed. I provide an account of these, following Dasgupta ([2016]) for the form, which makes plain the role of observables and observational equivalence. However, I diverge on the justification: instead of demanding that the symmetries of the theory relevant for surplus structure arguments be defined without recourse to any interpretation of those theories, I suggest that the process of identifying what is observable and its consequences for symmetries work in dialog. They settle through a reflective equilibrium that is responsible to new experiments, arguments, and examples. Besides better aligning with paradigmatic uses of the surplus structure argument, this position also has some broader consequences for the scope of these arguments and the relationship between symmetry and interpretation more generally
On Representational Capacities, with an Application to General Relativity
Recent work on the hole argument in general relativity by Weatherall (2016b) has drawn attention to the neglected concept of (mathematical) models' representational capacities. I argue for several theses about the structure of these capacities, including that they should be understood not as many-to-one relations from models to the world, but in general as many-to-many relations constrained by the models' isomorphisms. I then compare these ideas with a recent argument by Belot (2017) for the claim that some isometries "generate new possibilities" in general relativity. Philosophical orthodoxy, by contrast, denies this. Properly understanding the role of representational capacities, I argue, reveals how Belot’s rejection of orthodoxy does not go far enough, and makes better sense of our practices in theorizing about spacetime
Computers in Abstraction/Representation Theory
Recently, Horsman et al. (2014) have proposed a new framework, Abstraction/Representation (AR) theory, for understanding and evaluating claims about unconventional or non-standard computation. Among its attractive features, the theory in particular implies a novel account of what it means to be a computer. After expounding on this account, I compare it with other accounts of concrete computation, finding that it does not quite fit in the standard categorization: while it is most similar to some semantic accounts, it is not itself a semantic account. Then I evaluate it according to the six desiderata for accounts of concrete computation proposed by Piccinini (2015). Finding that it does not clearly satisfy some of them, I propose a modification, which I call Agential AR theory, that does, yielding an account that could be a serious competitor to other leading accounts of concrete computation
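Roughly paraphrasing the framework of Horsman et al. (2014), with illustrative notation of my own: a theory \(T\) supplies a representation relation \(\mathcal{R}_T\) taking a physical system \(\mathbf{p}\) to an abstract object \(m_{\mathbf{p}}\), and the system implements an abstract operation \(C\) when the physical dynamics \(\mathbf{H}\) and the abstract dynamics commute to within an acceptable error \(\varepsilon\),
\[
\big\| \mathcal{R}_T(\mathbf{H}(\mathbf{p})) - C(\mathcal{R}_T(\mathbf{p})) \big\| \le \varepsilon.
\]
As its name suggests, the agential modification proposed here concerns the role of an agent in deploying such a representation relation.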
Stopping Rules as Experimental Design
A "stopping rule" in a sequential experiment is a rule or procedure for deciding when that experiment should end. Accordingly, the "stopping rule principle" (SRP) states that, in a sequential experiment, the evidential relationship between the final data and an hypothesis under consideration does not depend on the experiment's stopping rule: the same data should yield the same evidence, regardless of which stopping rule was used. In this essay, I reconstruct and rebut five independent arguments for the SRP. Reminding oneself that the stopping rule is a part of an experiment's design and is no more mysterious than many other design aspects helps elucidate why some of these arguments for the SRP are unsound
- …