
    Intellectual Skill and the Rylean Regress

    Intelligent activity requires the use of various intellectual skills. While these skills are connected to knowledge, they should not be identified with knowledge. There are realistic examples where the skills in question come apart from knowledge. That is, there are realistic cases of knowledge without skill, and of skill without knowledge. Whether a person is intelligent depends, in part, on whether they have these skills. Whether a particular action is intelligent depends, in part, on whether it was produced by an exercise of skill. These claims promote a picture of intelligence that is in tension with a strongly intellectualist picture, though they are not in tension with a number of prominent claims recently made by intellectualists.

    Moral Uncertainty and Its Consequences


    Intrinsic Properties and Combinatorial Principles


    Reply to Blackson

    Thomas Blackson argues that interest-relative epistemologies cannot explain the irrationality of certain choices when the agent has three possible options. I argue that his examples only refute a subclass of interest-relative theories. In particular, they are good objections to theories that say that what an agent knows depends on the stakes involved in the gambles that she faces. But they are not good objections to theories that say that what an agent knows depends on the odds involved in the gambles that she faces. Indeed, the latter class of theories does a better job than interest-invariant epistemologies of explaining the phenomena he describes.

    Notes on Some Ideas in Lloyd Humberstone’s Philosophical Applications of Modal Logic

    Lloyd Humberstone’s recently published Philosophical Applications of Modal Logic presents a number of new ideas in modal logic, as well as explication and critique of recent work by many others. We extend some of these ideas and answer some questions that are left open in the book.

    Margins and Errors

    Recently, Timothy Williamson has argued that considerations about margins of error can generate a new class of cases where agents have justified true beliefs without knowledge. I think this is a great argument, and it has a number of interesting philosophical conclusions. In this note I'm going to go over the assumptions of Williamson's argument. I'm going to argue that the assumptions which generate the justification without knowledge are true. I'm then going to go over some of the recent arguments in epistemology that are refuted by Williamson's work. And I'm going to end with an admittedly inconclusive discussion of what we can know when using an imperfect measuring device.†

    Measurement, Justification and Knowledge

    Williamson's core example involves detecting the angle of a pointer on a wheel by eyesight. For various reasons, I find it easier to think about a slightly different example: measuring a quantity using a digital measurement device. This change has some costs relative to Williamson's version; for one thing, if we are measuring a quantity it might seem that the margin of error is related to the quantity measured. If I eyeball how many stories tall a building is, my margin of error is 0 if the building is 1-2 stories tall, and over 10 if the building is as tall as the World Trade Center. But this problem is not as pressing for digital devices, which are often very unreliable for small quantities. And, at least relative to my preferences, the familiarity of quantities makes up for the loss of symmetry properties involved in angular measurement.

    To make things explicit, I'll imagine the agent S is using a digital scale. The scale has a margin of error m. That means that if the reading, i.e., the apparent mass, is a, then the agent is justified in believing that the mass is in [a − m, a + m]. We will assume that a and m are luminous; i.e., the agent knows their values, and knows she knows them, and so on. This is a relatively harmless idealisation for a; it is pretty clear what a digital scale reads.[1] It is a somewhat less plausible assumption for m. But we'll assume that S has been very diligent about calibrating her scale, and that the calibration has been recently and skillfully carried out, so in practice m can be assessed very accurately.

    We'll make three further assumptions about m that strike me as plausible, but which may, I guess, be challenged. I need to be a bit careful with terminology to set out the first one. I'll use V and v as variables that both pick out the true value of the mass. The difference is that v picks it out rigidly, while V picks out the value of the mass in any world under consideration. Think of V as shorthand for the mass of the object and v as shorthand for the actual mass of the object. (More carefully, V is a random variable, while v is a standard, rigid, variable.) Our first assumption, then, is that m is also related to what the agent can know. In particular, we'll assume that if the reading a equals v, then the agent can know that V ∈ [a − m, a + m], and can't know anything stronger than that. That is, the margin of error for justification equals, in the best case, the margin of error for knowledge. The second is that the scale has a readout that is finer than m. This is usually the case; the last digit on a digital scale is often not significant. The final assumption is that it is metaphysically possible that the scale has an error on an occasion that is greater than m. This is a kind of fallibilism assumption: saying that the margin of error is m does not mean there is anything incoherent about talking about cases where the error on an occasion is greater than m.

    This error term will do a lot of work in what follows, so I'll use e for the error of the measurement, i.e., |a − v|. For ease of exposition, I'll assume that a ≥ v, i.e., that any error is on the high side. But this is entirely dispensable, and just lets me drop some disjunctions later on.

    Now we are in a position to state Williamson's argument. Assume that on a particular occasion, 0 < e < m. Perhaps v = 830, m = 10 and a = 832, so e = 2. Williamson appears to make the following two assumptions.[2]

    1. The agent is justified in believing what they would know if appearances matched reality, i.e., if V equalled a.
    2. The agent cannot come to know something about V on the basis of a suboptimal measurement that they could not also know on the basis of an optimal measurement.

    I'm assuming here that the optimal measurement displays the correct mass. I don't assume the actual measurement is wrong. That would require saying something implausible about the semantic content of the display. It's not obvious that the display has a content that could be true or false, and if it does have such a content it might be true. (For instance, the content might be that the object on the scale has a mass near to a, or that with a high probability it has a mass near to a, and both of those things are true.) But the optimal measurement would be to have a = v, and in this sense the measurement is suboptimal.

    The argument then is pretty quick. From the first assumption, we get that the agent is justified in believing that V ∈ [a − m, a + m]. Assume then that the agent forms this justified belief. This belief is incompatible with V ∈ [v − m, a − m). But if a equalled v, then the agent wouldn't be in a position to rule out that V ∈ [v − m, a − m). So by premise 2 she can't knowledgeably rule it out on the basis of a mismeasurement. So her belief that V ≥ a − m cannot be knowledge. So this justified true belief is not knowledge.

    If you prefer doing this with numbers, here's how the example works using the numbers above. The mass of the object is 830. So if the reading were correct, the agent would know just that the mass is between 820 and 840. The reading is 832. So she's justified in believing, and we'll assume she does believe, that the mass is between 822 and 842. That belief is incompatible with the mass being 821. But by premise 2 she can't know the mass is greater than 821. So the belief doesn't amount to knowledge, despite being justified and, crucially, true. After all, 830 is between 822 and 842, so her belief that the mass is in this range is true. So simple reflections on the workings of measuring devices let us generate cases of justified true beliefs that are not knowledge. I'll end this section with a couple of objections and replies.

    † Unpublished. These are some reflections on a paper Timothy Williamson gave at the 2012 CSMN/Arché epistemology conference. Thanks to Derek Ball, Herman Cappelen, Ishani Maitra, Sarah Moss and Robert Weatherson for helpful discussions, as well as audiences at Arché and Edinburgh.

    [1] This isn't always true. If a scale flickers between reading 832g and 833g, it takes a bit of skill to determine what the reading is. But we'll assume it is clear in this case. On an analogue scale, the luminosity assumption is rather implausible, since it is possible to eyeball with less than perfect accuracy how far between one marker and the next the pointer is.

    [2] I'm not actually sure whether Williamson makes the first, or thinks it is the kind of thing anyone who thinks justification is prior to knowledge should make.
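The interval arithmetic behind the numerical version of the argument can be checked mechanically. This is only an illustrative sketch; the variable names v, m, a and e follow the text, and restricting to integer masses is a simplification for illustration.

```python
# Numbers from the text: true mass v = 830, margin of error m = 10,
# reading a = 832, so the measurement error e = |a - v| = 2.
v, m, a = 830, 10, 832
e = abs(a - v)
assert 0 < e < m  # the case the argument considers: small but nonzero error

# From assumption 1, the agent's justified belief: V lies in [a - m, a + m].
justified = (a - m, a + m)   # (822, 842)

# What an optimal measurement (a = v) would let her know: V lies in [v - m, v + m].
knowable = (v - m, v + m)    # (820, 840)

# Values in [v - m, a - m) are ruled out by the justified belief,
# but could not be ruled out on an optimal measurement -- so by
# premise 2 they cannot be knowledgeably ruled out here either.
ruled_out_but_not_knowably = list(range(v - m, a - m))  # [820, 821]

# The belief is nonetheless true: the actual mass falls inside the interval.
assert justified[0] <= v <= justified[1]
```

Running the sketch confirms the gap the argument exploits: the justified interval (822-842) excludes 820 and 821, which the corresponding optimal measurement would not exclude, while the belief itself remains true of the actual mass.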

    On Uncertainty

    This dissertation looks at a set of interconnected questions concerning the foundations of probability, and gives a series of interconnected answers. At its core is a piece of old-fashioned philosophical analysis, working out what probability is, or, equivalently, investigating the semantic question: what is the meaning of ‘probability’? Like Keynes and Carnap, I say that probability is degree of reasonable belief. This immediately raises an epistemological question: which degrees count as reasonable? To solve that in its full generality would mean the end of human inquiry, so it won’t be attempted here. Rather, I will follow tradition and merely investigate which sets of partial beliefs are coherent. The standard answer to this question, commonly called the Bayesian answer, says that degrees of belief are coherent iff they form a probability function. I disagree with the way this is usually justified, but, subject to an important qualification, I accept the answer. The important qualification is that degrees of belief may be imprecise, or vague. Part one of the dissertation, chapters 1 to 6, looks largely at the consequences of this qualification for the semantic and epistemological questions already mentioned. It turns out that when we allow degrees of belief to be imprecise, we can discharge potentially fatal objections to some philosophically attractive theses. Two of these, that probability is degree of reasonable belief and that the probability calculus provides coherence constraints on partial beliefs, have already been mentioned. Others include the claim, defended in chapter 4, that chance is probability given total history. As well as these semantic and epistemological questions, studies of the foundations of probability usually include a detailed discussion of decision theory. For reasons set out in chapter 2, I deny that we can gain epistemological insights from decision theory. Nevertheless, it is an interesting field to study in its own right, and it might be expected that allowing imprecise degrees of belief would have consequences for decision theory. As I show in part two, this expectation seems to be mistaken. Chapter 9 shows that there aren’t interesting consequences of this theory for decision theory proper, and chapters 10 and 11 show that Keynes’s attempt to use imprecision in degrees of belief to derive a distinctive theory of interest rates is unsound. Chapters 7 and 8 provide a link between these two parts. In chapter 7 I look at some previous philosophical investigations into the effects of imprecision. In chapter 8 I develop what I take to be the best competitor to the theory defended here: a constructivist theory of probability. On this view degrees of belief are precise, but the relevant coherence constraint is a constructivist probability calculus. This view is, I think, mistaken, but the calculus has some intrinsic interest, and there are at least enough arguments for it to warrant a chapter-length examination.

    Luminous margins


    Epistemicism, parasites, and vague names


    Humeans Aren’t Out of their Minds

    Humeanism is “the thesis that the whole truth about a world like ours supervenes on the spatiotemporal distribution of local qualities” (Lewis 1994: 473). Since the whole truth about our world contains truths about causation, causation must be located in the mosaic of local qualities that the Humean says constitutes the whole truth about the world. The most natural ways to do this involve causation being in some sense extrinsic. To take the simplest possible Humean analysis, we might say that c causes e iff throughout the mosaic events of the same type as c are usually followed by events of type e. For short, the causal relation is the constant conjunction relation. Whether this obtains is determined by the mosaic, so this is a Humean theory, but it isn’t determined just by c and e themselves, so whether c causes e is extrinsic to the pair. Now this is obviously a bad theory of causation, but the fact that causation is extrinsic is retained even by good Humean theories of causation. John Hawthorne (2004) objects to this feature of Humeanism. I’m going to argue that his arguments don’t work, but first we need to clear up three preliminaries about causation and intrinsicness. First, my wording so far has been cagey because I haven’t wanted to say that Humeans typically take causation to be an extrinsic relation. That’s because the greatest Humean of the