
    Anankastic conditionals are still a mystery

    ‘If you want to go to Harlem, you have to take the A train’ doesn’t look special. Yet a compositional account of its meaning, and the meaning of anankastic conditionals more generally, has proven an enigma. Semanticists have responded by assigning anankastics a unique status, distinguishing them from ordinary indicative conditionals. Condoravdi & Lauer (2016) maintain instead that “anankastic conditionals are just conditionals.” I argue that Condoravdi and Lauer don’t give a general solution to a well-known problem: the problem of conflicting goals. They rely on a special, “effective preference” interpretation for want on which an agent cannot want two things that conflict with her beliefs. A general solution, though, requires that the goals cannot conflict with the facts. Condoravdi and Lauer’s view fails. Yet they show, I believe, that previous accounts fail too. Anankastic conditionals are still a mystery.

    I want to, but...

    I want to see the concert, but I don’t want to take the long drive. Both of these desire ascriptions are true, even though I believe I’ll see the concert if and only if I take the drive. Yet they, and strongly conflicting desire ascriptions more generally, are predicted incompatible by the standard semantics, given two standard constraints. There are two proposed solutions. I argue that both face problems because they misunderstand how what we believe influences what we desire. I then sketch my own solution: a coarse-worlds semantics that captures the extent to which belief influences desire. My semantics models what I call some-things-considered desire. Considering what the concert would be like, but ignoring the drive, I want to see the concert; considering what the drive would be like, but ignoring the concert, I don’t want to take the drive.

    What does decision theory have to do with wanting?

    Decision theory and folk psychology both purport to represent the same phenomena: our belief-like and desire- and preference-like states. They also purport to do the same work with these representations: explain and predict our actions. But they do so with different sets of concepts. There's much at stake in whether one of these two sets of concepts can be accounted for with the other. Without such an account, we'd have two competing representations and systems of prediction and explanation, a dubious dualism. Folk psychology structures our daily lives and has proven fruitful in the study of mind and ethics, while decision theory is pervasive in various disciplines, including the quantitative social sciences, especially economics, and philosophy. My interest is in accounting for folk psychology with decision theory -- in particular, for belief and wanting, which decision theory omits. Many have attempted this task for belief. (The Lockean Thesis says that there is such an account.) I take up the parallel task for wanting, which has received far less attention. I propose necessary and sufficient conditions, stated in terms of decision theory, for when you're truly said to want; I give an analogue of the Lockean Thesis for wanting. My account is an alternative to orthodox accounts that link wanting to preference (e.g. Stalnaker (1984), Lewis (1986)), which I argue are false. I argue further that want ascriptions are context-sensitive. My account explains this context-sensitivity, makes sense of conflicting desires, and accommodates phenomena that motivate traditional theses on which 'want' has multiple senses (e.g. all-things-considered vs. pro tanto).
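    For the belief case mentioned above, the Lockean Thesis is standardly stated in one line (a standard textbook formulation, not taken from this abstract; $\mathrm{Cr}_S$ is the agent's credence function and $\theta$ a threshold):

    ```latex
    S \text{ believes that } p \iff \mathrm{Cr}_S(p) \geq \theta,
    \qquad \text{for some threshold } \tfrac{1}{2} < \theta \leq 1.
    ```

    The parallel task for wanting, as described here, is to find an analogous biconditional for ⌜S wants p⌝ whose right-hand side uses only decision-theoretic resources such as credences and utilities.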

    (Counter)factual want ascriptions and conditional belief

    What are the truth conditions of want ascriptions? According to a highly influential and fruitful approach, championed by Heim (1992) and von Fintel (1999), the answer is intimately connected to the agent’s beliefs: ⌜S wants p⌝ is true iff within S’s belief set, S prefers the p worlds to the ~p worlds. This approach faces a well-known and as-yet unsolved problem, however: it makes the entirely wrong predictions with what we call '(counter)factual want ascriptions', wherein the agent either believes p or believes ~p—e.g., ‘I want it to rain tomorrow and that is exactly what is going to happen’ or ‘I want this weekend to last forever but of course it will end in a few hours’. We solve this problem. The truth conditions for want ascriptions are, we propose, connected to the agent’s conditional beliefs. We bring out this connection by pursuing a striking parallel between (counter)factual and non-(counter)factual want ascriptions on the one hand and counterfactual and indicative conditionals on the other.
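    Schematically, the belief-relative truth conditions summarized above can be rendered as follows (a rough sketch in our own notation; $\mathrm{Dox}_S$ is $S$'s belief set and $\succ_S$ the agent's preference ordering):

    ```latex
    \llbracket S \text{ wants } p \rrbracket = 1
    \iff
    \{\, w \in \mathrm{Dox}_S : p \text{ is true at } w \,\}
    \succ_S
    \{\, w \in \mathrm{Dox}_S : p \text{ is false at } w \,\}
    ```

    When $S$ already believes $p$ (or believes $\neg p$), one of the two compared sets is empty and the comparison is vacuous, which is one way to see why (counter)factual want ascriptions are troublesome for the belief-relative approach.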

    Algorithmic neutrality

    Bias infects the algorithms that wield increasing control over our lives. Predictive policing systems overestimate crime in communities of color; hiring algorithms dock qualified female candidates; and facial recognition software struggles to recognize dark-skinned faces. Algorithmic bias has received significant attention. Algorithmic neutrality, in contrast, has been largely neglected. Algorithmic neutrality is my topic. I take up three questions. What is algorithmic neutrality? Is algorithmic neutrality possible? When we have an eye to algorithmic neutrality, what can we learn about algorithmic bias? To answer these questions in concrete terms, I work with a case study: search engines. Drawing on work about neutrality in science, I say that a search engine is neutral only if certain values—like political ideologies or the financial interests of the search engine operator—play no role in how the search engine ranks pages. Search neutrality, I argue, is impossible. Its impossibility seems to threaten the significance of search bias: if no search engine is neutral, then every search engine is biased. To defuse this threat, I distinguish two forms of bias—failing-on-its-own-terms bias and other-values bias. This distinction allows us to make sense of search bias—and capture its normative complexion—despite the impossibility of neutrality.

    Desiderative Lockeanism

    According to the Desiderative Lockean Thesis, there are necessary and sufficient conditions, stated in the terms of decision theory, for when one is truly said to want. I advance a new Desiderative Lockean view. My view is distinctive in being doubly context-sensitive. Want ascriptions exhibit a remarkable context-sensitivity: what a person is truly said to want varies by context in a variety of ways, a fact that has not been fully appreciated. Other Desiderative Lockeans attempt to capture the context-sensitivity in want ascriptions by positing a single context-sensitive parameter. I posit two. Only with a doubly context-sensitive view can we explain a range of facts that go unexplained by all other Desiderative Lockean views.

    Getting what you want

    It is commonly accepted that if an agent wants p, then she has a desire that is satisfied in exactly the worlds where p is true. Call this the ‘Satisfaction-is-Truth Principle’. We argue that this principle is false: an agent may want p without having a desire that is satisfied when p obtains in any old way. For example, Millie wants to drink milk but does not have a desire that is satisfied when she drinks spoiled milk. Millie has a desire whose satisfaction conditions are what we call ways-specific. Fara (Philos Perspect 17(1):141–163, 2003, Noûs 47(2):250–272, 2013) and Lycan (Philos Perspect 26(1):201–215, 2012, In what sense is desire a propositional attitude?, Unpublished manuscript) have also argued for this conclusion, but their claims about desire satisfaction rest solely on contested intuitions about when agents get what they want. We set these intuitions to one side, instead arguing that desire satisfaction is ways-specific by appealing to the dispositional role of desire. Because agents are disposed to satisfy their desires, dispositions provide important evidence about desire satisfaction. Our argument also provides new insight into the dispositional role of desire satisfaction.
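    The Satisfaction-is-Truth Principle under discussion can be written schematically (our notation, not the authors'; $\mathrm{Sat}(D)$ denotes the set of worlds at which desire $D$ is satisfied):

    ```latex
    S \text{ wants } p \;\rightarrow\;
    \exists D \,\big[\, S \text{ has } D \;\wedge\;
    \mathrm{Sat}(D) = \{\, w : p \text{ is true at } w \,\} \,\big]
    ```

    On the ways-specific view sketched in this abstract, the identity in the consequent fails: Millie's desire is satisfied only at worlds where she drinks unspoiled milk, a proper subset of the milk-drinking worlds.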

    We might be afraid of black-box algorithms

    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias, and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. In ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI,’ Juan Durán and Karin Jongsma seek to allay such fears. While we find some of their arguments compelling, we still see reasons for fear.

    FairTargetSim: An Interactive Simulator for Understanding and Explaining the Fairness Effects of Target Variable Definition

    Machine learning requires defining one's target variable for predictions or decisions, a process that can have profound implications for fairness: biases are often encoded in the target variable definition itself, before any data collection or training. We present an interactive simulator, FairTargetSim (FTS), that illustrates how target variable definition impacts fairness. FTS is a valuable tool for algorithm developers, researchers, and non-technical stakeholders. FTS illustrates this with a case study of algorithmic hiring, built on real-world data and user-defined target variables. FTS is open-source and available at: http://tinyurl.com/ftsinterface. The video accompanying this paper is here: http://tinyurl.com/ijcaifts.