
    Epistemic Norms and Epistemic Accountability

    Everyone agrees that not all norms that govern belief and assertion are epistemic. But not enough attention has been paid to distinguishing epistemic norms from others. Norms in general differ from merely evaluative standards in virtue of the fact that it is fitting to hold subjects accountable for violating them, provided they lack an excuse. Different kinds of norm are most readily distinguished by their distinctive mode of accountability. My thesis is roughly that a norm is epistemic if and only if its violation makes it fitting to reduce epistemic trust in the subject, even if there is no doubt about their sincerity, honesty, or other moral virtues. That is, violations of epistemic norms don’t merit resentment or other forms of blame, but rather a deduction of credibility points in internal scorekeeping and related attitudinal and behavioral changes. As Fricker’s work on epistemic injustice shows, such distrust is undesirable from the point of view of an epistemic agent. Consequently, when one manifests epistemic distrust towards a subject in suitable circumstances, it amounts to a way of holding her accountable. Since this form of accountability involves no opprobrium, there is good reason to think it is not linked to voluntary control in the same way as moral accountability. Finally, I make use of this account of what makes epistemic norms distinctive to point out some faulty diagnostics in debates about norms of assertion. My aim is not to defend any substantive view, however, but only to offer tools for identifying the right kind of evidence for epistemic norms.

    Checking-in on Network Functions

    When programming network functions, changes within a packet tend to have consequences: side effects that must be accounted for by network programmers or administrators via arbitrary logic and an innate understanding of dependencies. Examples include updating checksums when a packet's contents have been modified, or adjusting the payload length field of an IPv6 header when another header is added or updated within the packet. While static typing captures interface specifications and how packet contents should behave, it does not enforce precise invariants around runtime dependencies like the examples above. Instead, during the design phase of network functions, programmers should be given an easier way to specify checks up front, without having to account for and keep track of these consequences at each and every step of the development cycle. In keeping with this view, we present a unique approach for adding and generating both static checks and dynamic contracts for specifying and checking packet processing operations. We develop our technique within an existing framework called NetBricks and demonstrate how our approach simplifies and checks common dependent packet and header processing logic that other systems take for granted, all without adding much overhead during development.
    Comment: ANRW 2019 ~ https://irtf.org/anrw/2019/program.htm
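    To make the invariant concrete, here is a minimal, self-contained Python sketch of a dynamic contract wrapped around a packet-processing step. This is not the NetBricks API: the packet type, the contract decorator, and the length check are all invented for illustration.

        # Toy illustration of a dynamic contract on a packet transformation.
        # None of these names come from NetBricks; they are invented for the sketch.
        from dataclasses import dataclass
        from functools import wraps

        @dataclass
        class Ipv6Packet:
            payload: bytes = b""
            payload_len: int = 0   # the IPv6 "payload length" header field

        def contract(check, message):
            """Wrap a packet transformation with a post-condition check."""
            def decorator(fn):
                @wraps(fn)
                def wrapper(pkt, *args, **kwargs):
                    result = fn(pkt, *args, **kwargs)
                    assert check(result), message
                    return result
                return wrapper
            return decorator

        # Post-condition: the length field must track the actual payload size.
        def length_consistent(pkt):
            return pkt.payload_len == len(pkt.payload)

        @contract(length_consistent, "payload_len out of sync with payload")
        def append_extension(pkt, ext):
            pkt.payload += ext
            pkt.payload_len += len(ext)   # dropping this line trips the contract
            return pkt

        pkt = append_extension(Ipv6Packet(), b"\x00" * 8)
        print(pkt.payload_len)  # 8

    The design point is that the invariant is stated once, in the contract, so a programmer who later edits append_extension or adds a new transformation no longer has to remember the length dependency at every mutation site; a violation fails loudly at runtime.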

    Group assertion and group silencing

    Jennifer Lackey (2018) has developed an account of the primary form of group assertion, according to which groups assert when a suitably authorized spokesperson speaks for the group. In this paper I pose a challenge for Lackey's account, arguing that it obscures the phenomenon of group silencing. This is because, in contrast to alternative approaches that view assertions (and speech acts generally) as social acts, Lackey's account implies that speakers can successfully assert regardless of how their utterances are taken up by their audiences. What reflection on group silencing shows us, I argue, is that an adequate account of group assertion needs to find a place for audience uptake.

    How to do things with modals

    Mind & Language, Volume 35, Issue 1, Pages 115-138, February 2020

    Truth Serum, Liar Serum, and Some Problems About Saying What You Think is False

    This chapter investigates the conflict between thought and speech that is inherent in lying. This is the conflict of saying what you think is false. The chapter shows how stubbornly saying what you think is false resists analysis. In traditional analyses of lying, saying what you think is false is analyzed in terms of saying something and believing that it is false. But standard cases of unconscious or divided belief challenge these analyses. Classic puzzles about belief from Gottlob Frege and Saul Kripke show that suggested amendments involving assent instead of belief do not fare better. I argue that attempts to save these analyses by appeal to guises or Fregean modes of presentation will also run into trouble. I then consider alternative approaches to untruthfulness that focus on (a) expectations for one’s act of saying/asserting and (b) the intentions involved in one’s act of saying/asserting. Here I introduce two new kinds of case, which I call “truth serum” and “liar serum” cases. Consideration of these cases reveals structural problems with intention- and expectation-based approaches as well. Taken together, the string of cases presented suggests that saying what you think is false, or being untruthful, is no less difficult and interesting a subject for analysis than lying itself. Tackling the question of what it is to say what you think is false illuminates ways in which the study of lying is intertwined with fundamental issues in the nature of intentional action.

    Going Deeper with Semantics: Video Activity Interpretation using Semantic Contextualization

    A deeper understanding of video activities extends beyond recognition of underlying concepts such as actions and objects: constructing deep semantic representations requires reasoning about the semantic relationships among these concepts, often beyond what is directly observed in the data. To this end, we propose an energy minimization framework that leverages large-scale commonsense knowledge bases, such as ConceptNet, to provide contextual cues that establish semantic relationships among entities hypothesized directly from the video signal. We express this mathematically using the language of Grenander's canonical pattern generator theory. We show that the use of prior encoded commonsense knowledge alleviates the need for large annotated training datasets and helps tackle imbalance in training. Using three publicly available datasets (Charades, the Microsoft Visual Description Corpus, and Breakfast Actions), we show that the proposed model can generate video interpretations whose quality is better than those reported by state-of-the-art approaches, which have substantial training needs. Through extensive experiments, we show that the use of commonsense knowledge from ConceptNet allows the proposed approach to handle various challenges such as training-data imbalance, weak features, and complex semantic relationships and visual scenes.
    Comment: Accepted to WACV 201
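    The contextualization idea can be pictured with a small, self-contained Python sketch: detector confidences supply unary energies, and a toy relatedness table (standing in for a knowledge base such as ConceptNet) supplies pairwise energies. The energy terms and all values below are invented for illustration; this is not the paper's Grenander-style formulation.

        # Toy sketch: re-rank hypothesized concepts with commonsense relatedness.
        # The relatedness table stands in for a knowledge base like ConceptNet.
        from itertools import combinations

        # Detector hypotheses: concept -> confidence from the video signal.
        detections = {"cutting": 0.9, "knife": 0.4, "surfboard": 0.45, "bread": 0.3}

        # Symmetric commonsense relatedness in [0, 1] (toy values).
        related = {
            frozenset(("cutting", "knife")): 0.9,
            frozenset(("cutting", "bread")): 0.7,
            frozenset(("knife", "bread")): 0.6,
        }

        def energy(labels):
            # Unary term: confident detections lower the energy.
            e = -sum(detections[l] for l in labels)
            # Pairwise term: penalize co-occurring labels the knowledge base
            # treats as unrelated; reward strongly related pairs.
            for a, b in combinations(labels, 2):
                e += 0.5 - related.get(frozenset((a, b)), 0.0)
            return e

        # Brute-force minimization over label subsets (fine at toy scale).
        subsets = [set(s) for r in range(1, len(detections) + 1)
                   for s in combinations(detections, r)]
        best = min(subsets, key=energy)
        print(best)  # {'cutting', 'knife', 'bread'}: context keeps the weak
                     # "knife", while the better-scoring but unrelated
                     # "surfboard" is dropped

    Note how the pairwise prior overrides raw confidence: "surfboard" scores higher than "knife" on its own, but without commonsense support it only raises the total energy, which is the intuition behind using prior knowledge to compensate for weak features and imbalanced training signals.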