    A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer

    Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a misunderstanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration. On this, all parties agree: in normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and that no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. This is the summary that should be presented to a court in every scientific assessment of evidential weight, together with supporting information about how it was constructed and on what it was based.

    Epistemic Akrasia and Epistemic Reasons

    It seems that epistemically rational agents should avoid incoherent combinations of beliefs and should respond correctly to their epistemic reasons. However, some situations seem to indicate that such requirements cannot be simultaneously satisfied. In such contexts, assuming that there is no unsolvable dilemma of epistemic rationality, either (i) it could be rational that one’s higher-order attitudes do not align with one’s first-order attitudes, or (ii) requirements such as responding correctly to the epistemic reasons one has are not genuine rationality requirements. This result does not square well with plausible theoretical assumptions concerning epistemic rationality. So, how do we solve this puzzle? In this paper, I suggest that an agent can always reason from infallible higher-order reasons. This provides a partial solution to the above puzzle.

    Paradigms, possibilities and probabilities: Comment on Hinterecker et al. (2016)

    Hinterecker et al. (2016) compared the adequacy of the probabilistic new paradigm in reasoning with the recent revision of mental models theory (MMT) for explaining a novel class of inferences containing the modal term “possibly”. For example: the door is closed or the window is open or both; therefore, possibly the door is closed and the window is open (A or B or both; therefore, possibly(A & B)). They concluded that their results support MMT. In this comment, it is argued that Hinterecker et al. (2016) have not adequately characterised the theory of probabilistic validity (p-validity) on which the new paradigm depends. It is unclear how p-validity can be applied to these inferences, which are in any case peripheral to the theory. It is also argued that the revision of MMT is not well motivated and that its adoption leads to many logical absurdities. Moreover, the comparison is not appropriate because these theories are defined at different levels of computational explanation. In particular, revised MMT lacks a provably consistent computational-level theory that could justify treating these inferences as valid. It is further argued that the data could result from the non-colloquial locutions used to express the premises. Finally, an alternative pragmatic account is proposed, based on the idea that a conclusion is possible if what someone knows cannot rule it out. This account could be applied to the unrevised mental models theory, rendering the revision redundant.

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project, but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and predictive processing so as to link theories of technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).