
    Bridging the gap between theory and practice of approximate Bayesian inference


    Intentional Communication: Computationally Easy or Difficult?

    Human intentional communication is marked by its flexibility and context sensitivity. Hypothesized brain mechanisms can provide convincing and complete explanations of the human capacity for intentional communication only insofar as they can match the computational power required for displaying that capacity. It is thus important for cognitive neuroscience to know how computationally complex intentional communication actually is. Though it has been the subject of considerable debate, the computational complexity of communication remains unknown. In this paper we defend the position that the computational complexity of communication is not a constant, as some views of communication seem to hold, but rather a function of situational factors. We present a methodology for studying and characterizing the computational complexity of communication under different situational constraints. We illustrate our methodology for a model of the problems solved by receivers and senders during a communicative exchange. This approach opens the way to a principled identification of putative model parameters that control cognitive processes supporting intentional communication.
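
    To make the claim concrete, consider a minimal sketch (the sender model, scoring function, and state spaces below are invented for illustration, not taken from the paper): a receiver that recognizes intentions by exhaustively inverting a sender model does work proportional to the number of candidate intentions times the number of contexts it entertains, so enlarging either situational parameter inflates the computational cost.

        # Hypothetical sketch only: a brute-force receiver that inverts a
        # sender model to recognize intentions. All spaces and scores are
        # toy assumptions, not the paper's model.
        from itertools import product

        def receiver_infer(signal, intentions, contexts, sender_score):
            """Return the intention that best explains `signal`, summing the
            sender's propensity to produce it over candidate contexts."""
            best, best_score = None, float("-inf")
            for intention in intentions:
                score = sum(sender_score(signal, intention, c) for c in contexts)
                if score > best_score:
                    best, best_score = intention, score
            return best

        # Work grows with |intentions| x |contexts|: situational factors
        # directly scale the cost of intention recognition.
        intentions = range(8)
        contexts = list(product([0, 1], repeat=10))     # 2**10 contexts
        toy_score = lambda s, i, c: -abs(s - i) + 0.01 * sum(c)
        print(receiver_infer(3, intentions, contexts, toy_score))   # -> 3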

    Computational Cognitive Neuroscience

    This chapter provides an overview of the basic research strategies and analytic techniques deployed in computational cognitive neuroscience. On the one hand, “top-down” (or reverse-engineering) strategies are used to infer, from formal characterizations of behavior and cognition, the computational properties of underlying neural mechanisms. On the other hand, “bottom-up” research strategies are used to identify neural mechanisms and to reconstruct their computational capacities. Both of these strategies rely on experimental techniques familiar from other branches of neuroscience, including functional magnetic resonance imaging, single-cell recording, and electroencephalography. What sets computational cognitive neuroscience apart, however, is the explanatory role of analytic techniques from disciplines as varied as computer science, statistics, machine learning, and mathematical physics. These techniques serve to describe neural mechanisms computationally, but also to drive the process of scientific discovery by influencing which kinds of mechanisms are most likely to be identified. For this reason, understanding the nature and unique appeal of computational cognitive neuroscience requires an understanding not just of the basic research strategies involved, but also of the formal methods and tools being deployed, including those of probability theory, dynamical systems theory, and graph theory.
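
    As one illustration of these formal tools in action, the following sketch applies probability theory to a toy decoding problem (the tuning curves, spike counts, and prior are invented for illustration): posterior decoding of a stimulus from the spike counts of two Poisson neurons.

        # Minimal sketch: Bayesian decoding of a stimulus from Poisson
        # spike counts. All numbers are made-up toy values.
        import numpy as np

        stimuli = np.array([0.0, 1.0, 2.0])       # candidate stimulus values
        tuning = np.array([[2.0, 8.0, 2.0],       # expected spike counts of
                           [1.0, 3.0, 9.0]])      # two neurons per stimulus
        prior = np.array([1/3, 1/3, 1/3])

        def decode(counts):
            """Posterior over stimuli, assuming independent Poisson neurons."""
            # log P(counts | s) = sum_i [k_i * log r_i(s) - r_i(s)] + const
            loglik = (counts[:, None] * np.log(tuning) - tuning).sum(axis=0)
            post = np.exp(loglik - loglik.max()) * prior
            return post / post.sum()

        print(decode(np.array([7, 2])))   # mass concentrates on stimulus 1.0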

    Adversariality and Ideal Argumentation: A Second-Best Perspective

    What is the relevance of ideals for determining virtuous argumentative practices? According to Bailin and Battersby (2016), the telos of argumentation is to improve our cognitive systems, and adversariality plays no role in ideally virtuous argumentation. Stevens and Cohen (2019) grant that ideal argumentation is collaborative, but stress that imperfect agents like us should not aim at approximating the ideal of argumentation; accordingly, it can be virtuous for imperfect arguers like us to act as adversaries. Both camps leave many questions unanswered. First, how do we conceptualize an ideal and its approximation? Second, how can we determine what the ideal of argumentation is? Third, can we extend Stevens and Cohen’s anti-approximation argument beyond virtue theory? To answer these questions, this paper develops a second-best perspective on ideal argumentation. The Theory of the Second Best is a formal contribution to the field of utility (or welfare) optimization. Its main conclusion is that, in non-ideal circumstances, approximating ideals might be suboptimal.
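
    The second-best logic admits a compact numerical illustration (the quadratic welfare function below is invented for this sketch, not drawn from the paper): when a distortion forces one variable away from its first-best value, the best attainable setting of the remaining variable also departs from its first-best value, so holding it at the ideal is strictly worse.

        # Toy illustration of the Theory of the Second Best with a
        # made-up quadratic welfare function of two interacting variables.
        import numpy as np

        def W(x, y):
            return -(x**2 + y**2 + x * y)   # first-best optimum at x = y = 0

        # A distortion fixes x = 1, so the ideal x = 0 is unattainable.
        ys = np.linspace(-2, 2, 4001)
        y_second_best = ys[np.argmax(W(1.0, ys))]

        print(y_second_best)            # -0.5, not the ideal value 0
        print(W(1.0, y_second_best))    # -0.75 (second-best optimum)
        print(W(1.0, 0.0))              # -1.0: approximating the ideal is worse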

    Bayesian Cognitive Science, Monopoly, and Neglected Frameworks

    A widely shared view in the cognitive sciences is that discovering and assessing explanations of cognitive phenomena whose production involves uncertainty should be done in a Bayesian framework. One assumption supporting this modelling choice is that Bayes provides the best approach for representing uncertainty. However, it is unclear that Bayes possesses special epistemic virtues over alternative modelling frameworks, since a systematic comparison has yet to be attempted. It is therefore premature to assert that cognitive phenomena involving uncertainty are best explained within the Bayesian framework. As a forewarning, progress in cognitive science may be hindered if too many scientists continue to focus their efforts on Bayesian modelling, which risks monopolizing scientific resources that may be better allocated to alternative approaches.
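
    One alternative family of frameworks for representing uncertainty is imprecise probability. The sketch below is a hypothetical comparison (all likelihoods and priors are invented numbers): a single Bayesian prior yields a point posterior, whereas a credal set of priors yields posterior bounds, a genuinely different way of representing uncertainty.

        # Hedged sketch: orthodox Bayesian updating versus updating a
        # credal set of priors (imprecise probability). Toy numbers only.

        def posterior(prior_h, lik_h, lik_not_h):
            """P(H | E) by Bayes' rule for a binary hypothesis H."""
            num = prior_h * lik_h
            return num / (num + (1 - prior_h) * lik_not_h)

        lik_h, lik_not_h = 0.9, 0.3    # P(E | H), P(E | not H)

        # Orthodox Bayes: one prior, one point posterior.
        print(posterior(0.5, lik_h, lik_not_h))             # 0.75

        # Imprecise alternative: every prior in [0.2, 0.8] is admissible,
        # so the updated belief is an interval, not a point.
        lo = posterior(0.2, lik_h, lik_not_h)
        hi = posterior(0.8, lik_h, lik_not_h)
        print((round(lo, 3), round(hi, 3)))                 # (0.429, 0.923)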

    On the computational complexity of ethics: moral tractability for minds and machines

    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative ethics through the lens of computational complexity. First, we introduce computational complexity for the uninitiated reader and discuss how the complexity of ethical problems can be framed within Marr’s three levels of analysis. We then study a range of ethical problems based on consequentialism, deontology, and virtue ethics, with the aim of elucidating the complexity associated with the problems themselves (e.g., due to combinatorics, uncertainty, strategic dynamics), the computational methods employed (e.g., probability, logic, learning), and the available resources (e.g., time, knowledge, learning). The results indicate that most of the problems these normative frameworks pose lead to tractability issues in every category analyzed. Our investigation also provides several insights about the computational nature of normative ethics, including the differences between rule- and outcome-based moral strategies, and the implementation variance with regard to moral resources. We then discuss the consequences these complexity results have for the prospect of moral machines, in view of the trade-off between optimality and efficiency. Finally, we elucidate how computational complexity can be used to inform both philosophical and cognitive-psychological research on human morality by advancing the moral tractability thesis.
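
    To make the combinatorial point concrete: a naive consequentialist who scores every sequence of actions over a planning horizon faces exponentially many plans. The sketch below is a toy illustration (the actions and utility function are invented, not an implementation from the paper).

        # Toy sketch of the combinatorial blow-up behind consequentialist
        # tractability worries: exhaustively scoring all b**d action plans.
        from itertools import product

        actions = ["help", "wait", "warn"]       # branching factor b = 3

        def utility(plan):
            # Stand-in welfare score; any outcome evaluation goes here.
            return sum(1 for a in plan if a == "help")

        for horizon in range(1, 11):
            plans = product(actions, repeat=horizon)
            best = max(plans, key=utility)       # exhaustive evaluation
            print(horizon, len(actions) ** horizon, best.count("help"))
        # The number of plans triples with each step: 59,049 by horizon 10,
        # and realistic moral state spaces are vastly larger.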