    Enactive-Dynamic Social Cognition and Active Inference

    The aim of this paper is twofold: it critically analyses and rejects accounts that blend active inference, construed as a theory of mind, with enactivism; and it advances an enactivist-dynamic account of social cognition that is compatible with active inference. While some active inference models of social cognition seemingly take an enactive perspective, they explain social cognition as the attribution of mental states to other people, via representational machinery, in line with Theory of Mind (ToM). Holding both enactivism and ToM, we argue, entails contradiction and confusion, owing to two ToM assumptions rejected by enactivism: (1) that social cognition reduces to mental representation, and (2) that cognition must be hardwired with a contentful social-cognition “toolkit” or “starter pack” fueling the model-like theorising supposed in (1). The paper offers a positive alternative that avoids these contradictions and confusions. After clarifying the profile of social cognition under enactivism, i.e. without assumptions (1) and (2), the last section advances an enactivist-dynamic model of cognition as dynamic, real-time, fluid, contextual social action, in which the formalisms of dynamical systems theory explain the origins of sociocognitive novelty in developmental change, and active inference serves as a tool to explain social understanding as generalised synchronisation.
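
    Generalised synchronisation here means that one agent's state becomes a function of another's, rather than an identical copy of it. The sketch below is an illustration of that concept, not the authors' model: it uses the standard auxiliary-system test, with a Lorenz system as an arbitrary stand-in "driver" agent and a simple contracting system as the "response"; all parameter values are assumptions chosen for the demo.

    ```python
    import numpy as np

    # Auxiliary-system test for generalised synchronisation (illustrative):
    # run two copies of the response system from different initial conditions
    # under the same drive. If the copies converge, the response state has
    # become a function of the driver's state, i.e. generalised
    # synchronisation holds.

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Driver dynamics: a Lorenz system (a stand-in, not the paper's model).
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def response(r, drive, a=5.0, k=1.0):
        # Response dynamics: contracting (-a * r), driven by one driver coordinate.
        return -a * r + k * drive

    dt, steps = 0.001, 50_000
    s = np.array([1.0, 1.0, 1.0])   # driver state
    r1, r2 = 0.0, 10.0              # two response copies, initially far apart

    for _ in range(steps):
        s = s + dt * lorenz(s)      # Euler step for the driver
        r1 = r1 + dt * response(r1, s[0])
        r2 = r2 + dt * response(r2, s[0])

    # A near-zero separation means the auxiliary-system test passes.
    print("final response separation:", abs(r1 - r2))
    ```

    Because the response is contracting, the two copies forget their initial conditions and end up tracking the same function of the driver's trajectory: the "understanding" agent's dynamics become coordinated with, without copying, the other agent's dynamics.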

    Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing

    Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence. Deep learning is based on computational models that are, to a certain extent, bio-inspired, as they rely on networks of connected simple computing units operating in parallel. Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostic tools, autonomous robots, knowledgeable personal assistants, and monitoring. These successes have been mostly supported by three factors: availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequent expected modest improvements in computing power achievable by scaling, raise the question of whether the described progress will be slowed or halted due to hardware limitations. This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks. Central themes are the reliance on non-von-Neumann computing architectures and the need for developing tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in artificial intelligence, we briefly discuss an example based on reservoir computing. We conclude the review by speculating on the big-picture view of future neuromorphic and brain-inspired computing systems.
    Keywords: memristor, neuromorphic, AI, deep learning, spiking neural networks, in-memory computing
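
    The in-memory computing idea at the centre of this case is that a memristor crossbar performs an analog matrix-vector multiplication in a single step: the conductances store the weights, applied row voltages encode the input vector, and Kirchhoff's current law sums the products along each column. Below is a minimal numerical sketch of that principle, assuming idealised devices plus a simple first-order noise model; the array dimensions, conductance range, and noise level are illustrative choices, not values from the review.

    ```python
    import numpy as np

    # Idealised memristor crossbar performing matrix-vector multiplication
    # "in memory": each cross-point stores a conductance G[i, j]; applying
    # voltages V[i] to the rows yields column currents
    #   I[j] = sum_i V[i] * G[i, j]
    # via Ohm's and Kirchhoff's laws, so the full multiply-accumulate
    # happens in one analog step instead of shuttling weights to a CPU.

    rng = np.random.default_rng(0)

    n_rows, n_cols = 4, 3                          # crossbar size (illustrative)
    G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))  # conductances in siemens
    V = rng.uniform(0.0, 0.2, n_rows)              # read voltages in volts

    # Ideal readout: one analog step computes the full product V @ G.
    I_ideal = V @ G

    # Real devices vary from cycle to cycle; a common first-order assumption
    # models this as small multiplicative noise on the conductances.
    G_noisy = G * (1 + 0.05 * rng.standard_normal(G.shape))
    I_noisy = V @ G_noisy

    print("ideal column currents (A):", I_ideal)
    print("noisy column currents (A):", I_noisy)
    ```

    The gap between the ideal and noisy currents illustrates why the review stresses tailored learning and inference algorithms: computation on such substrates has to tolerate device-level variation rather than assume digital exactness.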