8,793 research outputs found

    ‘Putting apes (body and language) together again’, a review article of Savage-Rumbaugh, S., Taylor, T. J., and Shanker, S. G. Apes, Language, and the Human Mind (Oxford: 1999) and Clark, A. Being There: Putting Brain, Body, and World Together Again (MIT: 1997)

    It is argued that the account of Savage-Rumbaugh’s ape language research in Savage-Rumbaugh, Shanker and Taylor (1998. Apes, Language and the Human Mind. Oxford University Press, Oxford) is profitably read in terms of the theoretical perspective developed in Clark (1997. Being There: Putting Brain, Body and World Together Again. MIT Press, Cambridge, MA). The former work details some striking results concerning chimpanzee and bonobo subjects, trained to make use of keyboards containing ‘lexigram’ symbols. The authors, though, make heavy going of a critique of what they take to be standard approaches to understanding language and cognition in animals, and fail to offer a worthwhile theoretical position from which to make sense of their own data. The achievements of Savage-Rumbaugh’s non-human subjects suggest that language ability need not be explained by reference to specialised brain capacities. The contribution made by Clark’s work is to show the range of ways in which cognition exploits bodily and environmental resources. This model of ‘distributed’ cognition helps make sense of the lexigram activity of Savage-Rumbaugh’s subjects, and points to a re-evaluation of the language behaviour of humans.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a way to test and constrain the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
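    As an informal aid to the mechanism sketched above, the following is a minimal Python illustration of one level of hierarchical predictive coding: a higher level sends a feedback prediction of lower-level activity, the mismatch (prediction error) is passed back up, and the higher-level estimate is revised to reduce that error. The linear generative mapping, layer sizes, and learning rate are assumptions made for this illustration, not the specific models the commentary discusses.

        import numpy as np

        def predictive_coding_step(sensory_input, belief, W, lr=0.02):
            """One belief update in a two-level predictive hierarchy (illustrative)."""
            prediction = W @ belief               # top-down (feedback) prediction
            error = sensory_input - prediction    # bottom-up prediction error
            belief = belief + lr * (W.T @ error)  # revise belief to shrink the error
            return belief, error

        rng = np.random.default_rng(0)
        W = rng.normal(size=(8, 4))   # assumed generative weights (hypothetical)
        belief = np.zeros(4)          # higher-level estimate
        x = rng.normal(size=8)        # stand-in "sensory" input
        for _ in range(200):
            belief, error = predictive_coding_step(x, belief, W)
        print("residual prediction error norm:", np.linalg.norm(error))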

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness,” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).

    Metacognition and Reflection by Interdisciplinary Experts: Insights from Cognitive Science and Philosophy

    Interdisciplinary understanding requires integration of insights from different perspectives, yet it appears questionable whether disciplinary experts are well prepared for this. Indeed, psychological and cognitive scientific studies suggest that expertise can be disadvantageous: experts are often more biased than non-experts, for example, or fixed on certain approaches, and less flexible in novel situations or situations outside their domain of expertise. An explanation is that experts’ conscious and unconscious cognition and behavior depend upon their learning and acquisition of a set of mental representations or knowledge structures. Compared to beginners in a field, experts have assembled a much larger set of representations that are also more complex, facilitating fast and adequate perception and response in relevant situations. This article argues that metacognition should be employed to mitigate such disadvantages of expertise: by metacognitively monitoring and regulating their own cognitive processes and representations, experts can prepare themselves for interdisciplinary understanding. Interdisciplinary collaboration is further facilitated by team metacognition about the team, its tasks, process, goals, and the representations developed in the team. Drawing attention to the need for metacognition, the article explains how philosophical reflection on the assumptions involved in different disciplinary perspectives must also be considered, in a process complementary to metacognition and not completely overlapping with it. (Disciplinary assumptions are here understood as determining and constraining how the complex mental representations of experts are chunked and structured.) The article concludes with a brief reflection on how the process of Reflective Equilibrium should be added to the processes of metacognition and philosophical reflection in order for experts involved in interdisciplinary collaboration to reach a justifiable and coherent form of interdisciplinary integration. An Appendix of “Prompts or Questions for Metacognition” that can elicit metacognitive knowledge, monitoring, or regulation in individuals or teams is included at the end of the article.

    Computational and Robotic Models of Early Language Development: A Review

    We review computational and robotics models of early language learning and development. We first explain why and how these models are used to better understand how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including modeling of large-scale empirical data about language acquisition in real-world environments. Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity. Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.
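    As a concrete illustration of one mechanism reviewed here, cross-situational statistical learning, the sketch below implements a deliberately simplified count-based associative learner in Python. The chapter surveys richer Bayesian and neural-network formulations; the learner, example words, and referents below are assumptions made for the illustration only.

        from collections import defaultdict

        def learn(situations):
            """Accumulate word-referent co-occurrence counts across ambiguous situations."""
            counts = defaultdict(lambda: defaultdict(int))
            for words, referents in situations:
                for w in words:
                    for r in referents:
                        counts[w][r] += 1   # every co-present pair gains a little evidence
            return counts

        def best_referent(counts, word):
            """Guess the referent most often co-present with the word."""
            return max(counts[word], key=counts[word].get)

        # No single situation disambiguates a word, but statistics across situations do.
        situations = [
            (["ball", "dog"], ["BALL", "DOG"]),
            (["ball", "cup"], ["BALL", "CUP"]),
            (["dog", "cup"], ["DOG", "CUP"]),
        ]
        counts = learn(situations)
        print(best_referent(counts, "ball"))   # -> BALL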

    Stylistic Creativity in the Utilization of Management Tools

    We analyze the role of management instruments in the development of collective activity and in the dynamics of organization, drawing on pragmatic and semiotic theories. In dualist representation-based theories (rationalism, cognitivism), instruments are seen as symbolic reflections of situations, which enable actors to translate their complex concrete activities into computable models. In interpretation-based theories (pragmatism, activity theory, situated cognition), instruments are viewed as signs interpreted by actors to make sense of their collective activity, in an ongoing and situated manner. Instruments combine objective artefacts and interpretive schemes of utilization. They constrain interpretation and utilization, but do not completely determine them: they define genus (generic classes) of collective activity, but they leave space for individual or local interpretive schemes and stylistic creation in using them. A major part of organizational dynamics takes place in the permanent interplay between instrumental genus and styles. Whereas representation-based theories can be acceptable approximations in stable and reasonably simple organizational settings, interpretation-based theories make uncertain and complex situations more intelligible. They view emotions and creativity as a key part of the interpretive process, rather than as external biases of a rational modelling process. For future research, we wish to study how interpretation-based theories should impact managerial practices and improve not only the intelligibility but also the actionability of instruments and situations. Keywords: Collective Activity; Genus; Instruments; Interpretation; Management Instruments; Performance Management; Pragmatism; Semiotics; Style.

    Talking Nets: A Multi-Agent Connectionist Approach to Communication and Trust between Individuals

    A multi-agent connectionist model is proposed that consists of a collection of individual recurrent networks that communicate with each other, and as such is a network of networks. The individual recurrent networks simulate the processes of information uptake, integration, and memorization within individual agents, while the communication of beliefs and opinions between agents is propagated along connections between the individual networks. A crucial aspect of belief updating based on information from other agents is trust in the information provided. In the model, trust is determined by the consistency of incoming information with the receiving agent’s existing beliefs, and results in changes to the connections between individual networks, called trust weights. Activation spreading and weight change between individual networks are thus analogous to standard connectionist processes, although trust weights take on a specific function: they lead to the selective propagation, and thus the filtering out, of less reliable information, and they implement Grice’s (1975) maxims of quality and quantity in communication. The unique contribution of communicative mechanisms beyond the intra-personal processing of individual networks was explored in simulations of key phenomena involving persuasive communication and polarization, lexical acquisition, the spreading of stereotypes and rumors, and the failure to share unique information in group decisions.
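    As an informal illustration of the trust-weight mechanism described above, the sketch below shrinks the model to two agents with scalar beliefs in [0, 1] and a single trust weight. The scalar beliefs, learning rates, and consistency measure are simplifying assumptions for this illustration, not the published Talking Nets equations.

        def communicate(receiver_belief, sender_belief, trust, lr_belief=0.3, lr_trust=0.2):
            """Receiver updates a belief from a sender, gated by trust.

            Trust is then revised toward the consistency between the communicated
            belief and the receiver's own belief, so persistently inconsistent
            sources are progressively filtered out.
            """
            # Belief update: incoming information is weighted by current trust.
            receiver_belief += lr_belief * trust * (sender_belief - receiver_belief)
            # Trust update: consistency is high when the two beliefs agree.
            consistency = 1.0 - abs(sender_belief - receiver_belief)
            trust += lr_trust * (consistency - trust)
            return receiver_belief, trust

        belief, trust = 0.2, 0.5
        for _ in range(10):
            belief, trust = communicate(belief, sender_belief=0.9, trust=trust)
        print(round(belief, 2), round(trust, 2))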