    What Can Information Encapsulation Tell Us About Emotional Rationality?

    What can features of cognitive architecture, e.g. the information encapsulation of certain emotion-processing systems, tell us about emotional rationality? de Sousa proposes the following hypothesis: “the role of emotions is to supply the insufficiency of reason by imitating the encapsulation of perceptual modes” (de Sousa 1987: 195). Very roughly, emotion processing can sometimes occur in a way that is insensitive to what an agent already knows, and such processing can assist reasoning by restricting the response-options she considers. This paper provides an exposition and assessment of de Sousa’s hypothesis. I argue that information encapsulation is not essential to emotion-driven reasoning, as emotions can determine the relevance of response-options even without being encapsulated. However, encapsulation can still assist reasoning by restricting response-options more efficiently, and in a way that ensures the options emotions deem relevant are not overridden by what the agent knows. I end by briefly explaining why this very feature also helps explain how emotions can, on occasion, hinder reasoning.

    Does modularity undermine the pro‐emotion consensus?

    There is a growing consensus that emotions contribute positively to human practical rationality. While arguments that defend this position often appeal to the modularity of emotion-generation mechanisms, such arguments are susceptible to the criticism, e.g. by Jones (2006), that emotional modularity supports pessimism about the prospects of emotions contributing positively to practical rationality here and now. This paper responds to that criticism by demonstrating how models of emotion processing can accommodate the sorts of cognitive influence required to make the pro-emotion position plausible whilst exhibiting key elements of modularity.

    Modularity and the predictive mind

    Modular approaches to the architecture of the mind claim that some mental mechanisms, such as sensory input processes, operate in special-purpose subsystems that are functionally independent from the rest of the mind. This assumption of modularity seems to be in tension with recent claims that the mind has a predictive architecture. Predictive approaches propose that both sensory processing and higher-level processing are part of the same Bayesian information-processing hierarchy, with no clear boundary between perception and cognition. Furthermore, it is not clear how any part of the predictive architecture could be functionally independent, given that each level of the hierarchy is influenced by the level above. Both the assumption of continuity across the predictive architecture and the seeming non-isolability of its parts appear to be at odds with the modular approach. I explore and ultimately reject the predictive approach’s apparent commitments to continuity and non-isolation. I argue that predictive architectures can be modular architectures, and that we should in fact expect predictive architectures to exhibit some form of modularity.

    On the automaticity of language processing

    People speak and listen to language all the time. Given this high frequency of use, it is often suggested that at least some aspects of language processing are highly overlearned and therefore occur “automatically”. Here we critically examine this suggestion. We first sketch a framework that views automaticity as a set of interrelated features of mental processes and a matter of degree, rather than a single feature that is all-or-none. We then apply this framework to language processing. To do so, we carve up the processes involved in language use according to (a) whether language processing takes place in monologue or dialogue, (b) whether the individual is comprehending or producing language, (c) whether the spoken or written modality is used, and (d) the linguistic processing level at which they occur, that is, phonology, the lexicon, syntax, or conceptual processes. This exercise suggests that while conceptual processes are relatively non-automatic (as is usually assumed), there is also considerable evidence that lower-level syntactic and lexical processes are not fully automatic. We close by discussing entrenchment as a set of mechanisms underlying automatization.

    Connectionist natural language parsing

    The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar rules. This review also considers the extent to which connectionist parsers offer computational models of human sentence processing and provide plausible accounts of psycholinguistic data. In considering these issues, special attention is paid to the level of realism, the nature of the modularity, and the type of processing found in a wide range of parsers.

    Literal Perceptual Inference

    In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it is defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.