
    The nature of the animacy organization in human ventral temporal cortex

    The principles underlying the animacy organization of the ventral temporal cortex (VTC) remain hotly debated, with recent evidence pointing to an animacy continuum rather than a dichotomy. What drives this continuum? According to the visual categorization hypothesis, the continuum reflects the degree to which animals contain animal-diagnostic features. By contrast, the agency hypothesis posits that the continuum reflects the degree to which animals are perceived as (social) agents. Here, we tested both hypotheses with a stimulus set in which visual categorizability and agency were dissociated based on representations in convolutional neural networks and behavioral experiments. Using fMRI, we found that visual categorizability and agency explained independent components of the animacy continuum in VTC. Modeled together, they fully explained the animacy continuum. Finally, clusters explained by visual categorizability were localized posterior to clusters explained by agency. These results show that multiple organizing principles, including agency, underlie the animacy continuum in VTC.
    Comment: 16 pages, 5 figures, code and data at https://doi.org/10.17605/OSF.IO/VXWG9. Update: added supplementary results and edited abstract.
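    The dissociation method lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' pipeline) of how per-image visual categorizability might be scored as the signed distance from an animal-versus-inanimate decision boundary in a CNN feature space. The feature matrix is stubbed with random numbers where a real analysis would use activations from a pretrained network; all names and dimensions are illustrative assumptions.

```python
# Hypothetical sketch: scoring per-image "visual categorizability" as distance
# from an animal-vs-inanimate decision boundary in a CNN feature space.
# Feature extraction is stubbed with random vectors; in a real analysis the
# features would come from a pretrained CNN's penultimate layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, n_features = 200, 512                      # placeholder dimensions
features = rng.normal(size=(n_images, n_features))   # stand-in for CNN activations
is_animal = rng.integers(0, 2, size=n_images)        # stand-in labels

clf = LogisticRegression(max_iter=1000).fit(features, is_animal)

# Signed distance from the decision boundary: larger values for animal images
# would indicate more animal-diagnostic features, i.e. higher categorizability.
categorizability = clf.decision_function(features)
print(categorizability[:5])
```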

    Action in Mind: Neural Models for Action and Intention Perception

    To notice, recognize, and ultimately perceive others' actions, and to discern the intentions behind those observed actions, is an essential skill for social communication and markedly improves the chances of survival. Encountering dangerous behavior, for instance from a person or an animal, requires an immediate and suitable reaction. In addition, as social creatures, we need to perceive, interpret, and correctly judge other individuals' actions as a fundamental skill for our social life. In other words, our survival and success in adaptive social behavior and nonverbal communication depend heavily on our ability to thrive in complex social situations. Humans can spontaneously decode animacy and social interactions even from strongly impoverished stimuli; this ability is a fundamental part of human experience that develops early in infancy and is shared with other primates. It is also well established that perceptual and motor representations of actions are tightly coupled and share common mechanisms. This coupling between action perception and action execution plays a critical role in action understanding, as postulated in various studies, and is potentially important for social cognition. The interaction is likely mediated by action-selective neurons in the superior temporal sulcus (STS), premotor cortex, and parietal cortex. The STS and the temporoparietal junction (TPJ) have also been identified as coarse neural substrates for the processing of social interaction stimuli. Despite this localization, the exact underlying neural circuits of this processing remain unclear. The aim of this thesis is to understand the neural mechanisms behind action perception coupling and to investigate further how the human brain perceives different classes of social interactions. To achieve this goal, we first introduce a neural model that provides a unifying account of multiple experiments on the interaction between action execution and action perception. The model correctly reproduces the interactions between action observation and execution in several experiments and provides a link towards electrophysiologically detailed models of the relevant circuits. It might thus provide a starting point for the detailed quantitative investigation of how motor plans interact with perceptual action representations at the level of single-cell mechanisms. Second, we present a simple neural model that reproduces some of the key observations in psychophysical experiments on the perception of animacy and social interactions from stimuli. Even in this simple form, the model shows that animacy and social interaction judgments might partly be derived by very elementary operations in hierarchical neural vision systems, without the need for sophisticated or accurate probabilistic inference.

    Impaired identification of impoverished animate but not inanimate objects in adults with high-functioning autism spectrum disorder

    The ability to identify animate and inanimate objects from impoverished images was investigated in adults with high-functioning autism spectrum disorder (HFA) and in matched typically developed (TD) adults, using a newly developed task. Consecutive frames were presented containing Gabor elements that slightly changed orientation from one frame to the next. For a subset of elements, the changes were such that these elements gradually formed the outline of an object. Elements enclosed within the object's outline gradually adopted one and the same orientation, while elements outside it adopted random orientations. The subjective experience was that of an object appearing out of a fog. The HFA group required significantly more frames to identify the impoverished objects than the TD group. Crucially, this difference depended on the nature of the objects: the HFA group required significantly more frames to identify animate objects, whereas for the identification of inanimate objects the groups did not differ. The groups also did not differ with respect to the number and type of incorrect guesses they made. The results suggest a specific impairment in individuals with HFA in identifying animate objects. A number of possible explanations are discussed.
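    For concreteness, the snippet below sketches the frame-generation logic described above; it is an illustrative reconstruction, not the authors' stimulus code. Each Gabor element is reduced to a single orientation value, elements inside an assumed object mask drift toward a shared orientation over frames while outside elements jitter randomly, and rendering of the actual Gabor patches is omitted; the grid size, mask shape, and step size are assumptions.

```python
# Sketch of the stimulus logic: orientations inside a target mask converge
# on one value across frames; outside orientations change randomly.
import numpy as np

rng = np.random.default_rng(1)
grid = 20                              # assumed 20 x 20 element grid
n_frames = 60
target_orientation = 45.0              # degrees, shared by inside elements

# Hypothetical object mask: a centered square marks the "inside" elements.
mask = np.zeros((grid, grid), dtype=bool)
mask[6:14, 6:14] = True

orientations = rng.uniform(0.0, 180.0, size=(grid, grid))
frames = []
for _ in range(n_frames):
    # Inside elements take a small step toward the common orientation...
    orientations[mask] += 0.05 * (target_orientation - orientations[mask])
    # ...while outside elements jitter randomly and never form structure.
    orientations[~mask] += rng.uniform(-5.0, 5.0, size=int((~mask).sum()))
    frames.append(orientations.copy() % 180.0)

# Orientation spread inside the mask shrinks as the object "emerges from fog".
print(frames[0][mask].std(), frames[-1][mask].std())
```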

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, which is central for understanding the mechanisms that implement cognition in general and conceptual processing in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations of how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.

    How instructions modify perception: An fMRI study investigating brain areas involved in attributing human agency

    Behavioural studies suggest that the processing of movement stimuli is influenced by beliefs about the agency behind these actions. The current study examined how activity in social and action-related brain areas differs when participants were instructed that identical movement stimuli were either human or computer generated. Participants viewed a series of point-light animation figures derived from motion-capture recordings of a moving actor, while functional magnetic resonance imaging (fMRI) was used to monitor patterns of neural activity. The stimuli were scrambled to produce a range of stimulus realism categories; furthermore, before each trial participants were told that they were about to view either a recording of human movement or a computer-simulated pattern of movement. Behavioural results suggested that agency instructions influenced participants' perceptions of the stimuli. The fMRI analysis indicated different functions within the paracingulate cortex: ventral paracingulate cortex was more active for human- than for computer-agency instructed trials across all stimulus types, whereas dorsal paracingulate cortex was activated more highly in conflicting conditions (human instruction with low-realism stimuli, or vice versa). These findings support the hypothesis that the ventral paracingulate cortex encodes stimuli deemed to be of human origin, whereas the dorsal paracingulate cortex is involved more in ascertaining human or intentional agency during the observation of ambiguous stimuli. Our results highlight the importance of prior instructions or beliefs on movement processing and the role of the paracingulate cortex in integrating prior knowledge with bottom-up stimuli.
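    The scrambling manipulation can be illustrated with a short sketch. The snippet below is a hypothetical reconstruction rather than the study's code: each dot in a point-light display keeps its own trajectory, but its starting position is displaced by a random offset, with a scramble parameter controlling how much of the offset is applied and thereby producing a range of realism levels; the dot count, frame count, and offset range are assumptions.

```python
# Hypothetical sketch of position-scrambling for point-light stimuli.
# `motion` holds x/y coordinates of each dot over time (dots x frames x 2);
# random-walk data stand in for real motion-capture recordings.
import numpy as np

rng = np.random.default_rng(3)
n_dots, n_frames = 13, 120             # assumed display size
motion = np.cumsum(rng.normal(scale=0.5, size=(n_dots, n_frames, 2)), axis=1)

def scramble(motion, amount, rng):
    """Displace each dot's whole trajectory by `amount` times a random offset.

    amount = 0 leaves the figure intact (high realism); amount = 1 fully
    destroys the global form while preserving each dot's local motion.
    """
    offsets = rng.uniform(-20.0, 20.0, size=(motion.shape[0], 1, 2))
    return motion + amount * offsets

# A range of realism categories, from intact to fully scrambled.
realism_levels = {a: scramble(motion, a, rng) for a in (0.0, 0.33, 0.66, 1.0)}
print({k: v.shape for k, v in realism_levels.items()})
```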

    Action perception is intact in autism spectrum disorder


    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    The Agent Preference in Visual Event Apprehension

    A central aspect of human experience and communication is understanding events in terms of agent (“doer”) and patient (“undergoer” of action) roles. These event roles are rooted in general cognition and prominently encoded in language, with agents appearing as more salient and preferred over patients. An unresolved question is whether this preference for agents already operates during apprehension, that is, the earliest stage of event processing, and if so, whether the effect persists across different animacy configurations and task demands. Here we contrast event apprehension in two tasks and two languages that encode agents differently: Basque, a language that explicitly case-marks agents (‘ergative’), and Spanish, which does not mark agents. In two brief-exposure experiments, native Basque and Spanish speakers saw pictures for only 300 ms, and subsequently described them or answered probe questions about them. We compared eye fixations and behavioral correlates of event role extraction with Bayesian regression. Agents received more attention and were recognized better across languages and tasks. At the same time, language and task demands affected the attention to agents. Our findings show that a general preference for agents exists in event apprehension, but it can be modulated by task and language demands.
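    The analysis style reported above can be sketched in a few lines. The model below is a hedged illustration, not the authors' specification: a Bayesian logistic regression of whether the agent was recognized, with language (Basque vs. Spanish) and task (description vs. probe question) as predictors, fit with PyMC on simulated data; the variable names, priors, and effect sizes are all assumptions.

```python
# Illustrative Bayesian logistic regression of agent recognition.
# Data are simulated with an assumed overall agent-recognition advantage.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n = 400
language = rng.integers(0, 2, size=n)   # 0 = Spanish, 1 = Basque (assumed coding)
task = rng.integers(0, 2, size=n)       # 0 = description, 1 = probe question
p = 1 / (1 + np.exp(-(0.8 + 0.3 * language - 0.2 * task)))
recognized = rng.binomial(1, p)

with pm.Model():
    intercept = pm.Normal("intercept", 0.0, 1.5)
    b_lang = pm.Normal("b_language", 0.0, 1.0)
    b_task = pm.Normal("b_task", 0.0, 1.0)
    logit_p = intercept + b_lang * language + b_task * task
    pm.Bernoulli("recognized", logit_p=logit_p, observed=recognized)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=2)

# Posterior mean of the (simulated) language effect on agent recognition.
print(idata.posterior["b_language"].mean())
```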

    Naturalistic stimuli reveal a dominant role for agentic action in visual representation

    Naturalistic, dynamic movies evoke strong, consistent, and information-rich patterns of activity over a broad expanse of cortex and engage multiple perceptual and cognitive systems in parallel. The use of naturalistic stimuli enables functional brain imaging research to explore cognitive domains that are poorly sampled in highly controlled experiments. These domains include the perception and understanding of agentic action, which plays a larger role in visual representation than was appreciated from experiments using static, controlled stimuli.

    From automata to animate beings: the scope and limits of attributing socialness to artificial agents

    Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape the behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how the visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the “like me” hypothesis, and discuss the key role played by the Theory‐of‐Mind network, especially the temporal parietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long‐term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.