    An interoceptive predictive coding model of conscious presence

    We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.
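
    As an illustration of the core mechanism, the following toy sketch (ours, not from the paper) runs one interoceptive predictive-coding loop in which a top-down prediction is updated to suppress the afferent interoceptive signal, and a hypothetical "presence" index is read out from the residual prediction error; the signal values, learning rate, and readout rule are all assumptions.

```python
# Toy sketch of interoceptive predictive coding: a top-down prediction is
# updated to cancel the incoming interoceptive signal, and "presence" is
# read out as the success of that suppression. All parameters hypothetical.
import numpy as np

def interoceptive_step(signal, prediction, precision=1.0, lr=0.1):
    """Return the updated prediction and the precision-weighted error."""
    error = precision * (signal - prediction)  # interoceptive prediction error
    prediction += lr * error                   # gradient-style belief update
    return prediction, error

rng = np.random.default_rng(0)
prediction, errors = 0.0, []
for t in range(200):
    signal = 1.0 + 0.05 * rng.standard_normal()  # noisy autonomic afferent
    prediction, error = interoceptive_step(signal, prediction)
    errors.append(abs(error))

# Hypothetical readout: presence is high when the late residual error is small.
presence = 1.0 / (1.0 + np.mean(errors[-50:]))
print(f"late mean error {np.mean(errors[-50:]):.3f}, presence index {presence:.3f}")
```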

    Anatomy and computational modeling of networks underlying cognitive-emotional interaction

    The classical dichotomy between cognition and emotion equated the first with rationality or logic and the second with irrational behaviors. The idea that cognition and emotion are separable, antagonistic forces competing for dominance of mind has been hard to displace despite abundant evidence to the contrary. For instance, it is now known that a pathological absence of emotion leads to profound impairment of decision making. Behavioral observations of this kind are corroborated at the mechanistic level: neuroanatomical studies reveal that brain areas typically described as underlying either cognitive or emotional processes are linked in ways that imply complex interactions that do not resemble a simple mutual antagonism. Instead, physiological studies and network simulations suggest that top-down signals from prefrontal cortex realize "cognitive control" in part by either suppressing or promoting emotional responses controlled by the amygdala, in a way that facilitates adaptation to changing task demands. Behavioral, anatomical, and physiological data suggest that emotion and cognition are equal partners in enabling a continuum or matrix of flexible behaviors that are subserved by multiple brain regions acting in concert. Here we focus on neuroanatomical data that highlight circuitry that structures cognitive-emotional interactions by directly or indirectly linking prefrontal areas with the amygdala. We also present an initial computational circuit model, based on anatomical, physiological, and behavioral data, to explicitly frame the learning and performance mechanisms by which cognition and emotion interact to achieve flexible behavior.
    Funding: R01 MH057414 - NIMH NIH HHS; R01 NS024760 - NINDS NIH HHS
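
    To make the gating idea concrete, here is a deliberately minimal sketch (not the authors' circuit model, which is anatomically grounded): a prefrontal control gain multiplicatively suppresses or promotes a sigmoidal amygdala response depending on task demands. The gain values and response function are our assumptions.

```python
# Minimal sketch of top-down cognitive control over an emotional response:
# a prefrontal gain scales the drive to a sigmoidal amygdala response.
# Gains and the response function are illustrative assumptions.
import math

def amygdala_response(stimulus_salience, pfc_gain):
    """Sigmoid emotional response scaled by a prefrontal control gain.

    pfc_gain < 1 models suppression; pfc_gain > 1 models promotion.
    """
    drive = pfc_gain * stimulus_salience
    return 1.0 / (1.0 + math.exp(-4.0 * (drive - 0.5)))

salience = 0.8  # moderately salient emotional stimulus
for task, gain in [("emotion-irrelevant task", 0.4),
                   ("emotion-relevant task", 1.5)]:
    print(f"{task}: response = {amygdala_response(salience, gain):.2f}")
```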

    Feedback information and the reward positivity

    The reward positivity is a component of the event-related brain potential (ERP) sensitive to neural mechanisms of reward processing. Multiple studies have demonstrated that reward positivity amplitude indexes a reward prediction error signal that is fundamental to theories of reinforcement learning. However, whether this ERP component is also sensitive to richer forms of performance information important for supervised learning is less clear. To investigate this question, we recorded the electroencephalogram from participants engaged in a time estimation task in which the type of error information conveyed by feedback stimuli was systematically varied across conditions. Consistent with our predictions, we found that reward positivity amplitude decreased in relation to increasing information content of the feedback, and that reward positivity amplitude was unrelated to trial-to-trial behavioral adjustments in task performance. By contrast, a series of exploratory analyses revealed frontal-central and posterior ERP components immediately following the reward positivity that related to these processes. Taken in the context of the wider literature, these results suggest that the reward positivity is produced by a neural mechanism that motivates task performance, whereas the later ERP components apply the feedback information according to principles of supervised learning.
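
    For readers unfamiliar with the construct, the reward prediction error that the reward positivity is thought to index can be illustrated with a Rescorla-Wagner-style learner; the feedback coding, learning rate, and trial sequence below are hypothetical, not taken from the study.

```python
# Sketch of the reward prediction error (RPE) construct: on each trial the
# RPE is the difference between received feedback and the learned expectation.
# Feedback coding (1 = on time, 0 = too early/late) and alpha are assumptions.

def run_trials(feedback, alpha=0.2):
    """Track expected value v and the RPE across time-estimation trials."""
    v, rpes = 0.5, []
    for r in feedback:
        rpe = r - v        # reward prediction error
        v += alpha * rpe   # Rescorla-Wagner value update
        rpes.append(rpe)
    return rpes

print([f"{x:+.2f}" for x in run_trials([1, 0, 1, 1, 0, 1])])
```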

    Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements

    Emotion evoked by an advertisement plays a key role in influencing brand recall and eventual consumer choices. Automatic ad affect recognition has several useful applications. However, the use of content-based feature representations does not give insights into how affect is modulated by aspects such as the ad scene setting, salient object attributes, and their interactions. Nor do such approaches tell us how humans prioritize visual information for ad understanding. Our work addresses these lacunae by decomposing video content into detected objects, coarse scene structure, object statistics, and actively attended objects identified via eye-gaze. We measure the importance of each of these information channels by systematically incorporating related information into ad affect prediction models. Contrary to the popular notion that ad affect hinges on the narrative and the clever use of linguistic and social cues, we find that actively attended objects and the coarse scene structure better encode affective information than individual scene objects or conspicuous background elements.
    Comment: Accepted for publication in the Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, US
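
    The channel-importance logic can be sketched as a simple ablation: train the same affect classifier on each information channel's features and compare held-out accuracy. The sketch below uses synthetic stand-in features and scikit-learn; the channel names and effect sizes are invented for illustration.

```python
# Hypothetical ablation sketch: score each information channel by the
# cross-validated accuracy of an affect classifier trained on it alone.
# Features and labels are synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
labels = rng.integers(0, 2, n)  # binary affect label (e.g., valence)
channels = {
    "scene_structure": rng.standard_normal((n, 8)) + 0.8 * labels[:, None],
    "object_stats":    rng.standard_normal((n, 8)) + 0.3 * labels[:, None],
    "gaze_objects":    rng.standard_normal((n, 8)) + 1.0 * labels[:, None],
}

for name, feats in channels.items():
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          feats, labels, cv=5).mean()
    print(f"{name:15s} mean accuracy = {acc:.2f}")
```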

    Edge-centric Optimization of Multi-modal ML-driven eHealth Applications

    Smart eHealth applications deliver personalized and preventive digital healthcare services to clients through remote sensing, continuous monitoring, and data analytics. Smart eHealth applications sense input data from multiple modalities, transmit the data to edge and/or cloud nodes, and process the data with compute-intensive machine learning (ML) algorithms. Run-time variations in the continuous stream of noisy input data, unreliable network connections, the computational requirements of ML algorithms, and the choice of compute placement among sensor-edge-cloud layers affect the efficiency of ML-driven eHealth applications. In this chapter, we present edge-centric techniques for optimized compute placement, exploration of accuracy-performance trade-offs, and cross-layered sense-compute co-optimization for ML-driven eHealth applications. We demonstrate the practical use cases of smart eHealth applications in everyday settings through a sensor-edge-cloud framework for an objective pain assessment case study.
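
    A minimal sketch of the compute-placement decision, assuming per-tier latency and accuracy estimates (all numbers invented, not the chapter's framework): pick the lowest tier in the sensor-edge-cloud hierarchy whose estimated end-to-end latency and model accuracy satisfy the application's constraints.

```python
# Illustrative compute-placement sketch: choose the first sensor-edge-cloud
# tier meeting latency and accuracy constraints. All numbers hypothetical.

TIERS = {
    # tier: (network_ms, compute_ms, model_accuracy)
    "sensor": (0,  120, 0.81),  # tiny on-device model, no transmission
    "edge":   (15,  40, 0.88),  # mid-size model on a nearby edge node
    "cloud":  (90,  10, 0.93),  # large model, but costly transmission
}

def place(max_latency_ms, min_accuracy):
    """Return the first tier (sensor -> edge -> cloud) meeting both constraints."""
    for tier, (net_ms, comp_ms, acc) in TIERS.items():
        if net_ms + comp_ms <= max_latency_ms and acc >= min_accuracy:
            return tier
    return None  # infeasible: relax constraints or degrade gracefully

print(place(max_latency_ms=100, min_accuracy=0.85))  # -> 'edge'
print(place(max_latency_ms=60, min_accuracy=0.90))   # -> None
```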