350 research outputs found

    A common neural scale for the subjective pleasantness of different primary rewards.

    When an economic decision is taken, it is made between goals with different values, and those values must be on the same scale. Here, we used functional MRI to search for a brain region that represents the subjective pleasantness of two different rewards on the same neural scale. We found activity in the ventral prefrontal cortex that correlated with the subjective pleasantness of two fundamentally different rewards, taste in the mouth and warmth on the hand. The evidence came from two investigations: a between-group comparison of two independent fMRI studies, and a within-subject study. In the latter, we showed that neural activity in the same voxels in the ventral prefrontal cortex correlated with the subjective pleasantness of the different rewards. Moreover, the slope and intercept of the regression lines describing the relationship between activations and subjective pleasantness were highly similar for the different rewards. We also provide evidence that the activations did not simply represent multisensory integration or the salience of the rewards. The findings demonstrate the existence of a specific region in the human brain where neural activity scales with the subjective pleasantness of qualitatively different primary rewards. This suggests a principle of brain processing of importance in reward valuation and decision-making.
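The "common scale" claim rests on comparing the regression lines relating neural activation to subjective pleasantness for the two rewards. A minimal numpy sketch of that comparison, using entirely synthetic ratings and BOLD values (the shared slope of 0.5 and intercept of 0.1 are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: subjective pleasantness ratings (x) and BOLD signal
# change (y) in a ventral prefrontal voxel, for two reward types.
pleasantness_taste = rng.uniform(-2, 2, 30)
pleasantness_warmth = rng.uniform(-2, 2, 30)
bold_taste = 0.5 * pleasantness_taste + 0.1 + rng.normal(0, 0.05, 30)
bold_warmth = 0.5 * pleasantness_warmth + 0.1 + rng.normal(0, 0.05, 30)

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

s_t, i_t = fit_line(pleasantness_taste, bold_taste)
s_w, i_w = fit_line(pleasantness_warmth, bold_warmth)

# A common neural scale predicts near-identical regression lines
# for the two qualitatively different rewards.
print(f"taste:  slope={s_t:.2f}, intercept={i_t:.2f}")
print(f"warmth: slope={s_w:.2f}, intercept={i_w:.2f}")
```
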

    How the brain represents the reward value of fat in the mouth.

    The palatability and pleasantness of the sensory properties of foods drive food selection and intake and may contribute to overeating and obesity. Oral fat texture can make food palatable and pleasant. To analyze its neural basis, we correlated humans' subjective reports of the pleasantness of the texture and flavor of a high- and a low-fat food, with a vanilla or strawberry flavor, with neural activations measured using functional magnetic resonance imaging. Activity in the midorbitofrontal and anterior cingulate cortex was correlated with the pleasantness of oral fat texture, and in nearby locations with the pleasantness of flavor. The pregenual cingulate cortex showed a supralinear response to the combination of high fat and pleasant, sweet flavor, implicating it in the convergence of fat texture and flavor to produce a representation of highly pleasant stimuli. The subjective reports of oral fattiness were correlated with activations in the midorbitofrontal cortex and ventral striatum. The lateral hypothalamus and amygdala were more strongly activated by high- versus low-fat stimuli. This discovery of which brain regions track the subjective hedonic experience of fat texture will help to unravel possible differences in the neural responses of obese versus lean people to oral fat, a driver of food intake.

    Human midcingulate cortex encodes distributed representations of task progress

    The function of midcingulate cortex (MCC) remains elusive despite decades of investigation and debate. Complicating matters, individual MCC neurons respond to highly diverse task-related events, and MCC activation is reported in most human neuroimaging studies employing a wide variety of task manipulations. Here we investigate this issue by applying a model-based cognitive neuroscience approach involving neural network simulations, functional magnetic resonance imaging, and representational similarity analysis. We demonstrate that human MCC encodes distributed, dynamically evolving representations of extended, goal-directed action sequences. These representations are uniquely sensitive to the stage and identity of each sequence, indicating that MCC sustains contextual information necessary for discriminating between task states. These results suggest that standard univariate approaches for analyzing MCC function overlook the major portion of task-related information encoded by this brain area, and they point to promising new avenues for investigation.
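Representational similarity analysis, the multivariate method named above, compares the geometry of activity patterns rather than their raw values: each dataset is reduced to a representational dissimilarity matrix (RDM), and RDMs are then correlated with each other. A toy numpy sketch with synthetic patterns (the array shapes and noise level are assumptions, not the study's data):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of every pair of task states."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(patterns_a, patterns_b):
    """Correlate the upper triangles of two RDMs (second-order similarity)."""
    i, j = np.triu_indices(patterns_a.shape[0], k=1)
    va = rdm(patterns_a)[i, j]
    vb = rdm(patterns_b)[i, j]
    return np.corrcoef(va, vb)[0, 1]

rng = np.random.default_rng(1)
# Hypothetical patterns: 8 task stages x 50 voxels/units.
model_patterns = rng.normal(size=(8, 50))            # e.g. network simulation
noisy_brain = model_patterns + rng.normal(scale=0.3, size=(8, 50))
unrelated = rng.normal(size=(8, 50))

# The model RDM should match the (noisy) brain RDM far better than an
# unrelated one, even though no single voxel is compared directly.
print(rsa_score(model_patterns, noisy_brain))
print(rsa_score(model_patterns, unrelated))
```

This second-order comparison is what lets a neural-network simulation be tested against fMRI data despite the two living in entirely different measurement spaces.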

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain-computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic brain activity indicators) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to "think outside the lab". The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. The clinical and everyday uses are described with the aim of inviting readers to open their minds to imagine potential further developments.
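The pipeline described above — extract a feature from the brain signal, train a classifier on labeled mental states, then map new signals to commands — can be sketched with synthetic EEG. Everything here (the sampling rate, the alpha-band heuristic for relaxed vs. focused states, and the two command names) is an illustrative assumption, not a description of any specific BCI in the reprint:

```python
import numpy as np

FS = 128  # hypothetical sampling rate (Hz)

def band_power(signal, lo, hi):
    """Mean spectral power of a 1-D signal in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def synth_trial(alpha_amp, rng):
    """Toy 2-second EEG trial: a 10 Hz alpha rhythm plus broadband noise.
    Relaxed states are simulated with stronger alpha."""
    t = np.arange(FS * 2) / FS
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

rng = np.random.default_rng(3)
relaxed = [synth_trial(2.0, rng) for _ in range(20)]
focused = [synth_trial(0.2, rng) for _ in range(20)]

# "Train": learn an alpha-power threshold separating the two mental states.
p_relax = np.array([band_power(x, 8, 12) for x in relaxed])
p_focus = np.array([band_power(x, 8, 12) for x in focused])
threshold = (p_relax.mean() + p_focus.mean()) / 2

# "Decode": map a new trial's alpha power to a command in real time.
def decode(trial):
    return "relax-command" if band_power(trial, 8, 12) > threshold else "focus-command"

print(decode(synth_trial(2.0, rng)))
```

Real systems replace the threshold with a trained classifier and the single band-power feature with richer spatio-spectral features, but the translate-signal-to-command loop is the same.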

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily lives that are rather simple for humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain remains a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neurosciences have revealed that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, and this process has recently accelerated with the initiation of giant brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable things to inspire the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. On each topic, we review its biological background, fundamental properties, potential applications to AI, and future development. Comment: 59 pages, 5 figures, review article.
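The first of the listed principles, the attractor network, has a classic minimal instance: a Hopfield network, in which patterns stored via a Hebbian outer-product rule become fixed points of the dynamics that clean up corrupted inputs. A numpy sketch (the network size, pattern count, and corruption level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64  # binary (+/-1) neurons

# Store a few random patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(3, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(state, steps=20):
    """Synchronous Hopfield updates: the state falls into the nearest
    stored attractor (stored patterns are fixed points at this low load)."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Corrupt 10 of 64 bits of the first pattern, then let the dynamics clean it up.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1

restored = recall(probe)
print(int((restored == patterns[0]).sum()), "of", N, "bits recovered")
```

The same attractor picture, scaled up and made continuous, underlies models of working memory and associative recall discussed under this principle.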

    Novel Random Forest Methods and Algorithms for Autism Spectrum Disorders Research

    Random Forest (RF) is a flexible, easy-to-use machine learning algorithm proposed by Leo Breiman in 2001 for building a predictor ensemble from a set of decision trees grown in randomly selected subspaces of the data. Its superior prediction accuracy has made it one of the most widely used algorithms in machine learning. In this dissertation, we use the random forest as the main building block for creating a proximity matrix for multivariate matching and for diagnostic classification problems, with autism research as an exemplary application. In observational studies, matching is used to optimize the balance between treatment groups. Although many matching algorithms can achieve this goal, in some fields matching faces its own challenges: datasets with small sample sizes and limited control reservoirs are prone to this issue. This problem applies to many ongoing research fields, such as autism spectrum disorder (ASD). We are interested in eliminating the effect of undesirable variables using two types of algorithms: 1:k nearest-neighbor matching and full matching. We first introduce three different 1:k nearest-neighbor matching algorithms and two full-matching-based methods to compare group-wise versus pairwise matching for creating an optimal balance and sample size. These proposed methods are applied to a dataset from the Brain Development Imaging Lab (BDIL) at San Diego State University. Next, we introduce the iterMatch R package. This package finds a 1:1 matched subsample of the data that is balanced on all matching variables while handling missing values in an iterative manner; otherwise, missing values in the dataset must be imputed, or only complete cases can be considered in matching. Losing data because of the limitations of a matching algorithm can decrease the power of the study as well as omit important information.
    Beyond introducing the iterMatch package, we discuss tuning its input parameters using medium and large datasets from the Autism Brain Imaging Data Exchange (ABIDE). We then propose two mixed-effects random-forest-based classification algorithms applicable to multi-site (clustered) data, using resting-state fMRI (rs-fMRI) and structural MRI (sMRI). These algorithms internally control for the random effect of the confounding site factor and the fixed effect of the age phenotype variable while building the prediction model. In addition to controlling the effects of confounding variables, these algorithms remove the need for a separate dimension-reduction algorithm for high-dimensional data, such as functional connectivity, in a non-linear fashion. We show that the proposed algorithms can achieve prediction accuracy of over 80 percent on test data.
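The RF proximity matrix at the heart of these methods counts, for every pair of samples, the fraction of trees in which the two samples land in the same leaf; 1 minus that fraction then serves as a distance for matching. A scikit-learn sketch of the general idea (the synthetic data, group labels, and greedy matching step are illustrative assumptions, not the dissertation's algorithms):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Hypothetical phenotype data: two groups (e.g. ASD vs. control) with a
# covariate shift between them, 5 matching variables per sample.
n = 60
X = np.vstack([rng.normal(0, 1, (n, 5)), rng.normal(1, 1, (n, 5))])
y = np.r_[np.zeros(n), np.ones(n)]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Proximity: fraction of trees in which two samples share a leaf.
leaves = rf.apply(X)  # shape (n_samples, n_trees): leaf index per tree
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

# For 1:1 greedy matching, pair each treated sample with the control
# of highest proximity (i.e. smallest 1 - proximity distance).
treated, controls = np.arange(n), np.arange(n, 2 * n)
best_match = controls[prox[np.ix_(treated, controls)].argmax(axis=1)]
print(best_match[:5])
```

Because the forest is grown on the matching variables, the proximity metric automatically weights variables by how strongly they separate the groups, which is what makes it attractive as a multivariate distance.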

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
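The prediction/prediction-error exchange can be caricatured in a few lines: propagate the generative RNN's prediction of the hidden state, then nudge that state down the sensory prediction-error gradient. This is a simplified sketch of the general predictive-coding idea, not the paper's actual Bayesian update equations; the dimensions, gains, and noise levels are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_hidden, n_obs = 8, 4

# Generative RNN: hidden dynamics x_t = tanh(W x_{t-1} + b),
# partial observations y_t = C x_t + sensory noise.
W = rng.normal(0, 0.4 / np.sqrt(n_hidden), (n_hidden, n_hidden))
b = rng.normal(0, 0.5, n_hidden)
C = rng.normal(0, 0.5, (n_obs, n_hidden))

def simulate(T, x0, noise=0.02):
    xs, ys = [], []
    x = x0
    for _ in range(T):
        x = np.tanh(W @ x + b)
        xs.append(x)
        ys.append(C @ x + rng.normal(0, noise, n_obs))
    return np.array(xs), np.array(ys)

def recognize(ys, lr=0.1, iters=20):
    """Predictive-coding style recognition: predict the next hidden state
    with the generative model, then reduce the sensory prediction error
    by a few gradient steps."""
    x = np.zeros(n_hidden)          # deliberately wrong initial condition
    estimates = []
    for y in ys:
        x = np.tanh(W @ x + b)      # prediction (top-down message)
        for _ in range(iters):
            err = y - C @ x         # sensory prediction error (bottom-up)
            x = x + lr * C.T @ err  # error-driven correction
        estimates.append(x)
    return np.array(estimates)

true_x, ys = simulate(40, rng.normal(0, 1, n_hidden))
est_x = recognize(ys)
# Despite the wrong initial state, the estimate converges toward the
# true hidden trajectory, illustrating robustness to initial conditions.
print(np.abs(est_x[-1] - true_x[-1]).max())
```

The paper's scheme replaces the fixed gradient gain with principled Bayesian updates, but the structure — alternating top-down prediction and bottom-up error correction — is the same.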