    A core eating network and its modulations underlie diverse eating phenomena

    We propose that a core eating network and its modulations account for much of what is currently known about the neural activity underlying a wide range of eating phenomena in humans (excluding homeostasis and related phenomena). The core eating network is closely adapted from a network that Kaye, Fudge, and Paulus (2009) proposed to explain the neurocircuitry of eating, including a ventral reward pathway and a dorsal control pathway. In a review across multiple literatures that focuses on experiments using functional magnetic resonance imaging (fMRI), we first show that neural responses to food cues, such as food pictures, utilize the same core eating network as eating. Consistent with the theoretical perspective of grounded cognition, food cues activate eating simulations that produce reward predictions about a perceived food and potentially motivate its consumption. Reviewing additional literatures, we then illustrate how various factors modulate the core eating network, increasing and/or decreasing activity in subsets of its neural areas. These modulating factors include food significance (palatability, hunger), body mass index (BMI, overweight/obesity), eating disorders (anorexia nervosa, bulimia nervosa, binge eating), and various eating goals (losing weight, hedonic pleasure, healthy living). By viewing all these phenomena as modulating a core eating network, it becomes possible to understand how they are related to one another within this common theoretical framework. Finally, we discuss future directions for better establishing the core eating network, its modulations, and their implications for behavior.

    Rapid Visual Categorization is not Guided by Early Salience-Based Selection

    The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing time efficiency in machine algorithms, which in turn have strengthened the suggestion that human vision also employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by the experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization.
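    To make the early-selection strategy concrete, here is a minimal Python sketch of the pipeline the abstract describes: a saliency map nominates one candidate patch, and only that patch is passed to a classifier. The luminance-contrast saliency measure and the placeholder classifier are illustrative assumptions, not the specific models the authors evaluated.

        import numpy as np

        def saliency_map(image):
            # Crude saliency proxy: absolute deviation of each pixel's luminance
            # from the image's mean luminance (a stand-in for richer saliency models).
            lum = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
            return np.abs(lum - lum.mean())

        def most_salient_crop(image, size=64):
            # Early selection: keep only the size x size patch centred on the
            # saliency peak, so the classifier never sees the rest of the image.
            sal = saliency_map(image)
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
            h, w = sal.shape
            top = int(np.clip(y - size // 2, 0, h - size))
            left = int(np.clip(x - size // 2, 0, w - size))
            return image[top:top + size, left:left + size]

        def categorize(patch):
            # Hypothetical placeholder for any feedforward classifier;
            # the point is only that it operates on the selected patch.
            return "animal" if patch.mean() > 127 else "non-animal"

        image = np.random.randint(0, 256, (256, 256, 3)).astype(float)
        label = categorize(most_salient_crop(image))

    Rapid-categorization experiments of the kind reviewed above then ask whether human performance on full images actually requires any such cropping step.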

    Where is the length effect? A cross-linguistic study.

    Many models of speech production assume that one cannot begin to articulate a word before all its segmental units are inserted into the articulatory plan. Moreover, some of these models assume that segments are serially inserted from left to right. As a consequence, latencies to name words should increase with word length. In a series of five experiments, however, we showed that the time to name a picture or retrieve a word associated with a symbol is not affected by the length of the word. Experiments 1 and 2 used French materials and participants, while Experiments 3, 4, and 5 were conducted with English materials and participants. These results are discussed in relation to current models of speech production, and previous reports of length effects are reevaluated in light of these findings. We conclude that if words are encoded serially, then articulation can start before an entire phonological word has been encoded.

    Planning ahead: How recent experience with structures and words changes the scope of linguistic planning

    The scope of linguistic planning, i.e., the amount of linguistic information that speakers prepare in advance for an utterance they are about to produce, is highly variable. Distinguishing between possible sources of this variability provides a way to discriminate between production accounts that assume structurally incremental and lexically incremental sentence planning. Two picture-naming experiments evaluated changes in speakers’ planning scope as a function of experience with message structure, sentence structure, and lexical items. On target trials, participants produced sentences beginning with two semantically related or unrelated objects in the same complex noun phrase. To manipulate familiarity with sentence structure, target displays were preceded by prime displays that elicited the same or different sentence structures. To manipulate ease of lexical retrieval, target sentences began either with the higher-frequency or lower-frequency member of each semantic pair. The results show that repetition of sentence structure can extend speakers’ scope of planning from one to two words in a complex noun phrase, as indexed by the presence of semantic interference in structurally primed sentences beginning with easily retrievable words. Changes in planning scope tied to experience with phrasal structures favor production accounts assuming structural planning in early sentence formulation.

    Stimulus-specific mechanisms of visual short-term memory

    The retention of spatial information in visual short-term memory was assessed by measuring spatial frequency discrimination thresholds with a two-interval forced-choice task, varying the time interval between the two gratings to be compared. The memory of spatial frequency information was perfect across 10-sec interstimulus intervals. Presentation of a “memory masker” grating during the interstimulus interval may interfere with short-term memory. This interference depends on the relative spatial frequency of the test and masker gratings, with maximum interference at spatial frequency differences of 1–1.5 octaves and beyond. This range of interference with short-term memory is comparable to the bandwidth of sensory masking or adaptation. A change of the relative orientation of test and masker gratings does not produce interference with spatial frequency discrimination thresholds. These results suggest stimulus-specific interactions at higher-level representations of visual form.
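    As a rough illustration of the trial structure described above, here is a minimal Python sketch, assuming arbitrary stimulus parameters (the grating size, pixel scale, frequencies, and the one-octave masker are placeholders, not the study's actual values): each trial presents a reference grating, a retention interval that may contain a masker grating, and a comparison grating whose spatial frequency differs slightly.

        import numpy as np

        def grating(freq_cpd, size=256, deg_per_px=0.02, orientation_deg=0.0):
            # Sinusoidal luminance grating with spatial frequency in cycles/degree.
            coords = (np.arange(size) - size / 2) * deg_per_px
            x, y = np.meshgrid(coords, coords)
            theta = np.deg2rad(orientation_deg)
            phase = 2 * np.pi * freq_cpd * (x * np.cos(theta) + y * np.sin(theta))
            return 0.5 + 0.5 * np.sin(phase)

        def two_interval_trial(base_freq, delta, isi_sec, masker_freq=None):
            # One 2IFC trial: a reference grating, a retention interval (optionally
            # filled by a memory masker), then a comparison grating differing by
            # `delta`; the observer reports which interval had the higher frequency.
            return {
                "first": grating(base_freq),
                "masker": grating(masker_freq) if masker_freq is not None else None,
                "isi_sec": isi_sec,
                "second": grating(base_freq + delta),
            }

        # Example: a masker one octave above a 2 c/deg reference, 10-s retention interval.
        trial = two_interval_trial(base_freq=2.0, delta=0.2, isi_sec=10.0, masker_freq=4.0)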

    The distractor frequency effect in picture–word interference: evidence for response exclusion

    In 3 experiments, subjects named pictures with low- or high-frequency superimposed distractor words. In a first experiment, we replicated the finding that low-frequency words induce more interference in picture naming than high-frequency words (i.e., the distractor frequency effect; Miozzo & Caramazza, 2003). According to the response exclusion hypothesis, this effect has its origin at a postlexical stage and is related to a response buffer. The account predicts that the distractor frequency effect should only be present when a response to the word enters the response buffer. This was tested by masking the distractor (Experiment 2) and by presenting it at various time points before stimulus onset (Experiment 3). Results supported the hypothesis by showing that the effect was only present when distractors were visible and presented in close proximity to the target picture. These results have implications for models of lexical access and for the tasks that can be used to study this process.