    Effect of broodstock holding environment on egg quality in farmed brown trout (Salmo trutta)

    Brown trout (Salmo trutta) broodstock from a single population were separated prior to spawning and exposed to two different holding environments: a ‘raceway system’ and a ‘tank system’. Eggs were stripped from females, and 13 measures of egg quality were collected, analysed individually, and combined by principal components analysis into an integrated egg quality score, which was validated against egg survival. The multivariate egg quality score (PC1) differed between fish held in the tank and raceway systems. Egg survival, chorion breaking strength and chorion Se concentrations were higher in eggs produced by broodstock held in the tank system than in those from the raceway system. In contrast, chorion concentrations of P and K were higher in eggs from fish held in the raceway system. The results suggest that brown trout broodstock reared in tank systems produce higher quality eggs than trout reared in raceways. Finally, this study indicates that multivariate statistical analysis can be used to determine egg quality from multiple egg parameters.
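
    As an illustration of the approach described above, here is a minimal Python sketch of deriving an integrated quality score as the first principal component of standardized measures and validating it against survival. The data, sample size, and variable names are placeholders, not the study's.

    ```python
    # Sketch: combining multiple egg-quality measures into one score via PCA.
    # All data here are randomly generated placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_females, n_measures = 60, 13                  # 13 measures, as in the abstract
    X = rng.normal(size=(n_females, n_measures))    # placeholder measurements
    survival = rng.uniform(0, 1, n_females)         # placeholder egg survival

    # Standardize so measures on different scales contribute equally.
    X_std = StandardScaler().fit_transform(X)

    # PC1 serves as the integrated egg-quality score.
    pca = PCA(n_components=1)
    pc1 = pca.fit_transform(X_std).ravel()

    # Validate the score against egg survival.
    r = np.corrcoef(pc1, survival)[0, 1]
    print(f"PC1 explains {pca.explained_variance_ratio_[0]:.1%} of variance; "
          f"correlation with survival r = {r:.2f}")
    ```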

    Human gustation: when the brain has taste.

    What we put into our mouths can nourish or kill us. A new study uses state-of-the-art electroencephalogram decoding to detail how we and our brains know what we taste.

    Multisensory context portends object memory.

    Multisensory processes facilitate perception of currently presented stimuli and can likewise enhance later object recognition. Memories for objects originally encountered in a multisensory context can be more robust than those for objects encountered in an exclusively visual or auditory context [1], upturning the assumption that memory performance is best when encoding and recognition contexts remain constant [2]. Here, we used event-related potentials (ERPs) to provide the first evidence for direct links between multisensory brain activity at one point in time and subsequent object discrimination abilities. Across two experiments we found that whether individuals benefited or were impaired during later object discrimination could be predicted from their brain responses to multisensory stimuli upon the initial encounter. These effects were observed despite the multisensory information being meaningless, task-irrelevant, and presented only once. We provide critical insights into the advantages associated with multisensory interactions; they are not limited to the processing of current stimuli, but likewise extend to the benefit that one's memories confer on object recognition in later, unisensory contexts.
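
    For readers unfamiliar with the method, the sketch below shows the generic ERP logic referred to above: single-trial epochs are averaged into an ERP, and the mean amplitude in a time window of interest is related to later behavior. The sampling rate, window, and all data are illustrative assumptions, not the study's values.

    ```python
    # Sketch: from single-trial EEG epochs to an ERP, then relating a window's
    # amplitude to later discrimination performance (placeholder data).
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 500                                        # sampling rate in Hz (assumed)
    n_trials, n_samples = 100, 400                  # 800 ms epochs at 500 Hz
    epochs = rng.normal(size=(n_trials, n_samples)) # one channel, in microvolts

    # The ERP is the across-trial average of the epochs.
    erp = epochs.mean(axis=0)

    # Mean amplitude in an a priori window, e.g. 100-200 ms post-onset.
    win = slice(int(0.100 * fs), int(0.200 * fs))
    window_amp = epochs[:, win].mean(axis=1)        # per-trial amplitude

    # Relate brain responses at encoding to later discrimination benefit.
    benefit = rng.normal(size=n_trials)             # placeholder behavioral scores
    r = np.corrcoef(window_amp, benefit)[0, 1]
    print(f"window amplitude vs. later benefit: r = {r:.2f}")
    ```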

    Single-trial multisensory memories affect later auditory and visual object discrimination.

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. Whether the effect generalizes in this way, and whether it is equivalent when memory discrimination is performed in the visual vs. the auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. the visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

    Neuroplasticity: Unexpected Consequences of Early Blindness.

    A pair of recent studies shows that congenital blindness can have significant consequences for the functioning of the visual system after sight restoration, particularly if that restoration is delayed.

    The context-contingent nature of cross-modal activations of the visual cortex

    Real-world environments are nearly always multisensory in nature. Processing in such situations confers perceptual advantages, but its automaticity remains poorly understood. Automaticity has been invoked to explain the activation of visual cortices by laterally-presented sounds. This has been observed even when the sounds were task-irrelevant and spatially uninformative about subsequent targets. An auditory-evoked contralateral occipital positivity (ACOP) at ~250ms post-sound onset has been postulated as the event-related potential (ERP) correlate of this cross-modal effect. However, the spatial dimension of the stimuli was nevertheless relevant in virtually all prior studies where the ACOP was observed. By manipulating the implicit predictability of the location of lateralised sounds in a passive auditory paradigm, we tested the automaticity of cross-modal activations of visual cortices. ERP data from 128 scalp channels in healthy participants were analysed within an electrical neuroimaging framework. The timing, topography, and localisation resembled previous characterisations of the ACOP. However, the cross-modal activations of visual cortices by sounds were critically dependent on whether the sound location was (un)predictable. Our results are the first direct evidence that this particular cross-modal process is not (fully) automatic; instead, it is context-contingent. More generally, the present findings provide novel insights into the importance of context-related factors in controlling information processing across the senses, and call for a revision of current models of automaticity in the cognitive sciences.
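
    Lateralized components like the ACOP are conventionally quantified as a contralateral-minus-ipsilateral voltage difference over occipital sites, relative to the side of the sound. A minimal sketch follows; the electrode pairing, sampling rate, time window, and data are all illustrative assumptions rather than the study's pipeline.

    ```python
    # Sketch: quantifying a contralateral occipital positivity (ACOP-like)
    # as contra-minus-ipsi voltage around ~250 ms. Placeholder data.
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 512
    n_trials, n_samples = 200, 512
    left_occ = rng.normal(size=(n_trials, n_samples))   # e.g. a PO7-like site
    right_occ = rng.normal(size=(n_trials, n_samples))  # e.g. a PO8-like site
    sound_side = rng.choice(["left", "right"], n_trials)

    # Contralateral = hemisphere opposite the sound; ipsilateral = same side.
    is_left = (sound_side == "left")[:, None]
    contra = np.where(is_left, right_occ, left_occ)
    ipsi = np.where(is_left, left_occ, right_occ)

    win = slice(int(0.225 * fs), int(0.275 * fs))       # window around 250 ms
    acop = (contra - ipsi)[:, win].mean()
    print(f"mean contra-minus-ipsi amplitude around 250 ms: {acop:.3f} µV")
    ```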

    Location-independent and location-linked representations of sound objects.

    For the recognition of sounds to benefit perception and action, their neural representations should also encode their current spatial position and their changes in position over time. The dual-stream model of auditory processing postulates separate (albeit interacting) processing streams for sound meaning and for sound location. Using a repetition priming paradigm in conjunction with distributed source modeling of auditory evoked potentials, we determined how individual sound objects are represented within these streams. Changes in perceived location were induced by interaural intensity differences, and sound location was either held constant or shifted across initial and repeated presentations (from one hemispace to the other in the main experiment or between locations within the right hemispace in a follow-up experiment). Location-linked representations were characterized by differences in priming effects between pairs presented to the same vs. different simulated lateralizations. These effects were significant at 20-39 ms post-stimulus onset within a cluster on the posterior part of the left superior and middle temporal gyri; and at 143-162 ms within a cluster on the left inferior and middle frontal gyri. Location-independent representations were characterized by a difference between initial and repeated presentations, independently of whether or not their simulated lateralization was held constant across repetitions. This effect was significant at 42-63 ms within three clusters on the right temporo-frontal region; and at 165-215 ms in a large cluster on the left temporo-parietal convexity. Our results reveal two varieties of representations of sound objects within the ventral/What stream: one location-independent, as initially postulated in the dual-stream model, and the other location-linked.
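
    The interaural intensity manipulation mentioned above can be sketched in a few lines: attenuating the signal at one ear shifts the perceived lateralization toward the other ear. The carrier frequency, duration, and IID values below are illustrative choices, not the study's stimuli.

    ```python
    # Sketch: simulating perceived sound location via interaural intensity
    # differences (IID). Parameter values are illustrative only.
    import numpy as np

    fs = 44100
    t = np.arange(int(0.5 * fs)) / fs          # 500 ms stimulus
    tone = np.sin(2 * np.pi * 1000 * t)        # 1 kHz carrier

    def lateralize(signal, iid_db):
        """Attenuate one ear by |iid_db| dB; positive values shift the percept right."""
        gain = 10 ** (-abs(iid_db) / 20)
        if iid_db > 0:
            left, right = signal * gain, signal    # right ear louder
        else:
            left, right = signal, signal * gain    # left ear louder (or equal)
        return np.stack([left, right])             # stereo array: (2, n_samples)

    right_shifted = lateralize(tone, +12)      # perceived toward the right
    left_shifted = lateralize(tone, -12)       # perceived toward the left
    ```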

    Neural plasticity associated with recently versus often heard objects.

    In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations, and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs), comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, the spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and longer-lasting plastic changes, predominantly within right-hemispheric networks known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures.
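
    The 3-way repeated-measures ANOVA described above can be outlined with statsmodels. The sketch below uses randomly generated, balanced data with the abstract's factor structure (hemisphere × presentation × section); the subject count and values are placeholders, not the study's data.

    ```python
    # Sketch: 3-way repeated-measures ANOVA (hemisphere x presentation x section)
    # on per-subject cluster activity, using randomly generated balanced data.
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(3)
    rows = [
        {"subject": s, "hemisphere": h, "presentation": p, "section": sec,
         "activity": rng.normal()}
        for s in range(1, 17)                       # hypothetical 16 subjects
        for h in ("left", "right")
        for p in ("prime", "repeat")
        for sec in (1, 2, 3, 4)
    ]
    df = pd.DataFrame(rows)

    # AnovaRM requires exactly one observation per subject and cell.
    res = AnovaRM(df, depvar="activity", subject="subject",
                  within=["hemisphere", "presentation", "section"]).fit()
    print(res)
    ```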

    Does my brain want what my eyes like? - How food liking and choice influence spatio-temporal brain dynamics of food viewing.

    How food valuation and decision-making influence the perception of food is of major interest for better understanding food intake behavior and, by extension, body weight management. Our study investigated behavioral responses and spatio-temporal brain dynamics by means of visual evoked potentials (VEPs) in twenty-two normal-weight participants viewing pairs of food photographs. Participants rated how much they liked each food item (valuation) and subsequently chose between the two alternative food images. Unsurprisingly, strongly liked foods were also chosen most often. Foods were rated faster as strongly liked than as mildly liked or disliked, irrespective of whether they were subsequently chosen over an alternative. Moreover, strongly liked foods were subsequently also chosen faster than the less liked alternatives. Response times during valuation and choice were positively correlated, but only when foods were liked: the faster participants rated foods as strongly liked, the faster they were in choosing that food item over an alternative. VEP modulations by the level of liking attributed, as well as by the subsequent choice, were found as early as 135-180ms after food image onset. Analyses of neural source activity patterns over this time interval revealed an interaction between liking and the subsequent choice within the insula, dorsal frontal and superior parietal regions. The neural responses to food viewing were modulated by the attributed level of liking only when foods were chosen, not when they were dismissed for an alternative. Therein, the responses to disliked foods were generally greater than those to foods that were liked more. Moreover, the responses to disliked but chosen foods were greater than responses to disliked foods that were subsequently dismissed for an alternative offer. Our findings show that the spatio-temporal brain dynamics of food viewing are immediately influenced both by how much foods are liked and by the choices made about them. These valuation and choice processes are subserved by brain regions involved in salience and reward attribution as well as in decision-making, and are likely to influence prospective dietary choices in everyday life.
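
    The reported valuation-choice link in response times amounts to a Pearson correlation computed conditionally on liking. The sketch below illustrates such a test on placeholder data; all variable names and values are hypothetical.

    ```python
    # Sketch: correlating valuation and choice response times, restricted to
    # strongly liked items (placeholder data).
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(4)
    n_items = 80
    liking = rng.choice(["disliked", "mild", "strong"], n_items)
    rt_valuation = rng.uniform(0.4, 1.2, n_items)            # seconds, hypothetical
    rt_choice = 0.5 * rt_valuation + rng.normal(0, 0.1, n_items)

    liked = liking == "strong"
    r, p = pearsonr(rt_valuation[liked], rt_choice[liked])
    print(f"liked foods only: r = {r:.2f}, p = {p:.3f}")
    ```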

    Multisensory Processes: A Balancing Act across the Lifespan.

    Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate this dynamism in multisensory function across two timescales: one long-term, operating across the lifespan, and one short-term, operating during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.