Monkeys and Humans Share a Common Computation for Face/Voice Integration
Speech production involves the movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance the intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share vocal production biomechanics with humans and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition" model, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
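To make the contrast between the two candidate mechanisms concrete, the sketch below simulates detection under a race model (the faster of two independent unisensory detectors wins) and under a superposition model (auditory and visual evidence are summed before detection). It is a minimal Python illustration, not the authors' model: the accumulator dynamics, noise level, and detection bound are assumptions chosen for demonstration only.

```python
# Hedged sketch (not the authors' code): contrasting a "race" model with a
# "superposition" model for audiovisual detection. Assumed setup: unisensory
# evidence accumulates linearly with Gaussian noise until it crosses a bound.
import numpy as np

rng = np.random.default_rng(0)

def detection_time(drift, bound=1.0, dt=1e-3, noise=0.05, t_max=2.0):
    """Time (s) for a noisy linear accumulator to reach the bound."""
    n = int(t_max / dt)
    x = np.cumsum(drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n))
    hits = np.nonzero(x >= bound)[0]
    return hits[0] * dt if hits.size else np.inf

def race_rt(drift_a, drift_v):
    # Race model: auditory and visual channels finish independently; the faster wins.
    return min(detection_time(drift_a), detection_time(drift_v))

def superposition_rt(drift_a, drift_v):
    # Superposition model: the two evidence streams are summed before detection.
    return detection_time(drift_a + drift_v)

race = [race_rt(0.8, 0.6) for _ in range(2000)]
superpos = [superposition_rt(0.8, 0.6) for _ in range(2000)]
print(f"median RT  race: {np.median(race):.3f}s  superposition: {np.median(superpos):.3f}s")
```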
Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
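For illustration, the snippet below sketches how the max-criterion (A < AV > V) could be checked for a single candidate region using per-subject response estimates. The data array, subject count, and one-sided paired t-tests are assumptions for the example, not the study's actual data or statistics.

```python
# Hedged sketch (illustrative only): testing the max-criterion (A < AV > V)
# on per-subject response estimates for one candidate region.
import numpy as np
from scipy import stats

# rows = subjects, columns = condition means (A, V, AV) for one ROI (hypothetical values)
roi_betas = np.array([
    [0.41, 0.55, 0.83],
    [0.38, 0.61, 0.74],
    [0.52, 0.47, 0.90],
    [0.30, 0.58, 0.71],
])
a, v, av = roi_betas.T

# Max-criterion: the multisensory response must exceed both unisensory responses.
t_a, p_a = stats.ttest_rel(av, a, alternative="greater")
t_v, p_v = stats.ttest_rel(av, v, alternative="greater")
print(f"AV > A: t={t_a:.2f}, p={p_a:.4f};  AV > V: t={t_v:.2f}, p={p_v:.4f}")
print("max-criterion met" if max(p_a, p_v) < 0.05 else "max-criterion not met")
```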
Subspace Projection Approaches to Classification and Visualization of Neural Network-Level Encoding Patterns
Recent advances in large-scale ensemble recordings allow monitoring of the activity patterns of several hundred neurons in freely behaving animals. The emergence of such high-dimensional datasets poses challenges for the identification and analysis of dynamical network patterns. While several types of multivariate statistical methods have been used for integrating responses from multiple neurons, their effectiveness in pattern classification and predictive power has not been compared in a direct and systematic manner. Here we systematically employed a series of projection methods, such as Multiple Discriminant Analysis (MDA), Principal Components Analysis (PCA) and Artificial Neural Networks (ANN), and compared them with non-projection multivariate statistical methods such as Multivariate Gaussian Distributions (MGD). Our analyses of hippocampal data recorded during episodic memory events and of cortical data simulated during face perception or arm movements illustrate how low-dimensional encoding subspaces can reveal the existence of network-level ensemble representations. We show how the use of regularization methods can prevent these statistical methods from over-fitting training data sets when the trial numbers are much smaller than the number of recorded units. Moreover, we investigated the extent to which the computations implemented by the projection methods reflect the underlying hierarchical properties of the neural populations. Based on their ability to extract the essential features for pattern classification, we conclude that the typical performance ranking of these methods on under-sampled neural data of large dimension is MDA > PCA > ANN > MGD.
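The sketch below illustrates the kind of comparison described: a shrinkage-regularized discriminant projection versus a PCA-based classifier and a full-dimensional Gaussian classifier, applied to simulated data with far fewer trials than recorded units. The simulated data, the chosen scikit-learn estimators, and the cross-validation settings are assumptions standing in for the MDA, PCA, and MGD pipelines named above, not the authors' implementation.

```python
# Hedged sketch (not the authors' pipeline): comparing projection-based and
# non-projection classifiers on under-sampled, high-dimensional ensemble data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_units, n_classes = 60, 200, 3              # trials << units
y = rng.integers(n_classes, size=n_trials)
means = rng.normal(0, 0.5, size=(n_classes, n_units))  # class-specific firing patterns
X = means[y] + rng.standard_normal((n_trials, n_units))

models = {
    "shrinkage LDA (MDA-like)": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    "PCA + Gaussian classifier": make_pipeline(PCA(n_components=10), GaussianNB()),
    "full-dimensional Gaussian (MGD-like)": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:38s} accuracy = {acc:.2f}")
```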
Millisecond-Timescale Local Network Coding in the Rat Primary Somatosensory Cortex
Correlation among neocortical neurons is thought to play an indispensable role in mediating sensory processing of external stimuli. The role of temporal precision in this correlation has been hypothesized to enhance information flow along sensory pathways. Its role in mediating the integration of information at the output of these pathways, however, remains poorly understood. Here, we examined spike timing correlation between simultaneously recorded layer V neurons within and across columns of the primary somatosensory cortex of anesthetized rats during unilateral whisker stimulation. We used Bayesian statistics and information theory to quantify the causal influence between the recorded cells with millisecond precision. For each stimulated whisker, we inferred stable, whisker-specific, dynamic Bayesian networks over many repeated trials, with network similarity of 83.3±6% within whisker, compared to only 50.3±18% across whiskers. These networks further provided information about whisker identity that was approximately 6 times higher than what was provided by the latency to first spike and 13 times higher than what was provided by the spike count of individual neurons examined separately. Furthermore, prediction of individual neurons' precise firing conditioned on knowledge of putative pre-synaptic cell firing was 3 times higher than predictions conditioned on stimulus onset alone. Taken together, these results suggest the presence of a temporally precise network coding mechanism that integrates information across neighboring columns within layer V about vibrissa position and whisking kinetics to mediate whisker movement by motor areas innervated by layer V.
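As a rough illustration of the network-similarity comparison quoted above, the snippet below scores binary directed adjacency matrices by the fraction of possible edges on which they agree, for within- versus across-"whisker" networks. The simulated matrices and the agreement metric are assumptions for demonstration; this is not the study's dynamic Bayesian network inference.

```python
# Hedged sketch (illustrative, not the study's code): comparing inferred
# directed networks by the fraction of off-diagonal edges on which they agree.
import numpy as np

rng = np.random.default_rng(2)

def edge_similarity(net_a, net_b):
    """Fraction of possible directed edges (off-diagonal) shared between two binary networks."""
    mask = ~np.eye(net_a.shape[0], dtype=bool)
    return np.mean(net_a[mask] == net_b[mask])

def noisy_copy(net, flip_prob):
    """Re-inferred network: a copy of `net` with a small fraction of edges flipped."""
    flips = rng.random(net.shape) < flip_prob
    return np.where(flips, ~net, net)

n_cells = 8
base = rng.random((n_cells, n_cells)) < 0.3   # "whisker-specific" template network
other = rng.random((n_cells, n_cells)) < 0.3  # network for a different whisker

within = edge_similarity(noisy_copy(base, 0.05), noisy_copy(base, 0.05))
across = edge_similarity(noisy_copy(base, 0.05), other)
print(f"within-whisker similarity  = {within:.2f}")
print(f"across-whisker similarity = {across:.2f}")
```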
The Impact of Spatial Incongruence on an Auditory-Visual Illusion
The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus to multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing response properties of multisensory neurons in the superior colliculus.
Kidney transplantation and patients who decline SARS-CoV-2 vaccination: an ethical framework.
As SARS-CoV-2 vaccines have started to be rolled out, a key question facing transplant units has been whether listing for transplantation should be contingent on recipients having received a vaccine. We aimed to provide an ethical framework for considering potential transplant candidates who decline vaccination. We convened a working group comprising transplant professionals, lay members and patients and undertook a literature review and consensus process. This group's work was also informed by discussions in two hospital clinical ethics committees. We have reviewed arguments for and against mandating vaccination prior to listing for kidney transplantation and considered some practical difficulties which may be associated with a policy of mandated vaccination. Rather than requiring that all patients receive the SARS-CoV-2 vaccine prior to transplant listing, we recommend considering vaccination status as one of a number of SARS-CoV-2-related risk factors in relation to transplant listing. Transplant units should engage in individualised risk-benefit discussions with patients, avoid the language of mandated treatments and strongly encourage uptake of the vaccine in all patient groups, using tailor-made educational initiatives.
Neuronal Plasticity and Multisensory Integration in Filial Imprinting
Many organisms sample their environment through multiple sensory systems, and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus, a novel object, and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.
- β¦