Visual adaptation enhances action sound discrimination
Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that, compared to no adaptation, prior adaptation to visual, auditory or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action ‘matched’ the auditory action. In addition, prior adaptation to a visual, auditory or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action-sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by post-perceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
The Nature Index: A General Framework for Synthesizing Knowledge on the State of Biodiversity
The magnitude and urgency of the biodiversity crisis is widely recognized within
scientific and political organizations. However, a lack of integrated measures
for biodiversity has greatly constrained the national and international response
to the biodiversity crisis. Thus, integrated biodiversity indexes will greatly
facilitate information transfer from science toward other areas of human
society. The Nature Index framework samples scientific information on
biodiversity from a variety of sources, synthesizes this information, and then
transmits it in a simplified form to environmental managers, policymakers, and
the public. The Nature Index optimizes information use by incorporating expert
judgment, monitoring-based estimates, and model-based estimates. The index
relies on a network of scientific experts, each of whom is responsible for one
or more biodiversity indicators. The resulting set of indicators is supposed to
represent the best available knowledge on the state of biodiversity and
ecosystems in any given area. The value of each indicator is scaled relative to
a reference state, i.e., a predicted value assessed by each expert for a
hypothetical undisturbed or sustainably managed ecosystem. Scaled indicator
values can be aggregated or disaggregated over different axes representing
spatiotemporal dimensions or thematic groups. A range of scaling models can be
applied to allow for different ways of interpreting the reference states, e.g.,
optimal situations or minimum sustainable levels. Statistical testing for
differences in space or time can be implemented using Monte-Carlo simulations.
This study presents the Nature Index framework and details its implementation in
Norway. The results suggest that the framework is a functional, efficient, and
pragmatic approach for gathering and synthesizing scientific knowledge on the
state of biodiversity in any marine or terrestrial ecosystem, and has general
applicability worldwide.
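The scaling, aggregation, and testing steps described above can be sketched as follows. This is a minimal illustration, not the published Norwegian implementation: the function names, the equal-weight aggregation, and the resampling-based Monte Carlo comparison are all assumptions made for the example.

```python
import random

def scale_indicator(observed, reference):
    # Scale an indicator value against its expert-defined reference state.
    # Under an "optimal situation" interpretation of the reference, values
    # at or above the reference are capped at 1.0.
    if reference <= 0:
        raise ValueError("reference state must be positive")
    return min(observed / reference, 1.0)

def nature_index(scaled_values, weights=None):
    # Aggregate scaled indicator values for one area/time as a weighted
    # mean (equal weights by default -- an assumption of this sketch).
    if weights is None:
        weights = [1.0] * len(scaled_values)
    return sum(s * w for s, w in zip(scaled_values, weights)) / sum(weights)

def monte_carlo_difference(draws_a, draws_b, n=10_000, seed=0):
    # Hypothetical Monte Carlo test: resample index values for two areas
    # (or two points in time) from expert-supplied draws and estimate the
    # probability that area A scores higher than area B.
    rng = random.Random(seed)
    hits = sum(rng.choice(draws_a) > rng.choice(draws_b) for _ in range(n))
    return hits / n
```

Disaggregation over spatial or thematic axes would amount to applying `nature_index` to the subset of indicators belonging to each axis value.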
Cross-sectional analysis of nutrition and serum uric acid in two Caucasian cohorts: the AusDiab Study and the Tromsø Study
Effect of Audiovisual Training on Monaural Spatial Hearing in Horizontal Plane
The article aims to test the hypothesis that audiovisual integration can improve spatial hearing in monaural conditions when interaural difference cues are not available. We trained one group of subjects with an audiovisual task, in which a flash was presented in parallel with the sound, and another group with an auditory task, in which only sound from different spatial locations was presented. To check whether the observed audiovisual effect was similar to feedback, a third group was trained using a visual feedback paradigm. Training sessions were administered once per day for five days. The performance level in each group was compared for auditory-only stimulation on the first and the last day of practice. Improvement after audiovisual training was several times greater than after auditory practice. The group trained with visual feedback showed a different effect of training, with improvement smaller than that of the audiovisual training group. We conclude that cross-modal facilitation is highly important for improving spatial hearing in monaural conditions and may be applied to the rehabilitation of patients with unilateral deafness and after unilateral cochlear implantation.
IFITM proteins inhibit HIV-1 protein synthesis
Interferon-induced transmembrane proteins (IFITMs) inhibit the cellular entry of a broad range of viruses, but it has been suspected that for HIV-1, IFITMs may also inhibit a post-integration replicative step. We show that IFITM expression reduces HIV-1 viral protein synthesis by preferentially excluding viral mRNA transcripts from translation, thereby restricting viral production. Codon optimization of proviral DNA rescues viral translation, implying that IFITM-mediated restriction requires recognition of viral RNA elements. In addition, we find that expression of the viral accessory protein Nef can help overcome the IFITM-mediated inhibition of virus production. Our studies identify a novel role for IFITMs in inhibiting HIV replication at the level of translation, but show that these effects can be overcome by the lentiviral protein Nef.
Funding: Wellcome Trust-University of Edinburgh Institutional Strategic Support Fund.
Malaria, anaemia and under-nutrition: three frequently co-existing conditions among preschool children in rural Rwanda
Electrical neuroimaging during auditory motion aftereffects reveals that auditory motion processing is motion sensitive but not direction selective