
    FORUM: Remote testing for psychological and physiological acoustics

    Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control over environmental conditions, less precise calibration, and greater inconsistency in attentional state and/or response behaviors, particularly when participants complete unfamiliar tasks without supervision. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online as a set of Wiki pages and are summarized in this report. The report outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.
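
    A central challenge the report raises is verifying the playback setup when no experimenter is present. As a purely illustrative sketch (not a procedure from the report), one widely used safeguard is an antiphase headphone check: a low-frequency tone played in opposite phase across the two stereo channels largely cancels acoustically over loudspeakers but not over headphones, so listeners without headphones judge its loudness very differently. All parameters below (sample rate, duration, frequency, level) are assumptions for illustration.

```python
import numpy as np

FS = 44100   # sample rate in Hz (assumed)
DUR = 1.0    # tone duration in seconds (assumed)
F0 = 200.0   # low frequency chosen so room cancellation is strong (assumed)

t = np.arange(int(FS * DUR)) / FS
tone = 0.1 * np.sin(2 * np.pi * F0 * t)   # modest amplitude to protect listeners

# In-phase stereo pair: perceived level is similar over headphones and loudspeakers.
in_phase = np.stack([tone, tone], axis=1)

# Antiphase stereo pair: the channels cancel acoustically in the room, so the tone
# sounds much quieter over loudspeakers than over headphones. A "pick the quietest
# tone" trial built from such pairs flags participants not wearing headphones.
antiphase = np.stack([tone, -tone], axis=1)
```

    A real screening task would embed these signals in a forced-choice trial sequence and set pass/fail criteria empirically.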

    A Dedicated Promoter Drives Constitutive Expression of the Cell-Autonomous Immune Resistance GTPase, Irga6 (IIGP1) in Mouse Liver

    Background: In general, immune effector molecules are induced by infection. Methodology and Principal Findings: However, strong constitutive expression of the cell-autonomous resistance GTPase, Irga6 (IIGP1), was found in mouse liver, contrasting with previous evidence that expression of this protein is exclusively dependent on induction by IFN-γ. Constitutive and IFN-γ-inducible expression of Irga6 in the liver were shown to be dependent on transcription initiated from two independent untranslated 5′ exons, which splice alternatively into the long exon encoding the full-length protein sequence. Irga6 is expressed constitutively in freshly isolated hepatocytes and is competent in these cells to accumulate on the parasitophorous vacuole membrane of infecting Toxoplasma gondii tachyzoites. Conclusions and Significance: The role of constitutive hepatocyte expression of Irga6 in resistance to parasites invading from the gut via the hepatic portal system is discussed.

    Loss of the interferon-γ-inducible regulatory immunity-related GTPase (IRG), Irgm1, causes activation of effector IRG proteins on lysosomes, damaging lysosomal function and predicting the dramatic susceptibility of Irgm1-deficient mice to infection

    The interferon-γ (IFN-γ)-inducible immunity-related GTPase (IRG), Irgm1, plays an essential role in restraining activation of the IRG pathogen resistance system. However, the loss of Irgm1 in mice also causes a dramatic but unexplained susceptibility phenotype upon infection with a variety of pathogens, including many not normally controlled by the IRG system. This phenotype is associated with lymphopenia, hemopoietic collapse, and death of the mouse.
    Funding: Deutscher Akademischer Austauschdienst (DAAD); International Graduate School in Development Health and Disease (IGS-DHD); Deutsche Forschungsgemeinschaft (SFBs 635, 670, 680); Max-Planck-Gesellschaft (Max Planck Fellowship)

    An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.
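
    The abstract's logic (holding the auditory input constant and asking whether adding visual speech raises auditory-cortex activity) corresponds to an AV > A contrast in a voxelwise general linear model. The sketch below is a toy illustration of that contrast with synthetic data, not the authors' analysis: block timings, effect sizes, and noise are invented, and HRF convolution is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Boxcar regressors for auditory-only (A) and audiovisual (AV) blocks.
# Timings are invented; a real analysis would convolve these with an HRF.
design = np.zeros((n_scans, 2))
design[np.arange(0, n_scans, 40)[:, None] + np.arange(10), 0] = 1   # A blocks
design[np.arange(20, n_scans, 40)[:, None] + np.arange(10), 1] = 1  # AV blocks
X = np.column_stack([design, np.ones(n_scans)])                     # add intercept

# Synthetic time course for one voxel whose AV response exceeds its A response.
y = X @ np.array([1.0, 1.5, 10.0]) + rng.normal(0.0, 0.5, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([-1.0, 1.0, 0.0])   # contrast weights: AV > A
effect = c @ beta                # positive if visual speech adds activity
print(f"AV - A effect estimate: {effect:.2f}")
```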

    Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli, in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual, and audiovisual stimuli produced the largest BOLD effects in anterior, posterior, and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects, and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as to speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving from posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
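
    The post hoc result rests on multi-voxel pattern analysis: a region whose mean response does not differ between conditions can still carry condition information in its spatial activation pattern. Below is a minimal decoding sketch with synthetic data; it assumes scikit-learn and invents the trial counts, voxel counts, and injected signal, so it illustrates the technique rather than reproducing the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 50                  # invented dimensions

# Stand-in for per-trial response estimates in one STS subregion.
X = rng.normal(size=(n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)         # 0 = visual nonspeech, 1 = visual speech
X[y == 1, :5] += 0.8                         # weak signal spread over a few voxels
X -= X.mean(axis=1, keepdims=True)           # remove per-trial mean: only the
                                             # spatial pattern can drive decoding

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)    # cross-validated decoding accuracy
print(f"mean accuracy: {scores.mean():.2f}") # above chance (0.5) implies the
                                             # pattern distinguishes the conditions
```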

    Group map illustrating regions significantly activated in the Audiovisual > Auditory-Speech Only contrast.

    Group activation map (N=18, false discovery rate q < 0.05) overlaid on a surface-rendered template brain.
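
    The caption's q < 0.05 refers to false-discovery-rate control over the many voxelwise tests. A minimal sketch of the standard Benjamini-Hochberg step-up procedure follows; the caption does not specify the exact implementation used, so this is illustrative only.

```python
import numpy as np

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest p-values, where k is
    the largest rank with p_(k) <= (k / m) * q. Returns a boolean mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()     # last rank passing the criterion
        reject[order[:k + 1]] = True
    return reject

# Example: a few strong effects among otherwise uniform p-values.
pv = np.concatenate([np.full(5, 1e-4), np.random.default_rng(2).uniform(size=95)])
print(bh_reject(pv).sum(), "of", pv.size, "tests rejected at q = 0.05")
```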