
    Actin filament assembly by bacterial factors VopL/F: Which end is up?

    Competing models have been proposed for actin filament nucleation by the bacterial proteins VopL/F. In this issue, Burke et al. (2017. J. Cell Biol. https://doi.org/10.1083/jcb.201608104) use direct observation to demonstrate that VopL/F bind the barbed and pointed ends of actin filaments but only nucleate new filaments from the pointed end.

    Music as a scaffold for listening to speech: Better neural phase-locking to song than speech

    Neural activity synchronizes with the rhythmic input of many environmental signals, but the capacity of neural activity to entrain to the slow rhythms of speech is particularly important for successful communication. Compared to speech, song has greater rhythmic regularity, a more stable fundamental frequency, discrete pitch movements, and a metrical structure; these features may provide a temporal framework that helps listeners neurally track information better than the rhythmically irregular speech stream. The current study used EEG to examine whether entrainment to the syllable rate of linguistic utterances, as indexed by cerebro-acoustic phase coherence, was greater when listeners heard sung rather than spoken sentences. We assessed listeners' phase-locking in both easy (no time compression) and hard (50% time compression) utterance conditions. Adults phase-locked equally well to speech and song in the easy listening condition. However, in the time-compressed condition, phase-locking was greater for sung than spoken utterances in the theta band (3.67–5 Hz). Thus, the musical temporal and spectral characteristics of song were related to better phase-locking to the slow phrasal and syllabic information (4–7 Hz) in the speech stream. These results highlight the possibility of using song as a tool for improving speech processing in individuals with language processing deficits, such as dyslexia.
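    Cerebro-acoustic phase coherence, the tracking index named above, is commonly computed as the consistency of the phase lag between the band-limited neural signal and the speech amplitude envelope. The sketch below is a minimal, hypothetical illustration of one such computation (Hilbert-transform phase-locking); the band edges, sampling rate, and synthetic signals are placeholders, not the study's actual pipeline.

    ```python
    # Minimal sketch of cerebro-acoustic phase coherence, assuming a
    # Hilbert-transform phase-locking analysis; all parameters are
    # illustrative, not the study's pipeline.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def bandpass(x, lo, hi, fs, order=2):
        """Zero-phase Butterworth band-pass filter (stable SOS form)."""
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    def phase_coherence(eeg, envelope, fs, lo=3.67, hi=5.0):
        """Length of the mean resultant vector of the phase difference
        between the EEG and the speech envelope, both filtered to the
        same (here, theta) band. 1 = perfect tracking, 0 = none."""
        eeg_phase = np.angle(hilbert(bandpass(eeg, lo, hi, fs)))
        env_phase = np.angle(hilbert(bandpass(envelope, lo, hi, fs)))
        return np.abs(np.mean(np.exp(1j * (eeg_phase - env_phase))))

    # Synthetic example: both signals share a 4.5 Hz rhythm.
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    envelope = 1 + np.cos(2 * np.pi * 4.5 * t)
    eeg = np.cos(2 * np.pi * 4.5 * t + 0.3) + 0.5 * np.random.randn(t.size)
    print(phase_coherence(eeg, envelope, fs))  # near 1 => strong tracking
    ```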

    Building Categories to Guide Behavior: How Humans Build and Use Auditory Category Knowledge Throughout the Lifespan

    Although categorization has been studied in depth throughout development in the visual domain (e.g., Gelman & Meyer, 2011; Sloutsky, 2010), there is little evidence examining how children and adults categorize everyday auditory objects (e.g., dog barks, trains, song, speech) or how category knowledge affects the way children and adults listen to these sounds during development. In two separate studies, I examined how listeners of all ages differentiated the multidimensional acoustic categories of speech and song, and I determined whether listeners used category knowledge to process the sounds they encounter every day. In Experiment 1, listeners of all ages were able to categorize speech and song, and categorization ability increased with age. Four- and 6-year-olds were more susceptible to the musical acoustic characteristics of ambiguous speech excerpts than 8-year-olds and adults, but all ages relied on F0 stability and average syllable duration to differentiate speech and song. Finally, 4-year-olds who were better at categorizing speech and song also had higher vocabulary scores, providing some of the first evidence that the ability to categorize speech and song may have cascading benefits for language development. Experiment 2 provided the first evidence that listeners of all ages experience change deafness. However, change deafness did not differ with age, even though overall sensitivity for detecting changes increased with age. Children and adults made more errors for within-category changes than for small acoustic changes, suggesting that all ages relied heavily on semantic category knowledge when detecting changes in complex scenes. These studies highlight the different roles that acoustic and semantic factors play when listeners are categorizing sounds compared to when they are using their knowledge to process sounds in complex scenes.
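    The "overall sensitivity for detecting changes" reported for Experiment 2 is the kind of quantity typically summarized with a signal-detection index such as d′. The sketch below is a minimal illustration of that computation; the hit and false-alarm rates are invented, and the abstract does not state which sensitivity measure was actually used.

    ```python
    # Hedged sketch: d-prime from signal detection theory, one common
    # index of change-detection sensitivity. Rates are made up.
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate):
        """d' = z(hits) - z(false alarms); higher = better detection."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(false_alarm_rate)

    print(d_prime(0.80, 0.25))  # ~1.52 for these illustrative rates
    ```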

    The Role of Music-Specific Representations When Processing Speech: Using a Musical Illusion to Elucidate Domain-Specific and -General Processes

    When listening to music and language sounds, it is unclear whether adults recruit domain-specific or domain-general mechanisms to make sense of incoming sounds. Unique acoustic characteristics, such as a greater reliance on rapid temporal transitions in speech relative to song, may introduce misleading interpretations concerning shared and overlapping processes in the brain. By using a stimulus that is ecologically valid and can be perceived as speech or song depending on context, the contributions of low- and high-level mechanisms may be teased apart. The stimuli employed in all experiments are auditory illusions from speech to song reported by Deutsch et al. (2003, 2011) and Tierney et al. (2012). The current experiments found that (1) non-musicians also perceive the speech-to-song illusion and experience a similar disruption of the transformation as a result of pitch transpositions; (2) the contribution of rhythmic regularity to the perceptual transformation from speech to song is unclear across several different examples of the auditory illusion, and clear order effects occur because of the within-subjects design; and (3) when comparing pitch-change sensitivity in a speech mode of listening and, after several repetitions, a song mode of listening, only the song mode indicated the recruitment of music-specific representations. Together these studies indicate the potential for using the auditory illusion from speech to song in future research. The final experiment also tentatively demonstrates a behavioral dissociation between the recruitment of mechanisms unique to musical knowledge and mechanisms unique to the processing of acoustic characteristics predominant in speech or song, because acoustic characteristics were held constant.
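    As a hypothetical illustration of the two manipulations described above (exact repetition, which drives the speech-to-song transformation, and pitch transposition, which is reported to disrupt it), stimuli for each condition might be assembled as follows. The file name, repetition count, and shift size are invented placeholders; this is not the studies' stimulus code.

    ```python
    # Hypothetical stimulus sketch: loop a short spoken phrase, and in a
    # second condition pitch-shift alternate repetitions.
    import numpy as np
    import librosa
    import soundfile as sf

    y, sr = librosa.load("phrase.wav", sr=None)   # hypothetical excerpt

    repeated = np.concatenate([y] * 8)            # exact-repetition condition

    # Transposed condition: shift every other repetition up two semitones.
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)
    transposed = np.concatenate(
        [y if i % 2 == 0 else shifted for i in range(8)]
    )

    sf.write("repeated.wav", repeated, sr)
    sf.write("transposed.wav", transposed, sr)
    ```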

    Linking prenatal experience to the emerging musical mind

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins within the mother's womb, during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

    Supply Chain Greenhouse Gas Management Strategy for Ford Motor Company

    The processing of raw materials and the manufacturing of components for the automotive supply chain result in significant life cycle energy consumption and greenhouse gas (GHG) emissions. As a result, automobile manufacturers face potential financial risks from their supply chain operations in the form of energy price volatility and regulatory actions to curb climate change. To understand and address this challenge, Ford Motor Company (Ford) and a student team from the University of Michigan School of Natural Resources and Environment (the team) developed a strategy for managing greenhouse gas emissions in the vehicle supply chain. Since December 2008, the team has supported the engagement of suppliers through the development and administration of a survey to collect allocated greenhouse gas data and information on environmental management practices. The team also advanced industry-wide participation through collaboration with the Automotive Industry Action Group (AIAG) to standardize the greenhouse gas reporting requests provided to suppliers. Additionally, the team evaluated public reporting options, specifically by engaging Ford as a tester of the new Corporate Value Chain (Scope 3) Accounting and Reporting Standard drafted by the World Resources Institute and the World Business Council for Sustainable Development. The project findings illustrate a wide range in the sophistication of suppliers' greenhouse gas management practices and demonstrate the need for a collaborative approach between suppliers and original equipment manufacturers (OEMs) to further emissions reduction efforts. The different components of the master's project have informed short-, mid-, and long-term recommendations for the measurement, management, and reporting of supply chain greenhouse gas emissions by Ford. Specifically, the team recommends that Ford (1) expand its data collection program, (2) refine and use the proposed Maturity Matrix tool to measure supplier performance, (3) collaborate with suppliers on the improvement of management efforts, and (4) continue to support and pursue an industry-wide approach to greenhouse gas management through involvement with AIAG.

    Master of Science, Natural Resources and Environment, University of Michigan
    http://deepblue.lib.umich.edu/bitstream/2027.42/83506/1/FordCarbon_SNREMastersProject_FinalReport.pd

    Familiarity modulates neural tracking of sung and spoken utterances

    Music is often described, in the laboratory and in the classroom, as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially the benefits related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but showed no effect of melody familiarity when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group the stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it appeared for both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond the acoustic features of music alone, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.
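    The grouping contrast described above, assigned (trained) familiarity versus perceived familiarity, can be sketched as follows. The per-trial coherence values, rating scale, and median-split rule are invented placeholders rather than the study's analysis code; the point is only the two ways of binning the same trials.

    ```python
    # Illustrative sketch: average per-trial coherence by assigned label
    # versus by a median split on each participant's own ratings.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 120
    coherence = rng.uniform(0.1, 0.6, n_trials)  # per-trial phase coherence
    assigned = rng.integers(0, 2, n_trials)      # 0 = untrained, 1 = trained
    ratings = rng.uniform(1, 7, n_trials)        # subjective familiarity

    # Grouping by assigned training condition:
    by_assigned = [coherence[assigned == g].mean() for g in (0, 1)]

    # Grouping by perceived familiarity (median split on ratings):
    perceived = ratings > np.median(ratings)
    by_perceived = [coherence[~perceived].mean(), coherence[perceived].mean()]

    print(by_assigned, by_perceived)
    ```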

    Active Learning in a Computational Model of Word Learning
