
    Simulated nature walks improve psychological well-being along a natural to urban continuum

    Compared to urban environments, interactions with natural environments have been associated with several health benefits, including psychological restoration and improved emotional well-being. However, dichotomizing environments as either natural or urban may emphasize between-category differences and minimize potentially important within-category variation (e.g., forests versus fields of crops; neighborhoods versus city centers). Therefore, the current experiment assessed how viewing brief videos of different environments, ranging along a continuum from stereotypically natural to stereotypically urban, influenced subjective ratings of mood, restoration, and well-being. Participants (n = 202) were randomly assigned to one of four video conditions, which depicted a simulated walk through a pine forest, a farmed field, a tree-lined urban neighborhood, or a bustling city center. Immediately before and after the videos, participants rated their current emotional states. Participants additionally rated the perceived restorativeness of the video. The results supported the idea that the virtual walks differentially influenced affect and perceived restoration, even when belonging to the same nominal category of natural or urban. The pine forest walk significantly improved happiness relative to both urban walks, whereas the farmed field walk did not. The bustling city center walk decreased feelings of calmness compared to all other walks, including the tree-lined neighborhood walk. The walks also differed on the perceived restorativeness measure of daydreaming in a graded fashion; however, the farmed field walk was found to be less fascinating than all other walks, including both urban walks. Taken together, these results suggest that categorizing environments as “natural versus urban” may gloss over meaningful within-category variability regarding the restorative potential of different physical environments.

    Musical instrument familiarity affects statistical learning of tone sequences.

    Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules. In the present experiment, participants listened to novel, statistically structured tone sequences composed of pitch intervals not typically found in Western music. Between participants, the tone sequences had the timbre of either artificial, computerized instruments or familiar instruments (piano or violin). Knowledge of the statistical regularities was measured by a two-alternative forced choice recognition task, requiring discrimination between novel sequences that followed versus violated the statistical structure, assessed at three time points (immediately post-training, as well as one day and one week post-training). Compared to artificial instruments, training on familiar instruments resulted in reduced accuracy. Moreover, sequences from familiar instruments - but not artificial instruments - were more likely to be judged as grammatical when they contained intervals that approximated those commonly used in Western music, even though this cue was non-informative. Overall, these results demonstrate that instrument familiarity can interfere with the learning of novel statistical regularities, presumably by biasing memory representations to align with Western musical structures. These results demonstrate that real-world experience influences statistical learning in a non-linguistic domain, supporting the view that statistical learning involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.

    Of words and whistles: Statistical learning operates similarly for identical sounds perceived as speech and non-speech

    Statistical learning is an ability that allows individuals to effortlessly extract patterns from the environment, such as sound patterns in speech. Some prior evidence suggests that statistical learning operates more robustly for speech compared to non-speech stimuli, supporting the idea that humans are predisposed to learn language. However, any apparent statistical learning advantage for speech could be driven by signal acoustics, rather than by the subjective perception of the sounds as speech per se. To resolve this issue, the current study assessed whether there is a statistical learning advantage for ambiguous sounds that are subjectively perceived as speech-like compared to the same sounds perceived as non-speech, thereby controlling for acoustic features. We first induced participants to perceive sine-wave speech (SWS)—a degraded form of speech not immediately perceptible as speech—as either speech or non-speech. After this induction phase, participants were exposed to a continuous stream of repeating trisyllabic nonsense words, composed of SWS syllables, and then completed an explicit familiarity rating task and an implicit target detection task to assess learning. Critically, participants showed robust and equivalent performance on both measures, regardless of their subjective speech perception. In contrast, participants who perceived the SWS syllables as more speech-like showed better detection of individual syllables embedded in speech streams. These results suggest that speech perception facilitates processing of individual sounds, but not the ability to extract patterns across sounds. Our findings suggest that statistical learning is not influenced by the perceived linguistic relevance of sounds, and that it may be conceptualized largely as an automatic, stimulus-driven mechanism.

    The effect of musical training on speech and sound perception

    We are going to carry out such a study in conjunction with research labs at five other institutions. With six universities involved, we will be able to recruit a sufficiently large number of people into the study and decrease the likelihood of any regional bias influencing the outcomes. We will be trying to validate the following claims: that musicians have an improved ability to understand speech in noisy environments; that the responses of a musician's brainstem to speech sounds are enhanced; and that older musicians have reduced symptoms of age-related hearing loss.

    Social Symptoms of Parkinson's Disease

    © 2020 Margaret T. M. Prenger et al. Parkinson's disease (PD) is typically well recognized by its characteristic motor symptoms (e.g., bradykinesia, rigidity, and tremor). The cognitive symptoms of PD are increasingly being acknowledged by clinicians and researchers alike. However, PD also involves a host of emotional and communicative changes which can cause major disruptions to social functioning. These include problems producing emotional facial expressions (i.e., facial masking) and emotional speech (i.e., dysarthria), as well as difficulties recognizing the verbal and nonverbal emotional cues of others. These social symptoms of PD can result in severe negative social consequences, including stigma, dehumanization, and loneliness, which might affect quality of life to an even greater extent than more well-recognized motor or cognitive symptoms. It is, therefore, imperative that researchers and clinicians become aware of these potential social symptoms and their negative effects, in order to properly investigate and manage the socioemotional aspects of PD. This narrative review provides an examination of the current research surrounding some of the most common social symptoms of PD and their related social consequences and argues that proactively and adequately addressing these issues might improve disease outcomes.

    Lockdown, bottoms up? Changes in adolescent substance use across the COVID-19 pandemic

    The COVID-19 pandemic notably altered adolescent substance use during its initial stage (Spring 2020). The purpose of this longitudinal study is to examine trajectories of adolescent substance use across the pandemic and subsequent periods of stay-at-home orders and re-opening efforts. We further examined differences as a function of current high school student versus graduate status. Adolescents (n = 1068, 14–18 years, Mage = 16.95 years and 76.7% female at T1) completed four self-report surveys, starting during the first stay-at-home order and ending approximately 14 months later. Negative binomial hurdle models predicted: (1) the likelihood of no substance use and (2) frequency of days of substance use. As hypothesized, results demonstrated significant increases in adolescents’ likelihood of alcohol use, binge drinking, and cannabis use once initial stay-at-home orders were lifted, yet few changes occurred as a result of a second stay-at-home order, with rates never lowering again to those of the first lockdown. Further, graduates (and particularly those who transitioned out of high school during the study) demonstrated a greater likelihood and frequency of substance use and were more stable in their trajectories across periods of stay-at-home orders than current high school students. Unexpectedly, however, there was a strong increase in current high school students’ likelihood of e-cigarette use and a significant linear increase in participants’ frequency of e-cigarette use over the study. Results suggest adolescent substance use, and in particular, e-cigarette use among current high school students, may be of increasing concern as the pandemic evolves.
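    A hurdle model of the kind described above splits each outcome into two parts: a logistic model for whether any use occurred, and a count model for how many days of use occurred among users. The sketch below illustrates this two-part structure on simulated data using statsmodels; the predictor, effect sizes, and variable names are invented for illustration and do not reflect the study's actual data, and the count part is fit without the zero truncation a strict hurdle specification would use.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    reopening = rng.normal(size=n)  # hypothetical predictor (e.g., re-opening period)

    # Part 1 of the data-generating process: any substance use at all?
    p_any_use = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * reopening)))
    any_use = rng.random(n) < p_any_use

    # Part 2: among users, days of use. Gamma heterogeneity makes the counts
    # overdispersed, which is what motivates a negative binomial count model.
    mu = np.exp(0.2 + 0.3 * reopening) * rng.gamma(shape=2.0, scale=0.5, size=n)
    days_used = np.where(any_use, 1 + rng.poisson(mu), 0)

    X = sm.add_constant(reopening)

    # Hurdle component 1: logistic regression on any use vs. none
    logit_fit = sm.Logit((days_used > 0).astype(int), X).fit(disp=0)

    # Hurdle component 2: negative binomial count model on users only
    users = days_used > 0
    count_fit = sm.NegativeBinomial(days_used[users], X[users]).fit(disp=0)

    print(logit_fit.params)  # positive slope: use becomes more likely
    print(count_fit.params)  # positive slope: users use on more days
    ```

    Fitting the two components separately is what lets the model conclude, as in the abstract, that a predictor can shift the likelihood of any use and the frequency of use independently.
    
    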

    Is Hey Jude in the Right Key? Cognitive Components of Absolute Pitch Memory

    Most individuals, regardless of formal musical training, have long-term absolute pitch memory (APM) for familiar musical recordings, though with varying levels of accuracy. The present study followed up on recent evidence suggesting an association between singing accuracy and APM (Halpern & Pfordresher, 2022, Attention, Perception, & Psychophysics, 84(1), 260–269), as well as tonal short-term memory (STM) and APM (Van Hedger et al., 2018, Quarterly Journal of Experimental Psychology, 71(4), 879–891). Participants from three research sites (n = 108) completed a battery of tasks including APM, tonal STM, singing accuracy, and self-reported auditory imagery. Both tonal STM and singing accuracy predicted APM, replicating prior results. Tonal STM also predicted singing accuracy, music training, and auditory imagery. Further tests suggested that the association between APM and singing accuracy was fully mediated by tonal STM. This pattern comports well with models of vocal pitch matching that include STM for pitch as a mechanism for sensorimotor translation.
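    The "fully mediated" claim above has a simple regression signature: singing accuracy reliably predicts APM on its own (the total effect), but that effect shrinks toward zero once tonal STM is added as a predictor (the direct effect), with the association carried by the indirect path through STM. The sketch below reproduces that pattern on simulated data; all variable names and path weights are invented for illustration, not taken from the study.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    singing = rng.normal(size=n)              # singing accuracy (standardized)
    stm = 0.6 * singing + rng.normal(size=n)  # mediator: tonal short-term memory
    apm = 0.5 * stm + rng.normal(size=n)      # outcome driven by STM only (full mediation)

    # Total effect c: APM regressed on singing accuracy alone
    total = sm.OLS(apm, sm.add_constant(singing)).fit()

    # Direct effect c': APM regressed on singing accuracy controlling for STM
    X = sm.add_constant(np.column_stack([singing, stm]))
    direct = sm.OLS(apm, X).fit()

    # Full mediation: c is reliably positive, c' collapses toward zero,
    # and the STM coefficient (path b) absorbs the association.
    print(total.params[1], direct.params[1], direct.params[2])
    ```

    In practice the indirect effect would be tested formally (e.g., with a bootstrap), but the shrinking of the direct effect is the core of the mediation pattern the abstract describes.
    
    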

    Going Beyond Rote Auditory Learning: Neural Patterns of Generalized Auditory Learning

    The ability to generalize across specific experiences is vital for the recognition of new patterns, especially in speech perception, considering acoustic–phonetic pattern variability. Indeed, behavioral research has demonstrated that listeners are able, via a process of generalized learning, to leverage their experience with past words said by a difficult-to-understand talker to improve their understanding of new words said by that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest–posttest design with EEG, participants were trained using either (1) a large inventory of words where no words were repeated across the experiment (generalized learning) or (2) a small inventory of words where words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1–P2 complex and by the presence of a late negativity wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1–P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to selectively modify early auditory processing sensitivity.