
    Simulated nature walks improve psychological well-being along a natural to urban continuum

    Compared to urban environments, interactions with natural environments have been associated with several health benefits including psychological restoration and improved emotional well-being. However, dichotomizing environments as either natural or urban may emphasize between-category differences and minimize potentially important within-category variation (e.g., forests versus fields of crops; neighborhoods versus city centers). Therefore, the current experiment assessed how viewing brief videos of different environments, ranging along a continuum from stereotypically natural to stereotypically urban, influenced subjective ratings of mood, restoration, and well-being. Participants (n = 202) were randomly assigned to one of four video conditions, which depicted a simulated walk through a pine forest, a farmed field, a tree-lined urban neighborhood, or a bustling city center. Immediately before and after the videos, participants rated their current emotional states. Participants additionally rated the perceived restorativeness of the video. The results supported the idea that the virtual walks differentially influenced affect and perceived restoration, even when belonging to the same nominal category of natural or urban. The pine forest walk significantly improved happiness relative to both urban walks, whereas the farmed field walk did not. The bustling city center walk decreased feelings of calmness compared to all other walks, including the tree-lined neighborhood walk. The walks also differed on the perceived restorativeness measure of daydreaming in a graded fashion; however, the farmed field walk was found to be less fascinating than all other walks, including both urban walks. Taken together, these results suggest that categorizing environments as “natural versus urban” may gloss over meaningful within-category variability regarding the restorative potential of different physical environments.

    Musical instrument familiarity affects statistical learning of tone sequences.

    Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules. In the present experiment, participants listened to novel, statistically structured tone sequences composed of pitch intervals not typically found in Western music. Between participants, the tone sequences either had the timbre of artificial, computerized instruments or familiar instruments (piano or violin). Knowledge of the statistical regularities was measured by a two-alternative forced choice recognition task, requiring discrimination between novel sequences that followed versus violated the statistical structure, assessed at three time points (immediately post-training, as well as one day and one week post-training). Compared to artificial instruments, training on familiar instruments resulted in reduced accuracy. Moreover, sequences from familiar instruments - but not artificial instruments - were more likely to be judged as grammatical when they contained intervals that approximated those commonly used in Western music, even though this cue was non-informative. Overall, these results demonstrate that instrument familiarity can interfere with the learning of novel statistical regularities, presumably by biasing memory representations to align with Western musical structures. These results demonstrate that real-world experience influences statistical learning in a non-linguistic domain, supporting the view that statistical learning involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.

    Of words and whistles: Statistical learning operates similarly for identical sounds perceived as speech and non-speech

    Statistical learning is an ability that allows individuals to effortlessly extract patterns from the environment, such as sound patterns in speech. Some prior evidence suggests that statistical learning operates more robustly for speech compared to non-speech stimuli, supporting the idea that humans are predisposed to learn language. However, any apparent statistical learning advantage for speech could be driven by signal acoustics, rather than the subjective perception of sounds as speech per se. To resolve this issue, the current study assessed whether there is a statistical learning advantage for ambiguous sounds that are subjectively perceived as speech-like compared to the same sounds perceived as non-speech, thereby controlling for acoustic features. We first induced participants to perceive sine-wave speech (SWS)—a degraded form of speech not immediately perceptible as speech—as either speech or non-speech. After this induction phase, participants were exposed to a continuous stream of repeating trisyllabic nonsense words, composed of SWS syllables, and then completed an explicit familiarity rating task and an implicit target detection task to assess learning. Critically, participants showed robust and equivalent performance on both measures, regardless of their subjective speech perception. In contrast, participants who perceived the SWS syllables as more speech-like showed better detection of individual syllables embedded in speech streams. These results suggest that speech perception facilitates processing of individual sounds, but not the ability to extract patterns across sounds. Our findings suggest that statistical learning is not influenced by the perceived linguistic relevance of sounds, and that it may be conceptualized largely as an automatic, stimulus-driven mechanism.

    Going Beyond Rote Auditory Learning: Neural Patterns of Generalized Auditory Learning

    The ability to generalize across specific experiences is vital for the recognition of new patterns, especially in speech perception considering acoustic–phonetic pattern variability. Indeed, behavioral research has demonstrated that listeners can, via a process of generalized learning, leverage their experience of past words said by a difficult-to-understand talker to improve their understanding of new words said by that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest–posttest design with EEG, participants were trained using either (1) a large inventory of words where no words were repeated across the experiment (generalized learning) or (2) a small inventory of words where words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1–P2 complex and by the presence of a late negativity wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1–P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to selectively modify early auditory processing sensitivity.

    Is Hey Jude in the Right Key? Cognitive Components of Absolute Pitch Memory

    Most individuals, regardless of formal musical training, have long-term absolute pitch memory (APM) for familiar musical recordings, though with varying levels of accuracy. The present study followed up on recent evidence suggesting an association between singing accuracy and APM (Halpern & Pfordresher, 2022, Attention, Perception, & Psychophysics, 84(1), 260–269), as well as tonal short-term memory (STM) and APM (Van Hedger et al., 2018, Quarterly Journal of Experimental Psychology, 71(4), 879–891). Participants from three research sites (n = 108) completed a battery of tasks including APM, tonal STM, singing accuracy, and self-reported auditory imagery. Both tonal STM and singing accuracy predicted APM, replicating prior results. Tonal STM also predicted singing accuracy, music training, and auditory imagery. Further tests suggested that the association between APM and singing accuracy was fully mediated by tonal STM. This pattern comports well with models of vocal pitch matching that include STM for pitch as a mechanism for sensorimotor translation.

    Environmental influences on affect and cognition: A study of natural and commercial semi-public spaces

    Research has consistently shown differences in affect and cognition after exposure to different physical environments. The time course of these differences emerging or fading during exploration of environments is less explored, as most studies measure dependent variables only before and after environmental exposure. In this within-subject study, we used repeated surveys to measure differences in thought content and affect throughout a 1-h environmental exploration of a nature conservatory and a large indoor mall. At each survey, participants reported on aspects of their most recent thoughts (e.g., thinking of the present moment vs. the future; thinking positively vs. negatively) and state affect. Using Bayesian multi-level models, we found that while visiting the conservatory, participants were more likely to report thoughts about the past, more positive and exciting thoughts, and higher feelings of positive affect and creativity. In the mall, participants were more likely to report thoughts about the future and higher feelings of impulsivity. Many of these differences between environments were present throughout the 1-h walk; however, some differences were only evident at intermediate time points, indicating the importance of collecting data during exploration, as opposed to only before and after environmental exposures. We also measured cognitive performance with a dual n-back task. Results on 2-back trials replicated prior findings that interacting with nature improves working-memory performance. This study furthers our understanding of how thoughts and feelings are influenced by the surrounding physical environment and has implications for the design and use of public spaces.

    Adaptation in integrated assessment modeling: where do we stand?

    Adaptation is an important element on the climate change policy agenda. Integrated assessment models, which are key tools to assess climate change policies, have begun to address adaptation, either by including it implicitly in damage cost estimates, or by making it an explicit control variable. We analyze how modelers have chosen to describe adaptation within an integrated framework, and suggest many ways they could improve the treatment of adaptation by considering more of its bottom-up characteristics. Until this happens, we suggest, models may be too optimistic about the net benefits adaptation can provide, and therefore may underestimate the amount of mitigation they judge to be socially optimal. Under some conditions, better modeling of adaptation costs and benefits could have important implications for defining mitigation targets. © Springer Science+Business Media B.V. 2009

    Learning Words without Trying: Daily Second Language Podcasts Support Word Form Learning in Adults

    Spoken language contains overlapping patterns across different levels, from syllables to words to phrases. The discovery of these structures may be partially supported by statistical learning (SL), the unguided, automatic extraction of regularities from the environment through passive exposure. SL supports word learning in artificial language experiments, but few studies have examined whether it scales up to support natural language learning in adult second language learners. Here, adult English speakers (n = 70) listened to daily podcasts in either Italian or English for two weeks while going about their normal routines. To measure word knowledge, participants provided familiarity ratings of Italian words and nonwords both before and after the listening period. Critically, compared to English controls, Italian listeners significantly improved in their ability to discriminate Italian words and nonwords. These results suggest that unguided exposure to natural, foreign language speech supports the extraction of relevant word features and the development of nascent word forms. At a theoretical level, these findings indicate that SL may effectively scale up to support real-world language acquisition. These results also have important practical implications, suggesting that adult learners may be able to acquire relevant speech patterns and initial word forms simply by listening to the language. This form of learning can occur without explicit effort, formal instruction, or focused study.

    Absolute pitch can be learned by some adults.

    Absolute pitch (AP), the rare ability to name any musical note without the aid of a reference note, is thought to depend on an early critical period of development. Although recent research has shown that adults can improve AP performance in a single training session, the best learners still did not achieve note classification levels comparable to performance of a typical, "genuine" AP possessor. Here, we demonstrate that these "genuine" levels of AP performance can be achieved within eight weeks of training for at least some adults, with the best learner passing all measures of AP ability after training and retaining this knowledge for at least four months after training. Alternative explanations of these positive results, such as improving accuracy through adopting a slower, relative pitch strategy, are not supported based on joint analyses of response time and accuracy. The results also did not appear to be driven by extreme familiarity with a single instrument or octave range, as the post-training AP assessments used eight different timbres and spanned over seven octaves. Yet, it is also important to note that a majority of the participants exhibited only modest improvements in performance, suggesting that adult AP learning is difficult and that near-perfect levels of AP may only be achievable by a subset of adults. Overall, these results demonstrate that explicit perceptual training in some adults can lead to AP performance that is behaviorally indistinguishable from AP that manifests within a critical period of development. Implications for theories of AP acquisition are discussed.

    Perceptual Plasticity for Auditory Object Recognition

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not currently incorporated in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and the constraints of context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition is highly context-dependent and cannot be divorced from the short-term context in which an auditory object is presented; the identity of a given auditory object may be intrinsically tied to its preceding context. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.