
    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    Music to our eyes: Assessing the role of experience for multisensory integration in music perception

    Based on research on the “McGurk Effect” (McGurk & MacDonald, 1976) in speech perception, some researchers (e.g. Liberman & Mattingly, 1985) have argued that humans uniquely interpret auditory and visual (motor) speech signals as a single intended audiovisual articulatory gesture, and that such multisensory integration is innate and specific to language. Our goal for the present study was to determine whether a McGurk-like effect holds true for music perception as well, as a domain for which innateness and experience can be disentangled more easily than in language. We sought to investigate the effects of visual musical information on auditory music perception and judgment, the impact of music experience on such audiovisual integration, and the possible role of eye gaze patterns as a potential mediator between music experience and the extent of visual influence on auditory judgments. A total of 108 participants (ages 18-40) completed a questionnaire and melody/rhythm perception tasks to determine music experience and abilities, and then completed speech and musical McGurk tasks. Stimuli were recorded from five sounds produced by a speaker or musician (cellist and trombonist) that ranged incrementally along a continuum from one type to another (e.g. non-vibrato to strong vibrato). In the audiovisual condition, these sounds were paired with videos of the speaker/performer producing one type of sound or the other (representing either end of the continuum) such that the audio and video matched or mismatched to varying degrees. Participants indicated, on a 100-point scale, the extent to which the auditory presentation represented one end of the continuum or the other. Auditory judgments for each sound were then compared across their visual pairings to determine the impact of visual cues on auditory judgments. Additionally, several types of music experience were evaluated as potential predictors of the degree of influence visual stimuli had on auditory judgments.
Finally, eye gaze patterns were measured in a different sample of 15 participants to assess relationships between music experience and eye gaze patterns, and between eye gaze patterns and the extent of visual influence on auditory judgments. Results indicated a reliable “musical McGurk Effect” in the context of cello vibrato sounds, but weaker overall effects for trombone vibrato sounds and cello pluck and bow sounds. Limited evidence was found to suggest that music experience impacts the extent to which individuals are influenced by visual stimuli when making auditory judgments. The support that was obtained, however, indicated the possibility of diminished visual influence on auditory judgments based on variables associated with music “production” experience. Potential relationships between music experience and eye-gaze patterns were identified. Implications for audiovisual integration in the context of speech and music perception are discussed, and future directions are advised.
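The core comparison this abstract describes — the same auditory token rated under different visual pairings — can be sketched in a few lines. The ratings below are invented for illustration; they are not data from the study, and the analysis is a simplification of whatever the authors actually ran:

```python
from statistics import mean

# Hypothetical 100-point ratings of one ambiguous cello sound
# (0 = no vibrato, 100 = strong vibrato) under two video pairings.
ratings_vibrato_video = [62, 70, 58, 66, 64]      # paired with vibrato video
ratings_no_vibrato_video = [41, 38, 47, 44, 40]   # paired with non-vibrato video

def visual_influence(cond_a, cond_b):
    """Mean rating shift attributable to the visual pairing:
    a larger value means the video pulled auditory judgments further."""
    return mean(cond_a) - mean(cond_b)

shift = visual_influence(ratings_vibrato_video, ratings_no_vibrato_video)
print(shift)  # a positive shift indicates a McGurk-like visual pull
```

A per-sound shift like this, computed for each point on the continuum, is one simple way to express "auditory judgments compared across visual pairings" as a single effect size per stimulus.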

    Infant and Child Multisensory Attention Skills: Methods, Measures, and Language Outcomes

    Intersensory processing (e.g., matching sights and sounds based on audiovisual synchrony) is thought to be a foundation for more complex developmental outcomes, including language. However, the body of research on intersensory processing is characterized by different measures, paradigms, and research questions, making comparisons across studies difficult. Therefore, Manuscript 1 provides a systematic review and synthesis of research on intersensory processing, integrating findings across multiple methods, along with recommendations for future research. This includes a call for a shift in the focus of intersensory processing research from assessing the average performance of groups of infants to assessing individual differences in intersensory processing. Individual difference measures allow researchers to assess developmental trajectories and understand developmental pathways from basic skills to later outcomes. Bahrick and colleagues introduced the first two individual difference measures of intersensory processing: the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP). My prior research using the MAAP has shown that accuracy of intersensory processing at 12 months of age predicted 18- and 24-month child language outcomes. Moreover, it predicted child language to a greater extent than well-established predictors, including parent language input and SES (Edgar et al., under review). Manuscript 2 extends this research to examine both speed and accuracy of intersensory processing using the IPEP. A longitudinal sample of 103 infants was tested with the IPEP to assess relations between intersensory processing at 6 months of age and language outcomes at 18, 24, and 36 months, while controlling for traditional predictors, parent language input and SES.
Results demonstrate that even at 6 months, intersensory processing predicts 18-, 24-, and 36-month child language skills, over and above the traditional predictors. This novel finding reveals the powerful role of intersensory processing in shaping language development and highlights the importance of incorporating individual differences in intersensory processing as a predictor in models of developmental pathways to language. In turn, these findings can inform interventions, where intersensory processing can be used as an early screener for children at risk for language delays.

    Children and adults' understanding and use of sound-symbolism in novel words

    Sound-symbolism is the inherent link between the sound of a word and its meaning. The aim of this thesis is to gain an insight into the nature of sound-symbolism. There are five empirical chapters, each of which aims to uncover children's and adults’ understanding of sound-symbolic words. Chapter 1 is a literature review of sound-symbolism. Chapter 2 is a cross-linguistic developmental study looking at the acquisition of sound-symbolism. Chapter 3 looks at children's use of sound-symbolism in a verb-learning task. Chapter 4 looks at children's use of sound-symbolism when learning and memorising novel verbs. Chapter 5 consists of two experiments looking at exactly which part of a word is sound-symbolic. This study compared different types of consonants and vowels across a number of domains in an attempt to gain an understanding of the nature of sound-symbolism. Chapter 6 looks at the potential mechanisms by which sound-symbolism is understood. This study is a replication of previous research, which found that sound-symbolic sensitivity is increased when the word is said and not just heard. There are therefore a total of five empirical chapters, each of which attempts to look at the nature of sound-symbolic meaning from a slightly different angle.

    Phonetic symbolism for size, shape, and motion

    This thesis examines phonetic symbolism, the meaningful use of individual speech sounds to convey and infer size, shape, and motion. Chapter 1 presents a summary of the literature. Though there is evidence suggesting that phonetic symbolism exists and is pervasive, the literature presents several research opportunities. In nine experiments and one pre-test (total N = 357 participants), we use graded stimuli throughout, which is uncommon in previous research. This use of non-dichotomous stimuli allows the hypotheses that have arisen from a gestural model of language evolution and the Frequency Code to be more fully investigated. In the first set of experiments (Chapter 2), we demonstrate that phonetic marking for size is graded, i.e., it does not mark just very large and very small objects. In Chapter 3, the focus is on marking for size and shape, and their possible interactions. We show that marking for size and marking for shape are not as in line with each other as previous work might suggest. Marking for movement is the topic of Chapter 4, which includes moving stimuli, not just implied motion. We find that trait permanence is at play in the naming-for-motion tasks, with marking only occurring when naming the motion itself. Finally, a concluding chapter summarizes and further expounds on the results of the thesis and how those results relate to the hypotheses suggested by gestural models and the Frequency Code. The conclusion also includes a section on current and future research directions.

    An experimental study of iconicity in the Arabic vocabulary of the Qur’an

    This thesis begins with two interests: a personal desire to expand upon and apply modern scientific linguistic study to Muslim holy writ, and a more general desire to work with and study the growing field of iconicity in applied linguistics. As a Muslim, I hold the Qur'an close to my heart, and so, initially, I had wished to complete a comprehensive series of iconicity experiments, studying large swathes of the Qur'an, utilising these Qur'anic stimuli for participant studies, administering entire verses of the Holy Book to non-Arabic speakers and analyzing their perception of linguistic iconicity from said verses. As the thesis moved along, it became clear that, even with a word limit of 40,000, this was impossible; thus, the paper focuses not on the entirety of the Qur'an, or even complete chapters or verses, but on 100 words. The Qur'an is used as a text of study, but also as a platform from which we compiled the stimuli for experimental work on, fundamentally, words from (Classical) Arabic as they are found in the Qur'an from the 7th century. That is the brief summary of how the thesis sculpted itself into what it is today, with a very specific substrate of stimuli studied. But now, with this general outline in place, the questions remain: why the Qur'an? And, equally saliently, what is iconicity? The Qur'an is the Muslim holy book, and the most memorized book in the modern world (Graham, 1993:80). What is perhaps more striking, and attractive for linguistic and philological study, is that the vast majority of people who have memorized the Qur'an are not fluent in Arabic, let alone the classical language that the text employs (Ariffin et al., 2000, 2015). The average Muslim, who typically does not understand Arabic, will attain proficiency in rote reading/reciting of the Qur'an in the absence of semantic comprehension (Riddel et al., 1997).
Despite the above, studies indicate that the Qur'anic script is largely memorized with ease (Boyle, 2000; Slamet, 2019; Yusuf, 2010), with a number of studies finding the meaning vivid and easy to visualize for non-native speakers and readers (Boyle, 2006; Nawaz & Jahangir, 2015). Additionally, the Qur'an itself claims uniqueness in its stylistic marvel, its eloquence and its brevity (Armstrong, 1999; Lings & Barrett, 1983; Versteegh, 2014). There have been countless studies of the Qur'an in multiple languages, but what has not yet been done is to apply modern applied-linguistic methodology to the lexis that comprises the book. And so, whilst the idea of Qur'anic memorization or visuality will not be the focus of this paper, the ultimate goal of this thesis is to connect the Qur'an with one potential cause of these phenomena and of its claimed linguistic marvel: iconicity. In linguistics, broadly speaking, iconicity is the understanding that a word can ‘sound like what it means’, or, more specifically, that the form of a word can in some way resemble its meaning (Dingemanse et al., 2015; Perniss & Vigliocco, 2014; see Chapter 1.1 for a detailed definition). As such, the current paper is centred around the Qur'an and iconicity. Does iconicity, a concept that has been studied in Japanese, Korean, English, Dutch and other languages, exist in Classical Arabic? If so, to what extent? And how can iconicity in the Qur'an benefit those learning the Qur'an, or perhaps learning Arabic as a whole? These are the main questions that this paper asks and aims to address, namely by drawing on previous linguistic studies of sound-symbolism and, motivated by the Qur'an, taking words from the Qur'an and placing them under the microscope for thorough linguistic analysis. It should be clear now that the Qur'an is the subject of analysis insofar as modern empirical methods of iconicity research have not previously been applied to it.
We will therefore learn something about this text first and foremost, but can then extend the findings to make comparisons between parts of speech and between second-language and native-speaker perceptions of iconicity. We see how different groups gauge iconicity in the Qur'an, and this then leads us to isolate specific words that are more iconic than others, which in turn can be tuned for the learning of Arabic later down the line. The motivation to link these is that it allows for an objective analysis of some Qur'anic linguistic traits while also providing practical benefit to language-learners. Chapter 1 will discuss previous literature in regard to iconicity as a phenomenon, with the aim of building a case for the existence of iconicity in the Muslim holy book. Chapters 2 and 3 will then move on to exploring a combined task-set constituting the present study of Qur'anic words: a pair of mixed-method experiments examining the extent to which iconicity is perceived by different groups of participants when presented with Qur'anic words. The paper will conclude with Chapter 4, tying together how the findings may be considered in light of other literature and how the study may inform our current understanding of iconicity, iconicity testing, and the Qur'an.

    The building blocks of sound symbolism

    Languages each contain thousands of words and are made up of a seemingly endless collection of sound combinations. Yet a subsection of these shows clear signs of corresponding word shapes for the same meanings, a phenomenon generally known as vocal iconicity and sound symbolism. This dissertation explores the boundaries of sound symbolism in the lexicon from typological, functional and evolutionary perspectives in an attempt to provide a deeper understanding of the role sound symbolism plays in human language. In order to achieve this, the subject in question was triangulated by investigating different methodologies, which included lexical data from a large number of language families, experiment participants and robust statistical tests.

    Study I investigates basic vocabulary items in a large number of language families in order to establish the extent of sound symbolic items in the core of the lexicon, as well as how the sound-meaning associations are mapped and interconnected. This study shows that by expanding the lexical dataset compared to previous studies and completely controlling for genetic bias, a larger number of sound-meaning associations can be established. In addition, by placing focus on the phonetic and semantic features of sounds and meanings, two new types of sound symbolism could be established, along with 20 semantically and phonetically superordinate concepts which could be linked to the semantic development of the lexicon.

    Study II explores how sound symbolic associations emerge in arbitrary words through sequential transmission over language users. This study demonstrates that transmission of signals is sufficient for iconic effects to emerge and does not require interactional communication. Furthermore, it also shows that more semantically marked meanings produce stronger effects and that iconicity in the size and shape domains seems to be dictated by similarities between the internal semantic relationships of each oppositional word pair and its respective associated sounds.

    Studies III and IV use color words to investigate differences and similarities between low-level cross-modal associations and sound symbolism in lexemes. Study III explores the driving factors of cross-modal associations between colors and sounds by experimentally testing implicit preferences between several different acoustic and visual parameters. The most crucial finding was that neither specific hues nor specific vowels produced any notable effects; it is therefore possible that previously reported associations between vowels and colors are actually dependent on underlying visual and acoustic parameters.

    Study IV investigates sound symbolic associations in words for colors in a large number of language families by correlating acoustically described segments with luminance and saturation values obtained from cross-linguistic color-naming data. In accordance with Study III, this study showed that luminance produced the strongest results and was primarily associated with vowels, while saturation was primarily associated with consonants. This could then be linked to the cross-linguistic lexicalization order of color words.

    To summarize, this dissertation shows the importance of studying the underlying parameters of sound symbolism, semantically and phonetically, in both language users and cross-linguistic language data. In addition, it also shows the applicability of non-arbitrary sound-meaning associations for gaining a deeper understanding of how linguistic categories have developed evolutionarily and historically.

    Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution

    Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures, in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
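The low-pass manipulation mentioned in this abstract — removing intelligible segmental detail while sparing the slow prosodic envelope — can be illustrated with a toy first-order filter on a synthetic signal. This is a simplification for intuition only (speech studies typically use sharper filters with cutoffs of a few hundred hertz), and all signals here are synthetic:

```python
import math

def low_pass(signal, alpha):
    """First-order (exponential) low-pass filter: passes slow variation,
    attenuates fast variation. alpha in (0, 1]; smaller = stronger smoothing."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

# Toy "speech": a slow prosody-like contour plus fast segment-like detail.
n = 1000
slow = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]          # 2 cycles
fast = [0.5 * math.sin(2 * math.pi * 100 * t / n) for t in range(n)]  # 100 cycles
mixed = [s + f for s, f in zip(slow, fast)]

filtered = low_pass(mixed, alpha=0.05)

# After filtering, the signal tracks the slow contour much more closely
# than the unfiltered mixture does: the fast detail is largely removed.
err_mixed = max(abs(m - s) for m, s in zip(mixed, slow))
err_filtered = max(abs(f - s) for f, s in zip(filtered, slow))
print(err_filtered < err_mixed)
```

The same logic underlies the stimulus manipulation: listeners lose the fast (lexical) information but keep the slow temporal structure that can be aligned with the referent's motion.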