22 research outputs found

    Can music be figurative? Exploring the possibility of crossmodal similarities between music and visual arts

    According to both experimental research and common sense, classical music is a better fit for figurative art than jazz. We hypothesize that such fits may reflect underlying crossmodal structural similarities between music and painting genres. We present two preliminary studies aimed at addressing this hypothesis. Experiment 1 tested the goodness of fit between two music genres (classical and jazz) and two painting genres (figurative and abstract). Participants were presented with twenty sets of six paintings (three figurative, three abstract) viewed under three sound conditions: 1) silence, 2) classical music, or 3) jazz. While figurative paintings received higher aesthetic appreciation ratings than abstract ones, a gender effect was also found: the aesthetic appreciation of paintings in male participants was modulated by music genre, whereas music genre did not affect aesthetic appreciation in female participants. Our results only partly support the notion that classical music enhances the aesthetic appreciation of figurative art. Experiment 2 tested whether the conceptual categories ‘figurative’ and ‘abstract’ can also be extended to music. In session 1, participants were first asked to classify 30 paintings (10 abstract, 10 figurative, 10 ambiguous paintings that could fit either category) as abstract or figurative and then to rate them for pleasantness; in session 2, participants were asked to classify 40 excerpts of music (20 classical, 20 jazz) as abstract or figurative and to rate them for pleasantness. Paintings that were clearly abstract or figurative were all classified accordingly, while the majority of ambiguous paintings were classified as abstract. Results also show a gender effect for the pleasantness of paintings: female participants rated ambiguous and abstract paintings higher. More interestingly, results show an effect of music genre on classification, demonstrating that it is possible to classify music as figurative or abstract and thus supporting the hypothesis of crossmodal similarities between these two artistic expressions in different sensory modalities.

    Which words are most iconic? Iconicity in English sensory words

    Some spoken words are iconic, exhibiting a resemblance between form and meaning. We used native speaker ratings to assess the iconicity of 3001 English words, analyzing their iconicity in relation to part-of-speech differences and differences between the sensory domains they relate to (sight, sound, touch, taste and smell). First, we replicated previous findings showing that onomatopoeia and interjections were highest in iconicity, followed by verbs and adjectives, and then nouns and grammatical words. We further show that words with meanings related to the senses are more iconic than words with abstract meanings. Moreover, iconicity is not distributed equally across sensory modalities: auditory and tactile words tend to be more iconic than words denoting concepts related to taste, smell and sight. Last, we examined the relationship between iconicity (resemblance between form and meaning) and systematicity (statistical regularity between form and meaning). We find that iconicity in English words is more strongly related to sensory meanings than systematicity is. Altogether, our results shed light on the extent and distribution of iconicity in modern English.
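
The comparison described above is, at its core, a summary of mean iconicity ratings by part of speech and by dominant sensory modality. The following is a minimal sketch of that kind of analysis, assuming a hypothetical table with one row per word and columns named "word", "iconicity", "pos" and "modality"; the file and column names are illustrative, not the authors' actual dataset.

```python
# Sketch: summarize per-word iconicity ratings by part of speech and by
# dominant sensory modality (hypothetical column and file names).
import pandas as pd

ratings = pd.read_csv("iconicity_ratings.csv")  # assumed file: word, iconicity, pos, modality

# Mean iconicity by part of speech (e.g. onomatopoeia/interjections vs. nouns).
by_pos = ratings.groupby("pos")["iconicity"].mean().sort_values(ascending=False)

# Mean iconicity by dominant sensory modality (sight, sound, touch, taste, smell).
by_modality = ratings.groupby("modality")["iconicity"].mean().sort_values(ascending=False)

print(by_pos)
print(by_modality)
```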

    Children and adults' understanding and use of sound-symbolism in novel words

    Sound-symbolism is the inherent link between the sound of a word and its meaning. The aim of this thesis is to gain insight into the nature of sound-symbolism. There are five empirical chapters, each of which aims to uncover children's and adults' understanding of sound-symbolic words. Chapter 1 is a literature review of sound-symbolism. Chapter 2 is a cross-linguistic developmental study looking at the acquisition of sound-symbolism. Chapter 3 looks at children's use of sound-symbolism in a verb-learning task. Chapter 4 looks at children's use of sound-symbolism when learning and memorising novel verbs. Chapter 5 consists of two experiments looking at which part of a word is sound-symbolic; this study compared different types of consonants and vowels across a number of domains in an attempt to gain an understanding of the nature of sound-symbolism. Chapter 6 looks at the potential mechanisms by which sound-symbolism is understood; this study is a replication of previous research, which found that sound-symbolic sensitivity is increased when a word is said and not just heard. In total, therefore, there are five empirical chapters, each of which looks at the nature of sound-symbolic meaning from a slightly different angle.

    Sound symbolism and the Bouba-Kiki effect: uniting function and mechanism in the search for language universals

    In contemporary linguistics, the relationship between word form and meaning is assumed to be arbitrary: words are mutually agreed-upon symbolic tokens whose form is entirely unrelated to the objects or events that they denote. Despite the dictum of arbitrariness, examples of correspondences between word form and meaning have been studied periodically over the last century, labeled collectively under the banner of sound symbolism. To investigate the phenomenon of sound symbolism, a series of experiments was conducted on the classic Bouba-Kiki phenomenon. These experiments not only addressed methodological and interpretive issues of earlier research, but also entailed a more fine-grained approach that allowed for a clearer delineation of the word characteristics responsible for sound-symbolic biases. An interpretation of the findings of these experiments is presented that is both in line with potential functional accounts of sound symbolism and grounded in probable mechanistic instantiations.

    Crossmodal correspondences: A tutorial review


    Synaesthetic Resonances in the Intermedial Soundtrack of Imitating the Dog’s Tales from The Bar of Lost Souls

    As with most hybrid performances, the experience of spectating Imitating the Dog's Tales from the Bar of Lost Souls (2010) (reworked to become 6 Degrees Below the Horizon (2011)) is a multifaceted one, consisting of an encounter with intermedial bodies, spaces, and technologies. Focusing mainly on the role of the soundtrack, composed by Hope and Social and myself, I will explore the ways in which it elicits synaesthetic experiences, turning this intermedial work into a 'playground' of practice where modes of seeing, hearing and experiencing cultural constructs may be contested. I will define synaesthesia through the context of its Greek etymology, scientific research and artistic practice, relating it specifically to intermedial performance. I will go on to address the case study analytically, building upon Josephine Machon's reworked definition of synaesthetic experiences, namely that of '(syn)aesthetics', arguing that the soundtrack elicits a (syn)aesthetic mode of spectatorial engagement.

    Perceptual Organization

    Perceiving the world of real objects seems so easy that it is difficult to grasp just how complicated it is. Not only do we need to construct the objects quickly, but the objects also keep changing, even though we think of them as having a consistent, independent existence (Feldman, 2003). Yet we usually get it right; there are few failures. We can perceive a tree in a blinding snowstorm or a deer bounding across a tree line, dodge a snowball, catch a baseball, detect the crack of a branch breaking in a strong windstorm amidst the rustling of trees, predict the sounds of a dripping faucet, or track a street musician strolling down the road.

    Ashitaka: an audiovisual instrument

    This thesis looks at how sound and visuals may be linked in a musical instrument, with a view to creating such an instrument. Though it appears to be an area of significant interest, at the time of writing there is very little existing written or theoretical research available in this domain. Therefore, based on Michel Chion’s notion of synchresis in film, the concept of a fused, inseparable audiovisual material is presented. The thesis then looks at how such a material may be created and manipulated in a performance situation. A software environment named Heilan was developed in order to provide a base for experimenting with different approaches to the creation of audiovisual instruments. The software and a number of experimental instruments are discussed, prior to a discussion and evaluation of the final ‘Ashitaka’ instrument. This instrument represents the culmination of the work carried out for this thesis, and is intended as a first step in identifying the issues and complications involved in the creation of such an instrument.

    Audio-visual interactions in manual and saccadic responses

    Chapter 1 introduces the notions of multisensory integration (the binding of information coming from different modalities into a unitary percept) and multisensory response enhancement (the improvement of the response to multisensory stimuli, relative to the response to the most efficient unisensory stimulus), as well as the general goal of the present thesis, which is to investigate different aspects of the multisensory integration of auditory and visual stimuli in manual and saccadic responses. The subsequent chapters report experimental evidence of different factors affecting the multisensory response: spatial discrepancy, stimulus salience, congruency between cross-modal attributes, and the inhibitory influence of concurrent distractors. Chapter 2 reports three experiments on the role of the superior colliculus (SC) in multisensory integration. To this end, the absence of S-cone input to the SC was exploited, following the method introduced by Sumner, Adamjee, and Mollon (2002). I found evidence that the spatial rule of multisensory integration (Meredith & Stein, 1983) applies only to SC-effective (luminance-channel) stimuli, and does not apply to SC-ineffective (S-cone) stimuli. The same results were obtained with an alternative method for the creation of S-cone stimuli: the tritanopic technique (Cavanagh, MacLeod, & Anstis, 1987; Stiles, 1959; Wald, 1966). In both cases, significant multisensory response enhancements were obtained using a focused attention paradigm, in which participants had to focus their attention on the visual modality and inhibit responses to auditory stimuli. Chapter 3 reports two experiments showing the influence of shape congruency between auditory and visual stimuli on multisensory integration, i.e. the correspondence between structural aspects of visual and auditory stimuli (e.g., a spiky shape and a “spiky” sound). Detection of audio-visual events was faster for congruent than incongruent pairs, and this congruency effect also occurred in a focused attention task, where participants were required to respond only to visual targets and could ignore irrelevant auditory stimuli. This particular type of cross-modal congruency was evaluated in relation to the inverse effectiveness rule of multisensory integration (Meredith & Stein, 1983). In Chapter 4, the locus of the cross-modal shape congruency effect was evaluated by applying the race model analysis (Miller, 1982). The results showed that the violation of the model is stronger for some congruent pairings than for incongruent pairings. Evidence of multisensory depression was found for some pairs of incongruent stimuli. These data imply a perceptual locus for the cross-modal shape congruency effect. Moreover, it is evident that multisensoriality does not always induce an enhancement; in some cases, when the attributes of the stimuli are particularly incompatible, a unisensory response may be more effective than the multisensory one. Chapter 5 reports experiments centred on saccadic generation mechanisms. Specifically, the multisensoriality of the saccadic inhibition (SI; Reingold & Stampe, 2002) phenomenon is investigated. Saccadic inhibition refers to a characteristic inhibitory dip in saccadic frequency beginning 60-70 ms after the onset of a distractor. The very short latency of SI suggests that the distractor interferes directly with subcortical target selection processes in the SC. The impact of multisensory stimulation on SI was studied in four experiments.
In Experiments 7 and 8, a visual target was presented with a concurrent auditory, visual or audio-visual distractor. Multisensory audio-visual distractors induced stronger SI than unisensory distractors did, but there was no evidence of multisensory integration (as assessed by a race model analysis). In Experiments 9 and 10, visual, auditory or audio-visual targets were accompanied by a visual distractor. When there was no distractor, multisensory integration was observed for multisensory targets. However, this multisensory integration effect disappeared in the presence of a visual distractor. As a general conclusion, the results from Chapter 5 indicate that multisensory integration occurs for target stimuli, but not for distracting stimuli, and that the process of audio-visual integration is itself sensitive to disruption by distractors.
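
Two quantitative tools recur in this abstract: multisensory response enhancement (the percentage speed-up of responses to audio-visual stimuli relative to the most efficient unisensory condition) and the race model inequality of Miller (1982), whose violation is taken as evidence of genuine integration rather than mere statistical facilitation. The sketch below illustrates both computations on hypothetical reaction-time arrays; the variable names (rt_audio, rt_visual, rt_audiovisual) are assumptions for illustration, not data or code from the thesis.

```python
# Sketch of a race model inequality check (Miller, 1982) and of multisensory
# response enhancement, for hypothetical reaction times (in ms).
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of reaction times at or below each time point in t_grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violations(rt_audio, rt_visual, rt_audiovisual, n_points=100):
    """Time points where the audio-visual CDF exceeds the race model bound
    F_A(t) + F_V(t); such violations suggest coactivation (integration)."""
    all_rts = np.concatenate([rt_audio, rt_visual, rt_audiovisual])
    t_grid = np.linspace(all_rts.min(), all_rts.max(), n_points)
    f_a = empirical_cdf(rt_audio, t_grid)
    f_v = empirical_cdf(rt_visual, t_grid)
    f_av = empirical_cdf(rt_audiovisual, t_grid)
    bound = np.minimum(f_a + f_v, 1.0)   # race model (Boole) upper bound
    return t_grid[f_av > bound]          # empty array -> no violation

def multisensory_response_enhancement(rt_audio, rt_visual, rt_audiovisual):
    """Percentage speed-up of the mean multisensory response relative to the
    fastest mean unisensory response."""
    best_unisensory = min(np.mean(rt_audio), np.mean(rt_visual))
    return 100.0 * (best_unisensory - np.mean(rt_audiovisual)) / best_unisensory
```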