
    The phylogenetic origin and mechanism of sound symbolism - the role of action-perception circuits

    As opposed to the classic Saussurean view of an arbitrary relationship between linguistic form and meaning, non-arbitrariness is a pervasive feature of human language. Sound symbolism, namely the intrinsic relationship between meaningless speech sounds and visual shapes, is a typical case of non-arbitrariness. A demonstration of sound symbolism is the “maluma-takete” effect, in which immanent links are observed between meaningless ‘round’ or ‘sharp’ speech sounds (e.g., maluma vs. takete) and round or sharp abstract visual shapes, respectively. A large body of empirical work suggests that these mappings are shared by humans and play a distinct role in the emergence and acquisition of language. However, important questions about the origins and mechanism of sound symbolic processing remain open; they are addressed in the present work. The first part of this dissertation focuses on the validation of sound symbolic effects in a forced choice task, and on the interaction of sound symbolism with two crossmodal mappings shared by humans. Human subjects were tested with a forced choice task on sound symbolic mappings crossed with two crossmodal audiovisual mappings (pitch-shape and pitch-spatial position). Subjects performed significantly above chance only for the sound symbolic associations: sound symbolic effects were replicated, while the two crossmodal mappings involving low-level audiovisual properties, such as pitch and spatial position, did not emerge. The second issue examined in the present dissertation is the phylogenetic origin of sound symbolic associations. Human subjects and a group of touchscreen-trained great apes were tested with a forced choice task on sound symbolic mappings. Only humans were able to process and/or infer the links between meaningless speech sounds and abstract shapes. These results reveal, for the first time, the specificity of humans’ sound symbolic ability, which can be related to neurobiological findings on the distinct development and connectivity of the human language network. The last part of the dissertation investigates whether action knowledge, and knowledge of the perceptual outputs of our actions, can explain sound symbolic mappings. In a series of experiments, human subjects performed sound symbolic mappings as well as mappings of the sounds of ‘round’ or ‘sharp’ hand actions to the shapes produced by these hand actions; in addition, the auditory and visual stimuli of the two conditions were crossed. Subjects detected congruencies significantly above chance for all mappings and, most importantly, their performances correlated positively across conditions. Physical acoustic and visual similarities between the audiovisual by-products of our hand actions and the sound symbolic pseudowords and shapes suggest that the link between meaningless speech sounds and abstract visual shapes is grounded in action knowledge. From a neurobiological perspective, the link between actions and their audiovisual by-products is also in accordance with distributed action-perception circuits in the human brain. Action-perception circuits, supported by the human neuroanatomical connectivity between auditory, visual, and motor cortices, emerge under associative learning and carry the perceptual and motor knowledge of our actions. These findings give a novel explanation of how symbolic communication is linked to our sensorimotor experiences.
To sum up, the present dissertation (i) validates the presence of sound symbolic effects in a forced choice task, (ii) shows that sound symbolic ability is specific to humans, and (iii) shows that action knowledge can provide the mechanistic glue for mapping meaningless speech sounds to abstract shapes. Overall, the present work contributes to a better understanding of the phylogenetic origins and mechanism of sound symbolic ability in humans.
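As a concrete illustration of the forced choice analyses described above, above-chance performance in a two-alternative task is typically assessed against the 50% chance level with an exact binomial test. The sketch below is a minimal, hedged example; the trial counts are invented for demonstration and are not data from the dissertation:

```python
# Hypothetical illustration: testing whether 2AFC congruency choices
# exceed the 50% chance level. Counts are made up for demonstration.
from scipy.stats import binomtest

n_trials = 96          # trials per participant (illustrative)
n_congruent = 64       # trials on which the congruent option was chosen

# One-sided exact binomial test against chance (p = 0.5)
result = binomtest(n_congruent, n_trials, p=0.5, alternative='greater')
print(f"choice proportion = {n_congruent / n_trials:.2f}, p = {result.pvalue:.4f}")
```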

    How touch and hearing influence visual processing in sensory substitution, synaesthesia and cross-modal correspondences

    Sensory substitution devices (SSDs) systematically turn visual dimensions into patterns of tactile or auditory stimulation. After training, a user of these devices learns to translate these audio or tactile sensations back into a mental visual picture. Most previous SSDs translate greyscale images using intuitive cross-sensory mappings to help users learn the devices, but more recent SSDs have started to incorporate additional colour dimensions such as saturation and hue. Chapter two examines how previous SSDs have translated the complexities of colour into hearing or touch. The chapter explores whether colour is useful for SSD users, how SSD and veridical colour perception differ, and how optimal cross-sensory mappings might be chosen. After long-term training, some blind users of SSDs report visual sensations from tactile or auditory stimulation. A related phenomenon is synaesthesia, a condition in which stimulation of one modality (e.g., touch) produces an automatic, consistent and vivid sensation in another modality (e.g., vision). Tactile-visual synaesthesia is an extremely rare variant that can shed light on how the tactile-visual system is altered when touch can elicit visual sensations. Chapter three reports a series of investigations on the tactile discrimination abilities and phenomenology of tactile-vision synaesthetes, alongside questionnaire data from synaesthetes unavailable for testing. Chapter four introduces a new SSD to test whether the presentation of colour information in sensory substitution affects object and colour discrimination. Chapter five presents experiments on intuitive auditory-colour mappings across a wide variety of sounds; these findings are used to predict the colour hallucinations reported under LSD while listening to the same sounds. Chapter six uses a new sensory substitution device designed to test the utility of these intuitive sound-colour links for visual processing. The findings are discussed with reference to how cross-sensory links, LSD and synaesthesia can inform optimal SSD design for visual processing.
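To make the kind of cross-sensory mapping described above concrete, the toy sketch below converts a greyscale image into a left-to-right auditory sweep, with vertical position mapped to pitch and brightness to loudness. This is a hypothetical example in the spirit of visual-to-auditory SSDs (the function name and parameters are invented), not the algorithm of any device evaluated in the thesis:

```python
import numpy as np

def image_to_sweep(img, duration=1.0, fs=22050, f_lo=200.0, f_hi=4000.0):
    """Toy visual-to-auditory substitution: scan a greyscale image left to
    right; each row drives a sine whose frequency rises with height and
    whose amplitude follows pixel brightness. Illustrative only."""
    n_rows, n_cols = img.shape
    col_len = int(duration * fs / n_cols)
    # Top row of the image gets the highest pitch.
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_rows)[::-1]
    t = np.arange(col_len) / fs
    audio = []
    for col in img.T:  # one image column per time slice
        audio.append(sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(col, freqs)))
    audio = np.concatenate(audio)
    return audio / np.max(np.abs(audio))   # normalise to [-1, 1]

# Illustrative use: a bright diagonal line becomes a rising pitch sweep.
img = np.eye(32)[::-1]
wave = image_to_sweep(img)
```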

    Investigating the Cognitive and Neural Mechanisms underlying Multisensory Perceptual Decision-Making in Humans

    On a frequent day-to-day basis, we encounter situations that require the formation of decisions based on ambiguous and often incomplete sensory information. Perceptual decision-making defines the process by which sensory information is consolidated and accumulated towards one of multiple possible choice alternatives, which inform our behavioural responses. Perceptual decision-making can be understood, both theoretically and neurologically, as a process of stochastic sensory evidence accumulation towards some choice threshold; once this threshold is exceeded, a response is facilitated, informing the overt actions undertaken. Considerable progress has been made towards understanding the cognitive and neural mechanisms underlying perceptual decision-making. Analyses of reaction times (RTs, typically on the order of milliseconds) and choice accuracy, which reflect decision-making behaviour, can be coupled with neuroimaging methodologies, notably electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), to identify spatiotemporal components representative of the neural signatures of such accumulation-to-bound decision formation on a single-trial basis. Taken together, these provide an experimental framework conceptualising the key computations underlying perceptual decision-making. Despite this, relatively little is known about the enhancements or alterations to perceptual decision-making that arise from the integration of information across multiple sensory modalities. Consolidating the available sensory evidence requires processing information presented in more than one sensory modality, often near-simultaneously, to exploit the salient percepts for what we term multisensory (perceptual) decision-making. Specifically, multisensory integration must be considered within the perceptual decision-making framework in order to understand how information becomes stochastically accumulated to inform overt sensory-motor choice behaviours. Recently, substantial progress has been made through the application of behaviourally-informed and/or neurally-informed modelling approaches. These approaches fit model parameters to behavioural and/or neuroimaging datasets in order to (a) dissect the constituent cognitive and neural processes underlying perceptual decision-making with both multisensory and unisensory information, and (b) infer mechanistically how multisensory enhancements arise from the integration of information across modalities to benefit perceptual decision formation. Nevertheless, the spatiotemporal locus of the neural and cognitive underpinnings of these enhancements remains subject to debate; in particular, our understanding of which brain regions are predictive of such enhancements, where they arise, and how they influence decision-making behaviour requires further exploration. The current thesis outlines empirical findings from three studies aimed at providing a more complete characterisation of multisensory perceptual decision-making, utilising EEG and accumulation-to-bound modelling methodologies that incorporate both behaviourally-informed and neurally-informed approaches, to investigate where, when, and how perceptual improvements arise during multisensory perceptual decision-making.
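As context for the chapter summaries that follow, the accumulation-to-bound process described above can be illustrated with a minimal drift-diffusion simulation. This is a hedged sketch; all parameter values are illustrative placeholders, not values fitted anywhere in the thesis:

```python
import numpy as np

def simulate_ddm(drift=0.3, boundary=1.0, ndt=0.3, noise=1.0, dt=0.001, rng=None):
    """One drift-diffusion trial: evidence accumulates noisily from 0 until
    it crosses +boundary (choice A) or -boundary (choice B).
    Returns (reaction time in seconds, choice)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, (1 if x > 0 else 0)   # non-decision time + decision time

rng = np.random.default_rng(0)
trials = [simulate_ddm(rng=rng) for _ in range(1000)]
rts = [rt for rt, _ in trials]
p_upper = np.mean([choice for _, choice in trials])
print(f"mean RT = {np.mean(rts):.3f} s, P(upper bound) = {p_upper:.2f}")
```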
The thesis's modelling approaches sought to probe the modulatory influence of three factors, unisensory formulated cross-modal associations (Chapter 2), natural ageing (Chapter 3), and perceptual learning (Chapter 4), on the cognitive and neural mechanisms underlying observable benefits to multisensory decision formation. Chapter 2 outlines secondary analyses, utilising a neurally-informed modelling approach, characterising the spatiotemporal dynamics of neural activity underlying auditory pitch-visual size cross-modal associations; in particular, it functionally probes how unisensory, auditory pitch-driven associations benefit perceptual decision formation. EEG measurements were recorded from participants performing an Implicit Association Test (IAT), a two-alternative forced-choice (2AFC) paradigm that presents one unisensory stimulus feature per trial for participants to categorise while manipulating the stimulus feature-response key mappings of auditory pitch-visual size cross-modal associations. Because the associations are probed from unisensory stimuli alone, this design overcomes the issue of mixed selectivity in recorded neural activity that is prevalent in previous cross-modal associative research, where multisensory stimuli were presented near-simultaneously. Categorisations were faster (i.e., lower RTs) when stimulus feature-response key mappings were associatively congruent, compared to associatively incongruent, between the two associative counterparts, demonstrating a behavioural benefit to perceptual decision formation. Multivariate Linear Discriminant Analysis (LDA) was used to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance, identifying two EEG components that discriminated the neural activity underlying the benefits of associative congruency of stimulus feature-response key mappings. Application of a neurally-informed Hierarchical Drift Diffusion Model (HDDM) demonstrated early sensory processing benefits, with incongruent stimulus feature-response key mappings increasing the duration of non-decisional processes, and late post-sensory alterations to decision dynamics, with congruent mappings decreasing the quantity of evidence required to facilitate a decision. Hence, the trial-by-trial variability in perceptual decision formation from unisensory facilitated cross-modal associations could be predicted from neural activity within our neurally-informed modelling approach. Next, Chapter 3 outlines cognitive research investigating age-related impacts on the behavioural indices of multisensory perceptual decision-making (i.e., RTs and choice accuracy). Natural ageing has been shown to affect multisensory perceptual decision-making dynamics in diverse ways, but the constituent cognitive processes affected remain unclear; in particular, a mechanistic account of why older adults may exhibit preserved multisensory integrative benefits yet display generalised perceptual deficits relative to younger adults remains inconclusive. To address this limitation, 212 participants performed an online variant of a well-established audiovisual object categorisation paradigm, whereby age-related differences in RTs and choice accuracy (binary responses) between audiovisual (AV), visual (V), and auditory (A) trial types could be assessed between Younger Adults (YAs; Mean ± Standard Deviation = 27.95 ± 5.82 years) and Older Adults (OAs; Mean ± Standard Deviation = 60.96 ± 10.35 years).
Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants’ RTs and binary responses in order to probe age-related impacts on the latent processes underlying multisensory decision formation. Behaviourally, whereas OAs were typically slower (i.e., ↑ RTs) and less accurate (i.e., ↓ choice accuracy) than YAs across all sensory trial types, they exhibited greater differences in RTs between AV and V trials (i.e., ↑ AV-V RT difference), with no significant effects on choice accuracy, implicating preserved benefits of multisensory integration towards perceptual decision formation. HDDM provided parsimonious fits characterising these behavioural discrepancies between YAs and OAs. Notably, we found slower rates of sensory evidence accumulation (i.e., ↓ drift rates) for OAs across all sensory trial types, coupled with (1) higher rates of sensory evidence accumulation (i.e., ↑ drift rates) for OAs between AV versus V trial types irrespective of stimulus difficulty, (2) increased response caution (i.e., ↑ decision boundaries) between AV versus V trial types, and (3) decreased non-decisional processing duration (i.e., ↓ non-decision times) between AV versus V trial types for stimuli of increased difficulty. Our findings suggest that older adults trade off multisensory decision-making speed for accuracy to preserve enhancements towards perceptual decision formation relative to younger adults; they display an increased reliance on integrating multimodal information, through the principle of inverse effectiveness, as a compensatory mechanism for generalised cognitive slowing when processing unisensory information. Overall, our findings demonstrate how computational modelling can reconcile contrasting hypotheses of age-related changes in the processes underlying multisensory perceptual decision-making behaviour. Finally, Chapter 4 outlines research probing the influence of perceptual learning on multisensory perceptual decision-making. Views of unisensory perceptual learning imply that improvements in perceptual sensitivity may be due to enhancements in early sensory representations and/or modulations of post-sensory decision dynamics. We sought to assess whether these views could account for improvements in perceptual sensitivity for multisensory stimuli, or even exacerbations of multisensory enhancements towards decision formation, by consolidating the spatiotemporal locus of where and when in the brain they may be observed. We recorded EEG activity from participants who completed the same audiovisual object categorisation paradigm (as outlined in Chapter 3) over three consecutive days. We used single-trial multivariate LDA to characterise the spatiotemporal trajectory of the decision dynamics underlying any observed multisensory benefits both (a) within and (b) between visual, auditory, and audiovisual trial types. While significant decreases in RTs and increases in choice accuracy were found over testing days, we did not find any significant effects of perceptual learning on multisensory or unisensory perceptual decision formation. Similarly, EEG analysis did not reveal any neural components indicative of early or late modulatory effects of perceptual learning on brain activity, which we attribute to (1) the long duration of stimulus presentations (300 ms), and (2) a lack of sufficient statistical power for our LDA classifier to discriminate face-versus-car trial types.
We end this chapter with considerations for discerning multisensory benefits towards perceptual decision formation, and recommendations for altering our experimental design to observe the effects of perceptual learning as a decision neuromodulator. These findings contribute to the literature justifying the increasing relevance of behaviourally-informed and/or neurally-informed modelling approaches for investigating multisensory perceptual decision-making. In particular, discussion of the cognitive and/or neural mechanisms to which the benefits of multisensory integration towards perceptual decision formation can be attributed, as well as of the modulatory impact of the decision modulators in question, supports a theoretical reconciliation in which multisensory integrative benefits are not tied to specific spatiotemporal neural dynamics or cognitive processes.
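The single-trial multivariate LDA used throughout these chapters can be sketched in a few lines with scikit-learn's LinearDiscriminantAnalysis. The data below are random placeholders standing in for epoched EEG, and the epoch shape and window size are assumptions for illustration, not the thesis's recordings or analysis pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder data: 200 trials x 64 channels x 250 samples of epoched EEG,
# with binary condition labels (e.g., congruent vs. incongruent).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 64, 250))
labels = rng.integers(0, 2, size=200)

# Sliding-window LDA: discriminate conditions from the spatial pattern in
# each time window, yielding a discrimination time course across the epoch.
window = 25   # samples per window (e.g., 50 ms at 500 Hz; assumed)
scores = []
for start in range(0, epochs.shape[2] - window + 1, window):
    X = epochs[:, :, start:start + window].mean(axis=2)  # average within window
    lda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
    scores.append(cross_val_score(lda, X, labels, cv=5).mean())
print(np.round(scores, 2))   # ~0.5 everywhere for random data, as expected
```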

    Earth as Interface: Exploring chemical senses with Multisensory HCI Design for Environmental Health Communication

    As environmental problems intensify, the chemical senses, smell and taste, are the most relevant senses for evidencing them. The environmental exposure vectors that can reach human beings comprise air, food, soil and water [1]. Within this context, understanding the link between environmental exposures and health [2] is crucial to make informed choices, protect the environment and adapt to new environmental conditions [3]. Smell and taste therefore lead to multi-sensorial experiences which convey multi-layered information about local and global events [4]. However, these senses are usually absent when those problems are represented in digital systems. The multisensory HCI design framework investigates the inclusion of the chemical senses in digital systems [5]. Ongoing efforts tackle the digitalization of smell and taste for digital delivery, transmission or substitution [6]. Although experiments have proved technological feasibility, dissemination depends on the development of relevant applications [7]. This thesis aims to fill those gaps by demonstrating how the chemical senses provide the means to link environment and health based on scientific and geolocation narratives [8], [9], [10]. We present a Multisensory HCI design process which accomplished the symbolic display of smell and taste and led us to a new multi-sensorial interaction system presented herein. We describe the conceptualization, design and evaluation of Earthsensum, an exploratory case study project. Earthsensum offered 16 study participants environmental smell and taste experiences tied to real geolocations. These experiences were represented digitally using mobile virtual reality (MVR) and mobile augmented reality (MAR). These technologies bridge the real and digital worlds through digital representations in which we can reproduce the multi-sensorial experiences. Our study findings showed that the proposed interaction system is intuitive and can lead not only to a better understanding of smell and taste perception but also of environmental problems. Participants' comprehension of the link between environmental exposures and health was successful, and they would recommend this system as an educational tool. Our conceptual design approach was validated and further developments were encouraged. In this thesis, we demonstrate how to apply Multisensory HCI methodology to design with the chemical senses. We conclude that the presented symbolic representation model of smell and taste allows communicating these experiences on digital platforms. Due to its context-dependency, MVR and MAR platforms are adequate technologies for this purpose. Future developments intend to explore the conceptual approach further; in particular, they centre on using the system to induce behaviour change. This thesis opens up new application possibilities for digital chemical sense communication, Multisensory HCI Design and environmental health communication.

    Complexity, the auditory system, and perceptual learning in naĂŻve users of a visual-to-auditory sensory substitution device.

    Sensory substitution devices (SSDs) are non-invasive visual prostheses that use sound or touch to aid functioning in the blind. Algorithms informed by natural crossmodal correspondences convert sensory information attributed to an impaired modality and transmit it back to the user via an unimpaired modality, utilising multisensory networks to activate visual areas of cortex. While behavioural success has been demonstrated in non-visual tasks using SSDs, how they exploit a metamodal brain organised for function remains an open research question. While imaging studies have shown activation of visual cortex in trained users, naïve users likely rely on the auditory characteristics of the output signal, and it is perceptual learning that facilitates crossmodal plasticity. In this thesis I investigated visual-to-auditory sensory substitution in naïve sighted users to assess whether signal complexity and processing in the auditory system facilitate and limit simple recognition tasks. In four experiments evaluating signal complexity, object resolution, harmonic interference and information load, I demonstrate above-chance performance by naïve users in all tasks, an increase in generalised learning, limitations in recognition due to principles of auditory scene analysis, and capacity limits that hinder performance. Results are considered from both theoretical and applied perspectives, with solutions designed to further inform theory on the multisensory perceptual brain and to provide effective training to aid visual rehabilitation.

    Iconicity in Language and Speech

    This dissertation is concerned with the major theme of iconicity and its prevalence on different linguistic levels. Iconicity refers to a resemblance between the linguistic form and the meaning of a referent (cf. Perniss and Vigliocco, 2014). Just as a sculpture resembles an object or a model, so can the sound or shape of words resemble the thing they refer to. Previous theoretical approaches emphasise that arbitrariness of the linguistic sign is one of the main features of human language, and that iconicity, while it may have played a role in language evolution, is negligible in contemporary language. In contrast, the main aim of this thesis is to explore the potential and the importance of iconicity in language today. The individual chapters of the dissertation can be viewed as separate parts that, taken together, reveal the comprehensive spectrum of iconicity. Starting from the debate on language evolution, the individual chapters address iconicity on different linguistic levels. I present experimental evidence on sound symbolism, using the example of German Pokémon names, on iconic prosody, and on iconic words, the so-called ideophones. The results of the individual investigations point to the widespread use of iconicity in contemporary German. Moreover, this dissertation deciphers the communicative potential of iconicity as a force that not only enabled the emergence of language, but also persists after millennia, unfolding again and again and encountering us every day in speech, writing, and gestures.

    Audio-visual interactions in manual and saccadic responses

    Chapter 1 introduces the notions of multisensory integration (the binding of information coming from different modalities into a unitary percept) and multisensory response enhancement (the improvement of the response to multisensory stimuli relative to the response to the most efficient unisensory stimulus), as well as the general goal of the present thesis, which is to investigate different aspects of the multisensory integration of auditory and visual stimuli in manual and saccadic responses. The subsequent chapters report experimental evidence of different factors affecting the multisensory response: spatial discrepancy, stimulus salience, congruency between cross-modal attributes, and the inhibitory influence of concurrent distractors. Chapter 2 reports three experiments on the role of the superior colliculus (SC) in multisensory integration. To this end, the absence of S-cone input to the SC was exploited, following the method introduced by Sumner, Adamjee, and Mollon (2002). I found evidence that the spatial rule of multisensory integration (Meredith & Stein, 1983) applies only to SC-effective (luminance-channel) stimuli, and does not apply to SC-ineffective (S-cone) stimuli. The same results were obtained with an alternative method for the creation of S-cone stimuli: the tritanopic technique (Cavanagh, MacLeod, & Anstis, 1987; Stiles, 1959; Wald, 1966). In both cases, significant multisensory response enhancements were obtained using a focused attention paradigm, in which participants had to focus their attention on the visual modality and inhibit responses to auditory stimuli. Chapter 3 reports two experiments showing the influence of shape congruency between auditory and visual stimuli on multisensory integration, i.e. the correspondence between structural aspects of visual and auditory stimuli (e.g., spiky shapes and “spiky” sounds). Detection of audio-visual events was faster for congruent than incongruent pairs, and this congruency effect also occurred in a focused attention task, where participants were required to respond only to visual targets and could ignore irrelevant auditory stimuli. This particular type of cross-modal congruency has been evaluated in relation to the inverse effectiveness rule of multisensory integration (Meredith & Stein, 1983). In Chapter 4, the locus of the cross-modal shape congruency effect was evaluated by applying the race model analysis (Miller, 1982). The results showed that the violation of the model is stronger for some congruent pairings than for incongruent pairings, and evidence of multisensory depression was found for some pairs of incongruent stimuli. These data imply a perceptual locus for the cross-modal shape congruency effect. Moreover, they show that multisensoriality does not always induce an enhancement: in some cases, when the attributes of the stimuli are particularly incompatible, a unisensory response may be more effective than the multisensory one. Chapter 5 reports experiments centred on saccadic generation mechanisms. Specifically, the multisensoriality of the saccadic inhibition (SI; Reingold & Stampe, 2002) phenomenon is investigated. Saccadic inhibition refers to a characteristic inhibitory dip in saccadic frequency beginning 60-70 ms after the onset of a distractor. The very short latency of SI suggests that the distractor interferes directly with subcortical target selection processes in the SC. The impact of multisensory stimulation on SI was studied in four experiments.
In Experiments 7 and 8, a visual target was presented with a concurrent auditory, visual or audio-visual distractor. Multisensory audio-visual distractors induced stronger SI than did unisensory distractors, but there was no evidence of multisensory integration (as assessed by a race model analysis). In Experiments 9 and 10, visual, auditory or audio-visual targets were accompanied by a visual distractor. When there was no distractor, multisensory integration was observed for multisensory targets; however, this integration effect disappeared in the presence of a visual distractor. As a general conclusion, the results from Chapter 5 indicate that multisensory integration occurs for target stimuli but not for distracting stimuli, and that the process of audio-visual integration is itself sensitive to disruption by distractors.
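The race model analysis invoked in Chapters 4 and 5 rests on Miller's (1982) inequality: if redundant signals are processed in independent racing channels, then P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) for every time t, so positive deviations indicate integration beyond a race. The sketch below tests this with simulated reaction times; all values are invented for illustration, not data from the thesis:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 10)):
    """Miller's (1982) race model inequality: P(RT_AV <= t) must not exceed
    P(RT_A <= t) + P(RT_V <= t). Positive returned values indicate
    violations, i.e. evidence for integration beyond an independent race."""
    ts = np.quantile(rt_av, quantiles)            # probe times from the AV distribution
    cdf = lambda rts, t: np.mean(rts <= t)        # empirical CDF at time t
    return np.array([cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t))
                     for t in ts])

# Illustrative simulated reaction times in seconds (made-up parameters).
rng = np.random.default_rng(1)
rt_a  = rng.normal(0.42, 0.06, 500)
rt_v  = rng.normal(0.40, 0.06, 500)
rt_av = rng.normal(0.34, 0.05, 500)   # faster than either unisensory condition
print(np.round(race_model_violation(rt_av, rt_a, rt_v), 3))
```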

    Practicing phonomimetic (conducting-like) gestures facilitates vocal performance of typically developing children and children with autism: an experimental study

    Every music teacher is likely to teach one or more children with autism, given that an average of one in 54 persons in the United States receives a diagnosis of Autism Spectrum Disorder (ASD). Persons with ASD often show tremendous interest in music, and some even become masterful performers; however, the combination of deficits and abilities associated with ASD can pose unique challenges for music teachers. This experimental study shows that phonomimetic (conducting-like) gestures can be used to teach the expressive qualities of music. Children were asked to watch video recordings of conducting-like gestures and to produce vocal sounds matching the gestures. The empirical findings indicate that motor training can strengthen visual-to-vocomotor couplings in both populations, suggesting that phonomimetic gesture may be a suitable approach for teaching musical expression in inclusive classrooms.

    Varieties of Attractiveness and their Brain Responses

