    Auditory conflict and congruence in frontotemporal dementia.

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity processing in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes.

    Learning to see and hear in 3D: Virtual reality as a platform for multisensory perceptual learning

    Virtual reality (VR) is an emerging technology which allows for the presentation of immersive and realistic yet tightly controlled audiovisual scenes. In comparison to conventional displays, a VR system can include depth, 3D audio, and fully integrated eye, head, and hand tracking, all over a much larger field of view than a desktop monitor provides. These properties demonstrate great potential for use in vision science experiments, especially those that can benefit from more naturalistic stimuli, particularly in the case of visual rehabilitation. Prior work using conventional displays has demonstrated that visual loss due to stroke can be partially rehabilitated through laboratory-based tasks designed to promote long-lasting changes to visual sensitivity. In this work, I explore how VR can provide a platform for new, more complex training paradigms which leverage multisensory stimuli. In this dissertation, I will (I) provide context motivating the use of multisensory perceptual training for visual rehabilitation, (II) demonstrate best practices for the appropriate use of VR in a controlled psychophysics setting, (III) describe a prototype integrated hardware system for improved eye tracking in VR, and (IV, V) discuss results from two audiovisual perceptual training studies, one using multisensory stimuli and the other using cross-modal audiovisual stimuli. This dissertation provides the foundation for future work in rehabilitating visual deficits, both by improving the hardware and software systems used to present the training paradigm and by validating new multisensory training techniques not previously accessible with conventional desktop displays.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
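    As an illustration of the spatial manipulation described above, the following sketch (hypothetical geometry, not the original stimulus code) computes the shifted positions of eight rectangles by moving each one ±1 degree of visual angle along the imaginary spoke connecting it to central fixation.

```python
import numpy as np

def shift_along_spokes(centres_deg, shift_deg=1.0, rng=None):
    """Shift each stimulus centre along the imaginary spoke joining it to
    central fixation (the origin) by +/- shift_deg of visual angle.

    centres_deg : (N, 2) array of x, y positions in degrees relative to fixation.
    """
    rng = np.random.default_rng() if rng is None else rng
    centres_deg = np.asarray(centres_deg, dtype=float)
    # Unit vector from fixation to each rectangle centre (direction of its spoke).
    radii = np.linalg.norm(centres_deg, axis=1, keepdims=True)
    spokes = centres_deg / radii
    # Randomly move each item 1 degree inward or outward along its own spoke.
    signs = rng.choice([-1.0, 1.0], size=(len(centres_deg), 1))
    return centres_deg + signs * shift_deg * spokes

# Eight rectangles equally spaced on a ring 4 degrees from fixation (illustrative values).
angles = np.deg2rad(np.arange(8) * 45.0)
ring = 4.0 * np.column_stack([np.cos(angles), np.sin(angles)])
print(shift_along_spokes(ring))
```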

    Enhancing the use of Haptic Devices in Education and Entertainment

    This research was part of the two-year Horizon 2020 European project "weDRAW". The premise of the project was that "specific sensory systems have specific roles to learn specific concepts". This work explores the use of the haptic modality, stimulated by means of force-feedback devices, to convey abstract concepts inside virtual reality. After a review of the current use of haptic devices in education and of available haptic software and game engines, we focus on the implementation of a haptic plugin for game engines (HPGE, based on the state-of-the-art rendering library CHAI3D) and its evaluation in experiments on human perception and multisensory integration.
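    For context only, the sketch below shows the basic penalty (spring-damper) force computation on which force-feedback rendering of a virtual surface is commonly built; it is a simplified stand-in, not the HPGE plugin or the CHAI3D API, and the stiffness and damping values are assumed.

```python
import numpy as np

# Minimal penalty-based haptic rendering step (illustrative only; real plugins
# such as HPGE delegate this to CHAI3D and run at ~1 kHz on the device thread).
STIFFNESS = 800.0   # N/m, virtual wall stiffness (assumed value)
DAMPING = 1.0       # N*s/m, assumed damping coefficient

def wall_force(position, velocity, wall_height=0.0):
    """Spring-damper force for a horizontal virtual wall at y = wall_height.

    position, velocity : 3-vectors of the haptic device tip (metres, m/s).
    Returns the force to command on the device (newtons).
    """
    penetration = wall_height - position[1]
    if penetration <= 0.0:          # tip is above the wall: no contact
        return np.zeros(3)
    # Push straight up, proportional to penetration, minus a damping term.
    fy = STIFFNESS * penetration - DAMPING * velocity[1]
    return np.array([0.0, max(fy, 0.0), 0.0])

# One simulated step: tip 2 mm below the wall, moving downward at 5 cm/s.
print(wall_force(np.array([0.0, -0.002, 0.0]), np.array([0.0, -0.05, 0.0])))
```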

    Attention Restraint, Working Memory Capacity, and Mind Wandering: Do Emotional Valence or Intentionality Matter?

    Attention restraint appears to mediate the relationship between working memory capacity (WMC) and mind wandering (Kane et al., 2016). Prior work has identified two dimensions of mind wandering: emotional valence and intentionality. However, less is known about how WMC and attention restraint correlate with these dimensions. The current study examined the relationship between WMC, attention restraint, and mind wandering by emotional valence and intentionality. A confirmatory factor analysis demonstrated that WMC and attention restraint were strongly correlated, but only attention restraint was related to overall mind wandering, consistent with prior findings. However, when examining the emotional valence of mind wandering, attention restraint and WMC were related to negatively and positively valenced, but not neutral, mind wandering. Attention restraint was also related to intentional but not unintentional mind wandering. These results suggest that WMC and attention restraint predict some, but not all, types of mind wandering.
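    The study's latent-variable analysis is not reproduced here, but the following hedged sketch shows how the headline claim, that WMC and attention restraint predict some but not all types of mind wandering, could be examined with simple regressions; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant scores; column names are assumptions, not the
# authors' variable names.
df = pd.read_csv("mind_wandering_scores.csv")
# Expected columns: wmc (working memory composite), restraint (attention
# restraint composite), and mind-wandering rates by type: mw_negative,
# mw_positive, mw_neutral, mw_intentional, mw_unintentional.

for outcome in ["mw_negative", "mw_positive", "mw_neutral",
                "mw_intentional", "mw_unintentional"]:
    model = smf.ols(f"{outcome} ~ wmc + restraint", data=df).fit()
    print(outcome, model.params.round(3).to_dict(), sep="\t")
```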

    The Role of Cognitive Effort in Decision Performance Using Data Representations: a Cognitive Fit Perspective

    A major goal of Decision Support Systems (DSS) and Business Intelligence (BI) systems is to aid decision makers' decision performance by reducing effort. One critical part of those systems is the data representation component of visually intensive applications such as dashboards and data visualizations. Existing research has led to a number of theoretical approaches that explain decision performance through data representation's impact on users' cognitive effort, with Cognitive Fit Theory (CFT) being the most influential theoretical lens. However, the available CFT-based findings are inconclusive, and there is a lack of research that actually attempts to measure cognitive effort, the mechanism underlying CFT and the CFT-based literature. This research is the first to directly measure cognitive effort in a Cognitive Fit and Business Information Visualization context, and the first to evaluate both self-reported and physiological measures of cognitive effort. The research provides partial support for CFT by confirming that task characteristics and data representation do influence cognitive effort. This influence is pronounced for physiological measures of cognitive effort, while it is minimal for the self-reported measure. While cognitive effort was found to have an impact on decision time, this research suggests caution in assuming that task-representation fit influences decision accuracy. Furthermore, the level of impact varies between self-reported and physiological cognitive effort and is influenced by task complexity. The research also provides an extensive review of the cognitive fit theory, business information visualization, and cognitive effort literature, along with implications of the findings for both research and practice.
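    As a rough illustration of the cognitive-fit logic examined here, the sketch below codes trials as fitting or not fitting the task-representation pairing and compares both effort measures across the two groups; the data file, column names and fit coding are assumptions, not the dissertation's actual operationalization.

```python
import pandas as pd

# Hypothetical trial-level data; column names are assumptions for illustration.
df = pd.read_csv("cognitive_fit_trials.csv")
# Expected columns: task_type ("spatial"/"symbolic"), representation
# ("graph"/"table"), pupil_effort, self_report_effort, decision_time_s.

# Cognitive fit: spatial tasks paired with graphs, symbolic tasks with tables.
df["fit"] = ((df.task_type == "spatial") & (df.representation == "graph")) | \
            ((df.task_type == "symbolic") & (df.representation == "table"))

# Compare both effort measures and decision time across fit vs. non-fit trials.
print(df.groupby("fit")[["pupil_effort", "self_report_effort",
                         "decision_time_s"]].mean().round(3))
```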

    Auditory perspective: perception, rendering, and applications

    In our appreciation of auditory environments, distance perception is as crucial as lateralization. Although research has been carried out on distance perception, modern auditory displays do not yet take advantage of it to provide additional information on the spatial layout of sound sources and, as a consequence, enrich their content and quality. When designing a spatial auditory display, one must take into account the goal of the given application and the resources available in order to choose the optimal approach. In particular, rendering auditory perspective provides a hierarchical ordering of sound sources and allows the user's attention to be focused on the closest sound source. Moreover, when visual data are no longer available, either because they are outside the visual field or because the user is in the dark, or should be avoided to reduce the load on visual attention, auditory rendering must convey all the spatial information, including distance. The present research studies auditory depth (i.e. sound sources displayed straight ahead of the listener) in terms of perception, rendering, and applications in human-computer interaction.
    First, an overview is given of the most important aspects of auditory distance perception. Investigations of depth perception are much more advanced in vision, where they have already found applications in computer graphics, so it seems natural to provide the same information in the auditory domain to increase the degree of realism of the overall display. Depth perception may indeed be facilitated by combining visual and auditory cues. Relevant results from the literature on audio-visual interaction effects are reported, and two experiments were carried out on the perception of audio-visual depth. In particular, the influence of auditory cues on the perceived visual layering in depth was investigated. Results show that manipulating auditory intensity does not affect the perceived order in depth, most probably because of a lack of multisensory integration. The second experiment, which introduced a delay between the two audio-visual stimuli, revealed an effect of the temporal order of the two visual stimuli.
    Among existing techniques for spatializing a sound source along the depth dimension, a previous study proposed the model of a virtual pipe, based on the exaggeration of reverberation inside such an environment. The design strategy follows a physics-based modeling approach and makes use of a 3D rectangular Digital Waveguide Mesh (DWM), which had already shown its ability to simulate complex, large-scale acoustic environments. The 3D DWM proved too resource-hungry for real-time simulation of 3D environments of reasonable size. While downsampling may help reduce the CPU load, a more efficient alternative is to use a 2D model, which consequently simulates a membrane. Although it sounds less natural than 3D simulations, the resulting two-dimensional audio space presents similar properties, especially with respect to depth rendering.
    This research also shows that virtual acoustics makes it possible to shape distance perception and, in particular, to compensate for the well-known compression of subjective distance estimates. To this end, a trapezoidal two-dimensional DWM is proposed as a virtual environment able to provide a linear relationship between physical and perceived distance. Three listening tests were conducted to assess this linearity. They also gave rise to a new test procedure, derived from the MUSHRA test, which is suitable for direct comparison of multiple distances; in particular, it reduces response variability compared with the direct magnitude estimation procedure.
    Real-time implementations of the rectangular 2D DWM have been realized as external objects for Max/MSP. The first external renders one or more static sound sources located at different distances from the listener, while the second simulates a sound source moving along the depth dimension, i.e. an approaching or receding source. As an application of the first external, an audio-tactile interface for sound navigation has been proposed. The tactile interface includes a linear position sensor made of conductive material. The touch position on the ribbon is mapped onto the listening position on a rectangular virtual membrane, modeled by the 2D DWM, which provides depth cues for four equally spaced sound sources. In addition, the knob of a MIDI controller moves the position of the membrane along the playlist, allowing a whole set of files to be browsed by moving the audio window formed by the virtual membrane back and forth. Subjects involved in a user study succeeded in finding all the target files and judged the interface intuitive and entertaining. A further demonstration of the audio-tactile interface was realized using physics-based sound models: everyday sounds of "frying", "knocking" and "liquid dripping" were used so that both sound creation and depth rendering were physics-based, on the assumption that this ecological approach would provide an intuitive interaction.
    Finally, "DepThrow" is an audio game based on the use of the 2D DWM to render depth cues of a dynamic sound source. The game consists in throwing a virtual ball (modeled by a physics-based model of rolling sounds) inside a virtual tube (modeled by a 2D DWM) which is open-ended and tilted. The goal is to make the ball roll as far as possible into the tube without letting it fall out at the far end. Demonstrated as a game, this prototype is also meant to be a tool for investigating the perception of dynamic distance. Preliminary results of a listening test on the perception of distance motion inside the virtual tube show that the duration of the ball's movement influences the estimate of the distance it reaches.
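    The depth-rendering engine described above rests on the 2D rectilinear digital waveguide mesh. The sketch below is a minimal NumPy illustration of that structure in its equivalent finite-difference form, with an impulse source and a few listening junctions placed at increasing distance; the mesh size, step count and positions are illustrative assumptions, not the values used in the thesis, and boundary handling is simplified to phase-inverting (zero-pressure) edges.

```python
import numpy as np

# 2D rectilinear digital waveguide mesh, simulated via its equivalent
# finite-difference scheme:
#   p[n+1] = 0.5 * (sum of the four nearest neighbours at time n) - p[n-1]
NX, NY, STEPS = 60, 30, 2000          # illustrative mesh size and duration

p_prev = np.zeros((NX, NY))
p_curr = np.zeros((NX, NY))
p_curr[5, NY // 2] = 1.0              # impulse excitation: the "sound source"

# Listening junctions at increasing depth from the source along the mesh.
listeners = [(15, NY // 2), (30, NY // 2), (50, NY // 2)]
outputs = {pos: [] for pos in listeners}

for _ in range(STEPS):
    p_next = np.zeros_like(p_curr)
    # Interior junctions only; edges stay at zero (phase-inverting boundary).
    p_next[1:-1, 1:-1] = 0.5 * (p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
                                p_curr[1:-1, 2:] + p_curr[1:-1, :-2]) - p_prev[1:-1, 1:-1]
    p_prev, p_curr = p_curr, p_next
    for pos in listeners:
        outputs[pos].append(p_curr[pos])

# Farther listening points receive a later and weaker direct wavefront relative
# to the mesh reverberation, which is the kind of depth cue discussed above.
for pos, sig in outputs.items():
    sig = np.array(sig)
    print(pos, "peak:", round(float(np.max(np.abs(sig))), 4),
          "energy:", round(float(np.sum(sig ** 2)), 4))
```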

    The role of phonology in visual word recognition: evidence from Chinese

    Posters - Letter/Word Processing V: abstract no. 5024. The hypothesis of bidirectional coupling of orthography and phonology predicts that phonology plays a role in visual word recognition, as observed in the effects of feedforward and feedback spelling-to-sound consistency on lexical decision. However, because orthography and phonology are closely related in alphabetic languages (homophones in alphabetic languages are usually orthographically similar), it is difficult to exclude an influence of orthography on phonological effects in visual word recognition. Chinese languages contain many written homophones that are orthographically dissimilar, allowing a test of the claim that phonological effects can be independent of orthographic similarity. We report a study of visual word recognition in Chinese based on a mega-analysis of lexical decision performance with 500 characters. The results from multiple regression analyses, after controlling for orthographic frequency, stroke number, and radical frequency, showed main effects of feedforward and feedback consistency, as well as interactions between these variables and phonological frequency and number of homophones. Implications of these results for resonance models of visual word recognition are discussed.
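    The following sketch illustrates the kind of item-level regression described above, with consistency-by-frequency interaction terms entered alongside the orthographic control variables; the file and column names are assumptions, not those of the original mega-analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data for ~500 characters; column names are
# illustrative assumptions, not the original variable names.
items = pd.read_csv("chinese_ldt_items.csv")
# Expected columns: rt (mean lexical decision RT), log_freq, strokes,
# radical_freq, ff_consistency (feedforward), fb_consistency (feedback),
# phon_freq (phonological frequency), n_homophones.

model = smf.ols(
    "rt ~ log_freq + strokes + radical_freq"
    " + ff_consistency * phon_freq + fb_consistency * n_homophones",
    data=items,
).fit()
print(model.summary())
```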

    Interactive effects of orthography and semantics in Chinese picture naming

    Posters - Language Production/Writing: abstract no. 4035. Picture-naming performance in English and Dutch is enhanced by presentation of a word that is similar in form to the picture name. However, it is unclear whether this facilitation has an orthographic or a phonological locus. We investigated the loci of the facilitation effect in Cantonese Chinese speakers by manipulating semantic, orthographic, and phonological similarity at three SOAs (−100, 0, and +100 msec). We identified an effect of orthographic facilitation that was independent of and larger than phonological facilitation across all SOAs. Semantic interference was also found at SOAs of −100 and 0 msec. Critically, an interaction of semantics and orthography was observed at an SOA of +100 msec. This interaction suggests that independent effects of orthographic facilitation on picture naming are located either at the level of semantic processing or at the lemma level and are not due to the activation of picture name segments at the level of phonological retrieval.
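    To make the design concrete, the sketch below lays out the similarity-by-SOA cells and shows how facilitation or interference would be read off against an unrelated baseline within each SOA; the condition labels, the baseline condition and the placeholder latencies are illustrative only, not the study's actual materials.

```python
from itertools import product

# Condition structure implied by the description (labels and values assumed):
# distractor similarity crossed with three SOAs.
similarity = ["semantic", "orthographic", "phonological", "unrelated"]
soas_ms = [-100, 0, 100]          # distractor onset relative to picture onset

# Placeholder mean naming latencies (ms) per cell; replace with observed means.
mean_rt = {(sim, soa): 700.0 for sim, soa in product(similarity, soas_ms)}

# Facilitation (negative) or interference (positive) relative to the unrelated
# baseline within each SOA.
for sim, soa in product(similarity, soas_ms):
    if sim == "unrelated":
        continue
    effect = mean_rt[(sim, soa)] - mean_rt[("unrelated", soa)]
    print(f"SOA {soa:+d} ms, {sim}: {effect:+.1f} ms")
```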