64 research outputs found

    Audio-visual interaction in emotion perception for communication

    Information from multiple modalities contributes to recognizing emotions. While it is known that interactions occur between modalities, it is unclear what characterizes them. These interactions, and how they change due to sensory impairments, are the main subject of this PhD project. This extended abstract for the Doctoral Symposium of ETRA 2018 describes the project: its background, what I hope to achieve, and some preliminary results.

    Surface profile determination from multiple sonar data using morphological processing

    This paper presents a novel method for surface profile determination using multiple sensors. Our approach is based on morphological processing techniques to fuse the range data from multiple sensor returns in a manner that directly reveals the target surface profile. The method has the intrinsic ability to suppress spurious readings due to noise, crosstalk, and higher-order reflections, as well as to process multiple reflections informatively. The approach taken is extremely flexible and robust, in addition to being simple and straightforward. It can deal with arbitrary numbers and configurations of sensors as well as synthetic arrays. The algorithm is verified both by simulations and by laboratory experiments, processing real sonar data obtained from a mobile robot. The results are compared to those obtained from a more accurate structured-light system, which is, however, more complex and expensive.
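    The fusion-by-morphology idea described above can be illustrated with a toy occupancy grid (the grid layout, kernel size, and noise positions here are invented for illustration; the paper's actual arc-based fusion is more involved): isolated spurious returns vanish under a morphological opening, while a surface band confirmed by many returns survives.

    ```python
    import numpy as np

    def erode(grid, k=3):
        """Binary erosion with a k-by-k square structuring element."""
        pad = k // 2
        padded = np.pad(grid, pad, constant_values=0)
        out = np.ones_like(grid)
        for di in range(k):
            for dj in range(k):
                out &= padded[di:di + grid.shape[0], dj:dj + grid.shape[1]]
        return out

    def dilate(grid, k=3):
        """Binary dilation with a k-by-k square structuring element."""
        pad = k // 2
        padded = np.pad(grid, pad, constant_values=0)
        out = np.zeros_like(grid)
        for di in range(k):
            for dj in range(k):
                out |= padded[di:di + grid.shape[0], dj:dj + grid.shape[1]]
        return out

    def opening(grid, k=3):
        """Erosion followed by dilation: removes features thinner than k."""
        return dilate(erode(grid, k), k)

    # Fused occupancy grid: 1 = at least one sensor returned a range here.
    grid = np.zeros((10, 10), dtype=np.uint8)
    grid[4:7, :] = 1   # the true surface band, confirmed by many returns
    grid[0, 0] = 1     # spurious isolated reading (e.g. crosstalk)
    grid[9, 9] = 1     # spurious isolated reading (e.g. a higher-order echo)

    opened = opening(grid)  # noise removed, surface band preserved
    ```

    The opening acts as the noise suppressor the abstract mentions: readings that are not supported by a neighbourhood of consistent returns cannot survive the erosion step.
    
    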

    Visual and auditory temporal integration in healthy younger and older adults

    As people age, they tend to integrate successive visual stimuli over longer intervals than younger adults. It may be expected that temporal integration is affected similarly in other modalities, possibly due to general, age-related cognitive slowing of the brain. However, the previous literature does not provide convincing evidence that this is the case in audition. One hypothesis is that the primacy of time in audition attenuates the degree to which temporal integration in that modality extends over time as a function of age. We sought to settle this issue by comparing visual and auditory temporal integration in younger and older adults directly, achieved by minimizing task differences between modalities. Participants were presented with a visual or an auditory rapid serial presentation task, at 40-100 ms/item. In both tasks, two subsequent targets were to be identified. Critically, these could be perceptually integrated and reported by the participants as such, providing a direct measure of temporal integration. In both tasks, older participants integrated more than younger adults, especially when stimuli were presented across longer time intervals. This difference was more pronounced in vision and only marginally significant in audition. We conclude that temporal integration increases with age in both modalities, but that this change might be slightly less pronounced in audition.

    Game theoretical semantics for some non-classical logics

    Paraconsistent logics are formal systems in which absurdities do not trivialise the logic. In this paper, we give Hintikka-style game theoretical semantics for a variety of paraconsistent and non-classical logics. For this purpose, we consider Priest's Logic of Paradox, Dunn's First-Degree Entailment, the Routleys' Relevant Logics, McCall's Connexive Logic and Belnap's four-valued logic. We also present a game theoretical characterisation of a translation between the Logic of Paradox/Kleene's K3 and S5. We underline how non-classical logics require different verification games and prove the correctness theorems of their respective game theoretical semantics. This allows us to observe that paraconsistent logics break the classical bidirectional connection between winning strategies and truth values.
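    The paraconsistency claim above, that absurdities do not trivialise the logic, can be checked by brute force over the three truth values of Priest's Logic of Paradox (a minimal sketch using the standard min/max strong-Kleene operations; the verification games themselves are not modelled here):

    ```python
    from itertools import product

    # Truth values of Priest's LP: 1.0 = true, 0.5 = both true-and-false, 0.0 = false.
    T, B, F = 1.0, 0.5, 0.0
    DESIGNATED = {T, B}   # values that count as "true enough"

    def neg(v):           # strong Kleene negation
        return 1.0 - v

    def conj(a, b):       # conjunction = minimum
        return min(a, b)

    def disj(a, b):       # disjunction = maximum
        return max(a, b)

    def lp_entails(premise, conclusion, atoms):
        """Entailment holds iff every valuation designating the premise
        also designates the conclusion."""
        for vals in product([T, B, F], repeat=len(atoms)):
            env = dict(zip(atoms, vals))
            if premise(env) in DESIGNATED and conclusion(env) not in DESIGNATED:
                return False
        return True

    # Explosion (p AND not-p entails q) fails: p = B, q = F is a countermodel.
    explosion_holds = lp_entails(lambda e: conj(e['p'], neg(e['p'])),
                                 lambda e: e['q'], ['p', 'q'])

    # Addition (p entails p OR q) remains valid.
    addition_holds = lp_entails(lambda e: e['p'],
                                lambda e: disj(e['p'], e['q']), ['p', 'q'])
    ```

    Because the paradoxical value B is designated and closed under negation, a contradiction can be "true enough" without dragging every other formula along, which is exactly the non-trivialising behaviour the abstract describes.
    
    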

    Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.

    The musician effect: does it persist under degraded pitch conditions of cochlear implant simulations?

    Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as speech intelligibility in noise and perception of music. In normal hearing (NH), musicians have been shown to perform better than non-musicians in auditory processing and perception, especially for challenging listening tasks. This "musician effect" was attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as they would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The tasks increasingly depended on pitch perception: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and a small musician effect only for word identification in one noise condition, in the CI simulation. For emotion identification, there was a small musician effect in both conditions. For MCI, there was a large musician effect in both conditions. Overall, the effect was stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be more rooted in pitch perception, rather than in a global advantage in cognitive processing (in which case musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly emotion and music perception.

    Musician effect in cochlear implant simulated gender categorization

    Musicians have been shown to better perceive pitch and timbre cues in speech and music, compared to non-musicians. It is unclear whether this "musician advantage" persists under conditions of spectro-temporal degradation, as experienced by cochlear-implant (CI) users. In this study, gender categorization was measured in normal-hearing musicians and non-musicians listening to acoustic CI simulations. Recordings of Dutch words were synthesized to systematically vary fundamental frequency, vocal-tract length, or both to create voices ranging from the female source talker to a synthesized male talker. Results showed an overall musician effect, mainly due to musicians weighting fundamental frequency more than non-musicians in CI simulations. (C) 2014 Acoustical Society of America.

    A history based logic for dynamic preference updates

    History based models suggest a process-based approach to epistemic and temporal reasoning. In this work, we introduce preferences to history based models. Motivated by game theoretical observations, we discuss how preferences can be updated dynamically in history based models. We then consider arrow update logic and event calculus, and give history based models for these logics. This allows us to relate dynamic logics of history based models to a broader framework.

    Recognition of interrupted sentences under conditions of spectral degradation

    Cochlear implant (CI) and normally hearing (NH) listeners' recognition of periodically interrupted sentences was investigated. CI listeners' scores declined drastically when the sentences were interrupted. The NH listeners showed a significant decline in performance with increasing spectral degradation using CI-simulated, noise-band-vocoded speech. It is inferred that the success of the top-down processes necessary for the perceptual reconstruction of interrupted speech is limited by even mild degradations of the bottom-up information stream (16- and 24-band processing). A hypothesis that the natural voice-pitch variations in speech would help in the perceptual reconstruction of the sentences was not supported by the experimental results.