9 research outputs found

    The feasibility of the dual-task paradigm as a framework for a clinical test of listening effort in cochlear implant users

    The overall aim of this thesis is to evaluate the feasibility of using the behavioural framework of the dual-task paradigm as the basis of a clinical test of listening effort (LE) in cochlear implant (CI) users. It is hypothesised that, if a primary listening task is performed together with a secondary visual task, performance in the visual task will deteriorate as the listening task becomes harder. This deterioration in secondary visual task performance can then provide an index of LE. An initial series of six experiments progressively modified the dual-task design (in an attempt to optimise its sensitivity to LE), leading to the selection of British English Lexicon (BEL) sentences for the listening task and a digit stream visual task. A further three experiments applied this dual-task design to 30 normal-hearing (NH) participants listening to normal speech, 30 NH participants listening to CI simulations, and 25 CI users listening through their speech processors. Performance in quiet conditions was compared to that in different levels of background noise. Adaptive tracking procedures were used in an attempt to ensure that the challenge of noise was equal for all participants. This principle was also applied to equalise difficulty in terms of the number of channels used in the spectral resolution of the CI simulations. As expected, NH participants only exhibited significant deterioration in visual accuracy when noise was present (p<.001), suggesting increased LE. Interestingly, however, when CI simulations were applied, this significant visual deterioration occurred immediately in quiet (p<.001). The same result occurred in quiet for the CI users too (p<.001). Therefore, it appears that the degraded auditory input provided by a CI induces LE even in optimal listening conditions. These results suggest that the dual-task paradigm could feasibly become a framework for developing a clinical test of LE in the CI user population.
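    The adaptive tracking referred to above can be illustrated with a minimal sketch, assuming a common 1-up/2-down staircase that adjusts the signal-to-noise ratio (SNR) after each listening trial; the rule, step size, starting SNR, and the present_trial hook are illustrative assumptions, not the parameters used in the thesis.

```python
# Minimal sketch of an adaptive tracking (1-up/2-down staircase) procedure of the
# kind used to equalise task difficulty across participants. The step size, number
# of reversals, and starting SNR are illustrative assumptions.

def run_staircase(present_trial, start_snr_db=10.0, step_db=2.0, max_reversals=8):
    """Adaptively track the SNR at which listening performance converges.

    present_trial(snr_db) must run one listening trial at the given SNR and
    return True if the response was correct (hypothetical experiment hook).
    """
    snr = start_snr_db
    consecutive_correct = 0
    last_direction = None              # -1 = made harder, +1 = made easier
    reversals = []

    while len(reversals) < max_reversals:
        if present_trial(snr):
            consecutive_correct += 1
            if consecutive_correct < 2:
                continue               # level unchanged after a single correct trial
            consecutive_correct = 0
            direction = -1             # two correct in a row -> make the task harder
            snr -= step_db
        else:
            consecutive_correct = 0
            direction = +1             # one error -> make the task easier
            snr += step_db

        if last_direction is not None and direction != last_direction:
            reversals.append(snr)      # record the SNR at each change of direction
        last_direction = direction

    # Threshold estimate: mean SNR over the last few reversals.
    return sum(reversals[-4:]) / len(reversals[-4:])
```

    A 1-up/2-down rule of this kind converges on the SNR supporting roughly 71% correct performance, which is one way of holding the challenge of noise constant across participants.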

    Interpreting Electronic Voice Phenomena: The role of auditory perception, paranormal belief and individual differences

    Electronic Voice Phenomena (EVP) are anomalous voices that appear on audio recordings (Barušs, 2001), and various techniques have been suggested for obtaining these voices. People who investigate potentially paranormal, site-based anomalies (ghosthunters) have in recent years been using techniques to obtain EVP voices and declaring them as proof of the paranormal. Previous studies have examined the role of paranormal belief on various personality factors and on cognition; however, individuals who use EVP as a technique (high-EVPers) have not previously been studied to ascertain whether they differ from both the sceptical population (non-EVPers) and people who believe in the paranormal but who do not use EVP techniques (low-EVPers). The current studies examined personality variable differences between non-, low- and high-EVPers. A new questionnaire, the Paranormal Investigation Experience Questionnaire, proved capable of differentiating between non-, low- and high-EVPers, and displayed high reliability. From the current studies, it does not appear that EVPers can be classified as a separate group of individuals when compared with general paranormal believers on personality traits. It is possible to define them as a group based on their experiences of EVP, but this separation is not found when investigating a number of individual difference measures which have been shown to distinguish between general paranormal believers and non-believers. EVPers demonstrated higher levels of sleep-related hallucinations, which may have implications for how they interpret noise as EVP voices. There was a commonality in auditory test results across a number of personality factors: individuals high in these measures were all more likely to report hearing non-directional voices in noise, which may have implications for how EVPers interpret sound clips depending on how they listen to those clips. High hallucinators reported hallucinated voices in their right ear, which supports previous research. The results suggest that a number of factors are involved in causing misperception of voices in noise, but these results may be applicable to the general population rather than specifically to a population of EVP experiencers. Suggestions as to future research and comparison with other methods of apparent paranormal communication are discussed.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
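    The spoke-shift manipulation can be sketched as a small geometric operation, assuming rectangle centres expressed in degrees of visual angle relative to central fixation; the function name and the example eccentricities below are illustrative, not the study's actual stimulus parameters.

```python
# Minimal sketch of shifting stimulus rectangles along imaginary spokes from fixation.
# Coordinates are degrees of visual angle with central fixation at (0, 0); the example
# eight-item array on a 4-degree ring is an illustrative assumption.
import math
import random

def shift_along_spoke(x_deg, y_deg, shift_deg=1.0):
    """Move a rectangle centre radially: positive shift outward, negative inward."""
    eccentricity = math.hypot(x_deg, y_deg)
    angle = math.atan2(y_deg, x_deg)
    new_ecc = eccentricity + shift_deg
    return new_ecc * math.cos(angle), new_ecc * math.sin(angle)

# Eight rectangle centres for the second presentation, each displaced by +/-1 degree.
centres = [(4 * math.cos(k * math.pi / 4), 4 * math.sin(k * math.pi / 4)) for k in range(8)]
shifted = [shift_along_spoke(x, y, random.choice((-1.0, 1.0))) for x, y in centres]
```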

    Context-aware speech synthesis: A human-inspired model for monitoring and adapting synthetic speech

    The aim of this PhD thesis is to describe the development of a computational model for speech synthesis which mimics the behaviour of human speakers when they adapt their production to their communicative conditions. The PhD project was motivated by the observed differences between state-of-the-art synthesisers' speech and human production. In particular, synthesiser output does not exhibit any adaptation to the communicative context, such as environmental disturbances, the listener's needs, or the meaning of the speech content, as human speech does. No evaluation is performed by standard synthesisers to check whether their production is suitable for the communication requirements. Inspired by Lindblom's Hyper and Hypo articulation (H&H) theory of speech production, the computational model of Hyper and Hypo articulation (C2H) is proposed. This novel computational model for automatic speech production is designed to monitor its output and to control the effort involved in synthetic speech generation. Speech transformations are based on the hypothesis that low-effort attractors for a human speech production system can be identified. Such acoustic configurations are close to the minimum possible effort that a speaker can make in speech production. The interpolation/extrapolation along the key dimension of hypo/hyper-articulation can be motivated by energetic considerations of phonetic contrast. Fully reactive speech synthesis is enabled by adding a negative perception feedback loop to the speech production chain in order to constantly assess the communicative effectiveness of the proposed adaptation. The distance from the original communicative intents is the control signal that drives the speech transformations. A hidden Markov model (HMM)-based speech synthesiser, along with continuous adaptation of its statistical models, is used to implement the C2H model. A standard version of the synthesis software does not allow transformations of speech during parameter generation; therefore, the generation algorithm of one of the most well-known speech synthesis frameworks, the HMM/DNN-based speech synthesis framework (HTS), was modified. The short-time implementation of the speech intelligibility index (SII), named the extended speech intelligibility index (eSII), is also chosen as the main perception measure in the feedback loop to control the transformation. The effectiveness of the proposed model is tested by performing acoustic analysis and objective and subjective evaluations. A key assessment is to measure the control of speech clarity in noisy conditions and the similarity between the emerging modifications and human behaviour. Two objective scoring methods are used to assess the speech intelligibility of the implemented system: the speech intelligibility index (SII) and an index based upon the Dau measure (Dau). Results indicate that the intelligibility of C2H-generated speech can be continuously controlled. The effectiveness of reactive speech synthesis and of the phonetic-contrast-motivated transforms is confirmed by the acoustic and objective results. More precisely, for the maximum-strength hyper-articulation transformations, the improvement with respect to non-adapted speech is above 10% for all intelligibility indices and tested noise conditions.
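    The negative perception feedback loop at the heart of C2H can be sketched as a simple proportional controller, assuming hypothetical synthesise and estimate_intelligibility hooks standing in for the modified HTS parameter generation and the eSII measure; the target, gain, and clipping values are illustrative assumptions, not the model's implementation.

```python
# Minimal sketch of a negative perception feedback loop for reactive synthesis.
# estimate_intelligibility() stands in for the short-time eSII measure and
# synthesise() for the (modified) HTS parameter generation; both are hypothetical
# hooks, and the target, gain, and clipping are illustrative assumptions.

def reactive_synthesis(text, noise, estimate_intelligibility, synthesise,
                       target=0.75, gain=0.5, steps=10):
    """Adapt hyper-articulation strength until predicted intelligibility meets a target.

    alpha = 0 corresponds to unmodified speech, alpha = 1 to the
    maximum-strength hyper-articulation transform.
    """
    alpha = 0.0
    for _ in range(steps):
        speech = synthesise(text, alpha)                  # generate adapted parameters
        score = estimate_intelligibility(speech, noise)   # perception model (eSII-like)
        error = target - score                            # distance from intended effect
        if abs(error) < 0.01:                             # close enough: stop adapting
            break
        alpha = min(1.0, max(0.0, alpha + gain * error))  # proportional correction
    return speech, alpha
```

    The sign of the correction is what makes the loop negative feedback: a shortfall in predicted intelligibility pushes the synthesiser towards hyper-articulation, while a surplus relaxes it towards low-effort speech.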

    Attention Restraint, Working Memory Capacity, and Mind Wandering: Do Emotional Valence or Intentionality Matter?

    Attention restraint appears to mediate the relationship between working memory capacity (WMC) and mind wandering (Kane et al., 2016). Prior work has identified two dimensions of mind wandering: emotional valence and intentionality. However, less is known about how WMC and attention restraint correlate with these dimensions. The current study examined the relationship between WMC, attention restraint, and mind wandering by emotional valence and intentionality. A confirmatory factor analysis demonstrated that WMC and attention restraint were strongly correlated, but only attention restraint was related to overall mind wandering, consistent with prior findings. However, when examining the emotional valence of mind wandering, attention restraint and WMC were related to negatively and positively valenced, but not neutral, mind wandering. Attention restraint was also related to intentional but not unintentional mind wandering. These results suggest that WMC and attention restraint predict some, but not all, types of mind wandering.

    Advances in the neurocognition of music and language


    Psychological Engagement in Choice and Judgment Under Risk and Uncertainty

    Theories of choice and judgment assume that agents behave rationally, choose the higher expected value option, and evaluate the choice consistently (Expected Utility Theory; von Neumann & Morgenstern, 1947). However, researchers in decision-making have shown that human behaviour differs between choice and judgement tasks (Slovic & Lichtenstein, 1968; 1971; 1973). In this research, we propose that psychological engagement and control deprivation predict behavioural inconsistencies and utilitarian performance in judgment and choice. Moreover, we explore the influences of engagement and control deprivation on agents' behaviour while manipulating the content of utility (Kusev et al., 2011; Hertwig & Gigerenzer, 1999; Tversky & Kahneman, 1996) and decision reward (Kusev et al., 2013; Shafir et al., 2002).
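    The Expected Utility rule invoked above can be shown with a minimal sketch: a rational agent computes the probability-weighted sum of the utilities of each option's outcomes and picks the larger; the example gambles and the linear utility function are illustrative assumptions.

```python
# Minimal sketch of the Expected Utility rule: choose the option whose
# probability-weighted sum of utilities is highest. The gambles and the
# linear utility function below are illustrative assumptions.

def expected_utility(gamble, utility=lambda x: x):
    """gamble is a list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * utility(x) for p, x in gamble)

gamble_a = [(0.8, 100), (0.2, 0)]    # 80% chance of 100, otherwise nothing (EU = 80)
gamble_b = [(1.0, 75)]               # 75 for sure (EU = 75)

choice = max((gamble_a, gamble_b), key=expected_utility)
print(expected_utility(gamble_a), expected_utility(gamble_b), choice)
```

    A consistent expected-utility agent would pick gamble_a here regardless of whether the task is framed as a choice or a judgment; the behavioural inconsistencies the abstract refers to are departures from exactly this kind of invariance.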