
    Does training with amplitude modulated tones affect tone-vocoded speech perception?

    Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues but no speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is largely preserved. Two groups of listeners were trained on different amplitude-modulation (AM) tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1,260 trials in total; AM rates: 4, 8, and 16 Hz), while an additional control group undertook no training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or after an equivalent interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not differ significantly from that observed for controls. We therefore find no convincing evidence that this amount of training with non-speech temporal-envelope cues provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.

    On perceptual expertise

    Expertise is a cognitive achievement that clearly involves experience and learning, and often requires explicit, time-consuming training specific to the relevant domain. It is also intuitive that this kind of achievement is, in a rich sense, genuinely perceptual. Many experts—be they radiologists, bird watchers, or fingerprint examiners—are better perceivers in the domain(s) of their expertise. The goal of this paper is to motivate three related claims, by substantial appeal to recent empirical research on perceptual expertise: perceptual expertise is genuinely perceptual, it is genuinely cognitive, and this phenomenon reveals how we can become epistemically better perceivers. These claims are defended against sceptical opponents who deny significant top-down or cognitive effects on perception, and against opponents who maintain that any such effects on perception are epistemically pernicious.

    MedGAN: Medical Image Translation using GANs

    Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, many recent approaches offer individualized solutions based on specialized task-specific architectures, or require refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches.
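    The abstract's core mechanism — using the discriminator as a trainable feature extractor and adding style-transfer losses — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the two-layer random "discriminator", the L1 feature-matching penalty, and the Gram-matrix style loss are illustrative stand-ins for the actual MedGAN networks and loss weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "discriminator" feature extractor: two fixed random linear layers
    # with ReLU. In MedGAN, the real discriminator's hidden activations play
    # this role; the shapes and weights here are illustrative assumptions.
    W1 = rng.standard_normal((64, 256)) * 0.1
    W2 = rng.standard_normal((32, 64)) * 0.1

    def features(img):
        """Per-layer feature activations for a flattened image vector."""
        h1 = np.maximum(0.0, W1 @ img)
        h2 = np.maximum(0.0, W2 @ h1)
        return [h1, h2]

    def perceptual_loss(translated, target):
        """L1 discrepancy between discriminator features of the translated
        image and the target modality, summed over layers."""
        return sum(np.mean(np.abs(a - b))
                   for a, b in zip(features(translated), features(target)))

    def gram(f):
        """Gram matrix of a feature vector (captures texture statistics)."""
        return np.outer(f, f) / f.size

    def style_loss(translated, target):
        """Style-transfer-style penalty: match Gram matrices per layer."""
        return sum(np.mean(np.abs(gram(a) - gram(b)))
                   for a, b in zip(features(translated), features(target)))

    translated = rng.standard_normal(256)
    target = rng.standard_normal(256)
    total = perceptual_loss(translated, target) + style_loss(translated, target)
    ```

    In the full framework these non-adversarial terms would be weighted and combined with the usual adversarial loss when training the generator.
    
    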

    Developmental disorders

    Introduction: Connectionist models have recently provided a concrete computational platform from which to explore how different initial constraints in the cognitive system can interact with an environment to generate the behaviors we find in normal development (Elman et al., 1996; Mareschal & Thomas, 2000). In this sense, networks embody several principles inherent to Piagetian theory, the major developmental theory of the twentieth century. By extension, these models provide the opportunity to explore how shifts in these initial constraints (or boundary conditions) can result in the emergence of the abnormal behaviors we find in atypical development. Although this field is very new, connectionist models have already been put forward to explain disordered language development in Specific Language Impairment (Hoeffner & McClelland, 1993), Williams Syndrome (Thomas & Karmiloff-Smith, 1999), and developmental dyslexia (Seidenberg and colleagues, see e.g. Harm & Seidenberg, in press); to explain unusual characteristics of perceptual discrimination in autism (Cohen, 1994; Gustafsson, 1997); and to explore the emergence of disordered cortical feature maps using a neurobiologically constrained model (Oliver, Johnson, Karmiloff-Smith, & Pennington, in press). In this entry, we will examine the types of initial constraints that connectionist modelers typically build into their models, and how variations in these constraints have been proposed as possible accounts of the causes of particular developmental disorders. In particular, we will examine the claim that these constraints are candidates for what will constitute innate knowledge. First, however, we need to consider a current debate concerning whether developmental disorders are a useful tool to explore the (possibly innate) structure of the normal cognitive system. We will find that connectionist approaches are much more consistent with one side of this debate than the other.

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised, and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
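    The claim that feedback can bias categorisation can be sketched in a few lines. This is a deliberately minimal illustration of the general idea, not the article's cortical model: bottom-up evidence is combined with an additive top-down bias before a softmax decision, and on an ambiguous stimulus the bias is enough to flip the chosen category. The function names and the additive combination rule are assumptions for illustration.

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())  # subtract max for numerical stability
        return e / e.sum()

    def categorise(bottom_up_logits, top_down_bias=None):
        """Combine bottom-up evidence with an optional top-down (feedback)
        bias before categorisation. Illustrative only: real feedback models
        are recurrent, not a one-shot additive bias."""
        z = np.asarray(bottom_up_logits, dtype=float)
        if top_down_bias is not None:
            z = z + np.asarray(top_down_bias, dtype=float)
        return softmax(z)

    # Ambiguous stimulus: nearly equal evidence for two categories.
    evidence = [1.0, 1.05]
    no_feedback = np.argmax(categorise(evidence))            # category 1 wins
    with_feedback = np.argmax(categorise(evidence, [0.5, 0.0]))  # feedback flips it to 0
    ```

    The same additive-bias mechanism, applied during learning rather than at decision time, is one simple way feedback could shape which representations are acquired.
    
    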