Generalizing a Neuropsychological Model of Visual Categorization to Auditory Categorization of Vowels

Abstract

This article reports the results of an auditory vowel categorization experiment in which listeners classified 54 synthetic vowel stimuli, varying along the F2 and F3 dimensions, into one of three vowel categories, /I/, /U/, and //. A successful, neuropsychologically plausible model of categorization in the visual domain, the Striatal Pattern Classifier (SPC), was generalized to the auditory domain and applied separately to each listener's data from the auditory vowel categorization task. A version of the SPC that assumed two striatal units per category, and thus piecewise-linear response-region partitions, provided a good description of the data, accounting on average for 97% of the variance. This finding is important because it suggests that a model with a reasonable neurobiological architecture can be applied in both the visual and auditory domains, and it provides an important step toward bridging the gap between visual and auditory categorization and toward a neurobiological understanding of the systems involved in these two different, but related, forms of categorization. A version of logistic regression that assumed nonlinear response-region partitions (NAPP-NLLR) provided a better account of the data than a version that assumed linear partitions (NAPP-LLR). The linear versions of the SPC and NAPP provided approximately equal accounts of the data, although there was a slight but consistent advantage for the NAPP-LLR model. The nonlinear version of logistic regression (NAPP-NLLR), on the other hand, provided a large and consistent improvement in AIC fit over the piecewise-linear version of the SPC. Despite the large AIC difference, the predictive power of the models was approximately equal. Specifically, we computed the absolute value of the deviation between predicted …
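The SPC decision rule described above is simple enough to sketch. The following minimal example (not the authors' implementation) illustrates how two striatal units per category yield piecewise-linear response regions: a stimulus, represented by its F2 and F3 values, is assigned to the category whose nearest unit is closest. The unit locations, the Euclidean distance rule, and the third category label are illustrative assumptions; in the model applied in the article, the unit locations are free parameters estimated separately from each listener's data, and models are compared via AIC.

    # Minimal sketch of an SPC-style decision rule with two hypothetical
    # "striatal units" per vowel category (all values below are assumptions,
    # not fitted parameters from the article).
    import numpy as np

    # Hypothetical unit locations in (F2, F3) space, in Hz.
    striatal_units = {
        "I": np.array([[2100.0, 2900.0], [2000.0, 2700.0]]),
        "U": np.array([[1100.0, 2300.0], [1300.0, 2400.0]]),
        "X": np.array([[1700.0, 2500.0], [1600.0, 2600.0]]),  # third category; symbol unclear in the source
    }

    def spc_classify(f2, f3):
        """Assign a stimulus to the category whose nearest striatal unit is closest."""
        stimulus = np.array([f2, f3])
        best_category, best_distance = None, np.inf
        for category, units in striatal_units.items():
            distance = np.min(np.linalg.norm(units - stimulus, axis=1))
            if distance < best_distance:
                best_category, best_distance = category, distance
        return best_category

    def aic(log_likelihood, n_free_params):
        """Akaike information criterion, used to compare models with different numbers of parameters."""
        return 2.0 * n_free_params - 2.0 * log_likelihood

    print(spc_classify(2050.0, 2800.0))  # -> "I" with these hypothetical units

With a single unit per category this rule reduces to linear response-region boundaries; adding a second unit per category produces the piecewise-linear partitions referred to in the abstract.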
