
    Speaker discriminability for visual speech modes

    Does speech mode affect recognizing people from their visual speech? We examined 3D motion data from 4 talkers saying 10 sentences (twice each). Speech was produced in noise, in quiet, or whispered. Principal Component Analyses (PCAs) were conducted, and speaker classification was determined by Linear Discriminant Analysis (LDA). The first five PCs for the rigid motion and the first 10 PCs each for the non-rigid motion and the combined motion were input to a series of LDAs covering all possible combinations of the retained PCs. The discriminant functions and classification coefficients were determined on the training data to predict the talker of the test data. Classification performance for both the in-noise and whispered speech modes was superior to the in-quiet mode. This superiority held even when only the first PC (jaw motion) was used, i.e., measures of jaw motion when speaking in noise or whispering hold promise for bimodal person recognition or verification.
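    The PCA-then-LDA pipeline described above can be sketched as follows. This is a minimal illustration using synthetic stand-in data (the talker counts and PC counts follow the abstract, but the feature values, train/test split, and all variable names are assumptions, not the authors' actual data or code):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Synthetic stand-in for motion features: 4 talkers x 20 trials, 30 features each.
    n_talkers, n_trials, n_features = 4, 20, 30
    X = np.vstack([rng.normal(loc=t, scale=1.0, size=(n_trials, n_features))
                   for t in range(n_talkers)])
    y = np.repeat(np.arange(n_talkers), n_trials)

    # Reduce the raw features to the first 10 principal components,
    # mirroring the 10 PCs retained for the non-rigid motion.
    pcs = PCA(n_components=10).fit_transform(X)

    # Fit the discriminant functions on training trials, then predict
    # the talker of the held-out test trials.
    train = np.tile(np.r_[np.ones(n_trials // 2, bool),
                          np.zeros(n_trials // 2, bool)], n_talkers)
    lda = LinearDiscriminantAnalysis().fit(pcs[train], y[train])
    accuracy = lda.score(pcs[~train], y[~train])
    print(f"talker classification accuracy: {accuracy:.2f}")
    ```

    In the study, this classification was repeated for every combination of retained PCs and compared across the three speech modes; the sketch shows only a single PC set and split.
    
    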

    Comparison of native and non-native discrimination performance in the /##Ca/ context.

    % correct discrimination is presented on the y-axis, while contrast type is presented on the x-axis. Error bars indicate standard error of the mean. Note, however, that because the statistics applied are non-parametric, they are not necessarily related to the dispersion indicated in the error bars.

    Comparison of native and non-native discrimination performance in the /aCa/ context.

    % correct discrimination is presented on the y-axis, while contrast type is presented on the x-axis. Error bars indicate standard error of the mean. Note, however, that because the statistics applied are non-parametric, they are not necessarily related to the dispersion indicated in the error bars.

    English discrimination accuracy depending on the first presented consonant for /t ʈ/, / ʈ/ and /t / in both the /aCa/ and /##Ca/ contexts.


    Non-native discrimination scores per contrast type in the /aCa/ and /##Ca/ contexts.

    % correct discrimination is presented on the y-axis, while contrast type is presented on the x-axis. Error bars indicate standard error of the mean. Note, however, that because the statistics applied are non-parametric, they are not necessarily related to the dispersion indicated in the error bars.

    Discrimination accuracy for English listeners.

    Error bars indicate standard error of the mean. Note, however, that because the statistics applied are non-parametric, they are not necessarily related to the dispersion indicated in the error bars.

    The phonemic consonant inventory of Wubuy, adapted from [11].

    The phonemic consonant inventory of Wubuy, adapted from [11] (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0142054#pone.0142054.ref011).