In this paper, we investigate acoustic features that differentiate the two speech registers, neutral and intimate, across different constellations of speakers and addressees. Three types of speakers are considered: mothers addressing their own children or an unknown adult, women without children addressing an imaginary child or an imaginary adult, and children addressing a pet robot using both intimate and neutral speech. We use a large, systematically generated feature vector, upsampling, and SVM and Random Forest (RF) classifiers for learning. Results are reported for extensive speaker-independent test runs, comparing PCA-SFFS and SVM-SFFS for feature ranking. Classification performance and the most relevant feature types are discussed in detail. ©2008 IEEE
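The pipeline summarized above (imbalanced register classes, upsampling of the minority class, then SVM and RF classification) can be sketched as follows. This is a minimal illustration using synthetic stand-in data and scikit-learn, not the paper's actual feature set or experimental setup; all names and parameters here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.utils import resample

# Synthetic stand-in for acoustic feature vectors labelled
# neutral (0) vs. intimate (1), with class imbalance.
X, y = make_classification(n_samples=400, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Upsample the minority class in the training set only, so the
# test set keeps its natural class distribution.
minority = X_tr[y_tr == 1]
n_extra = (y_tr == 0).sum() - len(minority)
extra = resample(minority, n_samples=n_extra, random_state=0)
X_bal = np.vstack([X_tr, extra])
y_bal = np.concatenate([y_tr, np.ones(n_extra, dtype=int)])

# Train both classifier families on the balanced data.
svm = SVC(kernel="rbf").fit(X_bal, y_bal)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
print(svm.score(X_te, y_te), rf.score(X_te, y_te))
```

In the paper the evaluation is speaker-independent, i.e. no speaker appears in both training and test partitions; the random split above does not enforce that and is for illustration only.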