Audio diarization is the process of partitioning an input audio stream into homogeneous regions according to their specific audio sources. These sources can differ in audio type (speech, music, background noise, etc.), speaker identity, and channel characteristics. With the continually increasing volume of spoken documents, including broadcasts, voice mails, meetings, and telephone conversations, diarization has received a great deal of interest in recent years, as it significantly impacts the performance of automatic speech recognition and audio indexing systems. Speaker diarization is a subtype of audio diarization in which the speech segments of the signal are attributed to different speakers. It answers the question "Who spoke when?" and is typically divided into two modules: speaker segmentation and speaker clustering. This chapter discusses the problem of automatically detecting the speaker change points present in a given audio stream, without prior acoustic information on the speakers. We introduce a new unsupervised speaker segmentation technique based on One-Class Support Vector Machines (1-SVMs) that is robust to different acoustic conditions. We evaluated the robustness improvements of this method by segmenting different types of audio streams (broadcast news, meetings, and telephone conversations) and comparing the results with model selection segmentation techniques based on the Bayesian information criterion (BIC).
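To make the BIC baseline concrete, the following is a minimal sketch (not the evaluation code used in this chapter) of the standard ΔBIC test for a hypothesized speaker change at frame t inside an analysis window of acoustic feature vectors: the window is modeled either by a single full-covariance Gaussian or by two Gaussians split at t, and a positive ΔBIC favors the two-model hypothesis, i.e. a change point. The function name `delta_bic` and the penalty weight `lam` are illustrative choices.

```python
import numpy as np

def delta_bic(window, t, lam=1.0):
    """Compute Delta-BIC for a hypothesized change at frame t.

    window : (N, d) array of feature vectors (e.g. MFCCs)
    t      : candidate change point, 0 < t < N
    lam    : penalty weight (lambda), commonly tuned around 1.0

    Returns a scalar; positive values favor a speaker change at t.
    """
    n, d = window.shape
    left, right = window[:t], window[t:]

    # Log-determinant of the sample covariance of a segment.
    def logdet(x):
        return np.linalg.slogdet(np.cov(x, rowvar=False))[1]

    # Model-complexity penalty: extra parameters of the two-model
    # hypothesis (one mean vector + one symmetric covariance matrix).
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)

    return (0.5 * n * logdet(window)
            - 0.5 * t * logdet(left)
            - 0.5 * (n - t) * logdet(right)
            - penalty)
```

In a full segmentation system this statistic is evaluated over a sliding window for every candidate t, and local maxima exceeding zero are declared change points.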