262 research outputs found

    Content-based music structure analysis

    Ph.D. (Doctor of Philosophy)

    Automatic lyric alignment in Cantonese popular music.

    Wong Chi Hang. Thesis submitted in October 2005. Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. Includes bibliographical references (leaves 89-94). Abstracts in English and Chinese.

    Table of contents:
    Abstract --- p.ii
    摘要 --- p.iii
    Acknowledgement --- p.iv
    Chapter 1 Introduction --- p.1
    Chapter 2 Literature Review --- p.5
        2.1 LyricAlly --- p.5
        2.2 Singing Voice Detection --- p.6
        2.3 Singing Transcription System --- p.7
    Chapter 3 Background and System Overview --- p.9
        3.1 Background --- p.9
            3.1.1 Audio Mixing Practices of the popular music industry --- p.10
            3.1.2 Cantonese lyric writer practice --- p.11
        3.2 System Overview --- p.13
    Chapter 4 Vocal Signal Enhancement --- p.15
        4.1 Method --- p.15
            4.1.1 Non-center Signal Estimation --- p.16
            4.1.2 Center Signal Estimation --- p.17
            4.1.3 Bass and drum reduction --- p.21
        4.2 Experimental Results --- p.21
            4.2.1 Experimental Setup --- p.21
            4.2.2 Results and Discussion --- p.24
    Chapter 5 Onset Detection --- p.29
        5.1 Method --- p.29
            5.1.1 Envelope Extraction --- p.30
            5.1.2 Relative Difference Function --- p.32
            5.1.3 Post-Processing --- p.32
        5.2 Experimental Results --- p.34
            5.2.1 Experimental Setup --- p.34
            5.2.2 Results and Discussion --- p.35
    Chapter 6 Non-vocal Pruning --- p.39
        6.1 Method --- p.39
            6.1.1 Vocal Feature Selection --- p.39
            6.1.2 Feed-forward neural network --- p.44
        6.2 Experimental Results --- p.46
            6.2.1 Experimental Setup --- p.46
            6.2.2 Results and Discussion --- p.48
    Chapter 7 Lyric Feature Extraction --- p.51
        7.1 Features --- p.52
            7.1.1 Relative Pitch Feature --- p.52
            7.1.2 Time Distance Feature --- p.54
        7.2 Pitch Extraction --- p.56
            7.2.1 f0 Detection Algorithms --- p.56
            7.2.2 Post-Processing --- p.64
            7.2.3 Experimental Results --- p.64
    Chapter 8 Lyrics Alignment --- p.69
        8.1 Dynamic Time Warping --- p.69
        8.2 Experimental Results --- p.72
            8.2.1 Experimental Setup --- p.72
            8.2.2 Results and Discussion --- p.74
    Chapter 9 Conclusion and Future Work --- p.82
        9.1 Conclusion --- p.82
        9.2 Future Work --- p.83
    Appendix A Publications --- p.85
    Appendix B Symbol Table --- p.86
    Bibliography --- p.89
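    Chapter 8 of this thesis aligns the extracted lyric features with the audio via dynamic time warping (DTW). As a point of reference for that chapter, here is a minimal sketch of the standard DTW recurrence and backtracking; the scalar features and absolute-difference local cost are illustrative assumptions, not the thesis's actual feature set or cost function.

```python
import numpy as np

def dtw_align(query, reference):
    """Minimal dynamic time warping: returns total cost and the warping
    path pairing indices of `query` with indices of `reference`.
    Assumes scalar features (e.g., relative-pitch values) and an
    absolute-difference local cost."""
    n, m = len(query), len(reference)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```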

    Fractal based speech recognition and synthesis

    Transmitting a linguistic message is most often the primary purpose of speech communication, and it is the recognition of this message by machine that would be most useful. This research consists of two major parts. The first part presents a novel and promising approach for estimating the degree of recognition of speech phonemes, making use of a new set of features based on fractals. The main methods of computing the fractal dimension of speech signals are reviewed, and a new speaker-independent speech recognition system developed at De Montfort University is described in detail. Finally, a least-squares method as well as a novel neural network algorithm is employed to derive the recognition performance on the speech data. The second part of this work studies the synthesis of spoken words, based mainly on the fractal dimension, to create natural-sounding speech. The work shows that by careful use of the fractal dimension, together with the phase of the speech signal to ensure consistent intonation contours, natural-sounding speech synthesis is achievable at the word level. In order to extend the flexibility of this framework, we focused on the filtering and the compression of the phase to maintain and produce natural-sounding speech. A 'naturalness level' is achieved as a result of the fractal characteristic used in the synthesis process. Finally, a novel speech synthesis system based on fractals, developed at De Montfort University, is discussed. Throughout this research, simulation experiments were performed on continuous speech data from the Texas Instruments/Massachusetts Institute of Technology (TIMIT) database, which is designed to provide the speech research community with a standardized corpus for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems.
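    The abstract does not name which fractal-dimension estimators the thesis reviews, so the sketch below shows one widely used estimator, the Higuchi method, purely as an illustration of this class of features; the function name and the default k_max are assumptions.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal (e.g., a speech frame).
    One of several standard estimators of signal fractal dimension,
    used here as an illustrative stand-in."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)   # subsampled curve x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # Curve length, normalized for the number of samples used.
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        lengths.append(np.mean(lk))
    # The fractal dimension is the slope of log L(k) vs. log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```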

    Singing Voice Recognition for Music Information Retrieval

    This thesis proposes signal processing methods for the analysis of singing voice audio signals, with the objective of obtaining information about the identity and lyrics content of the singing. Two main topics are presented: singer identification in monophonic and polyphonic music, and lyrics transcription and alignment. The information automatically extracted from the singing voice is meant to be used for applications such as music classification, sorting and organizing music databases, and music information retrieval. For singer identification, the thesis introduces methods from general audio classification and specific methods for dealing with the presence of accompaniment. The emphasis is on singer identification in polyphonic audio, where the singing voice is present along with musical accompaniment. The presence of instruments is detrimental to voice identification performance, and eliminating the effect of instrumental accompaniment is an important aspect of the problem. The study of singer identification is centered on the degradation of classification performance in the presence of instruments, and on separation of the vocal line for improving performance. For the study, monophonic singing was mixed with instrumental accompaniment at different signal-to-noise (singing-to-accompaniment) ratios, and classification was performed both on the polyphonic mixture and on the vocal line separated from it. Including the vocal-separation step significantly improves classification performance compared with classifying the polyphonic mixtures directly, though it does not approach the performance obtained on the monophonic singing itself. Nevertheless, the results show that classification of singing voices can be done robustly in polyphonic music when using source separation.

    For lyrics transcription, the thesis introduces the general speech recognition framework and the adjustments that can be made before applying these methods to the singing voice. The variability of phonation in singing poses a significant challenge to the speech recognition approach. The thesis proposes using phoneme models trained on speech data and adapted to singing voice characteristics for the recognition of phonemes and words from a singing voice signal. Language models and adaptation techniques are an important aspect of the recognition process. There are two different ways of recognizing the phonemes in the audio: one is alignment, where the true transcription is known and the phonemes only have to be located in time; the other is recognition, where both the transcription and the locations of the phonemes have to be found. Alignment is, obviously, a simplified form of the recognition task. Alignment of textual lyrics to music audio is performed by aligning the phonetic transcription of the lyrics with the vocal line separated from the polyphonic mixture, using a collection of commercial songs. Word recognition is tested for transcription of lyrics from monophonic singing. The performance of the proposed system for automatic alignment of lyrics and audio is sufficient for facilitating applications such as automatic karaoke annotation or song browsing. The word recognition accuracy of lyrics transcription from singing is quite low, but it is shown to be useful in a query-by-singing application, for performing a textual search based on the words recognized from the query: when some key words in the query are recognized, the song can be reliably identified.
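    As an illustration of the evaluation setup described above, the sketch below mixes a monophonic vocal track with accompaniment at a prescribed singing-to-accompaniment ratio in dB; the function name and the exact mixing protocol are assumptions, since the abstract does not specify them.

```python
import numpy as np

def mix_at_snr(vocals, accompaniment, snr_db):
    """Scale `accompaniment` so that the singing-to-accompaniment energy
    ratio of the mixture equals `snr_db` decibels, then mix.
    Assumes equal-length mono signals with nonzero accompaniment energy."""
    p_voc = np.mean(vocals ** 2)
    p_acc = np.mean(accompaniment ** 2)
    # Gain g such that 10 * log10(p_voc / (g**2 * p_acc)) == snr_db.
    gain = np.sqrt(p_voc / (p_acc * 10.0 ** (snr_db / 10.0)))
    return vocals + gain * accompaniment
```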

    The development of speech coding and the first standard coder for public mobile telephony

    This thesis describes in its core chapter (Chapter 4) the original algorithmic and design features of the first coder for public mobile telephony, the GSM full-rate speech coder, as standardized in 1988. It has never been described in as much detail as presented here. The coder is put in a historical perspective by two preceding chapters on the history of speech production models and on the development of speech coding techniques until the mid-1980s, respectively. In the epilogue a brief review is given of later developments in speech coding.

    The introductory Chapter 1 starts with some preliminaries. It defines what speech coding is and introduces the reader to speech coding standards and the standardization institutes which set them. Then the attributes of a speech coder that play a role in standardization are explained. Subsequently, several applications of speech coders, including mobile telephony, are discussed, and the state of the art in speech coding is illustrated on the basis of some worldwide recognized standards.

    Chapter 2 starts with a summary of the features of speech signals and their source, the human speech organ. Then, historical models of speech production, which form the basis of different kinds of modern speech coders, are discussed. Starting with a review of ancient mechanical models, we arrive at the electrical source-filter model of the 1930s. Subsequently, the acoustic-tube models as they arose in the 1950s and 1960s are discussed. Finally, the 1970s are reviewed, which brought the discrete-time filter model on the basis of linear prediction. In a unique way the logical sequencing of these models is exposed and the links between them are discussed. Whereas the historical models are discussed in a narrative style, the acoustic-tube models and the linear prediction technique as applied to speech are subjected to more mathematical analysis in order to create a sound basis for the treatise of Chapter 4. This trend continues in Chapter 3 whenever it is instrumental in completing that basis.

    In Chapter 3 the reader is taken by the hand on a guided tour through time, during which successive speech coding methods pass in review. In an original way, special attention is paid to the evolutionary aspect: for each newly proposed method it is discussed what it added to the known techniques of the time. After presenting the relevant predecessors, starting with Pulse Code Modulation (PCM) and the early vocoders of the 1930s, we arrive at Residual-Excited Linear Predictive (RELP) coders, analysis-by-synthesis systems, and Regular-Pulse Excitation in 1984. The latter forms the basis of the GSM full-rate coder.

    In Chapter 4, which constitutes the core of this thesis, explicit forms of Multi-Pulse Excited (MPE) and Regular-Pulse Excited (RPE) analysis-by-synthesis coding systems are developed. Starting from the pulse-amplitude computation methods current in 1984, which involved solving sets of equations (typically of order 10-16) two hundred times a second, several explicit-form designs are considered by which solving sets of equations in real time is avoided. Then, the design of a specific explicit-form RPE coder and an associated efficient architecture are described. The explicit forms and the resulting architectural features have never been published in as much detail as presented here. Implementation of such a codec enabled real-time operation on a state-of-the-art single-chip digital signal processor of the time. This coder, at a bit rate of 13 kbit/s, was selected as the full-rate GSM standard in 1988. Its performance is recapitulated.

    Chapter 5 is an epilogue briefly reviewing the major developments in speech coding technology after 1988. Many speech coding standards have been set since then, for mobile telephony as well as for other applications. The chapter is concluded by an outlook.
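    Since linear prediction is the mathematical backbone of the coding systems treated in Chapters 2-4, a textbook sketch of short-term LPC analysis (the autocorrelation method with the Levinson-Durbin recursion) may help fix ideas. It is a generic illustration, not the fixed-point procedure the GSM full-rate standard actually specifies, which computes eight reflection coefficients with a Schur recursion.

```python
import numpy as np

def lpc_autocorrelation(frame, order=8):
    """Short-term linear-prediction coefficients via the autocorrelation
    method and the Levinson-Durbin recursion. Returns a[1..order] of the
    predictor x_hat[n] = sum_k a[k] * x[n - k]. Assumes a windowed frame
    with nonzero energy."""
    frame = np.asarray(frame, dtype=float)
    # Autocorrelation lags r[0..order].
    r = np.array([np.dot(frame[: len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        # Reflection coefficient for stage i + 1.
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err
        prev = a[:i].copy()
        a[:i] = prev - k * prev[::-1]  # update lower-order coefficients
        a[i] = k
        err *= 1.0 - k * k             # residual prediction error
    return a
```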

    Caractérisation de l'environnement musical dans les documents audiovisuels (Characterization of the musical environment in audiovisual documents)

    Currently, the amount of music available, notably via the Internet, grows daily. The collections are too huge for a user to navigate through without help from a computer. Our work takes place in the general context of music indexing. To situate the context of our work, we first present a brief overview of current work on automatic music description for indexing purposes: instrument recognition, tonality and tempo estimation, genre and mood classification, singer identification, and transcription of melody, score, chords and lyrics. For each of these subjects, we take care to define the problem and the technical terms of the domain, and we dwell in particular on the most salient problems encountered.

    In a second part, we present a method we developed to automatically distinguish between monophonic and polyphonic sounds. For this task, we developed two new parameters based on the analysis of a confidence indicator. The bivariate distribution of these parameters is modeled with bivariate Weibull distributions. We studied the problem of estimating the parameters of this distribution and proposed an original estimation method derived from the method of moments. A full set of experiments allows us to compare our system with classical methods and to validate each step of our approach.

    In the third part, we present a singing voice detector for both monophonic and polyphonic contexts. This method is based on the detection of vibrato, a parameter derived from the analysis of the fundamental frequency and thus a priori defined only for monophonic sounds (a generic sketch of this cue is given after this abstract). Using two segmentations, we extend this concept to polyphonic sounds and introduce a new parameter: the extended vibrato. Our system's performance is comparable with that of state-of-the-art methods. Using the monophonic/polyphonic distinction as a pre-processing step allows us to adapt the singing voice detector to each context, which improves the results.

    After some reflections on the use of music for the automatic description, annotation and indexing of audiovisual documents, we discuss the contribution of each of the tools presented in this thesis to music indexing, and to the indexing of audiovisual documents through music, and finally give some perspectives.
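    Singing vibrato appears as a quasi-periodic modulation of the fundamental frequency at roughly 4-8 Hz, so a simple detector can measure how much of an f0 contour's modulation energy falls in that band. The sketch below illustrates only this generic cue; the thesis's confidence-indicator parameters and the extended-vibrato computation for polyphony are more involved.

```python
import numpy as np

def vibrato_strength(f0_segment, frame_rate, band=(4.0, 8.0)):
    """Fraction of the f0 contour's modulation energy in the typical
    singing-vibrato band (~4-8 Hz). Assumes `f0_segment` is a contiguous
    voiced stretch of fundamental-frequency estimates (Hz), one per
    analysis frame, with `frame_rate` frames per second."""
    contour = np.asarray(f0_segment, dtype=float)
    contour = contour - contour.mean()   # modulation around the mean pitch
    spectrum = np.abs(np.fft.rfft(contour)) ** 2
    freqs = np.fft.rfftfreq(len(contour), d=1.0 / frame_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum[1:].sum()           # exclude the DC bin
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0
```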

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The proceedings of the MAVEBA Workshop, held every two years, collect the scientific papers presented both as oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images, as a support to the clinical diagnosis and classification of vocal pathologies.

    Acoustic measurement of overall voice quality in sustained vowels and continuous speech

    Measurement of dysphonia severity involves auditory-perceptual evaluation and acoustic analysis of the sound wave. A meta-analysis of the association between these two methods showed that many popular perturbation metrics and noise-to-harmonics ratios, among others, do not yield satisfactory results. However, this meta-analysis demonstrated that the validity of specific autocorrelation- and cepstrum-based measures was much more convincing, and it identified the smoothed cepstral peak prominence as the most promising metric of dysphonia severity. Original research confirmed this inferiority of perturbation measures and superiority of cepstral indices in dysphonia measurement of laryngeal-vocal and tracheoesophageal voice samples. However, to be truly representative of daily voice use patterns, measurement of overall voice quality should ideally be founded on the analysis of both sustained vowels and continuous speech. A customized method for including both sample types and calculating the multivariate Acoustic Voice Quality Index (AVQI) was constructed for this purpose. An original study of the AVQI revealed acceptable results in terms of initial concurrent validity, diagnostic precision, internal and external cross-validity, and responsiveness to change. It was thus concluded that the AVQI can track changes in dysphonia severity across the voice therapy process. There are many freely and commercially available computer programs and systems for acoustic measurement of dysphonia severity. We investigated agreements and differences between two commonly available programs (i.e., Praat and the Multi-Dimensional Voice Program) and systems. The results indicated that clinicians should not compare frequency perturbation data across systems and programs, nor amplitude perturbation data across systems. Finally, acoustic information can also be utilized as a biofeedback modality during voice exercises. Based on a systematic literature review, it was cautiously concluded that acoustic biofeedback can be a valuable tool in the treatment of phonatory disorders. When applied with caution, acoustic algorithms (particularly cepstrum-based measures and the AVQI) merit a special role in the assessment and treatment of dysphonia severity.
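    To make the central measure concrete, the sketch below computes a plain cepstral peak prominence (CPP) for a single analysis frame: the height, in dB, of the cepstral peak within the plausible pitch-period range above a regression line fit through the cepstrum. The AVQI relies on the smoothed variant (CPPS), which additionally averages across time and quefrency; the window and f0 search range here are illustrative assumptions, and Praat's implementation details differ.

```python
import numpy as np

def cepstral_peak_prominence(frame, fs, f0_range=(60.0, 330.0)):
    """Plain CPP (dB) of one analysis frame sampled at `fs` Hz.
    Assumes the frame is long enough to cover the longest pitch period
    implied by `f0_range`."""
    x = np.asarray(frame, dtype=float) * np.hanning(len(frame))
    log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
    cep_db = 20.0 * np.log10(np.abs(np.fft.irfft(log_mag)) + 1e-12)
    half = len(cep_db) // 2                  # cepstrum is symmetric
    quefrency = np.arange(half) / fs         # quefrency in seconds
    lo, hi = 1.0 / f0_range[1], 1.0 / f0_range[0]
    in_range = (quefrency >= lo) & (quefrency <= hi)
    peak = np.argmax(np.where(in_range, cep_db[:half], -np.inf))
    # Regression line through the cepstrum, evaluated at the peak.
    slope, intercept = np.polyfit(quefrency[1:], cep_db[1:half], 1)
    return cep_db[peak] - (slope * quefrency[peak] + intercept)
```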