Real-time singing synthesis using a parallel processing system

Abstract

Existing singing synthesis systems are usually monophonic and implemented on single-processor systems. A polyphonic (multi-voice), formant-based synthesiser has been designed that addresses the needs of real-time performance. Singing synthesis, as opposed to speech synthesis, is highly dependent on the control of score parameters such as event onset times and fundamental frequency. The novel system described in this paper is designed to allow control of such parameters using the MIDI protocol and a graphical user interface. The hardware consists of a PC, ADSP-21060 SHARC processors and a Xilinx FPGA chip. This paper presents an introduction to the system, a description of the model used and the method of implementation on a parallel system. The results show the processing requirements for polyphonic sound synthesis using the synthesis model.
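To illustrate the kind of processing a formant-based voice implies, the sketch below shows a minimal parallel formant synthesiser in C: an impulse train at the fundamental frequency excites a bank of second-order resonators, one per formant, and the branches are summed into a single voice. This is not the paper's implementation (which targets SHARC DSPs and an FPGA); the sample rate, formant frequencies and bandwidths are assumed placeholder values for a sung vowel.

```c
/* Minimal sketch of a parallel formant synthesiser (assumed parameters,
 * not the system described in the paper). */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE  44100.0
#define NUM_FORMANTS 3
#define NUM_SAMPLES  44100

typedef struct { double b0, a1, a2, y1, y2; } Resonator;

/* Configure a two-pole resonator for a given centre frequency and bandwidth. */
static void resonator_init(Resonator *r, double freq, double bw) {
    double pole_r = exp(-M_PI * bw / SAMPLE_RATE);
    r->a1 = -2.0 * pole_r * cos(2.0 * M_PI * freq / SAMPLE_RATE);
    r->a2 = pole_r * pole_r;
    r->b0 = 1.0 - pole_r;            /* rough gain normalisation */
    r->y1 = r->y2 = 0.0;
}

static double resonator_tick(Resonator *r, double x) {
    double y = r->b0 * x - r->a1 * r->y1 - r->a2 * r->y2;
    r->y2 = r->y1;
    r->y1 = y;
    return y;
}

int main(void) {
    /* Approximate formants for an /a/ vowel (assumed values). */
    const double freqs[NUM_FORMANTS] = {800.0, 1150.0, 2900.0};
    const double bws[NUM_FORMANTS]   = {80.0, 90.0, 120.0};
    Resonator bank[NUM_FORMANTS];
    for (int i = 0; i < NUM_FORMANTS; ++i)
        resonator_init(&bank[i], freqs[i], bws[i]);

    double f0 = 220.0;               /* fundamental frequency: a score parameter */
    double phase = 0.0;
    for (int n = 0; n < NUM_SAMPLES; ++n) {
        /* Simple glottal excitation: one impulse per pitch period. */
        phase += f0 / SAMPLE_RATE;
        double excitation = 0.0;
        if (phase >= 1.0) { phase -= 1.0; excitation = 1.0; }

        /* Parallel formant branches are summed to give one voice. */
        double sample = 0.0;
        for (int i = 0; i < NUM_FORMANTS; ++i)
            sample += resonator_tick(&bank[i], excitation);

        printf("%f\n", sample);      /* write samples as text for inspection */
    }
    return 0;
}
```

In a polyphonic, real-time setting, each voice would run its own resonator bank and excitation, with f0 and onset times driven by MIDI events rather than fixed constants, which is what motivates distributing the voices across parallel processors.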

This paper was published in University of Huddersfield Repository.
