Existing singing synthesis systems are usually monophonic and implemented on single-processor systems. A polyphonic (multi-voice), formant-based synthesiser has been designed to address the needs of real-time performance. Singing synthesis, as opposed to speech synthesis, is highly dependent on the control of score parameters such as event onset times and fundamental frequency. The novel system described in this paper allows control of such parameters through the MIDI protocol and a graphical user interface. The hardware consists of a PC, ADSP-21060 SHARC processors and a Xilinx FPGA. This paper presents an introduction to the system, a description of the model used, and the method of implementation on a parallel system. The results show the processing requirements for polyphonic sound synthesis using the synthesis model described.
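As background for the formant-based approach mentioned above, the core of such a synthesiser is typically a bank of second-order resonant filters, each tuned to one formant frequency and bandwidth, driven by an excitation signal at the fundamental frequency. The following is a minimal sketch of that general technique, not the paper's actual implementation; the function names, the impulse-train excitation, and the example formant values are illustrative assumptions.

```python
import math

def resonator_coeffs(freq_hz, bandwidth_hz, sample_rate):
    # Two-pole resonator (second-order IIR) tuned to one formant.
    # Pole radius r sets the bandwidth; pole angle sets the centre frequency.
    r = math.exp(-math.pi * bandwidth_hz / sample_rate)
    a1 = -2.0 * r * math.cos(2.0 * math.pi * freq_hz / sample_rate)
    a2 = r * r
    b0 = 1.0 - r  # rough gain normalisation
    return b0, a1, a2

def synthesize(f0_hz, formants, sample_rate=16000, duration_s=0.1):
    """Excite a parallel bank of formant resonators with an impulse
    train at the fundamental frequency f0_hz (a crude glottal model)."""
    n = int(duration_s * sample_rate)
    period = int(sample_rate / f0_hz)
    excitation = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    out = [0.0] * n
    for freq, bw in formants:
        b0, a1, a2 = resonator_coeffs(freq, bw, sample_rate)
        y1 = y2 = 0.0  # filter state (direct form II transposed omitted for clarity)
        for i in range(n):
            y = b0 * excitation[i] - a1 * y1 - a2 * y2
            y2, y1 = y1, y
            out[i] += y  # sum the parallel formant branches
    return out

# Example: a vowel-like spectrum with three hypothetical formants at 220 Hz pitch
samples = synthesize(220.0, [(700, 80), (1200, 90), (2600, 120)])
```

In a real-time polyphonic system of the kind the paper targets, one such filter bank runs per voice, with MIDI events updating `f0_hz` and the formant parameters on the fly; the per-sample inner loop above is what would be distributed across the DSP processors.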