DDSP-Piano: A Neural Sound Synthesizer Informed by Instrument Knowledge
Instrument sound synthesis using deep neural networks has received numerous improvements over the last few years. Among them, the Differentiable Digital Signal Processing (DDSP) framework has modernized the spectral modeling paradigm by including signal-based synthesizers and effects in fully differentiable architectures. The present work extends the applications of DDSP to the task of polyphonic sound synthesis, with the proposal of a differentiable piano synthesizer conditioned on MIDI inputs. The model architecture is motivated by high-level acoustic modeling knowledge of the instrument, which, along with the sound structure priors inherent to the DDSP components, makes for a lightweight, interpretable, and realistic-sounding piano model. A subjective listening test revealed that the proposed approach achieves better sound quality than a state-of-the-art neural piano synthesizer, although physical-modeling-based models still achieve the best quality. Leveraging its interpretability and modularity, a qualitative analysis of the model behavior was also conducted: it highlights where additional modeling knowledge and optimization procedures could be inserted to improve the synthesis quality and the manipulation of sound properties. Finally, the proposed differentiable synthesizer can be used with other deep learning models for alternative musical tasks handling polyphonic audio and symbolic data.
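The core DDSP idea referenced above is embedding a classical signal-based synthesizer, such as an additive harmonic oscillator, inside a differentiable pipeline so that a network can predict its controls. As a minimal sketch (not the paper's implementation), the function below, with the hypothetical name `harmonic_synth`, sums sinusoids at integer multiples of a per-sample fundamental frequency, with per-sample harmonic amplitudes; in NumPy it is not differentiable end-to-end, but the same operations carry over to an autodiff framework:

```python
import numpy as np

def harmonic_synth(f0, harmonic_amps, sample_rate=16000):
    """Sketch of a DDSP-style additive (harmonic) synthesizer.

    f0: per-sample fundamental frequency in Hz, shape (n_samples,)
    harmonic_amps: per-sample amplitude of each harmonic,
        shape (n_samples, n_harmonics)
    Returns the summed harmonic signal, shape (n_samples,).
    """
    n_samples, n_harmonics = harmonic_amps.shape
    # Integrate instantaneous frequency to obtain phase in cycles.
    phase = np.cumsum(f0 / sample_rate)
    # Harmonic k oscillates at k * f0; silence harmonics above Nyquist
    # to avoid aliasing.
    k = np.arange(1, n_harmonics + 1)
    freqs = f0[:, None] * k[None, :]
    amps = np.where(freqs < sample_rate / 2, harmonic_amps, 0.0)
    return np.sum(amps * np.sin(2 * np.pi * phase[:, None] * k[None, :]),
                  axis=1)

# Example: 0.1 s of a 440 Hz tone with 8 harmonics of decaying amplitude.
n = 1600
f0 = np.full(n, 440.0)
amps = np.tile(0.1 / np.arange(1, 9), (n, 1))
audio = harmonic_synth(f0, amps)
```

Because every operation (cumulative sum, multiplication, sine) is differentiable, gradients from an audio loss can flow back through the synthesizer into the network that predicts `f0` and `harmonic_amps`, which is the structural prior the abstract credits for the model's lightness and interpretability.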
AI (r)evolution -- where are we heading? Thoughts about the future of music and sound technologies in the era of deep learning
Artificial Intelligence (AI) technologies such as deep learning are evolving rapidly, bringing many changes to our everyday lives. To explore the future impact and potential of AI in the field of music and sound technologies, a doctoral day was held between Queen Mary University of London (QMUL, UK) and Sciences et Technologies de la Musique et du Son (STMS, France). Prompt questions about current trends in AI and music were generated by academics from QMUL and STMS. Students from the two institutions then debated these questions. This report presents a summary of the student debates on the topics of: Data, Impact, and the Environment; Responsible Innovation and Creative Practice; Creativity and Bias; and From Tools to the Singularity. The students represent the future generation of AI and music researchers. The academics represent the incumbent establishment. The student debates reported here capture visions, dreams, concerns, uncertainties, and contentious issues for the future of AI and music as the establishment is rightfully challenged by the next generation.