Foley sound synthesis refers to the creation of authentic, diegetic sound
effects for media, such as film or radio. In this study, we construct a neural
Foley synthesizer capable of generating mono audio clips across seven
predefined categories. Our approach introduces multiple enhancements to
existing models in the text-to-audio domain, with the goal of increasing the
diversity and improving the acoustic quality of the generated Foley sounds. Notably, we
use a pre-trained encoder that preserves acoustic and musical attributes in
its intermediate embeddings, apply class-conditioning to improve the
separability of Foley classes in their intermediate representations,
and design a transformer-based architecture that keeps self-attention
computation tractable on very long inputs without discarding valuable
information. We present intermediate results
that surpass the baseline, discuss practical challenges encountered in
achieving optimal results, and outline potential pathways for further research.
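As a rough illustration of the class-conditioning idea mentioned above, the sketch below adds a learned per-class embedding to an encoder's intermediate features so that representations of different Foley classes are pushed apart. All names, shapes, and the additive-conditioning choice are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

NUM_CLASSES = 7   # the seven predefined Foley categories
EMBED_DIM = 64    # dimensionality of the intermediate embeddings (assumed)

rng = np.random.default_rng(0)
# One trainable embedding vector per class (randomly initialised here;
# in a real model these would be learned alongside the network weights).
class_embeddings = rng.normal(size=(NUM_CLASSES, EMBED_DIM))

def condition_on_class(features: np.ndarray, class_id: int) -> np.ndarray:
    """Add the class embedding to every time step of the encoder features.

    features: (time, EMBED_DIM) intermediate representation of an audio clip.
    class_id: integer in [0, NUM_CLASSES) identifying the Foley category.
    """
    return features + class_embeddings[class_id]  # broadcasts over time

# Example: condition a 100-frame feature sequence on class 3.
features = rng.normal(size=(100, EMBED_DIM))
conditioned = condition_on_class(features, class_id=3)
```

The same shift is applied at every time step, so the conditioning signal is available throughout the clip rather than only at its start.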