The development of AI-driven generative audio mirrors broader AI trends,
often prioritizing immediate accessibility at the expense of explainability.
Consequently, integrating such tools into sustained artistic practice remains a
significant challenge. In this paper, we explore several paths to improve
explainability, drawing primarily from our research-creation practice in
training and implementing generative audio models. As practical provisions for
improved explainability, we highlight human agency over training materials, the
viability of small-scale datasets, the facilitation of the iterative creative
process, and the integration of interactive machine learning as a mapping tool.
Importantly, these steps aim to enhance human agency over generative AI systems
not only during model inference, but also when curating and preprocessing
training data and when training the models themselves.

Comment: In Proceedings of the Explainable AI for the Arts Workshop 2024 (XAIxArts 2024), arXiv:2406.1448