The field of deep generative modeling has grown rapidly and consistently over
the years. Driven by the availability of massive training datasets, coupled
with advances in scalable unsupervised learning paradigms, recent large-scale
generative models show tremendous promise in synthesizing high-resolution
images and text, as well as structured data such as videos and molecules.
However, we argue that current large-scale generative AI models do not
sufficiently address several fundamental issues that hinder their widespread
adoption across domains. In this work, we aim to identify key unresolved
challenges in modern generative AI paradigms that should be tackled to further
enhance their capabilities, versatility, and reliability. In doing so, we hope
to provide researchers with valuable insights for exploring fruitful research
directions, thereby fostering the development of more robust and accessible
generative AI solutions.