Story visualization aims to generate a series of images that match a story
described in text, and it requires the generated images to be of high
quality, aligned with the text description, and consistent in character
identity. Given the complexity of story visualization, existing methods
drastically simplify the problem by considering only a few specific characters
and scenarios, or by requiring users to provide per-image control conditions
such as sketches. However, these simplifications render such methods
unsuitable for real applications. To this end, we propose an automated story
visualization system that can effectively generate diverse, high-quality, and
consistent sets of story images with minimal human interaction. Specifically,
we utilize the comprehension and planning capabilities of large language models
for layout planning, and then leverage large-scale text-to-image models to
generate sophisticated story images based on the layout. We empirically find
that sparse control conditions, such as bounding boxes, are suitable for layout
planning, while dense control conditions, e.g., sketches and keypoints, are
suitable for generating high-quality image content. To obtain the best of both
worlds, we devise a dense condition generation module to transform simple
bounding box layouts into sketch or keypoint control conditions for final image
generation, which not only improves image quality but also enables easy and
intuitive user interaction. In addition, we propose a simple yet effective
method to generate multi-view consistent character images, eliminating the
reliance on human labor to collect or draw character images.