All YIN No YANG: Geometric Abstraction of Oil Paintings with Trained Models, Noise and Self-reference

Abstract

The rapid development of diffusion models, and the declarative nature of the interfaces built for the public, call for automation methods in which media production can harness natural language as a mode of representation but not necessarily of interaction with humans. This article describes an image-to-video diffusion system that removes practitioners from the process of defining prompts when producing images with a conditional reference, documenting a set of results obtained with a custom dataset of oil paintings. Our research focuses on the appropriation of trained model ensembles coordinated to produce indefinite sets of frames with occasional human intervention, utilising timeline-based architectures. The proposed system automates a CLIP-guided DDPM with a supplementary depth-estimation model, and through a set of compositing techniques we found that results with coincidental and diverging descriptions can be useful for composing moving-image elements. Our experiments focus on the representation of the human figure and its morphological transformation.
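The abstract mentions a CLIP-guided DDPM, in which a denoising step is nudged by the gradient of a similarity score between the current image and a target embedding. The paper's implementation is not reproduced here; the following is a minimal numpy sketch of the guidance idea only, in which a fixed random projection stands in for the CLIP encoder and a finite-difference gradient stands in for backpropagation. All names (`embed`, `guided_step`, `W`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a CLIP image encoder: a fixed linear projection
# followed by L2 normalisation (real CLIP is a trained neural network).
W = rng.standard_normal((8, 16))

def embed(x):
    """Project a flattened toy 'image' x into a unit-norm embedding."""
    e = W @ x
    return e / np.linalg.norm(e)

def cosine(a, b):
    """Cosine similarity of two unit vectors."""
    return float(a @ b)

def guided_step(x, text_emb, noise_scale=0.1, guidance_scale=0.5, eps=1e-3):
    """One toy denoising step nudged toward the target embedding.

    The gradient of cosine similarity w.r.t. x is estimated by finite
    differences, standing in for CLIP backpropagation in a real system.
    """
    base = cosine(embed(x), text_emb)
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        grad[i] = (cosine(embed(xp), text_emb) - base) / eps
    # 'Denoise' (here: shrink toward zero), add guidance, inject fresh noise.
    return 0.9 * x + guidance_scale * grad + noise_scale * rng.standard_normal(x.size)

# Usage: a random toy 'image' drifts toward the target embedding over steps.
target = embed(rng.standard_normal(16))
x = rng.standard_normal(16)
before = cosine(embed(x), target)
for _ in range(50):
    x = guided_step(x, target)
after = cosine(embed(x), target)
```

In the real system each step would also run the trained denoiser and schedule the noise according to the DDPM timestep; the sketch keeps only the guidance term to show how a text (or, here, reference) embedding can steer sampling without a human-authored prompt at each frame.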


This paper was published in UAL Research Online.


Licence: CC BY-NC-ND