
    Amplifying The Uncanny

    Deep neural networks have become remarkably good at producing realistic deepfakes: images of people that (to the untrained eye) are indistinguishable from real images. Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explores the aesthetic outcome of inverting this process: optimising the system to generate images that it predicts as being fake. This maximises the unlikelihood of the data and, in turn, amplifies the uncanny nature of these machine hallucinations.
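The inversion described above can be sketched in a toy form. This is an illustrative assumption, not the authors' implementation: a frozen logistic "discriminator" scores a sample, and gradient steps push the sample toward what the discriminator labels as fake rather than toward realism.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)               # frozen toy discriminator weights

def discriminator(x):
    """Probability that x is 'real' (logistic score)."""
    return 1.0 / (1.0 + np.exp(-x @ w))

x = rng.normal(size=4)               # generated sample to optimise
lr = 1.0
for _ in range(2000):
    p_real = discriminator(x)
    # Gradient of log(1 - p_real) w.r.t. x: minimising realism.
    grad = -p_real * w
    x += lr * grad                   # ascend the "fakeness" objective

print(discriminator(x))              # scored as confidently fake
```

The only change from the usual generative setup is the sign of the objective; everything else about the optimisation loop is unchanged.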

    Autoencoding Video Frames

    This report details the implementation of an autoencoder trained with a learned similarity metric, one that is capable of modelling a complex distribution of natural images, training it on frames from selected films, and using it to reconstruct video sequences by passing each frame through the autoencoder and re-sequencing the output frames in order. This is primarily an artistic exploration of the representational capacity of the current state of the art in generative models and is a novel application of autoencoders. This model is trained on, and used to reconstruct, the films Blade Runner and A Scanner Darkly, producing new artworks in their own right. Experiments passing other videos through these models are carried out, demonstrating the potential of this method to become a new technique in the production of experimental image and video.
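The frame-by-frame reconstruction pipeline can be illustrated with a hedged sketch. The toy linear encoder/decoder below (a random orthonormal basis) stands in for the learned-similarity autoencoder the report actually implements; only the pipeline shape matches: encode each frame, decode it, and re-sequence the outputs in order.

```python
import numpy as np

rng = np.random.default_rng(1)
frame_shape = (8, 8)                       # stand-in for real film frames
n_pixels = frame_shape[0] * frame_shape[1]

# Toy "trained" codec: project onto 16 orthonormal components.
basis, _ = np.linalg.qr(rng.normal(size=(n_pixels, 16)))

def encode(frame):
    return basis.T @ frame.ravel()         # 64 pixels -> 16-dim latent

def decode(z):
    return (basis @ z).reshape(frame_shape)

video = [rng.random(frame_shape) for _ in range(5)]
# Pass every frame through the autoencoder, keeping the original order.
reconstruction = [decode(encode(f)) for f in video]

print(len(reconstruction), reconstruction[0].shape)
```

Swapping the toy codec for a trained neural autoencoder leaves the surrounding loop unchanged, which is what makes the method easy to apply to arbitrary videos.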

    Automating Generative Deep Learning for Artistic Purposes: Challenges and Opportunities

    We present a framework for automating generative deep learning with a specific focus on artistic applications. The framework provides opportunities to hand over creative responsibilities to a generative system as targets for automation. For the definition of targets, we adopt core concepts from automated machine learning and an analysis of generative deep learning pipelines, both in standard and artistic settings. To motivate the framework, we argue that automation aligns well with the goal of increasing the creative responsibility of a generative system, a central theme in computational creativity research. We understand automation as the challenge of granting a generative system more creative autonomy, by framing the interaction between the user and the system as a co-creative process. The development of the framework is informed by our analysis of the relationship between automation and creative autonomy. An illustrative example shows how the framework can give inspiration and guidance in the process of handing over creative responsibility.

    Envisioning Distant Worlds: Fine-Tuning a Latent Diffusion Model with NASA's Exoplanet Data

    There are some 5,500 confirmed exoplanets beyond our solar system. Though we know these planets exist, most of them are too far away for us to know what they look like. In this paper, we develop an algorithm and a model to translate any given exoplanet’s numeric data into a text prompt that can be input into a trained latent diffusion model to generate a predictive visualization of that exoplanet. This paper describes a novel approach of translating numeric data to textual descriptors formulated from prior accepted astrophysical research. These textual descriptions are paired with photographs and artistic visualizations from NASA’s public archives to build a training set for a latent diffusion model, which can produce new visualizations of unseen distant worlds. Presented at the Workshop on Machine Learning for Creativity and Design, NeurIPS 2023.
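The numeric-data-to-prompt step might look like the sketch below. The field names, size classes, and temperature thresholds here are invented for illustration and are not taken from the paper; the paper derives its descriptors from prior astrophysical research.

```python
def exoplanet_prompt(record):
    """Map an exoplanet's numeric record to a text prompt (toy rules)."""
    descriptors = []
    # Radius relative to Earth: rough, hypothetical size classes.
    if record["radius_earth"] < 1.6:
        descriptors.append("a rocky terrestrial planet")
    elif record["radius_earth"] < 4.0:
        descriptors.append("a gaseous sub-Neptune")
    else:
        descriptors.append("a gas giant")
    # Equilibrium temperature in kelvin: hypothetical atmosphere hints.
    if record["teq_k"] > 1000:
        descriptors.append("glowing with a scorched, molten surface")
    elif record["teq_k"] > 250:
        descriptors.append("with a temperate, cloud-streaked atmosphere")
    else:
        descriptors.append("frozen and dimly lit")
    return "A photograph of " + ", ".join(descriptors)

prompt = exoplanet_prompt({"radius_earth": 1.1, "teq_k": 288})
print(prompt)
```

The resulting string would then be fed to the fine-tuned latent diffusion model as its conditioning prompt.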

    Searching for an (un)stable equilibrium: experiments in training generative models without data

    This paper details a developing artistic practice around an ongoing series of works called (un)stable equilibrium. These works are the product of using modern machine learning toolkits to train generative models without data, an approach akin to traditional generative art where dynamical systems are explored intuitively for their latent generative possibilities. We discuss some of the guiding principles that have been learnt in the process of experimentation, present details of the implementation of the first series of works, and discuss possibilities for future experimentation.
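A minimal sketch of the data-free idea, under invented assumptions (the objective and architecture here are toys, not the works' actual setup): instead of fitting a dataset, a tiny generator's weights are optimised directly against a hand-picked objective, here pushing the mean output toward 0.8.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=8)                 # fixed latent input, no dataset
W = rng.normal(size=(16, 8)) * 0.1     # generator weights to train

def generate(W):
    """Toy generator: 16 'pixels' in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(W @ z)))

target_mean = 0.8                      # invented objective, not learned from data
lr = 0.5
for _ in range(500):
    out = generate(W)
    err = out.mean() - target_mean
    # Gradient of 0.5 * err**2 w.r.t. W, through the sigmoid.
    grad = err / out.size * (out * (1 - out))[:, None] * z[None, :]
    W -= lr * grad

print(round(generate(W).mean(), 2))
```

The absence of any dataset is the point: the "training signal" is whatever objective the artist chooses to explore, much as parameters of a dynamical system are tuned in traditional generative art.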

    Visualising Topological Structures of Activation in Artificial Neural Networks

    This project report describes a first approach to creating a visualisation of an artificial neural network that shows the topology of the network for an individual data input the network has learned to recognise. A survey of previous attempts to visualise both artificial and biological neural networks is presented, as well as a survey of various techniques used in other forms of network visualisation that could be applied to visualising artificial neural networks. This is followed by a detailed description of the method implemented in this project and by results from the visualisation.

    Light Field Completion Using Focal Stack Propagation

    Both light field photography and focal stack photography are rapidly becoming more accessible with Lytro’s commercial light field cameras and the ever increasing processing power of mobile devices. Light field photography offers the ability of post-capture perspective changes and digital refocusing, but little is available in the way of post-production editing of light field images. We present a first approach for interactive content-aware completion of light fields and focal stacks, allowing for the removal of foreground or background elements from a scene.

    Autoencoding Blade Runner: Reconstructing Films With Artificial Neural Networks

    ‘Blade Runner—Autoencoded’ is a film made by training an autoencoder, a type of generative neural network, to recreate frames from the film Blade Runner. The autoencoder is made to reinterpret every individual frame, reconstructing it based on its memory of the film. The result is a hazy, dreamlike version of the original film. The project explores the aesthetic qualities of the disembodied gaze of the neural network. The autoencoder is also capable of representing images from films it has not seen, based on what it has learned from watching Blade Runner.