
    Autoencoder Implementations in the Predictive Coding Framework

    Abstract. We study the implementation and functionality of autoencoders based on the predictive coding model and the free energy framework, which have seen relatively little experimentation. This framework offers an alternative to traditional backpropagation networks for constructing artificial neural networks. The limited number of studies published on the subject indicates that the framework could provide better solutions for applications employing artificial intelligence. This work is meant to be accessible to any university student wishing to gain a preliminary understanding of the concepts involved. To this end, we provide a detailed walkthrough of the core mathematical ideas behind the implementation, using Bogacz's tutorial as a guide. We document the implementation process of two autoencoders that learn to recreate handwritten digits from the MNIST dataset in an unsupervised learning scenario; both implementations use fully connected layers and are tasked with encoding and decoding the digits. We analyze graphs of the different variable values and compare the final images produced by the autoencoder to the originals. The first implementation is an attempt at constructing an original network and serves as an example of how error-sensitive building these networks from the ground up can be. We study the applicability of the theory of predictive coding in practice and diagnose the issues that we encounter. In particular, we showcase problems relating to the update of variances within the network and general difficulties in achieving convergence for all nodes in the network. The second implementation is built on top of a predictive coding library by B. Millidge and A. Tschantz and showcases the potential of the predictive coding model as a basis for a functional autoencoder.
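The inference dynamics alluded to above (precision-weighted prediction errors relaxed by gradient descent, followed by a Hebbian-style weight update) can be illustrated with a minimal sketch in the spirit of Bogacz's tutorial. All names, dimensions, and learning rates below are our own illustrative choices, not the thesis's actual implementation; the per-node variances Sigma are held fixed here, since learning them is one of the problem areas the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer predictive coding model: n_top latent nodes generate
# predictions for n_bottom observed nodes through weights W.
n_top, n_bottom = 4, 8
W = rng.normal(scale=0.1, size=(n_bottom, n_top))  # generative weights
Sigma = np.ones(n_bottom)  # per-node variances (fixed; learning them is the hard part)

def infer(x, n_steps=200, lr=0.05):
    """Relax latent activity v by gradient descent on the prediction error for x."""
    v = np.zeros(n_top)
    for _ in range(n_steps):
        pred = W @ np.tanh(v)        # top-down prediction of the data layer
        eps = (x - pred) / Sigma     # precision-weighted prediction error
        # descend the free energy w.r.t. v (chain rule through tanh)
        v += lr * ((1 - np.tanh(v) ** 2) * (W.T @ eps))
    return v, eps

x = rng.normal(size=n_bottom)        # a stand-in for one (flattened) input
v, eps = infer(x)
# local, Hebbian-style weight update once inference has settled
W += 0.01 * np.outer(eps, np.tanh(v))
```

The point of the sketch is that learning is driven entirely by local error nodes rather than a backpropagated gradient; the convergence difficulties the thesis reports arise when every such node must settle jointly in a deeper network.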
We partially replicate the results obtained by Millidge to establish a baseline for the network's performance. Furthermore, we study the effects of tuning different aspects of these networks, namely the network depth, the number of nodes per layer, and the activation functions, to better understand how such networks function. We conduct a subjective evaluation of the effects of these modifications. Our findings for the second implementation indicate that the most important factor determining final image quality and classification capability is the width of the autoencoder's code layer. Our experiments with different activation functions do not reveal significant performance gains for any of the functions used. Lastly, we examine the effects of deepening the network but find equal or worse performance compared to shallow networks.