
    The Edge of the Imaginary World: The Influences of Imperialism and Expansionism in Secondary World Cartography

    This paper explores the function of cartographic representations in fantasy literature and their implications within a cartographic tradition marked by Imperialism and Colonialism. Using the Tolkienian terminology of "secondary worlds", I analyze the features of genre-setting maps such as those of Middle-earth and Narnia, noting the function of frontiers within these representations of imaginary realms. When compared with maps from early European exploration of the Americas and Africa, these two works of fantasy literature reveal Eurocentric tendencies even in a component of world-building as fundamental as map-making. Deviations from these traditional representations of frontier-lands in Ursula Le Guin's Earthsea series show, by contrast, how map-making shapes the way otherness operates within the work itself. I then explore the function of frontiers in secondary-world expansion, placed in the context of transmedia and exemplified by George R. R. Martin's A Game of Thrones franchise. I trace the expansion of Martin's maps over the course of the series' existence and its growth across various media outlets, which reveals a pattern of exploiting cartographical frontiers. I argue that while contemporary secondary worlds may have moved past the genre's traditions most closely associated with imperialism and Eurocentrism (especially concerning populations), new traditions of growth suggest an embrace of transmedia expansionism, not without its own parallels to the process of colonization. Beyond the ethical implications of such parallels, this secondary-world expansionism further underscores the subtle power of frontiers in fantasy map-making.

    The Variational Homoencoder: Learning to learn high capacity generative models from few examples

    Hierarchical Bayesian methods can unify many related tasks (e.g. k-shot classification, conditional and unconditional generation) as inference within a single generative model. However, when this generative model is expressed as a powerful neural network such as a PixelCNN, we show that existing learning techniques typically fail to effectively use latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class. This technique, which we call a Variational Homoencoder (VHE), produces a hierarchical latent variable model that better utilises latent variables. We use the VHE framework to learn a hierarchical PixelCNN on the Omniglot dataset, which outperforms all existing models on test set likelihood and achieves strong performance on one-shot generation and classification tasks. We additionally validate the VHE on natural images from the YouTube Faces database. Finally, we develop extensions of the model that apply to richer dataset structures such as factorial and hierarchical categories. Comment: UAI 2018 oral presentation.
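    As a rough illustration of the VHE idea described in this abstract, the sketch below encodes one element of a class and trains the decoder to reconstruct a different element of the same class, dividing the class-level KL by the class size so its cost is shared across the class's elements (my reading of the paper's bound). The MLP encoder/decoder, layer sizes, toy data, and function names are assumptions for illustration only; the paper pairs the VHE objective with a hierarchical PixelCNN on Omniglot.

    # Minimal VHE-style sketch with invented toy networks and data (PyTorch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """Maps a support example to mean/log-variance of a class latent c."""
        def __init__(self, x_dim=64, c_dim=16):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, c_dim)
            self.logvar = nn.Linear(128, c_dim)

        def forward(self, x):
            h = self.body(x)
            return self.mu(h), self.logvar(h)

    class Decoder(nn.Module):
        """Reconstructs an example of the class from the latent c."""
        def __init__(self, x_dim=64, c_dim=16):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(c_dim, 128), nn.ReLU(),
                                      nn.Linear(128, x_dim))

        def forward(self, c):
            return self.body(c)

    def vhe_step(encoder, decoder, x_support, x_target, class_size):
        """One VHE-style step: encode one element of a class, decode a
        *different* element of the same class, and divide the class-level
        KL by the class size."""
        mu, logvar = encoder(x_support)
        std = torch.exp(0.5 * logvar)
        c = mu + std * torch.randn_like(std)          # reparameterised sample of c
        recon = decoder(c)
        recon_loss = F.mse_loss(recon, x_target, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon_loss + kl / class_size

    if __name__ == "__main__":
        enc, dec = Encoder(), Decoder()
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
        prototype = torch.randn(64)                   # toy "class": 20 noisy variants
        class_data = prototype + 0.1 * torch.randn(20, 64)
        for step in range(100):
            i, j = torch.randperm(20)[:2].tolist()    # support and target differ
            loss = vhe_step(enc, dec, class_data[i:i+1], class_data[j:j+1],
                            class_size=20)
            opt.zero_grad(); loss.backward(); opt.step()

    Decoding to a different element of the same class is what pushes the class-level information into the latent c rather than letting a powerful decoder ignore it.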

    Compositional Program Generation for Systematic Generalization

    Compositional generalization is a key ability of humans that enables us to learn new concepts from only a handful of examples. Machine learning models, including the now ubiquitous transformers, struggle to generalize in this way, and typically require thousands of examples of a concept during training in order to generalize meaningfully. This difference in ability between humans and artificial neural architectures motivates this study of a neuro-symbolic architecture called the Compositional Program Generator (CPG). CPG has three key features: modularity, type abstraction, and recursive composition. These enable it to generalize both systematically to new concepts in a few-shot manner and productively by length on various sequence-to-sequence language tasks. For each input, CPG uses a grammar of the input domain and a parser to generate a type hierarchy in which each grammar rule is assigned its own unique semantic module, a probabilistic copy or substitution program. Instances with the same hierarchy are processed with the same composed program, while those with different hierarchies may be processed with different programs. CPG learns parameters for the semantic modules and is able to learn the semantics for new types incrementally. Given a context-free grammar of the input language and a dictionary mapping each word in the source language to its interpretation in the output language, CPG can achieve perfect generalization on the SCAN and COGS benchmarks, in both standard and extreme few-shot settings. Comment: 7 pages of text with 1 page of references.
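    To make the pipeline described in this abstract concrete, here is a minimal, invented sketch: a toy parser plays the role of the grammar, each rule is handled by its own "semantic module" (plain copy/substitution functions rather than CPG's learned probabilistic programs), and outputs are composed recursively over the parse tree. The grammar fragment, lexicon, and module names are assumptions loosely modelled on SCAN-style commands, not the paper's actual setup.

    # Toy compositional-program sketch: grammar rules -> per-rule modules -> recursion.
    # Word-level lexicon: source token -> its interpretation in the output language.
    LEXICON = {"jump": "JUMP", "walk": "WALK", "left": "LTURN", "right": "RTURN"}

    def parse(tokens):
        """Toy grammar:  command -> action | command 'twice' | command direction
                         action  -> 'jump' | 'walk'
                         direction -> 'left' | 'right'  """
        if len(tokens) == 1:
            return ("ACTION", tokens[0])
        if tokens[-1] == "twice":
            return ("TWICE", parse(tokens[:-1]))
        if tokens[-1] in ("left", "right"):
            return ("DIRECTED", parse(tokens[:-1]), tokens[-1])
        raise ValueError(f"cannot parse {tokens}")

    # One semantic module per grammar rule; each only copies or substitutes.
    def m_action(word):
        return [LEXICON[word]]

    def m_twice(sub):
        return sub + sub

    def m_directed(sub, word):
        return [LEXICON[word]] + sub

    MODULES = {"ACTION": m_action, "TWICE": m_twice, "DIRECTED": m_directed}

    def interpret(node):
        """Recursively compose module outputs along the parse (type) hierarchy."""
        rule, *children = node
        args = [interpret(c) if isinstance(c, tuple) else c for c in children]
        return MODULES[rule](*args)

    if __name__ == "__main__":
        for cmd in ["jump", "walk twice", "jump left"]:
            print(cmd, "->", " ".join(interpret(parse(cmd.split()))))
        # jump -> JUMP / walk twice -> WALK WALK / jump left -> LTURN JUMP

    Inputs that parse to the same hierarchy reuse the same composed program, which is the property the abstract credits for generalizing to novel and longer commands.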