
    From AI with Love: Reading Big Data Poetry through Gilbert Simondon’s Theory of Transduction

    Computation initiated a far-reaching re-imagination of language, not just as an information tool, but as a social, bio-physical activity in general. Modern lexicology provides an important overview of the ongoing development of textual documentation and its applications in relation to language and linguistics. At the same time, the evolution of lexical tools from the first dictionaries and graphs to algorithmically generated scatter plots of live online interaction patterns has been surprisingly swift. Modern communication and information studies from Norbert Wiener to the present day support direct parallels between coding and linguistic systems. However, most theories of computation as a model of language use remain highly indefinite and open-ended, at times purposefully ambiguous. Comparing the use of computation and semantic technologies ranging from Christopher Strachey’s early love letter templates to David Jhave Johnston’s more recent experiments with artificial neural networks (ANNs), this paper proposes the philosopher Gilbert Simondon’s theory of transduction and metastable systems as a suitable framework for understanding various ontological and epistemological ramifications of our increasingly complex and intimate interactions with machine learning. Such developments are especially clear, I argue, in the poetic reimagining of language as a space of cybernetic hybridity.

    Keywords: Artificial Intelligence, Poetics, Neural Networks, Information Studies, Computation, Linguistics, Transduction, Philosophy
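
    Strachey’s 1952 generator on the Manchester Mark I combined fixed sentence skeletons with words drawn at random from hand-curated lists, signing the result “M.U.C.” (Manchester University Computer). The sketch below shows that template-and-word-list technique in Python; the word lists and templates are illustrative stand-ins, not Strachey’s originals.

    import random

    # Illustrative word lists; Strachey's originals were longer and curated by hand.
    ADJECTIVES = ["beloved", "darling", "precious", "tender"]
    NOUNS = ["affection", "desire", "devotion", "longing"]
    ADVERBS = ["anxiously", "fondly", "keenly", "wistfully"]
    VERBS = ["adores", "cherishes", "treasures", "yearns for"]

    # Fill-in-the-blank skeletons in the spirit of Strachey's generator.
    TEMPLATES = [
        "My {adj} {noun} {adv} {verb} your {adj2} {noun2}.",
        "You are my {adj} {noun}: my {noun2} {verb} you {adv}.",
    ]

    def love_letter(sentences: int = 3) -> str:
        """Fill randomly chosen templates from the word lists."""
        body = " ".join(
            random.choice(TEMPLATES).format(
                adj=random.choice(ADJECTIVES), adj2=random.choice(ADJECTIVES),
                noun=random.choice(NOUNS), noun2=random.choice(NOUNS),
                adv=random.choice(ADVERBS), verb=random.choice(VERBS),
            )
            for _ in range(sentences)
        )
        return "DARLING SWEETHEART,\n" + body + "\n\tM.U.C."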

    Several types of types in programming languages

    Types are an important part of any modern programming language, but we often forget that the concept of type we understand nowadays is not the same as it was perceived in the sixties. Moreover, we conflate the concept of "type" in programming languages with the concept of the same name in mathematical logic, an identification that is only the result of the convergence of two different paths, which started apart with different aims. The paper presents several remarks (some historical, some of a more conceptual character) on the subject, as a basis for further investigation. The thesis we argue is that there are three different characters at play in programming languages, all of them now called types: the technical concept used in language design to guide implementation; the general abstraction mechanism used as a modelling tool; and the classifying tool inherited from mathematical logic. We suggest three possible dates ad quem for their presence in the programming language literature, and argue that the emergence of the concept of type in computer science was relatively independent of the logical tradition, until the Curry-Howard isomorphism made an explicit bridge between them.

    Comment: History and Philosophy of Computing, HAPOC 2015. To appear in LNCS.
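
    The bridge the Curry-Howard isomorphism makes can be stated concretely: a function type corresponds to an implication, and function application to modus ponens. A minimal sketch in Python's type-hint notation (any typed functional language would show it more directly; the names are illustrative):

    from typing import Callable, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    def modus_ponens(f: Callable[[A], B], a: A) -> B:
        """Under Curry-Howard: from a proof of A -> B (a function) and a
        proof of A (a value), obtain a proof of B. Computationally, this
        is just function application."""
        return f(a)

    # Example: a function int -> str plays the role of "int implies str";
    # applying it to an int yields a str.
    witness: str = modus_ponens(lambda n: str(n), 42)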

    The Impact of Alan Turing: Formal Methods and Beyond

    © 2019, Springer Nature Switzerland AG. In this paper, we discuss the influence and reputation of Alan Turing since his death in 1954, specifically in the field of formal methods, especially for program proving, but also in a much wider context. Although he received some recognition during his lifetime, his image was tarnished by the controversy at the time of his death. While he was known and appreciated in scientific circles, he did not enter the public’s consciousness for several decades. A turning point was the definitive biography produced by Andrew Hodges in 1983, but even then the tide did not turn very rapidly. More recent events, such as the celebrations of his birth centenary in 2012 and the official British royal pardon in 2013, have raised Turing’s fame and popularity among the informed general public in the United Kingdom and elsewhere. Cultural works in the arts featuring Turing have enhanced his profile still further. Thus, the paper discusses not only Turing’s scientific impact, especially on formal methods, but also his historical, cultural, and even political significance. Turing’s academic ‘family tree’ in terms of heritage and legacy is also covered.

    Introduction to the Literature on Semantics

    An introduction to the literature on semantics. Included are pointers to the literature on axiomatic semantics, denotational semantics, operational semantics, and type theory.

    Principled software microengineering


    On Identifying Points of Semantic Shift Across Domains

    The semantics used for particular terms in an academic field organically evolve over time. Tracking this evolution through inspection of the published literature has either been done from the perspective of linguistic scholars or has concentrated on term evolution within a single domain of study. In this paper, we performed a case study to identify semantic evolution across different domains and to identify examples of inter-domain semantic shifts. We initially used keywords as the basis of our search and executed an iterative process of following citations to find the initial mention of the concepts in the field. We found that a select set of keywords like "semaphore", "polymorphism", and "ontology" were mentioned within the Computer Science literature, and we tracked, through citations, the seminal studies that borrowed those terms from their original fields. We marked these events as semantic evolution points. Through this manual investigation method, we can identify term evolution across different academic fields. This study reports our initial findings, which will seed future automated and computational methods of incorporating concepts from additional academic fields.

    Comment: In the 17th International Conference on Metadata and Semantics Research, October 2023.
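
    The iterative citation-following step lends itself to a schematic rendering. The sketch below is an outline of the manual procedure, not the authors’ tooling: the fetch_cited_papers interface and the paper records (with abstract and year fields) are hypothetical stand-ins for a citation index.

    def find_evolution_point(seed_paper, term, fetch_cited_papers):
        """Walk citations backwards from seed_paper while earlier cited
        papers still mention `term`; the last paper reached approximates
        the point where the term entered the literature."""
        current = seed_paper
        while True:
            earlier = [p for p in fetch_cited_papers(current)
                       if term.lower() in p.abstract.lower()]
            if not earlier:
                # No cited work mentions the term: `current` is a candidate
                # semantic evolution point (e.g. the study that borrowed it
                # from another field).
                return current
            current = min(earlier, key=lambda p: p.year)  # follow the oldest mention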

    Semantic Domains and Denotational Semantics

    The theory of domains was established in order to have appropriate spaces on which to define semantic functions for the denotational approach to programming-language semantics. There were two needs: first, there had to be spaces of several different types available, both to mirror the type distinctions in the languages and to allow for different kinds of semantical constructs - especially in dealing with languages with side effects; and second, the theory had to account for computability properties of functions, if the theory was going to be realistic. The first need is complicated by the fact that types can be both compound (or made up from other types) and recursive (or self-referential), so that a high-level language of types and a suitable semantics of types is required to explain what is going on. The second need is complicated by the intricacy of the semantical definitions and by the fact that it has to be checked that the level of abstraction reached still allows a precise definition of computability.
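
    As a concrete instance of a compound and recursive type, the untyped lambda-calculus requires a space isomorphic to its own continuous function space. A sketch in standard notation, with \Phi naming one half of the isomorphism and \llbracket - \rrbracket the semantic function:

    % The reflexive domain equation, solved up to isomorphism by Scott's
    % inverse-limit construction:
    \[ D \cong [D \to D] \]
    % The semantic clause for application then uses one half of the
    % isomorphism, \Phi : D \to [D \to D]:
    \[ \llbracket M\,N \rrbracket\rho = \Phi(\llbracket M \rrbracket\rho)(\llbracket N \rrbracket\rho) \]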

    Looking behind the text-to-be-seen: Analysing Twitter bots as electronic literature

    This thesis focuses on showing how Twitter bots can be analysed from the viewpoint of electronic literature (e-lit) and how the analysis differs from evaluating other works of e-lit. Although formal research on electronic literature goes back some decades, there is still not much research discussing bots in particular. By examining historical and contemporary textual generators, seminal theories on reading and writing e-lit, and botmakers’ practical notes about their craft, this study attempts to build an understanding of the process of creating a bot and the essential characteristics of different kinds of bots. What makes the analysis of bots different from that of other textual generators is that the source code, which many theorists consider key to understanding works of e-lit, is rarely available for reading. This thesis proposes an alternative method for analysing bots: a framework for reverse-engineering the bot’s text generation procedures. By comparing the bot’s updates with one another, it is possible to notice the formulas and words repeated by the bot, in order to better understand the authorial choices made in its design. The framework takes into account the special characteristics of different kinds of bots, focusing on grammar-based bots, which utilise fill-in-the-blank-type sentence structures to generate texts, and list-based bots, which methodically progress through large databases. From a survey of contemporary bots and earlier works of electronic and procedural literature, it becomes evident that understanding programming code is not essential for either analysing or creating bots: it is more important to understand the mechanisms of combinatory text generation and the author’s role in writing and curating the materials used. Bots and text generators also often raise questions of authorship. However, a review of their creation process makes it clear that human creativity is essential for the production of computer-generated texts. With bots, the writing of texts turns into a second-order creation, the writing of word lists, templates and rules, to generate the text-to-be-seen, the output for the reader to encounter.
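
    As a toy illustration of the comparison-based framework the thesis proposes, the sketch below recovers a single fill-in-the-blank template by lining up same-shaped updates token by token. The sample updates and the helper function are hypothetical, and a real analysis would need proper alignment and handling of multiple templates.

    def recover_template(updates: list[str]) -> str:
        """Approximate a grammar-based bot's template by comparing its
        updates token-by-token: positions where all updates agree are kept
        as fixed template words, the rest become blanks. Assumes the
        updates all come from one template of the same token length."""
        token_rows = [u.split() for u in updates]
        length = min(len(row) for row in token_rows)
        slots = []
        for i in range(length):
            column = {row[i] for row in token_rows}
            slots.append(column.pop() if len(column) == 1 else "____")
        return " ".join(slots)

    updates = [
        "Behind every archive hides a silent river.",
        "Behind every garden hides a tender machine.",
    ]
    print(recover_template(updates))
    # -> "Behind every ____ hides a ____ ____"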