
    The Source of Magic

    This paper is an attempt to show that a large part of Western society no longer operates on the rationalist principles that most of us thought it did, but instead runs by magic more akin to that in fantasy works. The term ‘magic’ is not meant metaphorically, nor in science fiction author Arthur C. Clarke’s sense that ‘Any sufficiently advanced technology is indistinguishable from magic’ (Clarke 1962), but literally, in the sense that Frazer (1890, republished 2003) used the term. This means that instead of trying to understand the present and near future through the works of science fiction creators who put forth a rationalist and technological view of the world, we would understand the future better by looking to the fantasy of authors such as Jack Vance, Matthew Hughes, Ursula Le Guin, Piers Anthony and Michael Moorcock. This magic is manifested through magical thinking and irrational behaviour: the majority of us use literal spells and incantations in our daily interactions with each other in the networked world, and we worship capricious gods; most importantly, those spells, incantations and worship actually work, and those gods have actually come to exist. The paper will also show how the spread of the computer technology propounded by scientists, technologists and SF writers has inevitably led to the creation of this irrational and magical world, partly because of limitations built into the formal systems on which that technology rests, an extreme example of the law of unintended consequences. Finally, the paper will explain the mechanism by which magic is literally becoming real by reference to Frazer’s two laws of magic: the Law of Similarity and the Law of Contagion.
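
    To make the ‘spells and incantations’ concrete: in the networked world, an exactly worded string of symbols, transmitted to the right place, acts at a distance on the physical world, and fails if a single character is wrong. A minimal illustrative sketch follows (not from the paper; the endpoint and payload are hypothetical):

    ```python
    # Illustrative sketch only: a textual "incantation" whose exact wording
    # causes action at a distance. The URL and payload are hypothetical.
    import json
    import urllib.request

    spell = json.dumps({"device": "porch-light", "state": "on"}).encode("utf-8")

    request = urllib.request.Request(
        "https://example.com/api/devices",   # hypothetical endpoint
        data=spell,
        headers={"Content-Type": "application/json"},
    )

    # Uttering the incantation: a precise sequence of symbols, sent over the
    # network, changes the state of a physical object. Garble one character
    # and the spell simply fails, as Frazer's magicians would expect.
    with urllib.request.urlopen(request) as response:
        print(response.status)
    ```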

    On the Foundations of Computability Theory

    The principal motivation for this work is the observation that there are significant deficiencies in the foundations of conventional computability theory. This thesis examines the problems with conventional computability theory, including its failure to address discrepancies between theory and practice in computer science, semantic confusion in terminology, and limitations in the scope of conventional computing models. In light of these difficulties, fundamental notions are re-examined and revised definitions of key concepts such as “computer,” “computable,” and “computing power” are provided. A detailed analysis is conducted to determine the desirable semantics and scope of applicability of foundational notions. The credibility of the revised definitions is ascertained by demonstrating their ability to address the identified problems with conventional definitions. Their practical utility is established through application to examples. Other related issues, including hidden complexity in computations, subtleties related to encodings, and the cardinalities of sets involved in computing, are examined. A resource-based meta-model for characterizing the properties of computing models is introduced. The proposed definitions are presented as a starting point for an alternate foundation for computability theory. However, formulation of the particular concepts under discussion is not the sole purpose of the thesis: the underlying objective of this research is to open discourse on alternate foundations of computability theory and to inspire re-examination of fundamental notions.
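
    To illustrate the kind of encoding subtlety the thesis examines, consider a standard example (a minimal sketch; the thesis’s own formalism is not reproduced here): the cost of deciding primality by trial division is polynomial in the length of a unary encoding of the input but exponential in the length of its binary encoding, so whether the computation counts as feasible depends on the encoding chosen.

    ```python
    # Minimal sketch (standard example, not the thesis's formalism): the same
    # computation has different apparent complexity under different encodings.

    def is_prime(n: int) -> bool:
        """Trial division: on the order of sqrt(n) steps."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    n = 1_000_003
    unary_length = n                # length of "1" * n
    binary_length = n.bit_length()  # length of the binary numeral

    # ~sqrt(n) steps is polynomial in unary_length (n itself) but
    # exponential in binary_length (~log2 n): the choice of encoding
    # hides or exposes the true cost of the computation.
    print(is_prime(n), unary_length, binary_length)
    ```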

    Connecting adaptive behaviour and expectations in models of innovation: The Potential Role of Artificial Neural Networks

    In this methodological work I explore the possibility of explicitly modelling the expectations that condition the R&D decisions of firms. In order to isolate this problem from the controversies of cognitive science, I propose a black-box strategy built around the concept of the “internal model”. The last part of the article uses artificial neural networks to model the expectations of firms in a model of industry dynamics based on Nelson & Winter (1982).
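
    A minimal sketch of the “internal model” idea follows. The network shape, input signals and update rule are illustrative assumptions, not the paper’s specification: a firm treats a small neural network as a black box mapping observed market signals to an expected return on R&D, and adapts it as outcomes are realized.

    ```python
    # Illustrative sketch (network size, inputs and learning rule are
    # assumptions, not the paper's specification).
    import numpy as np

    rng = np.random.default_rng(0)

    class InternalModel:
        """One-hidden-layer network: market signals -> expected R&D return."""

        def __init__(self, n_inputs: int, n_hidden: int = 4, lr: float = 0.05):
            self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_inputs))
            self.W2 = rng.normal(0.0, 0.5, n_hidden)
            self.lr = lr

        def expect(self, signals: np.ndarray) -> float:
            """The firm's black-box expectation of the return to R&D."""
            self.h = np.tanh(self.W1 @ signals)
            return float(self.W2 @ self.h)

        def learn(self, signals: np.ndarray, realized: float) -> None:
            """Adapt the internal model from the realized outcome."""
            error = self.expect(signals) - realized
            grad_W2 = error * self.h
            grad_W1 = np.outer(error * self.W2 * (1.0 - self.h ** 2), signals)
            self.W2 -= self.lr * grad_W2
            self.W1 -= self.lr * grad_W1

    # The firm forms an expectation, decides whether to invest in R&D, and
    # then updates its expectations from what actually happened.
    firm = InternalModel(n_inputs=3)
    signals = np.array([0.2, -0.1, 0.5])  # e.g. demand, rivals' R&D, prices
    if firm.expect(signals) > 0.0:
        firm.learn(signals, realized=0.3)  # stand-in realized return
    ```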

    The Limits of Econometrics: Nonparametric Estimation in Hilbert Spaces


    The 2014 International Planning Competition: Progress and Trends

    We review the 2014 International Planning Competition (IPC-2014), the eighth in a series of competitions starting in 1998. IPC-2014 was held in three separate parts to assess the state of the art in three prominent areas of planning research: the deterministic (classical) part (IPCD), the learning part (IPCL), and the probabilistic part (IPPC). Each part evaluated planning systems in ways that pushed the edge of existing planner performance by introducing new challenges, novel tasks, or both. The competition again surpassed its predecessor in the number of competitors, highlighting its central role in shaping the landscape of ongoing developments in evaluating planning systems.

    Fuzzy control in manufacturing systems

    XIV+119 pages; 24 cm