14,090 research outputs found

    Levels of abstraction and the Turing test

    An important lesson that philosophy can learn from the Turing Test and computer science more generally concerns the careful use of the method of Levels of Abstraction (LoA). In this paper, the method is first briefly summarised. The constituents of the method are “observables”, collected together and moderated by predicates restraining their “behaviour”. The resulting collection of sets of observables is called a “gradient of abstractions” and it formalises the minimum consistency conditions that the chosen abstractions must satisfy. Two useful kinds of gradient of abstraction – disjoint and nested – are identified. It is then argued that in any discrete (as distinct from analogue) domain of discourse, a complex phenomenon may be explicated in terms of simple approximations organised together in a gradient of abstractions. Thus, the method replaces, for discrete disciplines, the differential and integral calculus, which form the basis for understanding the complex analogue phenomena of science and engineering. The result formalises an approach that is rather common in computer science but has hitherto found little application in philosophy. So the philosophical value of the method is demonstrated by showing how making the LoA of discourse explicit can be fruitful for phenomenological and conceptual analysis. To this end, the method is applied to the Turing Test, the concept of agenthood, the definition of emergence, the notion of artificial life, quantum observation and decidable observation. It is hoped that this treatment will promote the use of the method in certain areas of the humanities and especially in philosophy.
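    The abstract's constituents — observables moderated by a behaviour predicate, collected into a gradient — can be sketched as a small data structure. This is an illustrative reading only; all class and field names below are assumptions, not the paper's notation.

    ```python
    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    @dataclass
    class LevelOfAbstraction:
        # "Observables": named, typed variables (name -> expected type).
        observables: Dict[str, type]
        # A predicate moderating the observables' "behaviour".
        predicate: Callable[[Dict[str, Any]], bool]

        def admits(self, state: Dict[str, Any]) -> bool:
            """A state is admissible if it assigns every observable a value
            of the right type and satisfies the behaviour predicate."""
            return (set(state) == set(self.observables)
                    and all(isinstance(state[n], t)
                            for n, t in self.observables.items())
                    and self.predicate(state))

    # A "gradient of abstractions" is a collection of LoAs; here the second
    # LoA refines the first (a "nested" gradient in the abstract's terms).
    coarse = LevelOfAbstraction({"responds": bool}, lambda s: True)
    fine = LevelOfAbstraction({"responds": bool, "latency_ms": int},
                              lambda s: s["latency_ms"] >= 0)
    gradient: List[LevelOfAbstraction] = [coarse, fine]

    print(coarse.admits({"responds": True}))                  # True
    print(fine.admits({"responds": True, "latency_ms": -5}))  # False
    ```

    The point of the sketch is that the same phenomenon (a responding system) is observed at two consistency-related levels, which is the minimum structure a gradient must exhibit.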

    Turing's three philosophical lessons and the philosophy of information

    In this article, I outline the three main philosophical lessons that we may learn from Turing's work, and how they lead to a new philosophy of information. After a brief introduction, I discuss his work on the method of levels of abstraction (LoA), and his insistence that questions could be meaningfully asked only by specifying the correct LoA. I then look at his second lesson, about the sort of philosophical questions that seem to be most pressing today. Finally, I focus on the third lesson, concerning the new philosophical anthropology that owes so much to Turing's work. I then show how the lessons are learned by the philosophy of information. In the conclusion, I draw a general synthesis of the points made, in view of the development of the philosophy of information itself as a continuation of Turing's work. This journal is © 2012 The Royal Society. Peer reviewed.

    Levellism and the method of abstraction

    The use of "levels of abstraction" in philosophical analysis (levellism) has recently come under attack. In this paper, we argue that a refined version of epistemological levellism should be retained as a fundamental method, which we call the method of abstraction. After a brief introduction, in section two we make clear the nature and applicability of the (epistemological) method of levels of abstraction. In section three, we show the fruitfulness of the new method by applying it to five case studies: the concept of agenthood, the Turing test, the definition of emergence, quantum observation and decidable observation. In section four, we further characterise and support the method by distinguishing it from three other forms of "levellism": (i) levels of organisation; (ii) levels of explanation; and (iii) conceptual schemes. In this context, we also briefly address the problems of relativism and antirealism. In the conclusion, we indicate some of the work that lies ahead, two potential limitations of the method and some results that have already been obtained by applying the method to some long-standing philosophical problems.

    A systematic approach for component-based software development

    Component-based software development enables the construction of software artefacts by assembling prefabricated, configurable and independently evolving building blocks, called software components. This paper presents an approach for the development of component-based software artefacts. This approach consists of splitting the software development process according to four abstraction levels, viz., enterprise, system, component and object, and three different views, viz., structural, behavioural and interactional. The use of different abstraction levels and views allows better control of the development process.
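    The four abstraction levels crossed with the three views yield a development matrix in which each cell identifies one model to produce. A minimal sketch, assuming only the level and view names given in the abstract (the class structure itself is illustrative, not the paper's):

    ```python
    from enum import Enum
    from itertools import product

    class Level(Enum):
        ENTERPRISE = "enterprise"
        SYSTEM = "system"
        COMPONENT = "component"
        OBJECT = "object"

    class View(Enum):
        STRUCTURAL = "structural"
        BEHAVIOURAL = "behavioural"
        INTERACTIONAL = "interactional"

    # Each (level, view) pair identifies one model produced during development.
    development_matrix = list(product(Level, View))
    print(len(development_matrix))  # 12 cells: 4 levels x 3 views
    ```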

    Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence

    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Full text link
    Big data research has attracted great attention in science, technology, industry and society. It is developing with the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and science paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.

    Behaviour-based Knowledge Systems: An Epigenetic Path from Behaviour to Knowledge

    In this paper we expose the theoretical background underlying our current research. This consists of the development of behaviour-based knowledge systems, for closing the gaps between behaviour-based and knowledge-based systems, and also between the understandings of the phenomena they model. We expose the requirements and stages for developing behaviour-based knowledge systems and discuss their limits. We believe that these are necessary conditions for the development of higher order cognitive capacities, in artificial and natural cognitive systems.