14 research outputs found

    Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence

    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine, and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions into the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how—and even whether—various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts—a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions.

    Art and the Unknown

    Abstract: The purpose of this essay is to explore the nature and role of art as a human phenomenon from a broadly cognitive perspective. Like science and religion, art serves to mediate the unknown, at once to embrace and to defend against the fundamental mystery of existence. Thus, it may challenge the status quo while generally serving to maintain it. Art tracks the individuation of subjectivity, serving the pleasure principle, yet is appropriated by the collective’s commitment to the reality principle. While science and religion close in on serious answers to fundamental questions, art opens up possibilities toward playfulness, uselessness, imagination, and arbitrary whim. Though art has no unifying definition, meaning, or intent through time and across cultures, it remains important to people, both to do and to enjoy. It serves to counterbalance the naive realism of science, the “rationality” of modern society, and the literalism of text-based religion. While its allegiance is divided, its most worthy intent is to help us confront and negotiate the great mystery revealed to us in consciousness.

    The Problem of Consciousness


    Walking in the Shoes of the Brain: an "agent" approach to phenomenality and the problem of consciousness

    Abstract: Given an embodied evolutionary context, the (conscious) organism creates phenomenality and establishes a first-person point of view with its own agency, through intentional relations made by its own acts of fiat, in the same way that human observers create meaning in language.

    Can Science Explain Consciousness? Toward a solution to the 'hard problem'

    For diverse reasons, the problem of phenomenal consciousness is persistently challenging. Mental terms are characteristically ambiguous, researchers have philosophical biases, secondary qualities are excluded from objective description, and philosophers love to argue. Adhering to a regime of efficient causes and third-person descriptions, science as it has been defined has no place for subjectivity or teleology. A solution to the “hard problem” of consciousness will require a radical approach: to take the point of view of the cognitive system itself. To facilitate this approach, a concept of agency is introduced along with a different understanding of intentionality. Following this approach reveals that the autopoietic cognitive system constructs phenomenality through acts of fiat, which underlie perceptual completion effects and “filling in”—and, by implication, phenomenology in general. It creates phenomenality much as we create meaning in language, through the use of symbols to which it assigns meaning in the context of an embodied evolutionary history that is the source of valuation upon which meaning depends. Phenomenality is a virtual representation to itself by an executive agent (the conscious self) tasked with monitoring the state of the organism and its environment, planning future action, and coordinating various sub-agencies. Consciousness is not epiphenomenal, but serves a function for higher organisms that is distinct from that of unconscious processing. While a strictly scientific solution to the hard problem is not possible for a science that excludes the subjectivity it seeks to explain, there is hope to at least psychologically bridge the explanatory gulf between mind and matter, and perhaps hope for a broader definition of science.

    The Found and the Made: A Precis

    Keywords: mathematics, Platonism, certainty, Kant, representation, determinism, natural law, prior probability, empiricism, reification

    The Problem of Cognitive Domains

    The problem of cognitive domains is that one can conceive the territory only as it is portrayed in the map. It involves conflating the domain of representation with the domain of what it represents. This is a category mistake: there are essential qualitative and quantitative differences between map and territory. The output of cognitive processes, both perceptual and scientific, is recycled as the input.

    A Refutation of the Simulation Argument

    Critically examines Nick Bostrom's "Are You Living in a Computer Simulation?" and its underlying concepts.

    What Is Intelligence in the Context of AGI?

    Lack of coherence in concepts of intelligence has implications for artificial intelligence. ‘Intelligence’ is an abstraction grounded in human experience while supposedly freed from the embodiment that is the basis of that experience. In addition to physical instantiation, embodiment is a condition of dependency, of an autopoietic system upon an environment, which thus matters to the system itself. The autonomy and general capability sought in artificial general intelligence imply artificially re-creating the organism’s natural condition of embodiment. That may not be feasible; and, even if feasible, it may not be controllable or advantageous.

    The Value Alignment Problem

    The Value Alignment Problem (VAP) presupposes that artificial general intelligence (AGI) is desirable and perhaps inevitable. As usually conceived, it is one side of the more general issue of mutual control between agonistic agents. To be fully autonomous, an AI must be an autopoietic system (an agent), with its own purposiveness. In the case of such systems, Bostrom’s orthogonality thesis is untrue. The VAP reflects the more general problem of interfering in complex systems, entraining the possibility of unforeseen consequences. Instead of consolidating skill in an agent that acts on its own behalf, it would be safer and as effective to create ad hoc task-oriented software tools. What motivates the quest to create a superintelligence with a will of its own? Is a general intelligence even possible that is not an agent? Such questions point to the need to clarify what general intelligence is, what constitutes an agent, whose values and intentionality are to be aligned, and how feasible “friendliness” is.