32 research outputs found

    On Introspection, Metacognitive Control and Augmented Data Mining Live Cycles

    We discuss metacognitive modelling as an enhancement to cognitive modelling and computing. Metacognitive control mechanisms should enable AI systems to self-reflect, reason about their actions, and adapt to new situations. In this respect, we propose implementation details of a knowledge taxonomy and an augmented data mining life cycle that supports live integration of the obtained models. (Comment: 10 pages, 3 figures)
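
    The abstract does not spell out the life-cycle mechanics, so the following is only a minimal sketch of the stated idea in Python; the class, the accuracy threshold, and the mine callable are all hypothetical.

        # Minimal sketch, not the paper's design: a metacognitive layer
        # monitors a deployed model and re-runs the mining step when
        # observed accuracy degrades, integrating the new model live.
        class MetacognitiveController:
            def __init__(self, mine, threshold=0.8):
                self.mine = mine            # callable: training data -> model
                self.threshold = threshold  # hypothetical acceptance level
                self.model = None

            def step(self, data, observed_accuracy):
                # Self-reflection: compare recent performance against
                # expectations and adapt when the model falls short.
                if self.model is None or observed_accuracy < self.threshold:
                    self.model = self.mine(data)   # live re-integration
                return self.model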

    Emotional System with Consciousness and Behavior Using Dopamine


    From here to human-level AI

    Human-level AI will be achieved, but new ideas are almost certainly needed, so a date cannot be reliably predicted: maybe five years, maybe five hundred years. I'd be inclined to bet on this 21st century.

    It is not surprising that human-level AI has proved difficult and progress has been slow, though there has been important progress. The slowness and the demand to exploit what has been discovered have led many to mistakenly redefine AI, sometimes in ways that preclude human-level AI, by relegating to humans parts of the task that human-level computer programs would have to do. In the terminology of this paper, it amounts to settling for a bounded informatic situation instead of the more general common sense informatic situation.

    Overcoming the “brittleness” of present AI systems and reaching human-level AI requires programs that deal with the common sense informatic situation, in which the phenomena to be taken into account in achieving a goal are not fixed in advance.

    We discuss reaching human-level AI, emphasizing logical AI and especially the representation problems of information and of reasoning. Ideas for reasoning in the common sense informatic situation include nonmonotonic reasoning, approximate concepts, formalized contexts, and introspection.
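
    Among the ideas listed, nonmonotonic reasoning is the easiest to show concretely: conclusions drawn by default can be withdrawn when new information arrives. The toy Python sketch below is an illustration of that property, not code from the paper.

        # Toy sketch of nonmonotonic (default) reasoning: adding a fact
        # retracts a conclusion that was previously drawn by default.
        facts = {"bird(tweety)"}
        abnormal = set()   # known exceptions block the default rule

        def flies(x):
            # Default rule: birds fly unless known to be abnormal.
            return f"bird({x})" in facts and x not in abnormal

        print(flies("tweety"))   # True, concluded by default
        abnormal.add("tweety")   # new information: tweety is a penguin
        print(flies("tweety"))   # False: the earlier conclusion is withdrawn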

    The role of attention in robot self-awareness

    A robot may not be truly self-aware even though it has some characteristics of self-awareness, such as emotional states or the ability to recognize itself in a mirror. We define self-awareness in robots as the capacity to direct attention toward their own mental states. This paper explores robot self-awareness and the role that attention plays in achieving it. We propose a new attention-based approach to self-awareness called ASMO and conduct a comparative analysis of existing approaches that highlights the innovations and benefits of ASMO. We then describe how our attention-based self-awareness can be designed and used to develop self-awareness in state-of-the-art humanoid robots. © 2009 IEEE
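
    The abstract does not describe ASMO's internals, so the Python sketch below is only a generic illustration of the stated idea, with made-up salience values: descriptions of the robot's own mental state compete for attention alongside external percepts, and directing attention inward means such an item wins the competition.

        # Generic illustration, not ASMO's actual architecture: percepts
        # and self-state items bid for attention; highest salience wins.
        def most_salient(items):
            # items: (salience, label) pairs with salience in [0, 1]
            return max(items)[1]

        percepts = [(0.4, "obstacle ahead"), (0.7, "human speaking")]
        own_state = [(0.9, "my grasp plan is failing")]   # made-up values

        focus = most_salient(percepts + own_state)
        print(focus)   # attention is directed at the robot's own mental state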

    Good Old-Fashioned Artificial Consciousness and the Intermediate Level Fallacy

    Recently, there has been considerable interest in, and effort devoted to, the possibility of designing and implementing conscious robots, i.e., the chance that robots may have subjective experiences. Typical approaches such as the global workspace, information integration, enaction, cognitive mechanisms, and embodiment, which we collectively call Good Old-Fashioned Artificial Consciousness (henceforth GOFAC), share the same conceptual framework. In this paper, we discuss GOFAC's basic tenets and their implications for AI and robotics. In particular, we point out the intermediate level fallacy as the central issue affecting GOFAC. Finally, we outline a possible alternative conceptual framework toward robot consciousness.

    Tracking reliability and helpfulness in agent interactions

    A critical aspect of open systems such as the Internet is the interaction amongst the component agents of the system. Often this interaction is organised around social principles, in that one agent may request the help of another, and may in turn commit to assist another when requested. In this paper we investigate two measures of the social responsibility of an agent, known as reliability and helpfulness. Intuitively, reliability measures how good an agent is at keeping its commitments, and helpfulness measures how willing an agent is to make a commitment when asked for help. We discuss these notions in the context of FIPA protocols. It is important to note that these measures depend only on the messages exchanged between the agents and make no assumptions about the internal organisation of the agents. This means that they are applicable to any variety of software agent and are externally verifiable, i.e., they can be calculated by anyone with access to the messages exchanged.
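
    The paper's exact definitions are not reproduced in the abstract; one plausible reading of the two measures as ratios over an externally observed message log is sketched below in Python, with all counter and field names illustrative.

        # Illustrative sketch, not the paper's exact definitions: both
        # measures are ratios computed from observable messages only.
        from dataclasses import dataclass

        @dataclass
        class AgentRecord:
            requests_received: int = 0   # help requests addressed to the agent
            commitments_made: int = 0    # requests the agent agreed to
            commitments_kept: int = 0    # commitments it actually fulfilled

            def helpfulness(self):
                # Willingness to commit when asked for help.
                if self.requests_received == 0:
                    return None
                return self.commitments_made / self.requests_received

            def reliability(self):
                # Tendency to keep the commitments it makes.
                if self.commitments_made == 0:
                    return None
                return self.commitments_kept / self.commitments_made

        bob = AgentRecord(requests_received=10, commitments_made=6, commitments_kept=3)
        print(bob.helpfulness(), bob.reliability())   # 0.6 0.5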

    Logical Reduction of Metarules

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and involves a trade-off between efficiency and expressivity: the hypothesis space grows with more metarules, so we wish to use fewer of them, but if we use too few we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and show theoretically whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets, both in predictive accuracy and in learning time.
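
    As a concrete illustration of one of the reductions: under subsumption, a metarule is redundant whenever another metarule theta-subsumes it, i.e., some substitution maps every literal of the subsumer onto a literal of the subsumed clause. The brute-force Python check below is illustrative only, not the authors' implementation; the clause encodings and names are made up.

        # A clause is a set of literals; a literal is a tuple of a sign
        # ('+' for the head, '-' for a body atom), a predicate symbol,
        # and its arguments. In a metarule every symbol is a variable.
        from itertools import product

        IDENT = {('+', 'P', 'A', 'B'), ('-', 'Q', 'A', 'B')}   # P(A,B) :- Q(A,B)
        CHAIN = {('+', 'P', 'A', 'B'),
                 ('-', 'Q', 'A', 'C'), ('-', 'R', 'C', 'B')}   # P(A,B) :- Q(A,C), R(C,B)

        def subsumes(c, d):
            # True iff some substitution theta over the symbols of c maps
            # every literal of c onto a literal of d, respecting signs.
            vars_c = sorted({s for lit in c for s in lit[1:]})
            syms_d = sorted({s for lit in d for s in lit[1:]})
            for values in product(syms_d, repeat=len(vars_c)):
                theta = dict(zip(vars_c, values))
                if all((lit[0], *(theta[s] for s in lit[1:])) in d for lit in c):
                    return True
            return False

        print(subsumes(IDENT, CHAIN))   # False: identity does not subsume chain
        # A made-up rule whose body contains a Q(A,B)-shaped atom:
        RULE = {('+', 'P', 'A', 'B'), ('-', 'Q', 'A', 'B'), ('-', 'R', 'A', 'A')}
        print(subsumes(IDENT, RULE))    # True: RULE is redundant given IDENT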