
    Case based reasoning as a model for cognitive artificial intelligence.

    Cognitive systems understand the world through learning and experience. Case-based reasoning (CBR) systems naturally capture knowledge as experiences in memory, and they are able to learn new experiences to retain in that memory. CBR's retrieve-and-reuse reasoning is also knowledge-rich, because of its nearest-neighbour retrieval and analogy-based adaptation of retrieved solutions. CBR is particularly suited to domains with no well-defined theory, because it holds a memory of experiences of what happened, rather than why or how it happened. CBR's assumption that 'similar problems have similar solutions' enables it to understand the contexts of its experiences and the 'bigger picture' from clusters of cases, but also to recognise where its similarity assumption is challenged. Here we explore cognition and meta-cognition for CBR through self-reflection and introspection of both memory and retrieve-and-reuse reasoning. Our idea is to embed and exploit cognitive functionality such as insight, intuition and curiosity within CBR to drive robust, and even explainable, intelligence that will achieve problem-solving in challenging, complex, dynamic domains.
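The retrieve-and-reuse cycle described above can be sketched in a few lines. This is a minimal illustration, not the authors' system: the case base, the feature vectors, and the scaling rule standing in for analogy-based adaptation are all invented for the example.

```python
import math

# Toy case base: each case pairs a problem description (a feature vector)
# with a known solution value. All values are illustrative.
case_base = [
    ((1.0, 2.0), 10.0),
    ((4.0, 1.0), 20.0),
    ((5.0, 5.0), 35.0),
]

def retrieve(query):
    """Nearest-neighbour retrieval: return the stored case whose
    problem features are closest (Euclidean distance) to the query."""
    return min(case_base, key=lambda case: math.dist(case[0], query))

def reuse(query, case):
    """Naive adaptation: scale the retrieved solution by the ratio of
    feature magnitudes (a crude stand-in for analogy-based adaptation)."""
    features, solution = case
    return solution * (sum(query) / sum(features))

query = (2.0, 4.0)
nearest = retrieve(query)          # ((1.0, 2.0), 10.0) is closest to (2.0, 4.0)
print(reuse(query, nearest))       # 20.0
```

The retain step of the full CBR cycle would simply append the solved `(query, solution)` pair back into `case_base`.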

    REKAYASA SISTEM KOGNITIF BERBASIS MULTI-AGEN: PENDEKATAN PENALARAN BERBASIS KASUS [Engineering a Multi-Agent-Based Cognitive System: A Case-Based Reasoning Approach]

    Cognitive system modeling was first introduced by psychology researchers. Unfortunately, that model has not been sufficient to support computer-based problem solving. For that reason, artificial intelligence tries to propose a computational model of the cognitive system. The main purpose of the computational model is to support humans in solving complex problems, especially problems that involve large amounts of data, incomplete data, and problem solving that requires a systematic approach, as humans use. This research proposes the engineering of such a multi-agent based cognitive system, which employs case-based reasoning as an imitation of human reasoning to maintain the knowledge base.

    Knowledge transfer in cognitive systems theory: models, computation, and explanation

    Knowledge transfer in cognitive systems can be explicated in terms of structure mapping and control. The structure of an effective model enables adaptive control for the system's intended domain of application. Knowledge is transferred by a system when control of a new domain is enabled by mapping the structure of a previously effective model. I advocate for a model-based view of computation which recognizes effective structure mapping at a low level. Artificial neural network systems are furthermore viewed as model-based, where effective models are learned through feedback. Thus, many of the most popular artificial neural network systems are best understood in light of the cybernetic tradition as error-controlled regulators. Knowledge transfer with pre-trained networks (transfer learning) can, when automated like other machine learning methods, be seen as an advancement towards artificial general intelligence. I argue this is convincing because it is akin to automating a general systems methodology of knowledge transfer in scientific reasoning. Analogical reasoning is typical in such a methodology, and some accounts view analogical cognition as the core of cognition which provides adaptive benefits through efficient knowledge transfer. I then discuss one modern example of analogical reasoning in physics, and how an extended Bayesian view might model confirmation given a structural mapping between two systems. In light of my account of knowledge transfer, I finally assess the case of quantum-like models in cognition, and whether the transfer of quantum principles is appropriate. I conclude by throwing my support behind a general systems philosophy of science framework which emphasizes the importance of structure, and which rejects a controversial view of scientific explanation in favor of a view of explanation as enabling control.
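The "error-controlled regulator" view of feedback learning can be made concrete with a one-weight linear model whose parameter updates are driven entirely by the deviation between output and target. This is an illustrative sketch, not the author's formal model; the learning rate and data are made up.

```python
def regulate(weight, inputs, targets, lr=0.1, epochs=50):
    """Error-controlled regulation: for a linear model y = weight * x,
    compare each output to its target and let the error drive the update."""
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            error = t - weight * x    # deviation from the target (the error signal)
            weight += lr * error * x  # correction proportional to the error
    return weight

# Data generated by y = 2x; the regulator should converge near weight = 2.
w = regulate(0.0, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(w)  # very close to 2.0
```

The same loop, scaled up to many weights and chained through layers, is gradient-descent training; the point of the abstract's cybernetic framing is that the feedback structure is the same.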

    IJCAI-ECAI Workshop “Interactions between Analogical Reasoning and Machine Learning” (IARML 2022)

    Analogical reasoning is a remarkable capability of human reasoning, used to solve hard reasoning tasks. It consists in transferring knowledge from a source domain to a different, but somewhat similar, target domain by relying simultaneously on similarities and dissimilarities. In particular, analogical proportions, i.e., statements of the form “A is to B as C is to D”, are the basis of analogical inference. Analogical reasoning is related to case-based reasoning and has contributed to multiple machine learning tasks, such as classification, decision making, and automatic translation, with competitive results. Moreover, analogical extrapolation can support dataset augmentation (analogical extension) for model learning, especially in environments with few labeled examples. Conversely, advanced neural techniques, such as representation learning, have enabled efficient approaches to detecting and solving analogies in domains where symbolic approaches had shown their limits. However, recent approaches using deep learning architectures remain task- and domain-specific, and strongly rely on ad-hoc representations of objects, i.e., tailor-made embeddings.

    The first workshop on Interactions between Analogical Reasoning and Machine Learning (IARML) was hosted by the 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022). It brought together AI researchers at the crossroads of machine learning, cognitive sciences, and knowledge representation and reasoning who are interested in the various applications of analogical reasoning in machine learning or, conversely, of machine learning techniques to improve analogical reasoning. The IARML workshop aims to bridge gaps between different AI communities, including case-based reasoning, deep learning and neuro-symbolic machine learning.
    The workshop welcomed submissions of research papers on all topics at the intersection of analogical reasoning and machine learning. Submissions went through a strict double-blind reviewing process that resulted in the selection of six original contributions and two invited talks, in addition to the two plenary keynote talks.
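The analogical proportions discussed above ("A is to B as C is to D") are commonly operationalised over embeddings as d being the vocabulary item closest to c + (b - a). A toy sketch, with invented two-dimensional "embeddings":

```python
# Made-up 2-D vectors: one axis loosely encodes "royalty", the other "gender".
toy_embeddings = {
    "king":  (0.9, 0.8),
    "queen": (0.9, 0.2),
    "man":   (0.1, 0.8),
    "woman": (0.1, 0.2),
}

def solve_analogy(a, b, c, vocab):
    """Solve 'a is to b as c is to ?' by finding the word nearest to c + (b - a)."""
    ax, ay = vocab[a]
    bx, by = vocab[b]
    cx, cy = vocab[c]
    tx, ty = cx + (bx - ax), cy + (by - ay)  # target point in embedding space
    candidates = [w for w in vocab if w not in (a, b, c)]
    return min(candidates,
               key=lambda w: (vocab[w][0] - tx) ** 2 + (vocab[w][1] - ty) ** 2)

print(solve_analogy("man", "woman", "king", toy_embeddings))  # queen
```

Real systems replace the hand-made vectors with learned embeddings of hundreds of dimensions, which is exactly where the abstract notes the dependence on tailor-made representations.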

    Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems

    In this paper I will present an analysis of the impact that the notion of “bounded rationality”, introduced by Herbert Simon in his book “Administrative Behavior”, has had in the field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated Decision Making (ADM), I will show how the introduction of the cognitive dimension into the study of choice by a rational (natural) agent indirectly determined, in the AI field, the development of a line of research aiming at the realisation of artificial systems whose decisions are based on the adoption of powerful shortcut strategies (known as heuristics) that settle for “satisficing”, i.e. non-optimal, solutions to problem solving. I will show how the heuristic approach to problem solving made it possible, in AI, to tackle problems of combinatorial complexity in real-life situations, and it still represents an important strategy for the design and implementation of intelligent systems.
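Simon's notion of satisficing can be made concrete: instead of exhaustively searching for the optimum, the agent accepts the first option that meets an aspiration level, bounding its search effort. A minimal sketch with invented options and scores:

```python
def satisfice(options, aspiration):
    """Return the first option whose score meets the aspiration level,
    examining candidates in order; stop searching as soon as one is found."""
    for name, score in options:
        if score >= aspiration:
            return name
    return None  # no option is good enough

# Illustrative alternatives, examined in the order they become available.
options = [("plan_a", 0.4), ("plan_b", 0.75), ("plan_c", 0.9)]
print(satisfice(options, 0.7))  # plan_b: good enough, chosen before the optimal plan_c
```

An optimising agent would scan all three plans and pick `plan_c`; the satisficer trades that guarantee for a search cost that does not grow with the (possibly combinatorial) number of remaining alternatives.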

    The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments

    In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that these aspects constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those executed by humans in their everyday activities. In the final part of the paper, further directions of research are explored, trying to address current limitations and future challenges.

    Building machines that learn and think about morality

    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
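The model-free/model-based distinction mentioned above can be illustrated on a one-step toy task: a model-based valuation computes an expectation from an explicit model of outcomes, while a model-free valuation learns the same quantity from sampled experience alone. All names and numbers below are illustrative.

```python
import random
random.seed(0)

# Toy task: one action yields reward 1.0 with probability 0.8, else 0.0.

def model_based_value(p_success=0.8, reward=1.0):
    """Model-based: compute the expected value from an explicit outcome model."""
    return p_success * reward

def model_free_value(trials=10000, p_success=0.8, reward=1.0, lr=0.01):
    """Model-free: estimate the same value purely from sampled outcomes,
    using an incremental update with no access to the outcome probabilities."""
    q = 0.0
    for _ in range(trials):
        r = reward if random.random() < p_success else 0.0
        q += lr * (r - q)  # move the estimate toward each observed reward
    return q

print(model_based_value())  # 0.8
print(model_free_value())   # stochastic estimate near 0.8
```

The dual-process analogy in the abstract maps the slow, deliberative route onto the model-based computation and the fast, habitual route onto the cached model-free estimate.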