    Representational task formats and problem solving strategies in kinematics and work

    Previous studies have reported that students employ different problem-solving approaches when presented with the same task structured with different representations. In this study, we explored and compared students’ strategies as they attempted tasks from two topical areas, kinematics and work. Our participants were 19 engineering students taking a calculus-based physics course. The tasks were presented in linguistic, graphical, and symbolic forms and requested either a qualitative solution or a value. The analysis was both qualitative and quantitative in nature, focusing principally on the characteristics of the strategies employed as well as the underlying reasoning for their application. A comparison was also made of the same student’s approach to the same kind of representation across the two topics. Additionally, the participants’ overall strategies across the different tasks in each topic were considered. On the whole, we found that the students preferred manipulating equations irrespective of the representational format of the task. They rarely recognized the applicability of a “qualitative” approach to solving the problem, although they were aware of the concepts involved. Even when the students included visual representations in their solutions, they seldom used these representations in conjunction with the mathematical part of the problem. Additionally, the students were not consistent in their approach to interpreting and solving problems with the same kind of representation across the two topical areas. The representational format, level of prior knowledge, and familiarity with a topic appeared to influence their strategies, their written responses, and their ability to recognize qualitative ways to attempt a problem. The nature of the requested solution did not seem to affect the strategies employed to handle the problem.
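    To illustrate the contrast the study draws between equation manipulation and qualitative reasoning, consider the following worked example; the task, numbers, and both solution routes are hypothetical and are not items from the study.

        % Hypothetical task (not from the study): a cart starts from rest with
        % constant acceleration a = 2 m/s^2; how far has it moved when v = 10 m/s?

        % Equation-manipulation route (the strategy most students favoured):
        \[
          v^2 = v_0^2 + 2a\,\Delta x
          \quad\Longrightarrow\quad
          \Delta x = \frac{v^2 - v_0^2}{2a} = \frac{(10)^2 - 0}{2(2)} = 25\ \mathrm{m}.
        \]

        % Qualitative/graphical route: on a velocity--time graph the displacement
        % is the area under the line, a triangle of base t = v/a = 5 s and
        % height 10 m/s:
        \[
          \Delta x = \tfrac{1}{2}\,(5\ \mathrm{s})(10\ \mathrm{m/s}) = 25\ \mathrm{m}.
        \]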

    Logic-Based Specification Languages for Intelligent Software Agents

    The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies and toolkits for modeling, verifying, validating and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each executable language, the logical foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations of the advantages of using logic-based languages in MAS modeling and prototyping. Comment: 67 pages, 1 table, 1 figure. Accepted for publication by the journal Theory and Practice of Logic Programming, volume 4, Maurice Bruynooghe Editor-in-Chief.
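    To make concrete what an "executable specification" of an agent amounts to, here is a minimal Python sketch of an agent whose behaviour is generated directly from declarative condition-action rules over a belief base; it is an illustration of the general idea only and does not follow the syntax or semantics of any of the six surveyed languages.

        # Illustrative sketch only: a tiny rule-driven agent in the spirit of
        # executable agent specifications (beliefs + declarative rules), not the
        # syntax of ConGoLog, Agent-0, IMPACT, DyLog, Concurrent METATEM or Ehhf.

        class RuleAgent:
            def __init__(self, beliefs, rules):
                self.beliefs = set(beliefs)          # ground atoms the agent holds true
                self.rules = rules                   # list of (condition, action) pairs

            def step(self, percepts):
                """One sense-deliberate-act cycle."""
                self.beliefs |= set(percepts)        # revise beliefs with new percepts
                for condition, action in self.rules: # fire the first applicable rule
                    if condition(self.beliefs):
                        return action(self.beliefs)
                return None                          # no rule applicable: stay idle


        # Hypothetical example: a courier agent that commits to delivering a parcel
        # once it believes the parcel has been assigned to it.
        agent = RuleAgent(
            beliefs={"at(depot)"},
            rules=[
                (lambda b: "assigned(parcel1)" in b and "delivered(parcel1)" not in b,
                 lambda b: "deliver(parcel1)"),
                (lambda b: True,
                 lambda b: "wait"),
            ],
        )

        print(agent.step({"assigned(parcel1)"}))     # -> "deliver(parcel1)"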

    Cognitive context and arguments from ontologies for learning

    The deployment of learning resources on the web by different experts has resulted in the accessibility of multiple viewpoints about the same topics. In this work we assume that learning resources are underpinned by ontologies. Different formalizations of domains may result from different contexts, different use of terminology, incomplete knowledge or conflicting knowledge. We define the notion of cognitive learning context which describes the cognitive context of an agent who refers to multiple and possibly inconsistent ontologies to determine the truth of a proposition. In particular we describe the cognitive states of ambiguity and inconsistency resulting from incomplete and conflicting ontologies respectively. Conflicts between ontologies can be identified through the derivation of conflicting arguments about a particular point of view. Arguments can be used to detect inconsistencies between ontologies. They can also be used in a dialogue between a human learner and a software tutor in order to enable the learner to justify her views and detect inconsistencies between her beliefs and the tutor’s own. Two types of arguments are discussed, namely: arguments inferred directly from taxonomic relations between concepts, and arguments about the necessary an
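    As a rough illustration of the conflict detection described above, the following Python sketch chains subclass links in two ontologies into simple arguments about the same proposition and flags the disagreement; the ontologies, concept names, and the disjointness assumption are invented for the example and are not taken from the paper.

        # Illustrative sketch: derive arguments for a class-membership proposition
        # by chaining subclass (is-a) links in each ontology, then flag a conflict
        # when the two ontologies support contradictory conclusions.

        def argue(ontology, individual, facts):
            """Return the classes derivable for `individual`, each with the
            taxonomic chain that supports it (a simple 'argument')."""
            derived = {c: [c] for c in facts.get(individual, [])}
            changed = True
            while changed:
                changed = False
                for sub, sup in ontology:                 # sub is-a sup
                    if sub in derived and sup not in derived:
                        derived[sup] = derived[sub] + [sup]
                        changed = True
            return derived

        # Two experts formalise "whale" differently (toy example).
        onto_a = [("whale", "mammal"), ("mammal", "air_breather")]
        onto_b = [("whale", "fish"), ("fish", "water_breather")]
        facts = {"moby": ["whale"]}

        args_a = argue(onto_a, "moby", facts)
        args_b = argue(onto_b, "moby", facts)

        # A hand-declared disjointness: nothing is both a mammal and a fish.
        if "mammal" in args_a and "fish" in args_b:
            print("Conflicting arguments about moby:")
            print("  A:", " -> ".join(args_a["mammal"]))
            print("  B:", " -> ".join(args_b["fish"]))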

    On Automating the Doctrine of Double Effect

    The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant by applying a DDE layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our DDE layer to the STRIPS-style planning model and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks. Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonomy
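    The flavour of such a "DDE layer" applied to a planning-style action model can be sketched as follows in Python; the clauses and the action representation are a simplified paraphrase of the classical informal statement of the doctrine, not the paper's deontic-cognitive-event-calculus formalization, and the trolley example and utilities are assumptions chosen for illustration.

        # Simplified sketch of a doctrine-of-double-effect check over an action
        # described by its good effects, bad effects, and what the agent intends.
        # This paraphrases the informal principle, not the paper's formalization.

        from dataclasses import dataclass, field

        @dataclass
        class Action:
            name: str
            good_effects: set           # effects with positive utility
            bad_effects: set            # effects with negative utility (harms)
            intended: set               # effects the agent intends (ends or means)
            utility: dict = field(default_factory=dict)   # effect -> numeric utility

        def dde_permits(act: Action) -> bool:
            # C1: the action produces some good effect at all.
            if not act.good_effects:
                return False
            # C2: no harm is intended, neither as an end nor as a means.
            if act.intended & act.bad_effects:
                return False
            # C3: the good is not produced *by means of* the harm (approximated
            #     here by C2, since means are listed among the intended effects).
            # C4: proportionality: the good outweighs the harm overall.
            total = sum(act.utility.get(e, 0) for e in act.good_effects | act.bad_effects)
            return total > 0

        # Hypothetical trolley-style case: diverting a runaway trolley.
        divert = Action(
            name="divert_trolley",
            good_effects={"five_saved"},
            bad_effects={"one_killed"},
            intended={"five_saved"},              # the death is foreseen, not intended
            utility={"five_saved": 5, "one_killed": -1},
        )
        print(dde_permits(divert))                # -> True under these assumptions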

    Progression and Verification of Situation Calculus Agents with Bounded Beliefs

    We investigate agents that have incomplete information and make decisions based on their beliefs, expressed as situation calculus bounded action theories. Such theories have an infinite object domain, but the number of objects that belong to fluents at each time point is bounded by a given constant. Recently, it has been shown that verifying temporal properties over such theories is decidable. We take a first-person view and use the theory to capture what the agent believes about the domain of interest and the actions affecting it. In this paper, we study verification of temporal properties over online executions. These are executions resulting from agents performing only actions that are feasible according to their beliefs. To do so, we first examine progression, which captures the belief state update resulting from actions in the situation calculus. We show that, for bounded action theories, progression, and hence belief states, can always be represented as a bounded first-order logic theory. Then, based on this result, we prove decidability of temporal verification over online executions for bounded action theories. © 2015 The Author(s)
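    A very small illustration of progression in this bounded setting is given by the Python sketch below, which updates the finite extensions of fluents when an action occurs and checks that the bound is preserved; the fluents, actions, and the bound are invented for the example and are not the paper's construction.

        # Illustrative sketch only: progression of a belief state represented as
        # the finite extensions of fluents, under a bound on how many objects a
        # fluent may contain at any time point.

        BOUND = 3   # at most 3 objects in any fluent at any time point

        def progress(belief, action, args):
            """Return the belief state after performing `action(args)`."""
            new = {f: set(ext) for f, ext in belief.items()}
            if action == "pick_up":          # pick_up(x): x becomes Holding, leaves OnTable
                (x,) = args
                new["OnTable"].discard(x)
                new["Holding"].add(x)
            elif action == "put_down":       # put_down(x): the reverse
                (x,) = args
                new["Holding"].discard(x)
                new["OnTable"].add(x)
            for f, ext in new.items():       # boundedness check: no fluent overflows
                assert len(ext) <= BOUND, f"fluent {f} exceeds the bound"
            return new

        belief = {"OnTable": {"a", "b"}, "Holding": set()}
        belief = progress(belief, "pick_up", ("a",))
        print(belief)    # -> {'OnTable': {'b'}, 'Holding': {'a'}}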