Pre-Proceedings of the Cognitive Computation Symposium: Thinking Beyond Deep Learning (CoCoSym 2018) : Extended Abstracts/Speakers' Positions
Reformulation in planning
Reformulation of a problem is intended to make the problem more amenable to efficient solution. This is equally true in the special case of reformulating a planning problem. This paper considers various ways in which reformulation can be exploited in planning.
The effect of multiple knowledge sources on learning and teaching
Current paradigms for machine-based learning and teaching tend to perform their task in isolation from a rich context of existing knowledge. In contrast, the research project presented here takes the view that bringing multiple sources of knowledge to bear is of central importance to learning in complex domains. As a consequence, teaching must both take advantage of and beware of interactions between new and existing knowledge. The central process which connects learning to its context is reasoning by analogy, a primary concern of this research. In teaching, the connection is provided by the explicit use of a learning model to reason about the choice of teaching actions. In this learning paradigm, new concepts are incrementally refined and integrated into a body of expertise, rather than being evaluated against a static notion of correctness. The domain chosen for this experimentation is that of learning to solve "algebra story problems." A model of acquiring problem solving skills in this domain is described, including: representational structures for background knowledge, a problem solving architecture, learning mechanisms, and the role of analogies in applying existing problem solving abilities to novel problems. Examples of learning are given for representative instances of algebra story problems. After relating our views to the psychological literature, we outline the design of a teaching system. Finally, we emphasize the interdependence of learning and teaching and the synergistic effects of conducting both research efforts in parallel.
A heuristic-based approach to code-smell detection
Encapsulation and data hiding are central tenets of the object oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
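As a rough illustration of the kind of heuristic detection the abstract describes, the sketch below flags data classes (mostly accessors and public fields, little real behaviour) and god classes (large, complex, and reaching into other classes' data) from simple per-class measurements. The metric names and thresholds are illustrative assumptions, not the exact rules used by the tool or by Marinescu.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Simple per-class measurements (illustrative names, not the paper's exact metrics)."""
    name: str
    num_methods: int            # total methods declared
    num_accessors: int          # getters/setters among those methods
    num_public_fields: int      # publicly exposed attributes
    wmc: int                    # weighted method count (summed method complexity)
    foreign_data_accesses: int  # fields of other classes read directly

def is_data_class(m: ClassMetrics) -> bool:
    # Mostly accessors/public fields, and almost no behaviour of its own.
    behaviour = m.num_methods - m.num_accessors
    return m.num_public_fields + m.num_accessors >= 5 and behaviour <= 2

def is_god_class(m: ClassMetrics) -> bool:
    # Large, complex, and reaching into other classes' data.
    return m.wmc >= 47 and m.foreign_data_accesses >= 5 and m.num_methods >= 20

# Hypothetical example classes:
holder = ClassMetrics("Account", num_methods=8, num_accessors=7,
                      num_public_fields=4, wmc=9, foreign_data_accesses=0)
manager = ClassMetrics("SystemManager", num_methods=40, num_accessors=3,
                       num_public_fields=0, wmc=120, foreign_data_accesses=12)
print(is_data_class(holder), is_god_class(manager))  # True True
```

In practice such predicates would be fed by metrics extracted from the source model (in the Eclipse case, the Java AST), and the thresholds tuned against known smells.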
Preferential and Preferential-discriminative Consequence relations
The present paper investigates consequence relations that are both non-monotonic and paraconsistent. More precisely, we focus on preferential consequence relations, i.e. those relations that can be defined by a binary preference relation on states labelled by valuations. We work with a general notion of valuation that covers e.g. the classical valuations as well as certain kinds of many-valued valuations. In the many-valued cases, preferential consequence relations are paraconsistent (in addition to being non-monotonic), i.e. they are capable of drawing reasonable conclusions even from premises which contain contradictions. The first purpose of this paper is to provide, in our general framework, syntactic characterizations of several families of preferential relations. The second and main purpose is to provide, again in our general framework, characterizations of several families of preferential discriminative consequence relations. These are defined exactly as the plain version, but any conclusion whose negation is also a conclusion is rejected (these relations bring something new essentially in the many-valued cases).
Comment: team Logic and Complexity, written in 2004-200
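A minimal sketch of the preferential setting, restricted to classical valuations over two atoms (the paper's framework is more general and covers many-valued valuations): a premise preferentially entails a conclusion when the conclusion holds in every preference-minimal model of the premise, and the discriminative variant additionally rejects any conclusion whose negation also follows. The preference order used here is an illustrative assumption.

```python
from itertools import product

ATOMS = ("p", "q")
# A state is a classical valuation: a dict mapping each atom to a truth value.
STATES = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

# Illustrative preference: states making fewer atoms true are "more normal".
def preferred(s, t):
    return sum(s.values()) < sum(t.values())

def minimal_models(premise):
    models = [s for s in STATES if premise(s)]
    return [s for s in models if not any(preferred(t, s) for t in models)]

def entails(premise, conclusion):
    """Preferential consequence: conclusion holds in every minimal model of the premise."""
    return all(conclusion(s) for s in minimal_models(premise))

def entails_discriminative(premise, conclusion):
    """Discriminative variant: also reject conclusions whose negation follows."""
    return entails(premise, conclusion) and not entails(premise, lambda s: not conclusion(s))

p = lambda s: s["p"]
p_or_q = lambda s: s["p"] or s["q"]
absurd = lambda s: False  # a contradictory premise: no models at all

# Minimal models of (p or q) are the two one-atom states, so p does not follow:
print(entails(p_or_q, p_or_q), entails(p_or_q, p))            # True False
# From a contradictory premise everything follows vacuously;
# the discriminative relation rejects such conclusions:
print(entails(absurd, p), entails_discriminative(absurd, p))  # True False
```

In the many-valued setting of the paper a state may satisfy both a formula and its negation, which is where the discriminative filter does real work beyond this degenerate classical case.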
Papers for Task Force Meeting on Future and Impacts of Artificial Intelligence, 15-17 August 1983
IIASA's Clearinghouse activity is oriented towards issues of interest among our National Member Organizations. At the forefront are issues concerning the promise and impact of science and technology on society and the economy in general, and on some selected branches in particular.
Artificial Intelligence (AI) is one of the most promising research areas. There are many indications that the long-predicted upswing of this discipline is finally in the making. In a recent survey, Nobel laureates predicted that computers, AI, and robotics will have the greatest influence in the next century. Already, "expert" systems are emerging and being applied; natural language understanding systems are being developed; and AI principles are used in robots, flexible automation, computer-aided design, etc. All this will have an as-yet unspecified social and economic impact on the activity of human beings, both at work and at leisure.
It certainly takes interdisciplinary and cross-cultural studies to enhance the understanding of this complex phenomenon. This is the aim of our endeavors in this field, which goes beyond our duty to pass useful knowledge on to our constituency. We think that IIASA, cooperating in this respect with the Austrian Society for Cybernetic Studies (ASCS), can develop some comparative advantage here.
This publication contains papers written by leading personalities, both East and West, in the field of artificial intelligence on the future and impact of this emerging discipline. We hope that the meeting, where the papers will be discussed, will not only identify important areas where the impact of artificial intelligence will be felt most directly, but also find the most rewarding issues for further research.
A methodology for evaluating intelligent tutoring systems
This dissertation proposes a generic methodology for evaluating intelligent tutoring systems (ITSs), and applies it to the evaluation of the SQL-Tutor, an ITS for the database language SQL. An examination of the historical development, theory and architecture of intelligent tutoring systems, as well as the theory, architecture and behaviour of the SQL-Tutor, sets the context for this study. The characteristics and criteria for evaluating computer-aided instruction (CAI) systems are considered as a background to an in-depth investigation of the characteristics and criteria appropriate for evaluating ITSs. These criteria are categorised along internal and external dimensions, with the internal dimension focusing on the intrinsic features and behavioural aspects of ITSs, and the external dimension focusing on their educational impact. Several issues surrounding the evaluation of ITSs, namely approaches, methods, techniques and principles, are examined and integrated within a framework for assessing the added value of ITS technology for instructional purposes.
Educational Studies, M. Sc. (Information Systems)
Low-Default Portfolio/One-Class Classification: A Literature Review
Consider a bank which wishes to decide whether a credit applicant will obtain credit or not. The bank has to assess whether the applicant will be able to redeem the credit. This is done by estimating the probability that the applicant will default prior to the maturity of the credit. To estimate this probability of default it is first necessary to identify criteria which separate the good from the bad creditors, such as loan amount, age, or factors concerning the income of the applicant. The question then arises of how a bank identifies a sufficient number of selective criteria that possess the necessary discriminatory power. As a solution, many traditional binary classification methods have been proposed with varying degrees of success. However, a particular problem with credit scoring is that defaults are only observed for a small subsample of applicants: an imbalance exists in the ratio of non-defaulters to defaulters. This has an adverse effect on the aforementioned binary classification methods. Recently, one-class classification approaches have been proposed to address the imbalance problem. The purpose of this literature review is threefold: (i) present the reader with an overview of credit scoring; (ii) review existing binary classification approaches; and (iii) introduce and examine one-class classification approaches.
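A minimal sketch of the one-class idea, using hypothetical applicant features: the model is fitted on non-defaulters only (here a simple per-feature Gaussian profile, not any specific method from the reviewed literature), and applicants far from that profile are flagged as potential defaulters. This sidesteps the class imbalance, since no defaulter examples are needed for fitting.

```python
import math

def fit_one_class(samples):
    """Fit a per-feature mean and standard deviation on the majority class only."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    stds = [math.sqrt(sum((s[d] - means[d]) ** 2 for s in samples) / n) or 1.0
            for d in range(dims)]
    return means, stds

def is_outlier(x, model, threshold=3.0):
    """Flag applicants far from the non-defaulter profile as potential defaulters."""
    means, stds = model
    z = max(abs(xi - m) / s for xi, m, s in zip(x, means, stds))
    return z > threshold

# Hypothetical features: (loan_amount_k, age, income_k) of known good creditors.
good = [(10, 35, 50), (12, 40, 55), (8, 30, 45), (11, 38, 52), (9, 33, 48)]
model = fit_one_class(good)
print(is_outlier((10, 36, 50), model))   # False: resembles the good-creditor profile
print(is_outlier((90, 22, 12), model))   # True: far from the learned profile
```

Real one-class classifiers from the literature (e.g. one-class SVMs or density estimators) replace the per-feature z-score with a richer boundary around the majority class, but the fit-on-one-class principle is the same.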