21 research outputs found
Combining numeric and symbolic learning techniques
Incremental learning from examples in a noisy domain is a difficult problem in Machine Learning. In this paper we divide the task into two subproblems and present a combination of numeric and symbolic approaches that yields robust learning of Boolean characterizations. Our method has been implemented in a computer program, and we plot its empirical learning performance in the presence of varying amounts of noise.
Learning salience among features through contingency in the CEL framework
Determining which features in an environment are salient given a task (salience assignment) is a central problem in Machine Learning. A related phenomenon, contingency (the conditions under which relative salience among environmental features is acquired), is central to learning and memory in animal psychology. This paper presents an analysis of a set of empirical data on contingency and an algorithm for the salience assignment problem. The algorithm presented is implemented in a working computer program which interacts with a simulated environment to produce contingent associative learning corresponding to relevant behavioral data. The model also makes specific empirical predictions that can be experimentally tested.
Contingency and latency in associative learning: computational, algorithmic and implementation analyses
Contingency (the learned relative salience of environmental features) and latency (the learned timing of response to stimuli) are central phenomena of learning and memory. This paper provides a computational analysis of, and algorithms for, a set of empirical data on contingency and latency in classical and instrumental conditioning. These analyses are presented within the framework of an information-processing architecture that describes a set of modules which operate in parallel and asynchronously to store, retrieve and modify experiential information. The architecture (called 'CEL', for 'Components of Experiential Learning') provides a way of making explicit the interactions among a number of otherwise separate algorithms for related phenomena. The modules comprising the architecture each emerge from the operation of an indexed network memory. The algorithms presented are also implemented in working computer programs that interact with a simulated environment to produce contingent associative learning and differential response latencies that correspond to the relevant behavioral data. The model makes a number of specific empirical predictions that can be experimentally tested.
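The CEL abstracts above do not spell out how contingency is computed, so the sketch below is only a rough illustration of the underlying notion, using the standard Delta-P measure from the animal-learning literature (the difference between the probability of an outcome when a cue is present and when it is absent); the trial data and function names are hypothetical, not taken from the papers.

# Illustrative sketch only: the CEL algorithms themselves are not reproduced here.
# Contingency is commonly summarized by Delta-P = P(outcome | cue present) - P(outcome | cue absent);
# a cue whose Delta-P is near zero carries no predictive information and should lose salience.

def delta_p(trials, cue, outcome="food"):
    """trials: list of (set_of_cues, outcome_or_None) pairs. Returns Delta-P for one cue."""
    with_cue = [o for cues, o in trials if cue in cues]
    without_cue = [o for cues, o in trials if cue not in cues]
    p_with = sum(o == outcome for o in with_cue) / max(len(with_cue), 1)
    p_without = sum(o == outcome for o in without_cue) / max(len(without_cue), 1)
    return p_with - p_without

# Hypothetical trials: a light that is always followed by food, and an uninformative tone.
trials = [
    ({"light", "tone"}, "food"),
    ({"light"}, "food"),
    ({"tone"}, None),
    (set(), None),
]
print(delta_p(trials, "light"))  # 1.0  -> high contingency, high relative salience
print(delta_p(trials, "tone"))   # 0.0  -> no contingency, salience should fade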
Concept acquisition through representational adjustment
Though the experiences of life exhibit unceasing variety, people are able to find constancy and deal with their world in a regular and predictable manner. This thesis promotes the hypothesis that the necessary abstractions can be learned. The specific task studied is inducing a concept description from examples. A model is presented that relies on a weighted, symbolic description of concepts. Though the description is distributed, novel examples are classified holistically by combining each portion's contribution. Each new example also refines the concept description: internal weights are updated and new symbolic structures are introduced. These actions improve description quality as measured by classification accuracy over novel examples. Initially the concept description is highly distributed, being composed of many simple components. As learning progresses, sophisticated descriptive structures are added, and eventually the description is coalesced into a few highly predictive components. This qualitative change in the representation of the concept is a unique feature of the model. The model extends previous work by allowing for noisy examples, unknown values, and concept change over time. To bolster claims of robustness, several experiments illustrating the model's behavior are reported. Key results illustrate that the model should scale up to larger tasks than those studied and have a number of potential applications.
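The abstract describes the mechanism only in outline, so the following is a minimal sketch, under assumed details, of the general idea of a distributed, weighted symbolic concept description: each component is a small conjunction of attribute-value pairs, a novel example is classified by summing the weights of the components it matches, and each labeled example nudges those weights. The attributes, learning rate, and update rule below are illustrative, not the thesis's own.

# Sketch only: the thesis's actual update and coalescing rules are not given in the abstract.

def matches(component, example):
    """A component is a conjunction of (attribute, value) pairs."""
    return all(example.get(a) == v for a, v in component)

def classify(description, example, threshold=0.0):
    """Classify holistically by summing the weights of matching components."""
    score = sum(w for comp, w in description if matches(comp, example))
    return score > threshold

def update(description, example, label, rate=0.2):
    """Nudge the weights of matching components toward the observed label."""
    sign = 1.0 if label else -1.0
    return [(comp, w + rate * sign if matches(comp, example) else w)
            for comp, w in description]

# Start highly distributed: many simple one-attribute components of equal weight.
description = [
    ((("color", "red"),), 0.1),
    ((("color", "blue"),), 0.1),
    ((("shape", "square"),), 0.1),
    ((("shape", "round"),), 0.1),
]
for example, label in [({"color": "red", "shape": "square"}, True),
                       ({"color": "blue", "shape": "round"}, False)]:
    description = update(description, example, label)
print(classify(description, {"color": "red", "shape": "square"}))  # True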
A note on correlational measures
Determining the degree to which two events are interrelated is a common subtask for artificial intelligence systems, especially learning systems. This note examines four correlational measures that allow quantification of the relationships between events. Although the measures have diverse motivations and formulations, they all indicate irrelevance precisely at the point of statistical independence.
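The abstract does not list which four measures the note examines, so the sketch below only checks the stated property for two common correlational measures over a 2x2 contingency table, the phi coefficient and mutual information: both come out zero exactly when the two events are statistically independent.

# Illustration of the independence claim with two standard measures (not necessarily the note's four).
import math

def phi(n11, n10, n01, n00):
    """Phi coefficient for a 2x2 contingency table of event counts."""
    num = n11 * n00 - n10 * n01
    den = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den

def mutual_information(n11, n10, n01, n00):
    """Mutual information (natural log) between the two events, from the same counts."""
    n = n11 + n10 + n01 + n00
    mi = 0.0
    for na, nb, nab in [(n11 + n10, n11 + n01, n11), (n11 + n10, n10 + n00, n10),
                        (n01 + n00, n11 + n01, n01), (n01 + n00, n10 + n00, n00)]:
        if nab:
            mi += (nab / n) * math.log((nab * n) / (na * nb))
    return mi

# Statistically independent events: both measures indicate irrelevance (zero).
print(phi(25, 25, 25, 25), mutual_information(25, 25, 25, 25))   # 0.0 0.0
# Perfectly associated events: both measures move away from zero.
print(phi(50, 0, 0, 50), mutual_information(50, 0, 0, 50))       # 1.0 ~0.69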
Using Learned Dependencies to Automatically Construct Sufficient and Sensible Editing Views
Databases sometimes have keys besides those pre-planned by the database designers. These are easy to discover given functional dependencies in the data. These superfluous keys are convenient if a user wishes to add data to a projection of the database. A key can be chosen that minimizes the attributes the user must edit. In a list format view, enough attribute columns are added to those specified by the user to ensure that a key is present. In a form view, enough extra text boxes are added. In this latter view, functional dependencies may also be used to visualize the dependencies between attributes by placing independent attributes above dependent ones. This paper briefly reviews an algorithm for inducing functional dependencies, and then it demonstrates methods for finding keys, constructing list views, and laying out form views.
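The paper's own induction and view-construction algorithms are not reproduced here; as a rough sketch of the key-finding step it describes, the textbook attribute-closure construction can shrink the full attribute set of a view down to a minimal key under a given set of functional dependencies. The schema and dependency names below are hypothetical.

# Sketch of closure-based key search only, not the paper's algorithms.

def closure(attrs, fds):
    """Attributes determined by `attrs` under functional dependencies
    given as (lhs_frozenset, rhs_attribute) pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and rhs not in result:
                result.add(rhs)
                changed = True
    return result

def minimal_key(attributes, fds):
    """Greedily drop attributes whose removal still leaves a key."""
    key = set(attributes)
    for a in sorted(attributes):
        if closure(key - {a}, fds) >= set(attributes):
            key.remove(a)
    return key

# Hypothetical schema: employee determines office, office determines phone.
attributes = {"employee", "office", "phone"}
fds = [(frozenset({"employee"}), "office"), (frozenset({"office"}), "phone")]
print(minimal_key(attributes, fds))   # {'employee'}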
Learning from indifferent agents
In many situations, learners have the opportunity to observe other agents solving problems similar to their own. While not as favorable as learning from fully explained solutions, this has advantages over solving problems in isolation. In this paper we describe the general situation of learning from indifferent agents and outline a preliminary theory of how it may afford improved performance. Because one of our long-term goals is to improve educational methods, we identify a domain that allows us to observe humans learning from indifferent agents, and we summarize verbal protocol evidence indicating when and how humans learn.