Comparing instance-averaging with instance-saving learning algorithms
The goal of our research is to understand the power and appropriateness of instance-based representations and their associated acquisition methods. This paper concerns two methods for reducing storage requirements for instance-based learning algorithms. The first method, termed instance-saving, represents concept descriptions by selecting and storing a representative subset of the given training instances. We provide an analysis for instance-saving techniques and specify one general class of concepts that instance-saving algorithms are capable of learning. The second method, termed instance-averaging, represents concept descriptions by averaging together some training instances while simply saving others. We describe why analyses for instance-averaging algorithms are difficult to produce. Our empirical results indicate that storage requirements for these two methods are roughly equivalent. We outline the assumptions of instance-averaging algorithms and describe how their violation might degrade performance. To mitigate the effects of non-convex concepts, a dynamic thresholding technique is introduced and applied in both the averaging and non-averaging learning algorithms. Thresholding increases the storage requirements but also increases the quality of the resulting concept descriptions.
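The contrast between the two storage-reduction schemes can be sketched with a toy 1-nearest-neighbour learner. This is an illustrative assumption, not the paper's exact algorithms: in "saving" mode only misclassified instances are stored (an IB2-style scheme), while in "averaging" mode a correctly classified instance is merged into its nearest same-class prototype by a running mean.

```python
import math

def nearest(protos, x):
    """Index of the stored prototype closest to x (Euclidean distance)."""
    return min(range(len(protos)),
               key=lambda i: math.dist(protos[i][0], x))

def train(data, averaging=False):
    """data: list of (features, label) pairs. Returns stored prototypes.
    averaging=True merges each correctly classified instance into its
    nearest prototype via a running mean; otherwise only misclassified
    instances are saved."""
    protos = []  # each entry: [feature vector, label, count of merged instances]
    for x, y in data:
        if not protos:
            protos.append([list(x), y, 1])
            continue
        i = nearest(protos, x)
        px, py, n = protos[i]
        if py == y and averaging:
            # pull the prototype toward x with an incremental mean
            protos[i][0] = [(p * n + xi) / (n + 1) for p, xi in zip(px, x)]
            protos[i][2] = n + 1
        elif py != y:
            # misclassified: save the instance itself
            protos.append([list(x), y, 1])
    return protos

def classify(protos, x):
    return protos[nearest(protos, x)][1]
```

With averaging enabled, each class tends to collapse into a few centroids, which is why the storage requirements of the two schemes can come out roughly equivalent, and why non-convex concepts (whose centroids fall outside the class region) cause trouble for the averaging variant.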
Characterization of image sets: the Galois Lattice approach
This paper presents a new method for supervised image classification. One or several landmarks are attached to each class, with the intention of characterizing it and discriminating it from the other classes. The different features, deduced from image primitives, and their relationships with the sets of images are structured and organized into a hierarchy thanks to an original method relying on a mathematical formalism called Galois (or Concept) Lattices. Such lattices allow us to select features as landmarks of specific classes. This paper details the feature selection process and illustrates it through a robotic example in a structured environment. The class of any image is the room from which the image is shot by the robot camera. In the discussion, we compare this approach with decision trees and we outline some issues for future research.
Modal Similarity
Just as Boolean rules define Boolean categories, the Boolean operators define higher-order Boolean categories referred to as modal categories. We examine the similarity order between these categories and the standard category of logical identity (i.e. the modal category defined by the biconditional or equivalence operator). Our goal is four-fold: first, to introduce a similarity measure for determining this similarity order; second, to show that such a measure is a good predictor of the similarity assessment behaviour observed in our experiment involving key modal categories; third, to argue that as far as the modal categories are concerned, configural similarity assessment may be componential or analytical in nature; and lastly, to draw attention to the intimate interplay that may exist between deductive judgments, similarity assessment and categorisation.
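One way such a similarity order could be computed, offered purely as an illustration and not as the measure the paper proposes, is truth-table overlap: the fraction of input assignments on which a given operator agrees with the biconditional.

```python
from itertools import product

# Truth tables for some binary Boolean operators over inputs (p, q).
OPS = {
    "and": lambda p, q: p and q,
    "or":  lambda p, q: p or q,
    "xor": lambda p, q: p != q,
    "iff": lambda p, q: p == q,  # biconditional: the category of logical identity
}

def table(op):
    """Truth table of an operator over the four assignments of (p, q)."""
    return tuple(OPS[op](p, q) for p, q in product([False, True], repeat=2))

def similarity(op1, op2):
    """Fraction of assignments on which two operators agree.
    An illustrative overlap measure, not the paper's proposal."""
    t1, t2 = table(op1), table(op2)
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)
```

Under this measure, conjunction sits closer to the biconditional than exclusive-or does, since XOR disagrees with IFF on every assignment.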
Detecting and removing noisy instances from concept descriptions
Several published results show that instance-based learning algorithms record high classification accuracies and low storage requirements when applied to supervised learning tasks. However, these learning algorithms are highly sensitive to training set noise. This paper describes a simple extension of instance-based learning algorithms for detecting and removing noisy instances from concept descriptions. The extension requires evidence that saved instances be significantly good classifiers before it allows them to be used for subsequent classification tasks. We show that this extension's performance degrades more slowly in the presence of noise, improves classification accuracies, and further reduces storage requirements in several artificial and real-world databases.
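The extension keeps a saved instance only while its classification record is significantly good. A simpler, classic noise filter in the same spirit is Wilson editing, which drops any instance whose label disagrees with the majority of its k nearest neighbours; the sketch below uses that stand-in, not the paper's significance test.

```python
import math

def wilson_edit(instances, k=3):
    """Keep only instances whose label agrees with the majority vote of
    their k nearest neighbours (Euclidean distance). A crude stand-in
    for a statistically grounded 'significantly good classifier' test.
    instances: list of (features, label) pairs."""
    kept = []
    for i, (x, y) in enumerate(instances):
        # distances from x to every other instance
        others = [(math.dist(x, x2), y2)
                  for j, (x2, y2) in enumerate(instances) if j != i]
        others.sort(key=lambda d: d[0])
        votes = [label for _, label in others[:k]]
        if votes.count(y) * 2 > len(votes):  # strict majority agrees
            kept.append((x, y))
    return kept
```

An isolated mislabelled point inside another class's region fails the majority vote and is discarded, which is how such filters slow the accuracy degradation caused by training-set noise.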
The influence of prior knowledge on concept acquisition: experimental and computational results
The influence of the prior causal knowledge of subjects on the rate of learning, the categories formed, and the attributes attended to during learning is explored. Conjunctive concepts are thought to be easier for subjects to learn than disjunctive concepts. Conditions are reported under which the opposite occurs. In particular, it is demonstrated that prior knowledge can influence the rate of concept learning and that the influence of prior causal knowledge can dominate the influence of the logical form. A computational model of this learning task is presented. In order to represent the prior knowledge of the subjects, an extension to explanation-based learning is developed to deal with imprecise domain knowledge.
Discovering qualitative empirical laws
In this paper we describe GLAUBER, an AI system that models the scientific discovery of qualitative empirical laws. We have tested the system on data from the history of early chemistry, and it has rediscovered such concepts as acids, alkalis, and salts, as well as laws relating these concepts. After discussing GLAUBER we examine the program's relation to other discovery systems, particularly methods for conceptual clustering and language acquisition.
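GLAUBER's class formation can be caricatured as grouping substances that play identical roles in a body of qualitative facts (the real system also generalises laws over the discovered classes, e.g. "acids react with alkalis"). The facts, predicates, and substance names below are illustrative assumptions, not GLAUBER's actual data or algorithm.

```python
# Toy qualitative facts: (predicate, substance1, substance2).
facts = [
    ("reacts", "HCl", "NaOH"),
    ("reacts", "HCl", "KOH"),
    ("reacts", "HNO3", "NaOH"),
    ("reacts", "HNO3", "KOH"),
    ("tastes_sour", "HCl", None),
    ("tastes_sour", "HNO3", None),
    ("tastes_bitter", "NaOH", None),
    ("tastes_bitter", "KOH", None),
]

def cluster_by_role(facts):
    """Group substances occurring in identical (predicate, position,
    partner) contexts: a crude analogue of class formation in a
    GLAUBER-style discovery system."""
    contexts = {}
    for pred, a, b in facts:
        if a is not None:
            contexts.setdefault(a, set()).add((pred, 0, b))
        if b is not None:
            contexts.setdefault(b, set()).add((pred, 1, a))
    classes = {}
    for subst, ctx in contexts.items():
        classes.setdefault(frozenset(ctx), []).append(subst)
    return [sorted(cls) for cls in classes.values()]
```

On these facts, the sour substances that react with both hydroxides fall into one class (the "acids") and the bitter ones into another (the "alkalis"), mirroring the kind of rediscovery described in the abstract.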