An Approach to Incremental Learning Good Classification Tests
An algorithm for the incremental mining of implicative logical rules is proposed. This algorithm is based on constructing good classification tests. The incremental approach to constructing these rules reveals the interdependence between two fundamental components of human thinking: pattern recognition and knowledge acquisition.
Intelligent Marketing through Adaptive Product Presentation on the Web
As its adoption grows, the World Wide Web is developing into a service dominated by product and marketing information, in which multimedia design elements are of increasing interest. Since these presentations are not universally welcomed, owing to their high data volumes and the customers' varying hardware and software environments, it is desirable to adapt the presentation of products to the individual interests and preferences of each customer. This paper presents the TELLIM system, which observes the customer during interactions with the multimedia presentation elements and then tailors the product presentations to the individual customer at runtime, using an incremental learning method.
Machine Learning: The Necessity of Order (is order in order ?)
In a myriad of human-tailored activities, whether in the classroom or listening to a story, human learners receive selected pieces of information, presented in a chosen order and at a chosen pace. This is what it takes to facilitate learning. Yet, when machine learners exhibited sequencing effects, showing that some ways of sampling, ordering, and pacing the data are better than others, it almost came as a surprise. Seemingly simple questions suddenly had to be thought anew: what are good training data? How should they be selected? How should they be presented? Why are there sequencing effects? How can they be measured? Should we try to avoid them or take advantage of them? This chapter presents ideas and directions of research currently studied in the machine learning field to answer these questions and others. As in any other science, machine learning strives to develop models that stress fundamental aspects of the phenomenon under study. The basic concepts and models developed in machine learning are presented here, as well as some of the findings that may have significance and counterparts in related disciplines interested in learning and education.
Iterative Optimization and Simplification of Hierarchical Clusterings
Clustering is often used for discovering structure in data. Clustering systems differ in the objective function used to evaluate clustering quality and the control strategy used to search the space of clusterings. Ideally, the search strategy should consistently construct clusterings of high quality, but be computationally inexpensive as well. In general, we cannot have it both ways, but we can partition the search so that a system inexpensively constructs a 'tentative' clustering for initial examination, followed by iterative optimization, which continues to search in the background for improved clusterings. Given this motivation, we evaluate an inexpensive strategy for creating initial clusterings, coupled with several control strategies for iterative optimization, each of which repeatedly modifies an initial clustering in search of a better one. One of these methods appears novel as an iterative optimization strategy in clustering contexts. Once a clustering has been constructed, it is judged by analysts, often according to task-specific criteria. Several authors have abstracted these criteria and posited a generic performance task akin to pattern completion, where the error rate over completed patterns is used to 'externally' judge clustering utility. Given this performance task, we adapt resampling-based pruning strategies used by supervised learning systems to the task of simplifying hierarchical clusterings, thus promising to ease post-clustering analysis. Finally, we propose a number of objective functions, based on attribute-selection measures for decision-tree induction, that might perform well on the error-rate and simplicity dimensions.
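The two-phase search this abstract describes, a cheap initial clustering followed by iterative optimization that keeps searching for improvements, can be sketched as follows. The objective function (within-cluster sum of squared distances over 1-D points), the round-robin initialization, and all names below are illustrative assumptions for this sketch, not the paper's actual algorithms.

```python
# Illustrative sketch only: a cheap 'tentative' clustering refined by
# hill-climbing iterative optimization. Objective and initialization are
# assumptions, not the system evaluated in the abstract.

def sse(clusters):
    """Within-cluster sum of squared distances to each centroid (1-D points)."""
    total = 0.0
    for pts in clusters:
        if not pts:
            continue
        centroid = sum(pts) / len(pts)
        total += sum((p - centroid) ** 2 for p in pts)
    return total

def initial_clustering(points, k):
    """Inexpensive tentative clustering: round-robin assignment."""
    clusters = [[] for _ in range(k)]
    for i, p in enumerate(points):
        clusters[i % k].append(p)
    return clusters

def iterative_optimize(clusters, passes=10):
    """Repeatedly move single points between clusters while the objective improves."""
    for _ in range(passes):
        improved = False
        for src in clusters:
            for p in list(src):           # snapshot: src mutates during the loop
                current = sse(clusters)
                for dst in clusters:
                    if dst is src:
                        continue
                    src.remove(p)
                    dst.append(p)
                    if sse(clusters) < current:
                        improved = True
                        break             # keep the improving move
                    dst.remove(p)         # revert the non-improving move
                    src.append(p)
        if not improved:
            break                         # local optimum reached
    return clusters
```

Run on a toy 1-D dataset, the round-robin start mixes the two natural groups, and the hill-climbing pass untangles them.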
Implementation of decision trees for embedded systems
This research work develops real-time incremental learning decision tree solutions suitable for real-time embedded systems by virtue of having both a defined memory requirement and an upper bound on the computation time per training vector. In addition, the work provides embedded systems with the capabilities of rapid processing and training of streamed data problems, and adopts electronic hardware solutions to improve the performance of the developed algorithm.
Two novel decision tree approaches, namely the Multi-Dimensional Frequency Table (MDFT) and the Hashed Frequency Table Decision Tree (HFTDT), represent the core of this research work. Both methods successfully incorporate a frequency table technique to produce a complete decision tree.
The MDFT and HFTDT learning methods were designed with the ability to generate application specific code for both training and classification purposes according to the requirements of the targeted application. The MDFT allows the memory architecture to be specified statically before learning takes place within a deterministic execution time.
The HFTDT method is a development of the MDFT in which a reduction in memory requirements is achieved within a deterministic execution time. The HFTDT achieved low memory usage compared to existing decision tree methods, and hardware acceleration improved performance by up to 10 times in terms of execution time.
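As a rough illustration of the frequency-table idea behind such methods, one might maintain per-attribute (value, class) counts with an O(number of attributes) update per training vector, and derive a split by information gain from those counts alone. The simplified one-level "stump" below is an assumption for illustration only; it is not the published MDFT or HFTDT algorithm.

```python
# Hypothetical sketch: incremental learning from a frequency table with a
# bounded per-vector update cost. Not the actual MDFT/HFTDT algorithms.
from collections import defaultdict
from math import log2

class FrequencyTableStump:
    def __init__(self, n_attrs):
        # counts[a][(value, cls)] -> frequency of attribute a taking `value`
        # among training vectors of class `cls`.
        self.counts = [defaultdict(int) for _ in range(n_attrs)]
        self.class_counts = defaultdict(int)

    def update(self, x, cls):
        """O(n_attrs) incremental update for one training vector."""
        self.class_counts[cls] += 1
        for a, v in enumerate(x):
            self.counts[a][(v, cls)] += 1

    def _entropy(self, dist):
        total = sum(dist.values())
        return -sum(c / total * log2(c / total) for c in dist.values() if c)

    def best_attribute(self):
        """Attribute whose split gives the lowest weighted class entropy."""
        n = sum(self.class_counts.values())
        best_a, best_h = 0, float("inf")
        for a, table in enumerate(self.counts):
            by_value = defaultdict(lambda: defaultdict(int))
            for (v, cls), c in table.items():
                by_value[v][cls] += c
            h = sum(sum(d.values()) / n * self._entropy(d)
                    for d in by_value.values())
            if h < best_h:
                best_a, best_h = a, h
        return best_a

    def classify(self, x):
        a = self.best_attribute()
        dist = {cls: c for (v, cls), c in self.counts[a].items() if v == x[a]}
        if not dist:                      # unseen value: fall back to majority
            return max(self.class_counts, key=self.class_counts.get)
        return max(dist, key=dist.get)
```

On a toy stream where attribute 0 perfectly predicts the class and attribute 1 is noise, the table-derived split selects attribute 0 and classifies accordingly.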