ON LEARNING DECISION TREES WITH LARGE OUTPUT DOMAINS
For two disjoint sets of variables, X and Y, and a class
of functions C, we define DT(X,Y,C) to be the class of all
decision trees over X whose leaves are functions from C over
Y. We study the learnability of DT(X,Y,C) using membership and
equivalence queries.
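As a concrete illustration of the class DT(X,Y,C), a minimal sketch follows; the class names and evaluation routine are ours, not the paper's. Internal nodes test X-variables, and each leaf holds a function from C over the Y-variables:

    # Illustrative sketch of a tree in DT(X, Y, C); names are ours.
    class Leaf:
        def __init__(self, concept):
            self.concept = concept          # a function over the Y-variables

        def evaluate(self, x, y):
            return self.concept(y)          # a leaf's value depends only on Y

    class Node:
        def __init__(self, var, left, right):
            self.var = var                  # index of the X-variable tested here
            self.left = left                # subtree for x[var] == 0
            self.right = right              # subtree for x[var] == 1

        def evaluate(self, x, y):
            child = self.right if x[self.var] else self.left
            return child.evaluate(x, y)

    # Example: a tree over X = (x0, x1) whose leaves are small formulas over Y.
    tree = Node(0,
                Leaf(lambda y: y[0] and y[1]),
                Node(1, Leaf(lambda y: y[0]), Leaf(lambda y: not y[1])))
    print(tree.evaluate(x=(1, 0), y=(1, 0)))   # x0=1, x1=0 reaches leaf y[0] -> 1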
Boolean decision trees, DT(X,∅,{0,1}), were shown to be exactly learnable in [Bs93], but does this imply the learnability of decision trees that have non-Boolean leaves? A simple encoding of all possible leaf values works provided that the size of C is reasonable. Our investigation involves several cases where such a simple encoding is not feasible, i.e., when |C| is large.
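The abstract does not spell out the encoding; one natural reading, and it is only our assumption, is a bit-wise decomposition of the leaf values:

    Let $k = \lceil \log_2 |C| \rceil$ and fix an encoding $e \colon C \to \{0,1\}^k$.
    A tree $T \in DT(X, \emptyset, C)$ splits into $k$ Boolean trees
    $T_1, \dots, T_k \in DT(X, \emptyset, \{0,1\})$ with $T_i(x) = e(T(x))_i$;
    each $T_i$ shares the branching structure of $T$, so each is exactly
    learnable by the algorithm of [Bs93], and $T(x)$ is recovered by decoding
    the $k$ learned bits. Constructing and inverting $e$ requires handling
    all of $C$ explicitly, which is what becomes infeasible when $|C|$ is large.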
We show how to learn decision trees whose leaves are learnable concepts
belonging to a class C, DT(X,Y,C), when the separation between the
variables X and Y is known. A simple algorithm for decision
trees whose leaves are constants, DT(X,∅,C), is also
presented.
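The abstract states these results without their mechanics; purely as an illustration of why a known separation between X and Y helps (this is our assumption, not the paper's algorithm), fixing an assignment to the X-variables turns a membership oracle for the whole tree into a membership oracle for a single leaf concept, so a query learner for C can be run leaf by leaf:

    def leaf_oracle(tree_mq, a):
        """tree_mq answers membership queries (x, y) -> value for some
        T in DT(X, Y, C); fixing the X-assignment a yields a membership
        oracle for the one leaf concept that a reaches in T."""
        return lambda y: tree_mq(a, y)

    # With the toy tree above: leaf_oracle(tree.evaluate, (1, 0))((1, 1)) -> 1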
Each case above requires at least s separate executions of the
algorithm from [Bs93], where s is the number of distinct leaves of
the tree, but we show that if C is a bounded lattice,
DT(X,∅,C) is learnable using only one execution of this
algorithm.
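For reference, the bounded-lattice condition on C means the following:

    A bounded lattice is a partially ordered set $(C, \le)$ in which every
    pair $a, b \in C$ has a least upper bound $a \vee b$ (join) and a
    greatest lower bound $a \wedge b$ (meet), together with elements
    $0, 1 \in C$ satisfying $0 \le c \le 1$ for all $c \in C$. The Boolean
    case $\{0,1\}$, with OR as join and AND as meet, is the smallest example.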