N-learners problem: Fusion of concepts
We are given N learners, each capable of learning concepts (subsets) of a domain set X in the sense of Valiant, i.e., for any c ∈ C ⊆ 2^X, given a finite set of examples of the form ⟨x_1, c(x_1)⟩, ⟨x_2, c(x_2)⟩, …, ⟨x_m, c(x_m)⟩ generated according to an unknown probability distribution P_X on X, each learner produces a close approximation to c with high probability. We are interested in combining the N learners using a single fuser or consolidator. We consider the paradigm of passive fusion, where each learner is first trained with the sample, without the influence of the consolidator. The composite system is constituted by the fuser and the individual learners. We consider two cases: open and closed fusion. In open fusion the fuser is given the sample and the hypotheses of the individual learners; we show that the fusion rule can be obtained by formulating this problem as another learning problem. For the case where all individual learners are trained with the same sample, we give sufficient conditions that ensure the composite system is better than the best of the individual learners: the hypothesis space of the consolidator (a) satisfies the isolation property of degree at least N, and (b) has Vapnik-Chervonenkis (VC) dimension less than or equal to that of every individual learner. If the individual learners are trained on independently generated samples, we obtain a much weaker bound on the VC dimension of the hypothesis space of the fuser. In closed fusion the fuser has access to neither the training sample nor the hypotheses of the individual learners. By suitably designing a linear threshold function of the outputs of the individual learners, we show that the composite system can be made better than the best of the individual learners.
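As a point of reference, "close approximation with high probability" is Valiant's probably-approximately-correct criterion; in its standard textbook form (stated here as background, not quoted from the paper), for any ε, δ > 0 and a large enough sample, the learner's hypothesis h must satisfy

```latex
\Pr\bigl[\, P_X(h \,\triangle\, c) \le \varepsilon \,\bigr] \;\ge\; 1 - \delta ,
```

where h △ c denotes the symmetric difference between the hypothesis and the target concept, so that P_X(h △ c) is the error of h under the sampling distribution.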
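The closed-fusion construction lends itself to a small sketch. The Python snippet below is hypothetical: the log-odds weighting is the classic weighted-majority heuristic and stands in for whatever weighting the paper actually derives. It shows the essential shape of a closed fuser, which sees only the N learners' 0/1 outputs and combines them through a linear threshold function:

```python
import numpy as np

def fuse(outputs, weights, threshold):
    """Linear threshold fusion: predict 1 iff the weighted vote clears the threshold."""
    return int(np.dot(weights, outputs) >= threshold)

# Example: N = 3 learners with (assumed, known) error estimates.
errors = np.array([0.10, 0.25, 0.30])

# Log-odds weights, as in weighted majority voting (an illustrative choice,
# not necessarily the weighting constructed in the paper).
weights = np.log((1 - errors) / errors)
threshold = weights.sum() / 2  # a majority of the total weight

outputs = np.array([1, 0, 1])        # each learner's 0/1 label for a query point x
print(fuse(outputs, weights, threshold))  # -> 1
```

Because the fuser touches neither the sample nor the hypotheses, a rule of this shape is essentially all that closed fusion permits; the result summarized above is that the weights and threshold can be chosen so that the composite system beats the best individual learner.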