
Learning Noisy Linear Threshold Functions

By Tom Bylander


This paper describes and analyzes algorithms for learning linear threshold functions (LTFs) in the presence of classification noise and monotonic noise. Under classification noise, each randomly drawn example is mislabeled (i.e., its label differs from the target LTF's) with the same fixed probability. Under monotonic noise, the probability of mislabeling an example decreases monotonically with the example's separation from the target LTF hyperplane. Monotonic noise generalizes classification noise, as well as the cases of independent binary features (i.e., naive Bayes) and normal distributions with equal covariance matrices. It provides a more realistic noise model because it allows confidence to increase with distance from the threshold without imposing any artificial form on that function. This paper shows that LTFs are polynomially PAC-learnable in the presence of classification noise and monotonic noise if the separation between examples ..
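The two noise models in the abstract can be illustrated with a small sketch. This is not the paper's algorithm; the target weights, threshold, and the particular monotone decay used for the monotonic noise rate are illustrative assumptions. The only properties taken from the abstract are that classification noise flips every label with the same probability, while monotonic noise flips labels with a probability that decreases monotonically with the example's distance from the target hyperplane.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target LTF: label = sign(w . x - threshold).
# These parameter values are made up for illustration.
w = np.array([1.0, -2.0])
threshold = 0.5

def ltf_label(x):
    """True label of x under the target linear threshold function."""
    return 1 if x @ w >= threshold else -1

def classification_noise(x, eta=0.2):
    """Classification noise: every example's label is flipped
    with the same fixed probability eta."""
    y = ltf_label(x)
    return -y if rng.random() < eta else y

def monotonic_noise(x, eta_max=0.45, scale=1.0):
    """Monotonic noise: the flip probability decreases monotonically
    with the example's separation (distance) from the hyperplane.
    The exponential decay here is one arbitrary monotone choice."""
    margin = abs(x @ w - threshold) / np.linalg.norm(w)
    eta = eta_max * np.exp(-margin / scale)
    y = ltf_label(x)
    return -y if rng.random() < eta else y

# Examples near the hyperplane are mislabeled often under monotonic
# noise; examples far from it are almost never mislabeled.
for x in rng.normal(size=(5, 2)):
    print(x, ltf_label(x), classification_noise(x), monotonic_noise(x))
```

Under classification noise the flip rate is the same everywhere, so confidence in a label cannot depend on where the example lies; under monotonic noise, labels far from the threshold are trustworthy, which is what makes it the more realistic model the abstract argues for.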

Year: 1998
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
