
Learning multivalued multithreshold functions

By Martin Anthony


This paper concerns multivalued multithreshold functions, {0, 1, ..., k}-valued functions on R^n that may be considered as generalizations of (linear) threshold functions, or as discretized versions of artificial neurons. Such functions have arisen in the context of multiple-valued logic and artificial neural networks. For any fixed k, we present two procedures which, given a set of points labelled with the values of some (unknown) multivalued multithreshold function, will produce such a function that achieves the same classifications on the points. (That is, we present ‘consistent hypothesis finders’.) One of these is based on linear programming, and the other on an ‘incremental’ procedure suggested by Obradović and Parberry. In standard probabilistic models of learning, it is natural to ask how many labelled data points should be used as the basis for valid inference about the function that is labelling the data. We investigate this question for the class of multivalued multithreshold functions. Finally, we examine multithreshold functions, a class of {0, 1}-valued functions related to the multivalued multithreshold functions. We give a simple description of an algorithm based on a procedure suggested by Takiyama, and we raise some open questions on the effectiveness of this algorithm and, more generally, on the complexity of finding consistent hypotheses for samples of multithreshold functions.
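The kind of object the abstract describes can be sketched in a few lines. The parameterisation below, a single weight vector w together with k ordered thresholds, with the output counting how many thresholds the activation w·x meets, is one common way to formalise such discretized neurons; it is an illustrative assumption, not necessarily the paper's exact definition.

```python
def multithreshold(w, thresholds, x):
    """A sketch of a multivalued multithreshold function on R^n.

    Returns a value in {0, 1, ..., k}: the number of thresholds t
    (assumed sorted here for safety) with w . x >= t.
    """
    activation = sum(wi * xi for wi, xi in zip(w, x))
    return sum(1 for t in sorted(thresholds) if activation >= t)

# Example with n = 2 inputs and k = 2 thresholds, so outputs lie in {0, 1, 2}:
w = [1.0, 1.0]
thresholds = [0.5, 1.5]
print(multithreshold(w, thresholds, [0.0, 0.0]))  # activation 0.0 -> 0
print(multithreshold(w, thresholds, [1.0, 0.0]))  # activation 1.0 -> 1
print(multithreshold(w, thresholds, [1.0, 1.0]))  # activation 2.0 -> 2
```

With k = 1 this reduces to an ordinary linear threshold function, which is the sense in which the class generalizes threshold functions. A consistent hypothesis finder, as in the paper, would search for a (w, thresholds) pair reproducing the labels of a given sample, e.g. via linear programming over the weights and thresholds.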

Topics: QA Mathematics
Publisher: Centre for Discrete and Applicable Mathematics, London School of Economics and Political Science
Year: 2003
Provided by: LSE Research Online