Learning multivalued multithreshold functions

Abstract

This paper concerns multivalued multithreshold functions, {0, 1, . . . , k}-valued functions on R^n that may be considered as generalizations of (linear) threshold functions, or as discretized versions of artificial neurons. Such functions have arisen in the context of multiple-valued logic and artificial neural networks. For any fixed k, we present two procedures which, given a set of points labelled with the values of some (unknown) multivalued multithreshold function, will produce such a function that achieves the same classifications on the points. (That is, we present ‘consistent hypothesis finders’.) One of these is based on linear programming, and the other on an ‘incremental’ procedure suggested by Obradović and Parberry. In standard probabilistic models of learning, it is natural to ask for some information about how many labelled data points should be used as the basis for valid inference about the function that is labelling the data. We investigate this question for the class of multivalued multithreshold functions. Finally, we examine multithreshold functions, a class of {0, 1}-valued functions related to the multivalued multithreshold functions. We give a simple description of an algorithm based on a procedure suggested by Takiyama, and we raise some open questions on the effectiveness of this algorithm, and, generally, on the complexity of finding consistent hypotheses for samples of multithreshold functions.
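As an illustration of the linear-programming route to a consistent hypothesis mentioned in the abstract, the sketch below poses consistency as an LP feasibility problem. It assumes one common parametrisation (not taken from the paper): a weight vector w and ordered thresholds t_1 <= ... <= t_k, with f(x) = |{ i : w·x >= t_i }|. The function name, the margin-of-1 device used in place of strict inequalities, and the use of SciPy's linprog are all illustrative choices, not the authors' construction.

```python
# Minimal sketch of a consistent-hypothesis finder via linear programming,
# under the ASSUMED parametrisation f(x) = |{ i : w.x >= t_i }| with
# t_1 <= ... <= t_k.  Strict inequalities are replaced by a margin of 1,
# which is harmless up to rescaling of (w, t).
import numpy as np
from scipy.optimize import linprog

def consistent_hypothesis(X, y, k):
    """Return (w, t) consistent with the labelled sample, or None if the
    LP is infeasible.  X has shape (m, n); y holds labels in {0, ..., k}."""
    m, n = X.shape
    nvar = n + k                      # variables: w (n entries), then t_1..t_k
    A, b = [], []

    for xj, yj in zip(X, y):
        if yj >= 1:                   # need w.x_j >= t_{y_j}
            row = np.zeros(nvar)
            row[:n] = -xj
            row[n + yj - 1] = 1.0
            A.append(row); b.append(0.0)
        if yj <= k - 1:               # need w.x_j < t_{y_j + 1}; enforce margin 1
            row = np.zeros(nvar)
            row[:n] = xj
            row[n + yj] = -1.0
            A.append(row); b.append(-1.0)

    for i in range(k - 1):            # threshold ordering: t_i <= t_{i+1}
        row = np.zeros(nvar)
        row[n + i] = 1.0
        row[n + i + 1] = -1.0
        A.append(row); b.append(0.0)

    # Feasibility problem: zero objective, unbounded variables.
    res = linprog(c=np.zeros(nvar), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * nvar, method="highs")
    if not res.success:
        return None
    return res.x[:n], res.x[n:]
```

On a consistent sample this returns some (w, t) realising the same classifications as the target; the incremental procedure attributed to Obradović and Parberry would instead adjust (w, t) point by point, in the spirit of perceptron-style updates.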

This paper was published in LSE Research Online.
