28 research outputs found

    Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is the diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that must be addressed are the ambiguity of the observations, the ambiguity of the qualitative physical model, the correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model determines which hypotheses explain which observations, and another process combines these results to construct a composite hypothesis based on explanatory power and plausibility; a sketch of this division of labor appears below. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge must be combined to perform diagnosis.
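
    A minimal Python sketch of that division of labor, under stated assumptions: `explains` stands in for the qualitative physical model's mapping from hypotheses to the observations they account for, `plausibility` for the knowledge-based ranking, and the greedy loop for the combining process. All names are illustrative, not the authors' implementation.

        def compose_hypothesis(observations, hypotheses, explains, plausibility):
            """Greedily assemble a composite hypothesis covering the observations.

            observations: set of observed findings (e.g., gait deviations)
            hypotheses:   candidate malfunctions (e.g., specific muscle weaknesses)
            explains:     hypothesis -> set of observations it accounts for
                          (the qualitative physical model's role)
            plausibility: hypothesis -> score from experiential knowledge
                          (the knowledge-based reasoning's role)
            """
            uncovered = set(observations)
            composite = []
            while uncovered:
                # Favor hypotheses with high explanatory power over the remaining
                # observations, breaking ties by experiential plausibility.
                best = max(hypotheses,
                           key=lambda h: (len(explains(h) & uncovered),
                                          plausibility(h)))
                gained = explains(best) & uncovered
                if not gained:
                    break  # nothing explains the rest; leave it uncovered
                composite.append(best)
                uncovered -= gained
            return composite, uncovered

    A greedy covering loop is only one way to trade off explanatory power against plausibility; the abstract does not commit to a particular combination strategy.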

    Learning Linear Threshold Functions in the Presence of Classification Noise

    I show that linear threshold functions are polynomially learnable in the presence of classification noise, i.e., polynomial in n, 1/ε, 1/δ, and 1/σ, where n is the number of Boolean attributes, ε and δ are the usual accuracy and confidence parameters, and σ indicates the minimum distance of any example from the target hyperplane, which is assumed to be lower than the average distance of the examples from any hyperplane. This result is achieved by modifying the Perceptron algorithm: for each update, a weighted average of a sample of misclassified examples and a correction vector is added to the current weight vector. Similar modifications are shown for the Weighted Majority algorithm. The correction vector is simply the mean of the normalized examples. In the special case of Boolean threshold functions, the modified Perceptron algorithm performs O(n²ε⁻²) iterations over O(n⁴ε⁻² ln(n/(δε))) examples. This improves on the pre…
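
    A minimal Python sketch of the modified Perceptron update just described. The mixing weight `mix`, the sample size, and the use of labels in the correction vector are assumptions made for illustration; the abstract states only that a weighted average of a sample of misclassified examples and a correction vector (the mean of the normalized examples) is added to the weight vector.

        import numpy as np

        def modified_perceptron_update(w, X, y, sample_size, rng, mix=0.5):
            """One update of the noise-tolerant Perceptron variant (a sketch).

            w: current weight vector (threshold folded in as an extra coordinate)
            X: examples as rows (NumPy array), each extended with a constant-1
               coordinate for the threshold
            y: labels in {-1, +1}, possibly corrupted by classification noise
            mix: hypothetical weighting between the two directions
            """
            norms = np.linalg.norm(X, axis=1, keepdims=True)
            # Correction vector: mean of the normalized examples
            # (label-weighting is an assumption here).
            correction = np.mean(y[:, None] * X / norms, axis=0)

            # Draw a sample of currently misclassified examples.
            mis = np.flatnonzero(y * (X @ w) <= 0)
            if mis.size == 0:
                return w  # no misclassifications, no update needed
            idx = rng.choice(mis, size=min(sample_size, mis.size), replace=False)
            sample_mean = np.mean(y[idx, None] * X[idx] / norms[idx], axis=0)

            # Weighted average of the sample mean and the correction vector,
            # added to the current weight vector.
            return w + mix * sample_mean + (1.0 - mix) * correction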

    Polynomial Learnability of Linear Threshold Approximations

    I demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions. The central result is as follows. Suppose there exists a vector w of n weights (including the threshold) with "accuracy" 1 − α, "average error" η, and "balancing separation" σ, i.e., with probability 1 − α, w correctly classifies an example x; over examples incorrectly classified by w, the expected value of |w · x| is η (the source of inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w · x| is σ. Then both the Perceptron and Weighted Majority algorithms achieve an online accuracy (i.e., accuracy over all examples) of at least 1 − (ε + α(1 + η/σ)) with confidence at least 1 − δ, in time and number of examples polynomial in 1/σ, 1/ε, 1/δ, and n.
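
    For concreteness, a minimal Python sketch of the online setting the bound refers to: the standard mistake-driven Perceptron run over a stream of examples while its online accuracy, the accuracy over all examples seen, is tracked. The result above bounds this quantity by 1 − (ε + α(1 + η/σ)); the code is illustrative, not the paper's exact procedure.

        import numpy as np

        def perceptron_online(stream, n):
            """Run the Perceptron online and report its online accuracy (a sketch).

            stream: iterable of (x, y), x an n-vector (NumPy array) with the
                    threshold folded in as a constant-1 coordinate, y in {-1, +1}
            """
            w = np.zeros(n)
            correct = total = 0
            for x, y in stream:
                prediction = 1 if w @ x > 0 else -1
                correct += int(prediction == y)
                total += 1
                if prediction != y:    # mistake-driven update
                    w += y * x
            return w, correct / total  # online accuracy over all examples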