
    Introduction to questionnaire data patterns using Perceptron Algorithm for lecturer improvement and development in higher education

    This artificial neural network research article aimed to overview the current teaching process with a focus on lecturer performance using the perceptron algorithm; to improve the teaching process; to develop lecturers based on the perceptron algorithm's results; to evaluate the speed and accuracy of the perceptron algorithm in assessing lecturer performance; and to study the rules of the perceptron algorithm in processing assessment criteria for lecturers in tertiary institutions. In this case, the perceptron algorithm was used to recognize input patterns in the questionnaire data. The perceptron algorithm was trained and tested to recognize these input patterns so that the neural network could identify input data patterns from questionnaire data.
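    For orientation, below is a minimal sketch of perceptron training on binary questionnaire-style vectors. The article's actual assessment criteria, encodings, learning rate, and thresholds are not given here, so the feature layout and the labeling rule are illustrative assumptions.

```python
# Minimal perceptron sketch for binary "questionnaire" vectors.
# The feature layout and the labeling rule are illustrative assumptions,
# not the paper's actual assessment criteria.
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Classic Rosenblatt perceptron with a bias term; labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified -> additive update
                w += lr * yi * xi
                b += lr * yi
                mistakes += 1
        if mistakes == 0:                        # converged on the training set
            break
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b > 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical questionnaire responses: 6 yes/no criteria per lecturer.
    X = rng.integers(0, 2, size=(40, 6)).astype(float)
    # Illustrative target: "good performance" if at least 4 criteria are met.
    y = np.where(X.sum(axis=1) >= 4, 1, -1)
    w, b = train_perceptron(X, y)
    print("training accuracy:", (predict(w, b, X) == y).mean())
```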

    A second-order perceptron algorithm


    The perceptron algorithm versus winnow: linear versus logarithmic mistake bounds when few input variables are relevant

    We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes in learning monotone disjunctions over N variables with at most k literals. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow does multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. In general, we call an algorithm additive if its weight vector is always a sum of a fixed initial weight vector and some linear combination of already seen instances. Thus, the Perceptron algorithm is an example of an additive algorithm. We show that an adversary can force any additive algorithm to make (N + k − 1)/2 mistakes in learning a monotone disjunction of at most k literals. Simple experiments show that for k ≪ N, Winnow clearly outperforms the Perceptron algorithm also on nonadversarial random data
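    The contrast between the additive and multiplicative update rules can be made concrete with a small sketch. The target disjunction and random data below are illustrative assumptions, not the adversarial sequences constructed in the paper.

```python
# Additive (Perceptron) vs. multiplicative (Winnow) updates on a monotone
# disjunction over N Boolean variables. Data and target are illustrative.
import numpy as np

def perceptron_mistakes(X, y, lr=1.0):
    """Online perceptron pass; y in {-1, +1}."""
    w, b, mistakes = np.zeros(X.shape[1]), 0.0, 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:
            w += lr * yi * xi          # additive update
            b += lr * yi
            mistakes += 1
    return mistakes

def winnow_mistakes(X, y, alpha=2.0):
    """Online Winnow pass; y in {0, 1}, weights start at 1, threshold N/2."""
    n = X.shape[1]
    w, theta, mistakes = np.ones(n), n / 2.0, 0
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w >= theta else 0
        if pred != yi:
            mistakes += 1
            if yi == 1:
                w[xi == 1] *= alpha    # promotion: multiplicative increase
            else:
                w[xi == 1] /= alpha    # demotion: multiplicative decrease
    return mistakes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, k, T = 200, 3, 2000
    relevant = rng.choice(N, size=k, replace=False)
    X = rng.integers(0, 2, size=(T, N))
    y01 = (X[:, relevant].max(axis=1) == 1).astype(int)  # monotone k-literal disjunction
    ypm = 2 * y01 - 1
    print("Perceptron mistakes:", perceptron_mistakes(X.astype(float), ypm))
    print("Winnow mistakes:    ", winnow_mistakes(X, y01))
```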

    An Efficient Re-Scaled Perceptron Algorithm for Conic Systems

    The classical perceptron algorithm is an elementary row-action/relaxation algorithm for solving a homogeneous linear inequality system Ax > 0. A natural condition measure associated with this algorithm is the Euclidean width T of the cone of feasible solutions, and the iteration complexity of the perceptron algorithm is bounded by 1/T^2, see Rosenblatt 1962. Dunagan and Vempala have developed a re-scaled version of the perceptron algorithm with an improved complexity of O(n ln(1/T)) iterations (with high probability), which is theoretically efficient in T, and in particular is polynomial-time in the bit-length model. We explore extensions of the concepts of these perceptron methods to the general homogeneous conic system Ax ∈ int K, where K is a regular convex cone. We provide a conic extension of the re-scaled perceptron algorithm based on the notion of a deep-separation oracle of a cone, which essentially computes a certificate of strong separation. We give a general condition under which the re-scaled perceptron algorithm is itself theoretically efficient; this includes the cases when K is the cross-product of half-spaces, second-order cones, and the positive semi-definite cone
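    As background for the re-scaled variant, the classical perceptron viewed as a row-action/relaxation method for Ax > 0 can be sketched as follows. The Dunagan-Vempala re-scaling step and the paper's conic deep-separation oracle are not reproduced; the feasible instance construction is an assumption for illustration.

```python
# Classical perceptron as a row-action / relaxation method for a homogeneous
# system Ax > 0: normalize the rows of A and add any violated row to the
# iterate. The re-scaling phase of Dunagan-Vempala is intentionally omitted.
import numpy as np

def perceptron_for_linear_system(A, max_iters=10000):
    """Try to find x with Ax > 0; returns x, or None if the budget runs out."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)   # unit rows
    x = np.zeros(A.shape[1])
    for _ in range(max_iters):
        margins = A @ x
        worst = np.argmin(margins)
        if margins[worst] > 0:                         # all inequalities strictly satisfied
            return x
        x = x + A[worst]                               # row-action (additive) correction
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Feasible instance by construction: every row has positive inner product with x_star.
    x_star = rng.normal(size=5)
    A = rng.normal(size=(30, 5))
    A[A @ x_star <= 0] *= -1
    x = perceptron_for_linear_system(A)
    if x is None:
        print("iteration budget exhausted")
    else:
        print("feasible point found, min margin:", (A @ x).min())
```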

    Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms

    The paper studies machine learning problems where each example is described using a set of Boolean features and where hypotheses are represented by linear threshold elements. One method of increasing the expressiveness of learned hypotheses in this context is to expand the feature set to include conjunctions of basic features. This can be done explicitly or, where possible, by using a kernel function. Focusing on the well-known Perceptron and Winnow algorithms, the paper demonstrates a tradeoff between the computational efficiency with which the algorithm can be run over the expanded feature space and the generalization ability of the corresponding learning algorithm. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over a feature space of exponentially many conjunctions; however, we also show that using such kernels, the Perceptron algorithm can provably make an exponential number of mistakes even when learning simple functions. We then consider the question of whether kernel functions can analogously be used to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. Known upper bounds imply that the Winnow algorithm can learn Disjunctive Normal Form (DNF) formulae with a polynomial mistake bound in this setting. However, we prove that it is computationally hard to simulate Winnow's behavior for learning DNF over such a feature set. This implies that the kernel functions which correspond to running Winnow for this problem are not efficiently computable, and that there is no general construction that can run Winnow with kernels
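    A small sketch of the dual (kernel) perceptron with a Boolean "all conjunctions" kernel illustrates how the algorithm can be run implicitly over exponentially many conjunction features: K(x, z) = 2^same(x, z), where same(x, z) counts the coordinates on which the two Boolean vectors agree, equals the number of conjunctions of literals satisfied by both examples. The data, target, and epoch budget below are assumptions for illustration.

```python
# Dual (kernel) perceptron with the Boolean "all conjunctions" kernel
# K(x, z) = 2 ** same(x, z). Data, target, and epoch budget are illustrative.
import numpy as np

def conjunction_kernel(x, z):
    """Number of conjunctions of literals true on both Boolean vectors."""
    return 2.0 ** np.sum(x == z)

def kernel_perceptron(X, y, kernel, epochs=20):
    """Dual perceptron: keep per-example mistake counts alpha instead of w."""
    n = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            score = np.sum(alpha * y * K[:, i])
            if y[i] * score <= 0:
                alpha[i] += 1.0          # record a mistake on example i
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.integers(0, 2, size=(30, 8))
    y = np.where((X[:, 0] == 1) & (X[:, 1] == 1), 1, -1)  # target: a simple conjunction
    alpha = kernel_perceptron(X, y, conjunction_kernel)
    # Training predictions recovered from the dual representation.
    K = np.array([[conjunction_kernel(xi, xj) for xj in X] for xi in X])
    preds = np.sign(K @ (alpha * y))
    print("training accuracy:", (preds == y).mean())
```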

    Selective Sampling with Drift

    Recently there has been much work on selective sampling, an online active learning setting, in which algorithms work in rounds. On each round an algorithm receives an input and makes a prediction. Then, it can decide whether to query a label, and if so to update its model, otherwise the input is discarded. Most of this work is focused on the stationary case, where it is assumed that there is a fixed target model, and the performance of the algorithm is compared to a fixed model. However, in many real-world applications, such as spam prediction, the best target function may drift over time, or have shifts from time to time. We develop a novel selective sampling algorithm for the drifting setting, analyze it under no assumptions on the mechanism generating the sequence of instances, and derive new mistake bounds that depend on the amount of drift in the problem. Simulations on synthetic and real-world datasets demonstrate the superiority of our algorithms as a selective sampling algorithm in the drifting setting
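    A toy sketch of a generic margin-based selective-sampling loop is shown below: predict with a linear model, query the label with a probability that shrinks as the prediction margin grows, and update only on queried rounds. This is not the paper's drift-specific algorithm or analysis, and the slowly rotating target used to simulate drift is an illustrative assumption.

```python
# Toy margin-based selective sampling under a slowly drifting target.
# Query rule: ask for the label with probability b / (b + |margin|).
# This is a generic recipe, not the paper's algorithm or bounds.
import numpy as np

def selective_sampling_run(T=2000, b=1.0, lr=0.5, seed=4):
    rng = np.random.default_rng(seed)
    w = np.zeros(2)
    queries = mistakes = 0
    for t in range(T):
        angle = 0.001 * t                              # target drifts over time
        u = np.array([np.cos(angle), np.sin(angle)])   # current true separator
        x = rng.normal(size=2)
        y = 1 if u @ x > 0 else -1
        margin = w @ x
        yhat = 1 if margin > 0 else -1
        mistakes += (yhat != y)
        if rng.random() < b / (b + abs(margin)):       # decide whether to query the label
            queries += 1
            if y * margin <= 0:
                w += lr * y * x                        # perceptron-style update on queried rounds
    return mistakes, queries

if __name__ == "__main__":
    mistakes, queries = selective_sampling_run(T=2000)
    print(f"mistakes: {mistakes} of 2000 rounds, labels queried: {queries}")
```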