
    An Analysis of Reduced Error Pruning

    Top-down induction of decision trees has been observed to suffer from inadequate functioning of the pruning phase. In particular, it is known that the size of the resulting tree grows linearly with the sample size, even though the accuracy of the tree does not improve. Reduced Error Pruning is an algorithm that has been used as a representative technique in attempts to explain the problems of decision tree learning. In this paper we present analyses of Reduced Error Pruning in three different settings. First we study the basic algorithmic properties of the method, properties that hold independently of the input decision tree and pruning examples. Then we examine a situation that intuitively should lead to the subtree under consideration being replaced by a leaf node: one in which the class labels and attribute values of the pruning examples are independent of each other. This analysis is conducted under two different assumptions. The general analysis shows that the pruning probability of a node fitting pure noise is bounded by a function that decreases exponentially as the size of the tree grows. In a specific analysis we assume that the examples are distributed uniformly over the tree. This assumption lets us approximate the number of subtrees that are pruned because they receive no pruning examples. This paper clarifies the different variants of the Reduced Error Pruning algorithm, brings new insight into its algorithmic properties, analyses the algorithm under fewer imposed assumptions than before, and includes the previously overlooked empty subtrees in the analysis.
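    To make the pruning criterion concrete, the following is a minimal sketch of Reduced Error Pruning over a toy tree structure. The Node class, its fields, and the bottom-up traversal are illustrative choices, not the paper's code; the rule itself (replace a subtree by a leaf whenever the leaf misclassifies no more pruning examples than the subtree) is the classical one the paper analyzes.

```python
# A minimal sketch of Reduced Error Pruning (REP) for decision trees,
# assuming pruning-set statistics have already been gathered by routing
# the pruning examples down the tree. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: int                                     # majority class at this node
    children: list = field(default_factory=list)   # empty list => leaf
    n_examples: int = 0                            # pruning examples reaching this node
    n_majority: int = 0                            # of those, how many match `label`

def rep(node: Node) -> int:
    """Prune bottom-up; return the number of pruning-set errors of the
    (possibly pruned) subtree rooted at `node`."""
    if not node.children:
        return node.n_examples - node.n_majority
    subtree_errors = sum(rep(child) for child in node.children)
    leaf_errors = node.n_examples - node.n_majority
    # Replace the subtree by a leaf when the leaf is at least as accurate
    # on the pruning set as the subtree.
    if leaf_errors <= subtree_errors:
        node.children = []
        return leaf_errors
    return subtree_errors
```

    Note that a subtree reached by no pruning examples has leaf_errors == subtree_errors == 0 and is therefore pruned to a leaf; these are the empty subtrees the paper includes in its analysis.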

    Learning Small Trees and Graphs that Generalize

    In this Thesis we study issues related to learning small tree- and graph-formed classifiers. First, we study reduced error pruning of decision trees and branching programs. We analyze the behavior of a reduced error pruning algorithm for decision trees under various probabilistic assumptions on the pruning data. As a result we obtain, e.g., new upper bounds for the probability of replacing a tree that fits random noise by a leaf. In the case of branching programs we show that the existence of an efficient approximation algorithm for reduced error pruning would imply P = NP. This indicates that reduced error pruning of branching programs is most likely infeasible in practice, even though the corresponding problem for decision trees is easily solvable in linear time. The latter part of the Thesis is concerned with generalization error analysis, in particular with Rademacher penalization applied to small or otherwise restricted decision trees. We develop a progressive sampling method based on Rademacher penalization that yields reasonable data-dependent sample complexity estimates.
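    As a rough illustration of the penalization idea, the sketch below estimates an empirical Rademacher penalty for a finite pool of candidate classifiers by Monte Carlo sampling over random sign vectors. All names are hypothetical, and the constant factor in the resulting penalty depends on which generalization bound is used; the Thesis applies the technique to restricted decision trees rather than to a cached prediction matrix.

```python
# A minimal sketch of Rademacher penalization over a finite set of
# candidate classifiers whose +/-1 predictions are cached row-wise.

import numpy as np

def rademacher_penalty(predictions: np.ndarray, n_rounds: int = 100,
                       seed: int = 0) -> float:
    """predictions: (n_classifiers, n_examples) matrix of +/-1 outputs.
    Returns a Monte Carlo estimate of the empirical Rademacher penalty
    (up to the constant factor of the particular bound being used)."""
    rng = np.random.default_rng(seed)
    n_classifiers, n_examples = predictions.shape
    total = 0.0
    for _ in range(n_rounds):
        sigma = rng.choice([-1.0, 1.0], size=n_examples)  # random signs
        # supremum over the class of the signed empirical correlation
        total += np.max(predictions @ sigma) / n_examples
    return total / n_rounds
```

    In a progressive sampling scheme of the kind described above, one would grow the sample and recompute this penalty until the resulting data-dependent generalization bound falls below a target accuracy.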

    Measurement of body temperature and heart rate for the development of healthcare system using IOT platform

    Health can be defined as a state of complete mental, physical and social well-being, and not merely the absence of disease or infirmity, according to the World Health Organization (WHO) [1]. Having a healthy body is the greatest blessing of life; healthcare, that is, the maintenance or improvement of health through the diagnosis, prevention, and treatment of injury, disease, illness, and other mental and physical impairments in human beings, is therefore required to maintain or improve it. The novel paradigm of the Internet of Things (IoT) has the potential to transform modern healthcare and improve the well-being of society as a whole [2]. IoT is a concept that aims to connect…

    Statistical Pruning for Near Maximum Likelihood Detection of MIMO Systems

    We present a statistical pruning approach for maximum likelihood (ML) detection of multiple-input multiple-output (MIMO) systems. We give a general pruning strategy for the sphere decoder (SD), which can also be applied to any tree search algorithm. Our pruning rules are especially effective in cases where the SD has high complexity. Three specific pruning rules are given and discussed. By analyzing the union bound on the symbol error probability, we show that deterministic pruning with a fixed pruning probability achieves a diversity order of only one. By choosing different pruning probability distribution functions, statistical pruning can achieve arbitrary diversity orders and SNR gains. Our statistical pruning strategy thus achieves a flexible trade-off between complexity and performance.
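    The sketch below illustrates the general idea on a depth-first sphere decoder: a branch survives only if its partial metric stays below a level-dependent threshold. The function tau is a hypothetical stand-in for the paper's pruning probability distribution functions; setting it to a fixed radius recovers the conventional SD, while randomizing it per level gives a statistical rule.

```python
# A minimal sketch of a depth-first sphere decoder with a statistical
# pruning rule, assuming the channel has been QR-factored so that R is
# upper triangular and y is the rotated receive vector.

import numpy as np

def sphere_decode(R: np.ndarray, y: np.ndarray, alphabet, tau):
    """Search for s minimizing ||y - R s||^2, pruning branches whose
    partial metric exceeds the (possibly randomized) threshold tau(level)."""
    n = R.shape[0]
    best = {"metric": np.inf, "s": None}

    def search(level: int, s: np.ndarray, metric: float):
        if metric >= best["metric"]:
            return                                 # classic SD radius pruning
        if level < 0:
            best["metric"], best["s"] = metric, s.copy()
            return
        for sym in alphabet:
            s[level] = sym
            residual = y[level] - R[level, level:] @ s[level:]
            m = metric + abs(residual) ** 2
            # statistical pruning: discard branches whose partial metric
            # exceeds the level-dependent threshold
            if m <= tau(level):
                search(level - 1, s, m)

    search(n - 1, np.zeros(n, dtype=complex), 0.0)
    return best["s"]                                # None if all paths pruned
```

    With tau = lambda level: r2 for a fixed squared radius r2, this reduces to conventional sphere decoding; drawing the per-level threshold from a suitable distribution yields the flexible diversity/complexity trade-off described above, at the risk of occasionally pruning the ML path.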

    A Split-Reduced Successive Cancellation List Decoder for Polar Codes

    This paper focuses on low-complexity successive cancellation list (SCL) decoding of polar codes. In particular, using the fact that splitting may be unnecessary when the reliability of decoding the unfrozen bit is sufficiently high, a novel splitting rule is proposed. Based on this rule, it is conjectured that if the correct path survives at some stage, it tends to survive until termination without splitting with high probability, whereas incorrect paths are more likely to split at the following stages. Motivated by these observations, a simple counter that counts the successive number of stages without splitting is introduced for each decoding path to facilitate the identification of correct and incorrect paths. Specifically, any path with a counter value larger than a predefined threshold ω is deemed to be the correct path and survives the decoding stage, while paths with counter values smaller than the threshold are pruned, thereby reducing the decoding complexity. Furthermore, it is proved that there exists a unique unfrozen bit u_{N-K_1+1} after which the successive cancellation decoder achieves the same error performance as the maximum likelihood decoder, provided that all the prior unfrozen bits are correctly decoded, which enables further complexity reduction. Simulation results demonstrate that the proposed low-complexity SCL decoder attains performance similar to that of the conventional SCL decoder while achieving substantial complexity reduction.
    Comment: Accepted for publication in IEEE Journal on Selected Areas in Communications, Special Issue on Recent Advances in Capacity Approaching Codes
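    A minimal sketch of the counter rule, abstracted away from the polar-code recursions, is given below. The LLR oracle llr_of, the reliability threshold t_reliable, and the omission of the usual list-size cap are simplifications of ours; only the counter-and-threshold pruning mirrors the rule described above.

```python
# A minimal sketch of counter-based path pruning for one unfrozen-bit
# stage of an SCL-style decoder. `llr_of` stands in for the per-path
# LLR computation of the actual polar decoder.

from dataclasses import dataclass, field

@dataclass
class Path:
    bits: list = field(default_factory=list)
    counter: int = 0          # successive stages decoded without splitting

def decode_bit(paths, llr_of, omega: int, t_reliable: float):
    """Split only unreliable paths, bump the counter of paths that did
    not split, and prune lagging paths once some counter exceeds omega."""
    new_paths = []
    for p in paths:
        llr = llr_of(p)
        if abs(llr) >= t_reliable:            # reliable: follow the LLR sign
            p.bits.append(0 if llr > 0 else 1)
            p.counter += 1
            new_paths.append(p)
        else:                                 # unreliable: split the path
            q = Path(bits=p.bits + [1], counter=0)
            p.bits.append(0)
            p.counter = 0
            new_paths.extend([p, q])
    if any(p.counter > omega for p in new_paths):
        # a path deemed correct exists; drop paths that kept splitting
        new_paths = [p for p in new_paths if p.counter > omega]
    return new_paths
```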

    An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration

    We empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power-efficiency of Convolutional Neural Network (CNN) accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to excessive circuit latency increase. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We also investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of the modern Xilinx ZCU102 FPGA platform with five state-of-the-art image classification CNN benchmarks, which allows us to study the effects of our undervolting technique under both software and hardware variability. We achieve more than 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by the FPGA vendor to ensure correct functionality in worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss and find that it reduces the power-efficiency gain from 43% to 25%.
    Comment: To appear at the DSN 2020 conference
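    As a back-of-the-envelope reading of the reported numbers, the following short calculation assumes (our assumption, not the paper's statement) that the below-guardband contribution multiplies the 2.6X guardband-elimination gain:

```python
# Hypothetical composition of the reported power-efficiency gains,
# assuming the below-guardband percentage applies on top of the
# guardband-elimination factor.

guardband_gain = 2.6       # from eliminating the voltage guardband
below_guardband = 0.43     # extra gain from undervolting below it

total = guardband_gain * (1 + below_guardband)
print(f"total gain ~ {total:.2f}X")        # ~3.7X, consistent with ">3X"

# with frequency underscaling to avoid accuracy loss, the extra gain
# shrinks from 43% to 25%
total_safe = guardband_gain * (1 + 0.25)
print(f"with underscaling ~ {total_safe:.2f}X")   # ~3.25X
```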