
    FUNCTION GROUP SELECTION OF SEMBUNG LEAVES (BLUMEA BALSAMIFERA) SIGNIFICANT TO ANTIOXIDANTS USING OVERLAPPING GROUP LASSO

    Functional groups of sembung leaf metabolites can be detected with FTIR spectrometry by examining the shape of the spectrum at specific peaks, which indicate the type of functional group in a compound. This study used 35 observations and 1866 explanatory variables (wavelengths). Data in which the number of explanatory variables exceeds the number of observations are high-dimensional, and penalized regression is one class of methods for analyzing such data. The overlapping group lasso extends group-based penalized regression to handle the selection of variable groups whose members belong to more than one group. Applying variable-group selection with the overlapping group lasso, the functional groups found to be significant for the antioxidant activity of sembung leaves were unsaturated C=C, amide C–N, polyphenol, and SiO2.
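    The abstract names the method but gives no implementation. As a rough illustration only (not the study's code), the sketch below implements the standard latent-variable formulation of the overlapping group lasso in Python: each feature is duplicated once per group it belongs to, an ordinary group lasso is solved on the duplicated design by proximal gradient descent, and the latent coefficients are summed back onto the original features. The group layout, step size, and penalty weight are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def overlapping_group_lasso(X, y, groups, lam=0.1, lr=0.05, n_iter=2000):
        """Latent overlapping group lasso via proximal gradient descent.

        groups: list of integer index collections over X's columns; groups
        may overlap (the latent duplication below handles the overlap).
        """
        groups = [np.asarray(g) for g in groups]
        # Duplicate each feature once per group it appears in ("latent" design).
        Xd = np.concatenate([X[:, g] for g in groups], axis=1)
        sizes = [len(g) for g in groups]
        starts = np.cumsum([0] + sizes)
        w = np.zeros(Xd.shape[1])
        for _ in range(n_iter):
            grad = Xd.T @ (Xd @ w - y) / len(y)   # least-squares gradient
            w -= lr * grad
            # Proximal step: block soft-thresholding, one block per group.
            for s, e in zip(starts[:-1], starts[1:]):
                norm = np.linalg.norm(w[s:e])
                w[s:e] *= 0.0 if norm <= lr * lam else 1.0 - lr * lam / norm
        # Sum latent coefficients back onto the original features.
        beta = np.zeros(X.shape[1])
        for (s, e), g in zip(zip(starts[:-1], starts[1:]), groups):
            beta[g] += w[s:e]
        group_norms = [np.linalg.norm(w[s:e]) for s, e in zip(starts[:-1], starts[1:])]
        return beta, group_norms

    # Toy usage mirroring the paper's shape (n = 35 observations, p > n):
    rng = np.random.default_rng(0)
    X = rng.standard_normal((35, 60))
    beta_true = np.zeros(60); beta_true[:5] = 1.0
    y = X @ beta_true + 0.1 * rng.standard_normal(35)
    groups = [range(0, 10), range(8, 25), range(22, 60)]   # overlapping blocks
    beta, group_norms = overlapping_group_lasso(X, y, groups, lam=0.5)
    selected = [i for i, nrm in enumerate(group_norms) if nrm > 1e-8]
    ```

    A group is considered selected when its block of latent coefficients has nonzero norm; because overlapping features are duplicated, a feature shared by a selected and an unselected group can still enter the model, which is exactly what distinguishes this formulation from a plain group lasso.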

    Shakeout: A New Approach to Regularized Deep Neural Network Training

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization during model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a notable statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1, and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10, and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights in both unsupervised and supervised settings. Shakeout also induces a grouping effect among the input units of a layer. Since the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of deep architectures.
    Comment: Appears at T-PAMI 201
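    As a rough sketch of the operation the abstract describes (not the authors' reference implementation), the NumPy forward pass below applies Shakeout to a linear layer: each input unit is kept with probability 1 − τ, kept units get an enhanced weight, and dropped units contribute −c·sign(W) instead of zero, so the effective weight equals W in expectation and the rule reduces to inverted Dropout at c = 0. Drawing a single mask per mini-batch rather than per sample, and the particular values of τ and c, are simplifying assumptions.

    ```python
    import numpy as np

    def shakeout_linear(x, W, tau=0.5, c=0.1, training=True, rng=None):
        """Forward pass of a linear layer with Shakeout on the incoming weights.

        x: (batch, n_in) activations; W: (n_in, n_out) weight matrix.
        Each input unit j draws r_j ~ Bernoulli(1 - tau). Kept units use the
        enhanced weight W/(1-tau) + c*tau/(1-tau)*sign(W); dropped units use
        the reversed contribution -c*sign(W). The expectation over r is W,
        and setting c = 0 recovers inverted Dropout.
        """
        if not training:
            return x @ W  # at test time, use the expected weights, i.e. W itself
        rng = rng or np.random.default_rng()
        r = rng.binomial(1, 1.0 - tau, size=x.shape[1]).astype(x.dtype)
        S = np.sign(W)
        W_kept = W / (1.0 - tau) + (c * tau / (1.0 - tau)) * S
        W_drop = -c * S
        # r scales whole rows of W because it is drawn per input unit.
        W_eff = r[:, None] * W_kept + (1.0 - r)[:, None] * W_drop
        return x @ W_eff

    # Toy usage: one noisy forward pass through a 16 -> 4 layer.
    x = np.random.default_rng(1).standard_normal((8, 16))
    W = 0.1 * np.random.default_rng(2).standard_normal((16, 4))
    out = shakeout_linear(x, W, tau=0.5, c=0.1)
    ```

    The sign(W) term is what pushes small weights toward exactly zero, which is consistent with the sparsity and L0/L1-like behavior the abstract attributes to Shakeout.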