
    A Novel Scheme for Intelligent Recognition of Pornographic Images

    Harmful content on the internet is increasing day by day, which motivates further research into fast and reliable filtering of obscene and immoral material. Pornographic image recognition is an important component of any such filtering system. In this paper, a new approach for detecting pornographic images is introduced. Two new features are suggested which, in combination with other simple traditional features, provide good separation between porn and non-porn images. In addition, we apply fuzzy-integral-based information fusion to combine the outputs of an MLP (Multi-Layer Perceptron) and an NF (Neuro-Fuzzy) classifier. To test the proposed method, the system was evaluated on 18,354 images downloaded from the internet. It attained a 93% true-positive rate and an 8% false-positive rate on the training dataset, and 87% and 5.5% respectively on the test dataset. These results confirm the performance of the proposed system relative to other related work.
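
    The abstract does not say which fuzzy integral is used, so the following is only a minimal sketch of one common choice, the Sugeno integral over a lambda-fuzzy measure, fusing two classifier confidences; the density values and scores below are made up for illustration:

        import numpy as np

        def lambda_value(densities):
            # Solve prod_i(1 + lam * g_i) = 1 + lam for lam; for two sources
            # this reduces to the closed form below.
            g1, g2 = densities
            return 0.0 if g1 + g2 == 1.0 else (1.0 - g1 - g2) / (g1 * g2)

        def sugeno_fuse(scores, densities):
            # Sort confidences in descending order and sweep the lambda-measure
            # of the growing classifier subsets.
            lam = lambda_value(densities)
            order = np.argsort(scores)[::-1]
            h = np.asarray(scores, dtype=float)[order]
            g = np.asarray(densities, dtype=float)[order]
            g_cum = g[0]
            fused = min(h[0], g_cum)
            for i in range(1, len(h)):
                g_cum = g_cum + g[i] + lam * g_cum * g[i]
                fused = max(fused, min(h[i], g_cum))
            return fused

        # Hypothetical confidences for the "porn" class from the two classifiers.
        mlp_score, nf_score = 0.82, 0.64
        print(sugeno_fuse([mlp_score, nf_score], densities=[0.6, 0.5]))  # ~0.64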

    HyperAdam: A Learnable Task-Adaptive Adam for Network Training

    Deep neural networks are traditionally trained with human-designed stochastic optimization algorithms such as SGD and Adam. Recently, learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully exploit the experience embodied in human-designed optimizers and therefore have limited generalization ability. In this paper, a new optimizer, dubbed HyperAdam, is proposed that combines the idea of "learning to optimize" with the traditional Adam optimizer. Given a network to train, the parameter update that HyperAdam generates at each iteration is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are learned adaptively, depending on the task. HyperAdam is modeled as a recurrent neural network with an AdamCell, a WeightCell and a StateCell. It is shown to be state-of-the-art for training various networks, such as multilayer perceptrons, CNNs and LSTMs.
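
    The paper learns the combination weights with recurrent cells; the sketch below only illustrates the core update rule, combining Adam directions computed under several second-moment decay rates with fixed placeholder softmax weights on a toy quadratic objective (all names and values here are illustrative, not the authors' code):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy quadratic objective f(w) = 0.5 * ||A w - b||^2.
        A = rng.normal(size=(10, 5))
        b = rng.normal(size=10)
        w = np.zeros(5)

        beta1, beta2s, lr, eps = 0.9, (0.9, 0.99, 0.999), 0.05, 1e-8
        m = np.zeros(5)
        v = np.zeros((len(beta2s), 5))  # one second-moment buffer per decay rate

        # Placeholder combination weights; HyperAdam would produce these
        # per task with its recurrent cells.
        logits = np.array([0.0, 1.0, 0.5])
        weights = np.exp(logits) / np.exp(logits).sum()

        for t in range(1, 201):
            grad = A.T @ (A @ w - b)
            m = beta1 * m + (1 - beta1) * grad
            m_hat = m / (1 - beta1 ** t)
            candidates = []
            for k, b2 in enumerate(beta2s):
                v[k] = b2 * v[k] + (1 - b2) * grad ** 2
                v_hat = v[k] / (1 - b2 ** t)
                candidates.append(m_hat / (np.sqrt(v_hat) + eps))  # Adam direction per decay rate
            w -= lr * np.tensordot(weights, np.stack(candidates), axes=1)

        print("final loss:", 0.5 * np.sum((A @ w - b) ** 2))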

    Using Kernel Perceptrons to Learn Action Effects for Planning

    We investigate the problem of learning action effects in STRIPS and ADL planning domains. Our approach is based on a kernel perceptron learning model, where action and state information is encoded in a compact vector representation as input to the learning mechanism, and the resulting state changes are produced as output. Empirical results indicate efficient training and prediction times, with low average error rates (< 3%) when tested on STRIPS and ADL versions of an object manipulation scenario. This work is part of a project to integrate machine learning techniques with a planning system, within a larger cognitive architecture linking a high-level reasoning component with a low-level robot/vision system.
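
    The abstract does not specify the kernel or the exact encoding, so the following is only a small illustrative sketch: a dual-form kernel perceptron (an RBF kernel is chosen here for concreteness) predicting one binary state change from a made-up state/action bit vector:

        import numpy as np

        def rbf(x, z, gamma=0.5):
            # RBF kernel between encoded (state, action) vectors.
            return np.exp(-gamma * np.sum((x - z) ** 2))

        class KernelPerceptron:
            # Dual-form kernel perceptron for one binary output, e.g. whether
            # a single fluent becomes true (+1) or false (-1) after the action.
            def __init__(self, kernel=rbf):
                self.kernel = kernel
                self.sv, self.alpha = [], []  # mistakes become support vectors

            def predict(self, x):
                s = sum(a * self.kernel(z, x) for a, z in zip(self.alpha, self.sv))
                return 1 if s >= 0 else -1

            def fit(self, X, y, epochs=5):
                for _ in range(epochs):
                    for x, t in zip(X, y):
                        if self.predict(x) != t:  # on a mistake, store the example
                            self.sv.append(x)
                            self.alpha.append(t)

        # Made-up data: each row is a concatenated state+action encoding and
        # the target is one fluent's value after the action (a toy effect rule).
        rng = np.random.default_rng(1)
        X = rng.integers(0, 2, size=(40, 8)).astype(float)
        y = np.where(X[:, 0] * X[:, 3] > 0, 1, -1)
        clf = KernelPerceptron()
        clf.fit(X, y)
        print("training errors:", sum(clf.predict(x) != t for x, t in zip(X, y)))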

    Playing Billiard in Version Space

    A ray-tracing method inspired by ergodic billiards is used to estimate the theoretically best decision rule for a set of linearly separable examples. While the Bayes optimum requires a majority decision over all Perceptrons separating the example set, the problem considered here corresponds to finding the single Perceptron with the best average generalization probability. For randomly distributed examples, the billiard estimate agrees with known analytic results. In real-life classification problems, the generalization error is consistently reduced compared to the maximal-stability Perceptron.
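
    As a rough illustration of the idea (not the authors' algorithm, whose exact dynamics and stopping rule are not given in the abstract), the sketch below traces a simplified flat billiard inside the version-space cone of a toy linearly separable problem and time-averages the trajectory to estimate a Bayes-point-like Perceptron; all data and parameters are made up:

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy linearly separable data; folding the labels into the examples,
        # z_i = y_i * x_i, makes the version space the cone {w : w . z_i > 0}.
        w_true = rng.normal(size=3)
        w_true /= np.linalg.norm(w_true)
        X = rng.normal(size=(30, 3))
        Z = X * np.sign(X @ w_true)[:, None]
        Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit wall normals

        # Find any point inside the version space with a plain perceptron.
        w = np.zeros(3)
        while True:
            wrong = [z for z in Zn if z @ w <= 0]
            if not wrong:
                break
            w = w + wrong[0]
        w /= np.linalg.norm(w)

        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        center = np.zeros(3)

        for _ in range(2000):
            d = np.maximum(Zn @ w, 0.0)     # distances to the walls (up to scaling)
            s = Zn @ v
            if not np.any(s < -1e-12):      # ray points into the cone: reverse it
                v, s = -v, -s
            tau = np.full(len(Zn), np.inf)
            mask = s < -1e-12
            tau[mask] = -d[mask] / s[mask]  # flight time to each approachable wall
            i = int(np.argmin(tau))
            t = tau[i]
            mid = w + 0.5 * t * v           # midpoint of this flight segment
            center += t * mid / np.linalg.norm(mid)  # time-weighted average
            w = w + t * v
            w /= np.linalg.norm(w)          # stay on the unit sphere (flat approximation)
            v = v - 2.0 * (v @ Zn[i]) * Zn[i]  # specular reflection off wall i

        w_bayes = center / np.linalg.norm(center)
        print("angle to true rule (deg):",
              np.degrees(np.arccos(np.clip(w_bayes @ w_true, -1.0, 1.0))))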

    Structured Training for Neural Network Transition-Based Parsing

    We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best result on Stanford Dependencies to date. We also provide an in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
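
    The transition system, features and update details (including the paper's beam-search training specifics) are not in the abstract; the toy sketch below only shows the generic mechanics of structured perceptron training with beam-search decoding, over made-up actions and features:

        import numpy as np

        # Toy stand-in for a transition system: score action sequences of
        # length T over a small action set with a linear model; actions,
        # features and data are all made up for illustration.
        ACTIONS = (0, 1, 2)
        T, D = 4, 15

        def phi(x, prefix, a):
            # Hypothetical features for taking action `a` after `prefix` on
            # input x: an action-bigram indicator plus an input feature.
            f = np.zeros(D)
            prev = prefix[-1] if prefix else 3  # 3 = start symbol
            f[prev * len(ACTIONS) + a] = 1.0
            f[12 + a] = x[len(prefix)]
            return f

        def features(x, seq):
            return sum(phi(x, seq[:t], a) for t, a in enumerate(seq))

        def beam_search(w, x, beam_size=4):
            beam = [((), 0.0)]  # (action prefix, score)
            for _ in range(T):
                cands = [(p + (a,), s + w @ phi(x, p, a))
                         for p, s in beam for a in ACTIONS]
                beam = sorted(cands, key=lambda c: -c[1])[:beam_size]
            return beam[0][0]

        def train(data, epochs=10):
            # Standard structured perceptron update against the beam-decoded
            # prediction (the paper additionally refines this with beam-aware
            # updates during training).
            w = np.zeros(D)
            for _ in range(epochs):
                for x, gold in data:
                    pred = beam_search(w, x)
                    if pred != gold:
                        w += features(x, gold) - features(x, pred)
            return w

        rng = np.random.default_rng(3)
        def make_example():
            x = rng.normal(size=T)
            gold = tuple(0 if xi < -0.5 else (1 if xi < 0.5 else 2) for xi in x)
            return x, gold

        data = [make_example() for _ in range(200)]
        w = train(data)
        print("training accuracy:",
              np.mean([beam_search(w, x) == g for x, g in data]))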