85 research outputs found

    Explicit Computation of Input Weights in Extreme Learning Machines

    We present a closed-form expression for initializing the input weights in a multi-layer perceptron, which can be used as the first step in the synthesis of an Extreme Learning Machine. The expression is based on the standard function for a separating hyperplane as computed in multi-layer perceptrons and linear Support Vector Machines; that is, as a linear combination of input data samples. In the absence of supervised training for the input weights, random linear combinations of training data samples are used to project the input data to a higher-dimensional hidden layer. The hidden-layer weights are solved in the standard ELM fashion by computing the pseudoinverse of the hidden-layer outputs and multiplying by the desired output values. All weights for this method can be computed in a single pass, and the resulting networks are more accurate and more consistent on some standard problems than regular ELM networks of the same size. Comment: In submission for the ELM 2014 Conference
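    The single-pass procedure this abstract describes can be sketched as follows. All variable names, shapes, and the toy data are illustrative assumptions, not taken from the paper; only the two key steps (input weights as random linear combinations of training samples, output weights via the pseudoinverse) follow the text.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    X = rng.normal(size=(100, 4))                  # toy training inputs
    T = (X[:, :1] + X[:, 1:2] > 0).astype(float)   # toy targets

    n_hidden = 20
    # Input weights: each hidden unit's weight vector is a random linear
    # combination of training samples, mirroring the separating-hyperplane
    # form w = sum_i alpha_i * x_i used by MLPs and linear SVMs.
    alpha = rng.normal(size=(n_hidden, X.shape[0]))
    W = alpha @ X                                  # (n_hidden, n_features)
    b = rng.normal(size=n_hidden)

    H = np.tanh(X @ W.T + b)                       # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                   # output weights, one solve

    Y = H @ beta                                   # predictions
    ```

    No gradient descent is involved: both weight matrices are produced in a single pass, which is the defining property of the ELM family.
    
    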

    Accurate pedestrian localization in overhead depth images via Height-Augmented HOG

    We tackle the challenge of reliably and automatically localizing pedestrians in real-life conditions through overhead depth imaging at unprecedentedly high-density conditions. Leveraging a combination of Histogram of Oriented Gradients-like feature descriptors, neural networks, data augmentation and custom data annotation strategies, this work contributes a robust and scalable machine learning-based localization algorithm, which delivers near-human localization performance in real time, even at local pedestrian densities of about 3 ped/m², a case in which most state-of-the-art algorithms degrade significantly in performance.

    Spatial Multi-Layer Perceptron Model for Predicting Dengue Fever Outbreaks in Surabaya

    Dengue fever (DF) is a tropical disease spread by mosquitoes of the Aedes type. Therefore, a DF outbreak needs to be predicted to minimize the spread and deaths caused by it. The spread of dengue fever is a spatial problem. In this paper, we adapt the Multi-Layer Perceptron (MLP) to this spatial problem, yielding a spatial multi-layer perceptron model (Spatial MLP). In this proposed model, we consider two types of input neurons in the Spatial MLP: a region and the neighbourhood of that region. The spatial inputs change dynamically with the region, and the number of neighbourhoods of a region also varies, so the spatial inputs differ both in the number of inputs and in the neighbourhoods they cover. As a result, the proposed model outperforms the traditional MLP since it can adapt to the neighbourhoods. We conclude that the Spatial MLP model can manage this information and predict dengue fever outbreaks in Surabaya
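    One way to read the variable-neighbourhood input idea is sketched below: each region's input vector combines its own feature with its neighbours' features, zero-padded to a fixed length so a standard MLP can consume it. The region graph, feature values, and padding scheme are assumptions for illustration, not details from the paper.

    ```python
    import numpy as np

    # Hypothetical regions with per-region features and varying neighbourhoods.
    features = {"A": 3.0, "B": 5.0, "C": 2.0, "D": 7.0}
    neighbours = {"A": ["B", "C"], "B": ["A", "C", "D"],
                  "C": ["A", "B"], "D": ["B"]}

    max_nb = max(len(v) for v in neighbours.values())

    def spatial_input(region):
        # Pad with zeros so every region yields a fixed-length vector,
        # even though neighbourhood sizes vary from region to region.
        nb = [features[r] for r in neighbours[region]]
        nb += [0.0] * (max_nb - len(nb))
        return np.array([features[region]] + nb)

    X = np.stack([spatial_input(r) for r in ["A", "B", "C", "D"]])
    ```

    The resulting matrix `X` can then feed an ordinary MLP while still reflecting each region's spatial context.
    
    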

    Learnable Pooling Regions for Image Classification

    Biologically inspired, from the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping local codes, equips these methods with a certain degree of robustness to translation and deformation while preserving important spatial information. Despite the predominance of this approach in current recognition systems, there has been little progress toward fully adapting the pooling strategy to the task at hand. This paper proposes a model for learning task-dependent pooling schemes, with previously proposed hand-crafted pooling schemes as particular instantiations. In our work, we investigate the role of different regularization terms, showing that the smooth regularization term is crucial to achieve strong performance with the presented architecture. Finally, we propose an efficient and parallel method to train the model. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets, in particular improving the state of the art to 56.29% on the latter.
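    The core idea, pooling weights over spatial positions as learnable parameters constrained by a smoothness penalty, can be sketched minimally. The shapes, the weighted-sum pooling form, and the finite-difference penalty below are illustrative assumptions, not the paper's exact parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    codes = rng.random(size=(16, 8))  # local codes at 16 spatial positions, 8-dim
    w = rng.random(size=16)           # learnable pooling weights, one per position

    # Weighted spatial pooling: a learned generalization of the fixed cells
    # used in spatial pyramids (sum pooling is w = all-ones; a pyramid cell
    # is an indicator over its positions).
    pooled = w @ codes                # -> 8-dim pooled descriptor

    # Smooth regularization term: penalize differences between neighbouring
    # positions' weights (1-D neighbourhood here for simplicity).
    smooth_penalty = np.sum(np.diff(w) ** 2)
    ```

    During training, `smooth_penalty` would be added to the classification loss so that learned pooling regions stay spatially coherent.
    
    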

    A computationally and cognitively plausible model of supervised and unsupervised learning

    Author version made available in accordance with the publisher's policy; the final publication is available at link.springer.com. The issue of chance correction has been discussed for many decades in the context of statistics, psychology and machine learning, with multiple measures shown to have desirable properties, including various definitions of Kappa or Correlation, and the psychologically validated ΔP measures. In this paper, we discuss the relationships between these measures, showing that they form part of a single family of measures, and that using an appropriate measure can positively impact learning.
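    Two of the chance-corrected measures named in this abstract, Cohen's Kappa and ΔP, can be computed from a 2×2 contingency table; the definitions below are the standard ones, while the table values are made up for illustration.

    ```python
    import numpy as np

    # Hypothetical 2x2 contingency table: rows = cue present/absent,
    # columns = outcome present/absent.
    table = np.array([[40, 10],
                      [5, 45]], dtype=float)
    n = table.sum()

    # Cohen's Kappa: observed agreement corrected by chance agreement.
    p_o = np.trace(table) / n                 # observed agreement
    row = table.sum(axis=1) / n
    col = table.sum(axis=0) / n
    p_e = (row * col).sum()                   # agreement expected by chance
    kappa = (p_o - p_e) / (1 - p_e)

    # DeltaP: P(outcome | cue) - P(outcome | no cue)
    tp, fn = table[0]
    fp, tn = table[1]
    delta_p = tp / (tp + fn) - fp / (fp + tn)
    ```

    Both measures equal 0 when agreement is at chance level and 1 under perfect agreement, which is the shared structure the paper's "single family" argument builds on.
    
    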

    Practical recommendations for gradient-based training of deep architectures

    Learning algorithms related to artificial neural networks, and in particular to Deep Learning, may seem to involve many bells and whistles, called hyper-parameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on back-propagated gradients and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when one is allowed to adjust many hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale, often deep, multi-layer neural networks. It closes with open questions about the training difficulties observed with deeper architectures.