7 research outputs found

    Multilayer probability extreme learning machine for device-free localization

    Device-free localization (DFL) is becoming one of the new techniques in the wireless localization field, owing to the advantage that the target to be localized does not need to carry any electronic device. One of the key issues in DFL is how to characterize the influence of the target on the wireless links, so that the target's location can be accurately estimated by analyzing the changes in the link signals. Most existing work extracts the useful information from the links through manual approaches, which are labor-intensive and time-consuming. Deep learning approaches have been used to extract this information automatically, but training conventional deep learning models is time-consuming because a large number of parameters must be fine-tuned over many iterations. Motivated by the fast learning speed and excellent generalization performance of the extreme learning machine (ELM), an emerging training approach for generalized single hidden layer feedforward neural networks (SLFNs), this paper proposes a novel hierarchical ELM based on deep learning theory, named the multilayer probability ELM (MP-ELM), for automatically extracting the useful information from the links and implementing fast and accurate DFL. The proposed MP-ELM is built by stacking ELM autoencoders, so it retains the very fast learning speed of ELM. In addition, considering the uncertainty and redundant links present in DFL, MP-ELM outputs a probabilistic estimate of the target's location instead of a deterministic one. The validity of the proposed MP-ELM-based DFL is evaluated in both indoor and outdoor environments. Experimental results demonstrate that the proposed MP-ELM achieves better performance than the classic ELM, multilayer ELM (ML-ELM), hierarchical ELM (H-ELM), deep belief network (DBN), and deep Boltzmann machine (DBM).
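
    To make the ELM building block concrete, the following is a minimal sketch of a basic single-hidden-layer ELM classifier with a probabilistic readout. It is not the authors' MP-ELM: the function names, the softmax readout, the tanh activation, and the synthetic link-feature data are all illustrative assumptions; the sketch only shows the core ELM idea of random hidden weights plus a closed-form, ridge-regularized solution for the output weights.

```python
import numpy as np

def train_elm(X, T, n_hidden=200, reg=1e-3, rng=None):
    """Basic ELM: random input weights and biases (never trained),
    output weights solved in closed form by regularized least squares."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    # beta = (H^T H + reg*I)^-1 H^T T  (ridge-regularized least squares)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict_proba(X, W, b, beta):
    """Softmax over ELM output scores -- an illustrative probabilistic
    readout, not the probability model actually used in MP-ELM."""
    scores = np.tanh(X @ W + b) @ beta
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy usage: 3 candidate locations encoded as one-hot targets (synthetic data).
X = np.random.randn(300, 20)              # link-signal features
y = np.random.randint(0, 3, 300)          # location labels
T = np.eye(3)[y]                          # one-hot targets
W, b, beta = train_elm(X, T)
probs = predict_proba(X[:5], W, b, beta)  # probabilistic location estimates
```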

    Robust Extreme Learning Machine for Modeling with Unknown Noise

    Extreme learning machine (ELM) is an emerging machine learning technique for training single hidden layer feedforward networks (SLFNs). During the training phase, an ELM model is created by simultaneously minimizing the modeling errors and the norm of the output weights. Squared loss is widely used in the objective function of ELMs and is theoretically optimal for Gaussian error distributions. In practice, however, data collected from uncertain and heterogeneous environments often contain unknown noise, which may be complex and cannot be described well by any single distribution. To tackle this issue, this paper proposes a robust ELM (R-ELM) to improve modeling capability and robustness under both Gaussian and non-Gaussian noise. In R-ELM, a modified objective function is constructed in which the noise is fitted by a mixture of Gaussians (MoG), which can approximate any continuous distribution. The solution to the new objective function is derived using the expectation-maximization (EM) algorithm. Comprehensive experiments, on both selected benchmark datasets and real-world applications, demonstrate that the proposed R-ELM has better robustness and generalization performance than state-of-the-art machine learning approaches.
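
    The following is a simplified sketch of the idea described in the abstract: model the ELM residuals with a mixture of Gaussians and alternate EM updates of the mixture with precision-weighted ridge updates of the output weights. It is written for single-output regression with a fixed number of zero-mean components; the function names, initialization, and update details are assumptions for illustration and may differ from the paper's R-ELM derivation.

```python
import numpy as np

def gaussian_pdf(e, var):
    return np.exp(-0.5 * e**2 / var) / np.sqrt(2 * np.pi * var)

def robust_elm(X, t, n_hidden=100, n_components=2, reg=1e-3, n_iter=30, rng=None):
    """ELM whose residuals are modeled by a zero-mean mixture of Gaussians,
    fitted with EM; output weights are re-estimated by weighted ridge regression."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random hidden weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)

    # Initialize mixture weights/variances and an ordinary ridge solution.
    pi = np.full(n_components, 1.0 / n_components)
    var = np.linspace(0.5, 2.0, n_components)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ t)

    for _ in range(n_iter):
        e = t - H @ beta                               # current residuals
        # E-step: responsibility of each mixture component for each residual.
        dens = np.stack([pi[k] * gaussian_pdf(e, var[k])
                         for k in range(n_components)], axis=1)
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, variances, and output weights.
        Nk = gamma.sum(axis=0)
        pi = Nk / len(t)
        var = np.maximum((gamma * e[:, None]**2).sum(axis=0) / Nk, 1e-6)
        w = (gamma / var).sum(axis=1)                  # per-sample precision weights
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + reg * np.eye(n_hidden), Hw.T @ t)
    return W, b, beta

# Toy usage: regression data corrupted by heavy-tailed (mixture-like) noise.
X = np.random.randn(500, 5)
t = np.sin(X[:, 0]) + 0.1 * np.random.randn(500)
t[::20] += 5 * np.random.randn(25)                     # inject outliers
W, b, beta = robust_elm(X, t)
```

    Samples that the mixture assigns to the high-variance component receive small precision weights, so outliers contribute little to the weighted least-squares update of beta.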

    Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines

    In the past decade, deep learning techniques have powered many aspects of our daily life and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), the restricted Boltzmann machine (RBM), and the convolutional neural network (CNN), suffer from a time-consuming training process due to the fine-tuning of a large number of parameters and their complicated hierarchical structures. This complexity also makes it difficult to theoretically analyze and prove the universal approximation capability of these approaches. To tackle these issues, multilayer extreme learning machines (ML-ELMs) were proposed, which have accelerated the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast owing to their random feature mapping mechanism. In this paper, we present a thorough review of the development of ML-ELMs, including the stacked ELM autoencoder (ELM-AE), the residual ELM, and the local-receptive-field-based ELM (ELM-LRF), and survey their applications. In addition, we discuss the connection between random neural networks and conventional deep learning.
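
    As a concrete illustration of the stacked ELM-AE construction mentioned above, the sketch below builds features layer by layer: each ELM autoencoder uses a random hidden mapping, solves reconstruction weights in closed form, and the transpose of those weights serves as the next layer's transform. Function names, layer sizes, and the synthetic data are assumptions for illustration, not a full ML-ELM implementation.

```python
import numpy as np

def elm_autoencoder_layer(X, n_hidden, reg=1e-3, rng=None):
    """One ELM autoencoder: random hidden mapping, then solve output weights
    beta so that H @ beta reconstructs the input X."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return beta

def ml_elm_features(X, layer_sizes=(256, 128, 64), reg=1e-3, rng=0):
    """Stack ELM-AE layers: each layer projects its input through the
    transposed reconstruction weights, followed by a tanh nonlinearity.
    No iterative fine-tuning is performed at any stage."""
    feats = X
    for i, n_hidden in enumerate(layer_sizes):
        beta = elm_autoencoder_layer(feats, n_hidden, reg=reg, rng=rng + i)
        feats = np.tanh(feats @ beta.T)    # non-iterative layer-wise encoding
    return feats

# Toy usage: compress 100-dimensional inputs to a 64-dimensional representation;
# a final ELM classifier or regressor would then be trained on these features.
X = np.random.randn(1000, 100)
Z = ml_elm_features(X)
print(Z.shape)  # (1000, 64)
```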

    Kernel-Based Multilayer Extreme Learning Machines for Representation Learning

    No full text