1 research output found

    Representation learning by hierarchical ELM auto‐encoder with double random hidden layers

    No full text
    Recent developments of the extreme learning machine (ELM) with multilayer network architectures have led to promising performance with extremely fast training for representation learning. In this work, the authors develop an efficient and expressive representation learning method based on hierarchical ELM, proposing a novel architectural unit named the double random hidden layers ELM auto‐encoder (DELM‐AE). A DELM‐AE consists of one input layer, two random hidden mapping layers for feature encoding, and one output layer for feature decoding. Stacking DELM‐AE units hierarchically yields an H‐DELM model, in which each unit takes as input the feature representation learned by the previous unit, while its decoding target remains the original input data rather than that representation. The H‐DELM can therefore reproduce the original input as faithfully as possible and learn more expressive and compact features. The authors validate the method on a variety of widely used public datasets; the results demonstrate that H‐DELM brings significant improvements in classification accuracy and robustness over existing multilayer ELM variants and other deep learning algorithms, at only a slight additional computational cost.
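    The stacked architecture described in the abstract can be sketched roughly as follows. This is a minimal NumPy sketch under assumptions the abstract does not specify: tanh activations, ridge‐regularised output weights solved in closed form, and the hidden activations passed forward as the learned representation. The function names (`delm_ae`, `h_delm_features`) and all layer sizes are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def delm_ae(X, target, n_hidden, reg=1e-3, seed=0):
        """One DELM-AE unit: two random hidden mapping layers encode X,
        and the output weights beta are solved in closed form so that
        the decoder H @ beta reconstructs `target`."""
        rng = np.random.default_rng(seed)
        W1 = rng.standard_normal((X.shape[1], n_hidden))
        b1 = rng.standard_normal(n_hidden)
        W2 = rng.standard_normal((n_hidden, n_hidden))
        b2 = rng.standard_normal(n_hidden)
        # Double random hidden mapping (encoder); weights stay fixed.
        H = np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
        # Ridge-regularised least squares: (H^T H + reg*I) beta = H^T target
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ target)
        return H, beta

    def h_delm_features(X, layer_sizes, reg=1e-3, seed=0):
        """Stack DELM-AE units: each unit takes the previous representation
        as input, but always decodes back to the ORIGINAL input X
        (per the abstract, the decoding target is the original input)."""
        rep = X
        for i, n in enumerate(layer_sizes):
            H, beta = delm_ae(rep, X, n, reg, seed=seed + i)
            # Pass the hidden activations forward as the next unit's input
            # (an assumption; the abstract does not fix this choice).
            rep = H
        return rep

    X = np.random.default_rng(42).standard_normal((64, 20))
    feats = h_delm_features(X, [32, 32])
    print(feats.shape)  # (64, 32)
    ```

    Because only the output weights are trained, each unit reduces to one linear solve, which is what gives the method its fast training speed relative to gradient-based deep auto‐encoders.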