
    RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning Machines for Robustness Improvement

    Extreme learning machine (ELM), an emerging branch of shallow networks, has shown excellent generalization and fast learning speed. However, for blended data the robustness of ELM is weak, because the weights and biases of its hidden nodes are set randomly; noisy data exert a further negative effect. To solve this problem, a new framework called RMSE-ELM is proposed in this paper. It is a two-layer recursive model. In the first layer, the framework concurrently trains many ELMs in different groups and then employs selective ensemble to pick out an optimal subset of ELMs in each group; the selected subsets are merged into a large group of ELMs called the candidate pool. In the second layer, selective ensemble is applied recursively to the candidate pool to obtain the final ensemble. In the experiments, we use blended UCI datasets to confirm the robustness of the new approach in two key respects (mean square error and standard deviation). The space complexity of the method increases to some degree, but the results show that RMSE-ELM significantly improves robustness with only slightly higher computational time than representative methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It is a promising framework for addressing the robustness issue of ELM on high-dimensional blended data.
    Comment: Accepted for publication in Mathematical Problems in Engineering, 09/22/201
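    The two-layer selection procedure can be illustrated with a short sketch. The paper's selection step is GASEN-style (genetic-algorithm weighting); the sketch below substitutes a simple greedy forward selection on a validation set, and all sizes (groups, ELMs per group, hidden nodes) and the toy data are assumed values, not the paper's setup.

```python
# Illustrative sketch of a two-layer recursive selective ensemble of ELMs
# (greedy selection stands in for the paper's GASEN-style selection).
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=50):
    """One ELM: random hidden weights/biases, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def selective_ensemble(models, X_val, y_val):
    """Greedily add the ELM that most lowers the validation MSE of the
    ensemble average; stop as soon as no remaining ELM helps."""
    preds = [predict_elm(m, X_val) for m in models]
    chosen, best_mse = [], np.inf
    while True:
        pick = None
        for i in range(len(models)):
            if i in chosen:
                continue
            avg = np.mean([preds[j] for j in chosen + [i]], axis=0)
            mse = np.mean((avg - y_val) ** 2)
            if mse < best_mse:
                best_mse, pick = mse, i
        if pick is None:
            return [models[i] for i in chosen]
        chosen.append(pick)

# Toy "blended" data: smooth signal plus heavy-tailed noise.
X = rng.uniform(-1, 1, size=(400, 5))
y = np.sin(X).sum(axis=1) + 0.3 * rng.standard_t(df=2, size=400)
X_tr, y_tr, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

# Layer 1: train groups of ELMs, select within each group -> candidate pool.
pool = []
for _ in range(4):                                    # 4 groups of 10 ELMs
    group = [train_elm(X_tr, y_tr) for _ in range(10)]
    pool += selective_ensemble(group, X_val, y_val)

# Layer 2: recursive selective ensemble over the candidate pool.
final = selective_ensemble(pool, X_val, y_val)
y_hat = np.mean([predict_elm(m, X_val) for m in final], axis=0)
print("final ensemble size:", len(final), "validation MSE:", np.mean((y_hat - y_val) ** 2))
```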

    Robust transceiver design for MIMO relay systems with Tomlinson-Harashima precoding

    In this paper we consider a robust transceiver design for two-hop non-regenerative multiple-input multiple-output (MIMO) relay networks with imperfect channel state information (CSI). The transceiver consists of Tomlinson-Harashima precoding (THP) at the source, a linear precoder at the relay and linear equalisation at the destination. Under the assumption that each node in the network can acquire statistical knowledge of the channel in the form of a channel mean and an estimation error covariance, we optimise the processors to minimise the expected arithmetic mean square error (MSE) subject to transmission power constraints at the source and relay. Simulation results demonstrate the robustness of the proposed transceiver design to channel estimation errors.
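    In generic form (with illustrative notation rather than the paper's exact symbols), the robust design is an expected-MSE minimization in which each hop's channel is modelled as a known mean plus a zero-mean estimation error, subject to per-node power constraints:

```latex
% Hedged sketch: H_i = \bar{H}_i + \Delta_i (i = 1, 2) are the source-relay and
% relay-destination channels, written as channel mean plus estimation error.
\begin{aligned}
\min_{\mathbf{B},\,\mathbf{F}_s,\,\mathbf{F}_r,\,\mathbf{W}} \quad
  & \mathbb{E}_{\Delta_1,\Delta_2}\!\left[ \|\hat{\mathbf{v}} - \mathbf{v}\|^2 \right] \\
\text{s.t.} \quad
  & \operatorname{tr}\!\left(\mathbf{F}_s \mathbf{R}_x \mathbf{F}_s^{H}\right) \le P_s ,
  \qquad
  \operatorname{tr}\!\left(\mathbf{F}_r \mathbf{R}_r \mathbf{F}_r^{H}\right) \le P_r ,
\end{aligned}
```

    Here B would denote the THP feedback matrix at the source, F_s and F_r the source and relay linear precoders, W the destination equaliser, v the data vector and \hat{v} its estimate after the receive-side modulo operation, with R_x and R_r the covariances of the precoded source signal and of the relay input.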

    LOT: Layer-wise Orthogonal Training on Improving ℓ2 Certified Robustness

    Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints can enhance adversarial robustness and other model properties such as stability. In this paper, we propose a layer-wise orthogonal training method (LOT) to effectively train 1-Lipschitz convolution layers by parametrizing an orthogonal matrix with an unconstrained matrix. We then efficiently compute the inverse square root of a convolution kernel by transforming the input domain to the Fourier frequency domain. In addition, as existing work shows that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models. We conduct comprehensive evaluations of LOT under different settings. We show that LOT significantly outperforms baselines on deterministic ℓ2 certified robustness and scales to deeper neural networks. In the supervised scenario, we improve the state-of-the-art certified robustness for all architectures (e.g. from 59.04% to 63.50% on CIFAR-10 and from 32.57% to 34.59% on CIFAR-100 at radius ρ = 36/255 for 40-layer networks). With semi-supervised learning over unlabelled data, we further improve the state-of-the-art certified robustness on CIFAR-10 at ρ = 108/255 from 36.04% to 42.39%. In addition, LOT consistently outperforms baselines on different model architectures with only 1/3 of the evaluation time.
    Comment: NeurIPS 202
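    The core parametrization can be sketched in a few lines. For a plain (non-convolutional) weight matrix it amounts to mapping an unconstrained matrix V to the orthogonal matrix V(VᵀV)^(-1/2); the dense eigendecomposition and matrix size used below are illustrative simplifications of the paper's per-frequency, Newton-iteration computation for convolution kernels.

```python
# Illustrative sketch of the orthogonal parametrization behind LOT:
# an unconstrained matrix V is mapped to Q = V (V^T V)^{-1/2}, so the
# layer x -> Qx is exactly 1-Lipschitz (it preserves the l2 norm).
import numpy as np

def orthogonalize(V):
    """Return Q = V (V^T V)^{-1/2} (orthogonal whenever V has full rank)."""
    S = V.T @ V                                # symmetric positive definite
    w, U = np.linalg.eigh(S)                   # S = U diag(w) U^T
    return V @ (U @ np.diag(w ** -0.5) @ U.T)  # V S^{-1/2}

rng = np.random.default_rng(0)
V = rng.standard_normal((8, 8))                # unconstrained parameters
Q = orthogonalize(V)

print(np.allclose(Q.T @ Q, np.eye(8)))         # Q is orthogonal
x = rng.standard_normal(8)
print(np.linalg.norm(Q @ x), np.linalg.norm(x))  # the map preserves the norm
```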

    Performance Measure of Hierarchical Structures for Multi-agent Systems

    This paper investigates the robustness of linear consensus networks designed under a hierarchical scheme based on the Cartesian product of graphs. For the robustness analysis, the consensus networks are subjected to additive white Gaussian noise. To quantify robustness, we use the H2-norm: the square root of the expected steady-state dispersion of the network states. We compare several classes of undirected and directed graph topologies and show that hierarchical structures designed under the Cartesian-product-based hierarchy outperform single-layer structures in terms of robustness. We provide simulations to support the analytical results presented in this paper.
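    A small numerical sketch of this kind of comparison: build one flat graph and one Cartesian-product hierarchy with the same number of nodes and compare an H2-type dispersion measure computed from the Laplacian spectrum. The specific graphs and the 1/(2N) normalization below are illustrative assumptions, not necessarily the paper's exact setup.

```python
# Illustrative comparison: H2-type robustness of a single-layer ring versus a
# two-layer hierarchy built as the Cartesian product of two smaller rings.
# The measure sqrt((1/(2N)) * sum_{i>=2} 1/lambda_i(L)) is one common
# normalization of the steady-state dispersion of a noise-driven consensus
# network; the paper's exact normalization may differ.
import numpy as np
import networkx as nx

def h2_measure(G):
    lam = np.sort(nx.laplacian_spectrum(G))     # Laplacian eigenvalues, ascending
    return np.sqrt(np.sum(1.0 / lam[1:]) / (2 * G.number_of_nodes()))

flat = nx.cycle_graph(36)                       # single-layer ring, 36 agents
hier = nx.cartesian_product(nx.cycle_graph(6),  # Cartesian-product hierarchy,
                            nx.cycle_graph(6))  # also 36 agents
print("single layer :", h2_measure(flat))
print("hierarchical :", h2_measure(hier))
```

    A convenient property behind such comparisons is that the Laplacian eigenvalues of a Cartesian product are exactly the pairwise sums of the factors' eigenvalues, which makes the hierarchical measure easy to analyse.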

    Robustness of correlated networks against propagating attacks

    We investigate the robustness of correlated networks against propagating attacks modeled by a susceptible-infected-removed (SIR) model. By Monte Carlo simulations, we numerically determine the first critical infection rate, above which a global outbreak of disease occurs, and the second critical infection rate, above which the disease disintegrates the network. Our results show that correlated networks are more robust than uncorrelated ones, regardless of whether they are assortative or disassortative, when the fraction of initially infected nodes is not too large. For a large initial fraction, the disassortative network becomes fragile while the assortative network remains robust. This behavior is related to the layered network structure inevitably generated by the rewiring procedure we adopt to realize the correlated networks.
    Comment: 6 pages, 13 figures
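    The kind of experiment described here can be sketched as follows: run a discrete-time SIR process on a network, treat the removed nodes as destroyed, and measure both the outbreak size and the largest surviving component. The network model (an uncorrelated Barabási-Albert graph), infection probabilities, and initial fraction are assumed values, not the paper's rewired correlated networks or settings.

```python
# Illustrative Monte-Carlo sketch of a propagating attack: discrete-time SIR,
# then check how much of the network survives once removed nodes are deleted.
import random
import networkx as nx

def sir_attack(G, beta, init_frac=0.01, seed=0):
    rng = random.Random(seed)
    nodes = list(G)
    infected = set(rng.sample(nodes, max(1, int(init_frac * len(nodes)))))
    removed = set()
    while infected:
        new_inf = set()
        for u in infected:
            for v in G[u]:                  # try to infect each neighbour
                if v not in infected and v not in removed and rng.random() < beta:
                    new_inf.add(v)
        removed |= infected                 # every infected node is removed after one step
        infected = new_inf
    survivors = G.subgraph(set(nodes) - removed)
    giant = max((len(c) for c in nx.connected_components(survivors)), default=0)
    return len(removed) / len(nodes), giant / len(nodes)

G = nx.barabasi_albert_graph(2000, 3, seed=0)
for beta in (0.05, 0.2, 0.5):
    outbreak, giant = sir_attack(G, beta)
    print(f"beta={beta:.2f}  outbreak={outbreak:.2f}  surviving giant={giant:.2f}")
```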