12 research outputs found

    Efficient distributed machine learning via combinatorial multi-armed bandits

    We consider the distributed stochastic gradient descent problem, where a main node distributes gradient calculations among n workers, of which at most b ≤ n can be utilized in parallel. By assigning tasks to all the workers and waiting only for the k fastest ones, the main node can trade off the error of the algorithm against its runtime by gradually increasing k as the algorithm evolves. However, this strategy, referred to as adaptive k-sync, can incur additional costs since it ignores the computational effort of slow workers. We propose a cost-efficient scheme that assigns tasks to only k workers and gradually increases k. As the response times of the available workers are unknown to the main node a priori, we use a combinatorial multi-armed bandit model to learn which workers are the fastest while assigning gradient calculations, and to minimize the effect of slow workers. Assuming that the workers' response times are independent and exponentially distributed with different means, we give empirical and theoretical guarantees on the regret of our strategy, i.e., the extra time spent learning the mean response times of the workers. Compared to adaptive k-sync, our scheme achieves significantly lower errors with the same computational effort, at the cost of a longer runtime.
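
    As a rough illustration of the bandit component (not the authors' algorithm), the Python sketch below pairs a CUCB-style lower-confidence-bound index over estimated mean response times with a gradually growing k; all sizes, constants, the schedule for k, and the form of the confidence bonus are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)

        n, b = 10, 8                          # hypothetical: n workers, at most b usable in parallel
        mu = rng.uniform(0.5, 3.0, size=n)    # true mean response times (unknown to the main node)
        T = 2000                              # number of gradient rounds

        counts = np.zeros(n)                  # tasks assigned to each worker so far
        mean_est = np.zeros(n)                # empirical mean response time per worker

        def pick_fastest(t, k):
            """CUCB-style rule: rank workers by an optimistic (lower) confidence
            bound on their mean response time and return the k most promising."""
            bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1))
            lcb = np.where(counts > 0, mean_est - bonus, -np.inf)  # unexplored workers first
            return np.argsort(lcb)[:k]

        for t in range(1, T + 1):
            k = min(1 + t // 250, b)               # gradually increase k as SGD evolves
            chosen = pick_fastest(t, k)
            obs = rng.exponential(mu[chosen])      # exponential response times, per the model
            for i, x in zip(chosen, obs):          # incremental empirical-mean update
                counts[i] += 1
                mean_est[i] += (x - mean_est[i]) / counts[i]

        print("true fastest three:   ", np.argsort(mu)[:3])
        print("learned fastest three:", np.argsort(mean_est)[:3])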

    Column Rank Distances of Rank Metric Convolutional Codes

    In this paper, we deal with so-called multi-shot network coding, in which the network is used several times (shots) to propagate the information. The framework we present is slightly more general than the one found in the literature. We introduce and study the notion of column rank distance of rank metric convolutional codes for any given rate and finite field. Within this new framework, we generalize previous results on column distances of Hamming and rank metric convolutional codes [3, 8]. This contribution can be considered a follow-up to the work presented in [10].
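
    For orientation, the column rank distance can be sketched by analogy with the classical j-th column distance of convolutional codes; the display below is a hedged reconstruction, and the notation is an assumption rather than taken from the paper.

        % Hedged sketch: j-th column rank distance of a rank metric
        % convolutional code C, by analogy with the classical j-th
        % column distance; notation is an assumption.
        \[
          d_j^{c}(\mathcal{C}) \;=\;
          \min\Bigl\{\, \sum_{i=0}^{j} \operatorname{rank}(v_i) \;:\;
            v = (v_0, v_1, v_2, \dots) \in \mathcal{C},\ v_0 \neq 0 \Bigr\},
        \]
        where each block $v_i \in \mathbb{F}_{q^m}^{n}$ is expanded to an
        $m \times n$ matrix over $\mathbb{F}_q$ and weighted by its rank;
        replacing $\operatorname{rank}(v_i)$ with the Hamming weight
        $\mathrm{wt}(v_i)$ recovers the classical column distance.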

    Decoding of interleaved alternant codes

    Weak Keys in the Faure-Loidreau Cryptosystem

    We present some types of weak keys in the Faure-Loidreau (FL) cryptosystem and show that, from such weak keys, the private key can be reconstructed with a computational effort substantially lower than the claimed security level. The proposed key-recovery attack is based on ideas from generalized minimum distance (GMD) decoding for rank-metric codes.
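
    As background on the technique the attack borrows from, the sketch below shows the classical Hamming-metric GMD decoding loop (the paper transfers the idea to rank-metric codes, where erasures act on row and column spaces instead of positions). The ee_decode interface and all names are hypothetical, introduced only for this sketch.

        def gmd_decode(received, reliabilities, ee_decode, d):
            """Classical GMD loop (Hamming metric): run a hypothetical
            error-erasure decoder ee_decode(received, erasures) with
            0, 2, 4, ... of the least reliable positions erased, and
            keep the candidate codeword closest to the received word."""
            # sort positions from least to most reliable
            order = sorted(range(len(received)), key=lambda i: reliabilities[i])
            candidates = []
            for e in range(0, d, 2):  # success whenever 2*errors + erasures < d
                cw = ee_decode(received, set(order[:e]))
                if cw is not None:
                    candidates.append(cw)
            if not candidates:
                return None  # decoding failure
            return min(candidates,
                       key=lambda c: sum(a != b for a, b in zip(c, received)))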