
    Self-generated Self-similar Traffic

    Self-similarity in network traffic has been studied from several aspects: there are many sources of long-range dependence both at the user side and at the network side. Recently, some dynamical origins have also been identified: the TCP adaptive congestion avoidance algorithm itself can produce chaotic and long-range dependent throughput behavior if the loss rate is very high. In this paper we show that there is a close connection between the static and dynamic origins of self-similarity: parallel TCPs can generate self-similarity themselves; they can introduce heavy fluctuations into the background traffic and produce a high effective loss rate, causing a long-range dependent TCP flow even though the dropped-packet ratio is low.

    Comment: 8 pages, 12 Postscript figures, accepted in Nonlinear Phenomena in Complex System
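
    The long-range dependence discussed in the abstract is usually quantified by the Hurst exponent of a throughput trace. Below is a minimal sketch of the aggregated-variance estimator, one standard way to measure this; the function name, block sizes, and the synthetic trace are illustrative assumptions, not the authors' code or data.

    ```python
    import numpy as np

    def hurst_aggregated_variance(series, block_sizes=(4, 8, 16, 32, 64, 128)):
        """Estimate the Hurst exponent via the aggregated-variance method.

        For a long-range dependent series, the variance of the block means
        decays as m^(2H - 2); H > 0.5 indicates long-range dependence.
        """
        series = np.asarray(series, dtype=float)
        log_m, log_var = [], []
        for m in block_sizes:
            n_blocks = len(series) // m
            if n_blocks < 2:
                continue
            # Mean of each non-overlapping block of length m
            blocks = series[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            log_m.append(np.log(m))
            log_var.append(np.log(blocks.var()))
        slope, _ = np.polyfit(log_m, log_var, 1)  # slope = 2H - 2
        return 1.0 + slope / 2.0

    # Illustrative usage on a synthetic, short-range dependent trace:
    rng = np.random.default_rng(0)
    throughput = rng.normal(size=65536)  # white noise, so H should be near 0.5
    print(f"Estimated H = {hurst_aggregated_variance(throughput):.2f}")
    ```

    A value of H significantly above 0.5 on a real per-interval throughput trace would indicate the kind of long-range dependent TCP flow the abstract describes.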

    Modeling Scalability of Distributed Machine Learning

    Present day machine learning is computationally intensive and processes large amounts of data. It is implemented in a distributed fashion in order to address these scalability issues. The work is parallelized across a number of computing nodes. It is usually hard to estimate in advance how many nodes to use for a particular workload. We propose a simple framework for estimating the scalability of distributed machine learning algorithms. We measure the scalability by means of the speedup an algorithm achieves with more nodes. We propose time complexity models for gradient descent and graphical model inference. We validate our models with experiments on deep learning training and belief propagation. This framework was used to study the scalability of machine learning algorithms in Apache Spark.Comment: 6 pages, 4 figures, appears at ICDE 201
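
    The speedup-based scalability measure in the abstract can be illustrated with a toy time-complexity model: total time is a serial term, plus a parallel term divided by the node count, plus a communication term that grows with the number of nodes. The functional form and all constants below are assumptions chosen only to show the shape of such a model, not the paper's actual models.

    ```python
    def training_time(nodes, t_serial=5.0, t_parallel=1000.0, t_comm_per_node=0.5):
        """Toy per-epoch time model for distributed training.

        total = serial work
              + parallel work split evenly across nodes
              + communication cost that grows with the number of nodes.
        All constants are hypothetical, for illustration only.
        """
        return t_serial + t_parallel / nodes + t_comm_per_node * nodes

    def speedup(nodes, **kwargs):
        """Speedup relative to running on a single node."""
        return training_time(1, **kwargs) / training_time(nodes, **kwargs)

    if __name__ == "__main__":
        for n in (1, 2, 4, 8, 16, 32, 64):
            print(f"{n:3d} nodes -> speedup {speedup(n):5.2f}")
    ```

    With these hypothetical constants the speedup grows quickly at first, then saturates and eventually declines as the communication term dominates, which is the kind of behavior a node-count estimate would be based on.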