Throughput and Latency in Finite-Buffer Line Networks
This work investigates the effect of finite buffer sizes on the throughput
capacity and packet delay of line networks with packet erasure links that have
perfect feedback. These performance measures are shown to be linked to the
stationary distribution of an underlying irreducible Markov chain that models
the system exactly. Using simple strategies, bounds on the throughput capacity
are derived. The work then presents two iterative schemes to approximate the
steady-state distribution of node occupancies by decoupling the chain to
smaller queueing blocks. These approximate solutions are used to understand the
effect of buffer sizes on throughput capacity and the distribution of packet
delay. Using the exact modeling for line networks, it is shown that the
throughput capacity is unaltered in the absence of hop-by-hop feedback provided
packet-level network coding is allowed. Finally, using simulations, it is
confirmed that the proposed framework yields accurate estimates of the
throughput capacity and delay distribution and captures the vital trends and
tradeoffs in these networks.
Comment: 19 pages, 14 figures, accepted in IEEE Transactions on Information Theory
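The abstract above links throughput to the stationary behavior of a Markov chain over node occupancies. As a minimal illustration (not the paper's exact model), the following sketch simulates a two-hop line network with a finite buffer at the relay, erasure links, and perfect hop-by-hop feedback, and estimates throughput by Monte Carlo; the slot ordering (relay transmits before it receives) is an assumption of this sketch.

```python
import random

def simulate_line_network(eps1, eps2, buffer_size, steps=200000, seed=1):
    """Monte Carlo sketch of a two-hop line network S -> R -> D with
    erasure probabilities eps1, eps2, a finite buffer at relay R, and
    perfect feedback (S only releases a packet on successful receipt)."""
    random.seed(seed)
    occupancy = 0          # packets currently queued at the relay R
    delivered = 0
    for _ in range(steps):
        # Second hop transmits first within a slot (R -> D).
        sent_second = occupancy > 0 and random.random() > eps2
        # First hop succeeds only if the link works AND R has room;
        # a departure in the same slot frees one buffer slot.
        room = buffer_size - occupancy + (1 if sent_second else 0)
        got_first = room > 0 and random.random() > eps1
        if sent_second:
            occupancy -= 1
            delivered += 1
        if got_first:
            occupancy += 1
    return delivered / steps

tp = simulate_line_network(0.2, 0.3, buffer_size=2)
cap = min(1 - 0.2, 1 - 0.3)   # infinite-buffer (min-cut) capacity
```

As the buffer grows, the estimated throughput approaches the min-cut capacity from below, which is the qualitative tradeoff the abstract studies.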
Comparative Study Of Congestion Control Techniques In High Speed Networks
Network congestion occurs when aggregate demand exceeds the available capacity
of the resources. Congestion worsens as network speeds increase, so new,
effective congestion control methods are needed, especially to handle the
bursty traffic of today's very high-speed networks. Since the late 1990s,
numerous schemes, e.g., [1]...[10], have been proposed. This paper presents a
comparative study of different congestion control schemes based on key
performance metrics. An effort has been made to judge the performance of a
Maximum Entropy (ME) based solution for steady-state GE/GE/1/N censored queues
with a partial buffer sharing scheme against these key performance metrics.
Comment: 10 pages, IEEE format, International Journal of Computer Science and
Information Security, IJCSIS November 2009, ISSN 1947-5500,
http://sites.google.com/site/ijcsis
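The partial buffer sharing (PBS) scheme mentioned above admits low-priority traffic only while the queue is below a threshold, reserving the remaining space for high-priority packets. The sketch below is a simplified discrete-time simulation of that admission rule (Bernoulli arrivals and service, not the GE/GE/1/N model of the paper); the parameters and two-class split are illustrative assumptions.

```python
import random

def pbs_queue(p_arr1, p_arr2, p_srv, N, threshold, steps=200000, seed=7):
    """Discrete-time sketch of a single-server finite queue (capacity N)
    with partial buffer sharing: class-1 packets are admitted while the
    queue holds fewer than N packets, class-2 only below `threshold`.
    Returns the class-2 blocking probability."""
    random.seed(seed)
    q = 0
    blocked2 = offered2 = 0
    for _ in range(steps):
        if q > 0 and random.random() < p_srv:
            q -= 1                      # one service completion per slot
        if random.random() < p_arr1 and q < N:
            q += 1                      # class 1: full buffer available
        if random.random() < p_arr2:
            offered2 += 1
            if q < threshold:
                q += 1                  # class 2: admitted below threshold
            else:
                blocked2 += 1           # class 2: censored (blocked)
    return blocked2 / max(offered2, 1)
```

Lowering the threshold trades higher class-2 blocking for better protection of class-1 traffic, which is the metric tradeoff such comparative studies evaluate.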
QoE-Based Low-Delay Live Streaming Using Throughput Predictions
Recently, HTTP-based adaptive streaming has become the de facto standard for
video streaming over the Internet. It allows clients to dynamically adapt media
characteristics to network conditions in order to ensure a high quality of
experience, that is, minimize playback interruptions, while maximizing video
quality at a reasonable level of quality changes. In the case of live
streaming, this task becomes particularly challenging due to the latency
constraints. The challenge further increases if a client uses a wireless
network, where the throughput is subject to considerable fluctuations.
Consequently, live streams often exhibit latencies of up to 30 seconds. In the
present work, we introduce an adaptation algorithm for HTTP-based live
streaming called LOLYPOP (Low-Latency Prediction-Based Adaptation) that is
designed to operate with a transport latency of a few seconds. To reach this
goal, LOLYPOP leverages TCP throughput predictions on multiple time scales,
from 1 to 10 seconds, along with an estimate of the prediction error
distribution. In addition to satisfying the latency constraint, the algorithm
heuristically maximizes the quality of experience by maximizing the average
video quality as a function of the number of skipped segments and quality
transitions. In order to select an efficient prediction method, we studied the
performance of several time series prediction methods in IEEE 802.11 wireless
access networks. We evaluated LOLYPOP under a large set of experimental
conditions limiting the transport latency to 3 seconds, against a
state-of-the-art adaptation algorithm from the literature, called FESTIVE. We
observed that the average video quality is up to a factor of 3 higher than
with FESTIVE. We also observed that LOLYPOP is able to reach a broader region
in the quality of experience space, and thus it is better adjustable to the
user profile or service provider requirements.
Comment: Technical Report TKN-16-001, Telecommunication Networks Group,
Technische Universitaet Berlin. This TR updates TR TKN-15-00
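The core idea above is to combine a throughput prediction with an estimate of its error distribution when picking a segment bitrate under a latency budget. The sketch below is not the LOLYPOP algorithm itself, but a minimal stand-in: a moving-average predictor, an error margin from past one-step residuals, and a greedy bitrate choice; the window size and the one-sigma back-off are assumptions of this sketch.

```python
import statistics

def predict_throughput(samples, window=5):
    """Moving-average throughput prediction over the last `window`
    samples (an illustrative stand-in for per-time-scale predictors)."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def select_bitrate(samples, bitrates, segment_dur, latency_budget):
    """Pick the highest bitrate whose predicted download time fits the
    latency budget, backed off by an estimated prediction error."""
    pred = predict_throughput(samples)
    # Estimate the prediction error from past one-step-ahead residuals.
    residuals = [samples[i] - predict_throughput(samples[:i])
                 for i in range(5, len(samples))]
    margin = statistics.pstdev(residuals) if residuals else 0.0
    safe = max(pred - margin, 0.0)
    # A segment of b Mbit/s and d seconds needs b*d/safe seconds to fetch.
    feasible = [b for b in bitrates
                if b * segment_dur / max(safe, 1e-9) <= latency_budget]
    return max(feasible) if feasible else min(bitrates)
```

For example, with a steady 5 Mbit/s history, one-second segments, and a one-second budget, the selector picks the highest bitrate below the predicted throughput rather than the top rung, which is how skipped segments are avoided at the cost of some quality headroom.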
Transformations of High-Level Synthesis Codes for High-Performance Computing
Specialized hardware architectures promise a major step in performance and
energy efficiency over the traditional load/store devices currently employed in
large scale computing systems. The adoption of high-level synthesis (HLS) from
languages such as C/C++ and OpenCL has greatly increased programmer
productivity when designing for such platforms. While this has enabled a wider
audience to target specialized hardware, the optimization principles known from
traditional software design are no longer sufficient to implement
high-performance codes. Fast and efficient codes for reconfigurable platforms
are thus still challenging to design. To alleviate this, we present a set of
optimizing transformations for HLS, targeting scalable and efficient
architectures for high-performance computing (HPC) applications. Our work
provides a toolbox for developers, where we systematically identify classes of
transformations, the characteristics of their effect on the HLS code and the
resulting hardware (e.g., increases data reuse or resource consumption), and
the objectives that each transformation can target (e.g., resolve interface
contention, or increase parallelism). We show how these can be used to
efficiently exploit pipelining, on-chip distributed fast memory, and on-chip
streaming dataflow, allowing for massively parallel architectures. To quantify
the effect of our transformations, we use them to optimize a set of
throughput-oriented FPGA kernels, demonstrating that our enhancements are
sufficient to scale up parallelism within the hardware constraints. With the
transformations covered, we hope to establish a common framework for
performance engineers, compiler developers, and hardware developers, to tap
into the performance potential offered by specialized hardware architectures
using HLS.
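One of the transformation classes named above increases on-chip data reuse. The software sketch below illustrates that effect in Python rather than in HLS C/C++: a naive 3-point stencil re-reads each neighbor from "external memory" every iteration, while the transformed version keeps a small shift register (the analogue of an on-chip line buffer) so each input is read exactly once; the read counters are an illustrative device, not an HLS cost model.

```python
def stencil_naive(x):
    """3-point stencil that re-reads each neighbor on every iteration:
    three 'external memory' reads per output."""
    reads = 0
    y = []
    for i in range(1, len(x) - 1):
        a, b, c = x[i - 1], x[i], x[i + 1]
        reads += 3
        y.append(a + b + c)
    return y, reads

def stencil_reuse(x):
    """Same stencil after a data-reuse transformation: a small shift
    register keeps the last two inputs, so each element of x is read
    from memory exactly once."""
    reads = 0
    y = []
    window = [x[0], x[1]]   # warm up the shift register
    reads += 2
    for i in range(2, len(x)):
        window.append(x[i])
        reads += 1
        y.append(sum(window))
        window.pop(0)       # shift the oldest value out
    return y, reads
```

Both versions compute identical outputs, but the reuse version cuts external reads from 3(n-2) to n, which is the kind of memory-bandwidth saving that lets an HLS pipeline sustain one result per cycle.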