Transition to High-Speed Networks — SuperJANET Experience
Trials to establish the Information Superhighway are currently booming. In Britain, JANET has provided wide-area computer communication and has recently been upgraded to SuperJANET, increasing throughput by a factor of five to 10 Mb/s, with some sites having PDH access at n × 34 Mb/s. In this paper, the technological changes are addressed from a user perspective. A multimedia communication-based distance-learning project on SuperJANET is introduced, and the network performance measurements for this project are presented. These measurements suggest employing a reservation protocol and packet scheduling. We also provide a mechanism for on-the-fly playback of continuous media.
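On-the-fly playback of continuous media as described above typically relies on a playout buffer that absorbs network jitter before playback starts. A minimal sketch of that idea follows; the start threshold and class interface are illustrative assumptions, not taken from the paper.

```python
from collections import deque

class PlayoutBuffer:
    """Sketch of a playout buffer for on-the-fly continuous-media playback:
    hold arriving frames until a threshold fills, then drain steadily.
    The threshold value is illustrative, not the paper's parameter."""

    def __init__(self, start_threshold=5):
        self.q = deque()
        self.start_threshold = start_threshold
        self.playing = False

    def arrive(self, frame):
        """Buffer a frame arriving from the network."""
        self.q.append(frame)
        if not self.playing and len(self.q) >= self.start_threshold:
            self.playing = True  # enough buffered to absorb jitter

    def next_frame(self):
        """Return the next frame for playback, or None on underrun."""
        if self.playing and self.q:
            return self.q.popleft()
        return None  # stall: not started yet, or buffer ran dry
```

The trade-off is the usual one: a larger threshold tolerates more jitter at the cost of a longer startup delay.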
Performance of Bursty World Wide Web (WWW) Sources over ABR
We model World Wide Web (WWW) servers and clients running over an ATM network
using the ABR (available bit rate) service. The WWW servers are modeled using a
variant of the SPECweb96 benchmark, while the WWW clients are based on a model
by Mah. The traffic generated by this application is typically bursty, i.e., it
has active and idle periods in transmission. A timeout occurs after a given
amount of idle time. During idle periods, the underlying TCP congestion windows
remain open until the timeout expires. These open windows may be used to send
data in a burst when the application becomes active again. This raises the
possibility of large switch queues if the source rates are not controlled by
ABR. We study this problem and show that ABR scales well with a large number of
bursty TCP sources in the system.
Comment: Submitted to WebNet `97, Toronto, November 9
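The on/off source behavior described above can be sketched as a simple timeline of active transmission periods separated by idle gaps. The exponential distributions and mean values below are illustrative assumptions, not the paper's SPECweb96- or Mah-derived parameters.

```python
import random

def burst_schedule(n_bursts, mean_active=0.5, mean_idle=2.0, seed=42):
    """Generate (start_time, duration) pairs for a bursty on/off source.

    Each burst of mean length `mean_active` seconds is followed by an
    idle gap of mean length `mean_idle` seconds (both exponential here,
    as a stand-in for the benchmark's actual distributions).
    """
    rng = random.Random(seed)
    t = 0.0
    periods = []
    for _ in range(n_bursts):
        active = rng.expovariate(1.0 / mean_active)
        periods.append((t, active))
        t += active + rng.expovariate(1.0 / mean_idle)  # idle gap before next burst
    return periods
```

Feeding many such independent sources into one queue shows why uncontrolled open windows can cause large bursts to collide at a switch, which is the scenario ABR rate control addresses.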
Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library
Remote data access for data analysis in high performance computing is
commonly done with specialized data access protocols and storage systems. These
protocols are highly optimized for high throughput on very large datasets,
multi-streams, high availability, low latency and efficient parallel I/O. The
purpose of this paper is to describe how we have adapted a generic protocol,
the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative
for high performance I/O and data analysis applications in a global computing
grid: the Worldwide LHC Computing Grid. In this work, we first analyze the
design differences between the HTTP protocol and the most common high
performance I/O protocols, pointing out the main performance weaknesses of
HTTP. Then, we describe in detail how we solved these issues. Our solutions
have been implemented in a toolkit called davix, available through several
recent Linux distributions. Finally, we describe the results of our benchmarks
where we compare the performance of davix against an HPC-specific protocol for
a data analysis use case.
Comment: Presented at: Very Large Data Bases (VLDB) 2014, Hangzho
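One generic HTTP feature that makes partial reads of very large remote datasets practical is the Range header, which lets a client fetch several byte ranges in one request instead of paying a round trip per chunk. The helper below is a hedged sketch of that mechanism, not davix's actual API.

```python
def range_header(chunks):
    """Build an HTTP Range header for a list of (offset, length) pairs.

    A single multi-range request replaces many small round trips, which
    matters for latency-sensitive parallel I/O over wide-area links.
    Byte ranges in HTTP are inclusive, hence the `- 1`.
    """
    spec = ",".join(f"{off}-{off + length - 1}" for off, length in chunks)
    return {"Range": f"bytes={spec}"}
```

For example, `range_header([(0, 100), (4096, 512)])` yields a header requesting bytes 0-99 and 4096-4607 in one round trip.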
Implicit Simulations using Messaging Protocols
A novel algorithm for performing parallel, distributed computer simulations
on the Internet using IP control messages is introduced. The algorithm employs
carefully constructed ICMP packets which enable the required computations to be
completed as part of the standard IP communication protocol. After providing a
detailed description of the algorithm, experimental applications in the areas
of stochastic neural networks and deterministic cellular automata are
discussed. As an example of the algorithm's potential power, a simulation of a
deterministic cellular automaton involving 10^5 Internet-connected devices was
performed.
Comment: 14 pages, 3 figures
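Carefully constructing ICMP packets, as the algorithm requires, means building the type/code/checksum header by hand. The sketch below assembles an ICMP echo request (type 8) with the standard Internet checksum; the payload encoding of simulation state is an illustrative assumption, not the paper's wire format.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum (RFC 1071): one's-complement sum of
    16-bit words, with odd-length data zero-padded."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_icmp_echo(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP echo request (type 8, code 0); the payload could
    carry simulation state, as assumed here for illustration."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0 first
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A useful sanity check: the Internet checksum of a packet that already contains its correct checksum field is 0, so `icmp_checksum(build_icmp_echo(...))` should return 0.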
Developing a Python library for Estonian open data portal's API
https://www.ester.ee/record=b552007
Wormhole - An Active HTTP Tunnel
Browsing the World Wide Web over a high-latency network connection is frustrating. We propose an "active"
HTTP tunnel, Wormhole, to significantly reduce webpage load times in such a scenario.
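One way an "active" tunnel endpoint could cut round trips on a high-latency link is to parse a fetched page server-side and push its embedded resources before the client asks for them. The function below is a hedged sketch of that prefetch step (a naive regex scan, not Wormhole's actual mechanism).

```python
import re

def embedded_resources(html: str) -> list:
    """Collect src/href URLs that a tunnel endpoint could prefetch
    alongside the page, saving one round trip per resource.
    A real implementation would use an HTML parser, not a regex."""
    return re.findall(r'(?:src|href)="([^"]+)"', html)
```

For example, `embedded_resources('<img src="a.png"><link href="s.css">')` returns `["a.png", "s.css"]`, the candidate prefetch list.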