Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library
Remote data access for data analysis in high performance computing is
commonly done with specialized data access protocols and storage systems. These
protocols are highly optimized for high throughput on very large datasets,
multi-stream transfers, high availability, low latency and efficient parallel I/O. The
purpose of this paper is to describe how we have adapted a generic protocol,
the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative
for high performance I/O and data analysis applications in a global computing
grid: the Worldwide LHC Computing Grid. In this work, we first analyze the
design differences between the HTTP protocol and the most common high
performance I/O protocols, pointing out the main performance weaknesses of
HTTP. Then, we describe in detail how we solved these issues. Our solutions
have been implemented in a toolkit called davix, available through several
recent Linux distributions. Finally, we describe the results of our benchmarks
where we compare the performance of davix against an HPC-specific protocol for a
data analysis use case.

Comment: Presented at: Very Large Data Bases (VLDB) 2014, Hangzhou
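A key HTTP weakness the abstract alludes to is the lack of built-in multi-stream parallel I/O. A minimal sketch of how a client can emulate it with HTTP Range requests; the `plan_ranges` helper is purely illustrative and is not part of the davix API:

```python
def plan_ranges(total_size: int, n_streams: int):
    """Split the byte range [0, total_size) into contiguous chunks,
    one per parallel stream; each chunk maps to one HTTP Range request."""
    chunk = total_size // n_streams
    ranges = []
    start = 0
    for i in range(n_streams):
        # Last stream absorbs the remainder so all bytes are covered.
        end = total_size - 1 if i == n_streams - 1 else start + chunk - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# Each tuple becomes a "Range: bytes=a-b" header, fetched concurrently.
headers = [f"bytes={a}-{b}" for a, b in plan_ranges(1_000_000, 4)]
```

Issuing these ranges over several connections is one common way a generic HTTP client can approach the multi-stream throughput of specialized HPC protocols.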
Resilient availability and bandwidth-aware multipath provisioning for media transfer over the internet (Best Paper Award)
Traditional routing in the Internet is best-effort. Path differentiation, including multipath routing, is a promising technique for meeting the QoS requirements of media-intensive applications. Since different paths have different characteristics in terms of latency, availability and bandwidth, they offer flexibility in QoS and congestion control. Additionally, protection techniques can be used to enhance the reliability of the network.
This paper studies the problem of how to optimally find paths ensuring maximal bandwidth and resiliency of media transfer over the network. In particular, we propose two algorithms to reserve network paths with minimal new resources while increasing the availability of the paths and enabling congestion control. The first algorithm is based on Integer Linear Programming which minimizes the cost of the paths and the used resources. The second one is a heuristic-based algorithm which solves the scalability limitations of the ILP approach. The algorithms ensure resiliency against any single link failure in the network.
The experimental results indicate that with the proposed schemes, connection availability improves significantly and a more balanced network load is achieved compared to shortest-path-based approaches.
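The single-link-failure resiliency described above rests on finding pairs of link-disjoint paths. A minimal sketch of the simplest such approach, prune-and-reroute: find a primary path, delete its links, search again. The paper's ILP and heuristic are more elaborate; note this naive version can miss disjoint pairs that Suurballe's algorithm would find.

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path (fewest hops); graph maps node -> set of neighbours."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # dst unreachable

def disjoint_pair(graph, src, dst):
    """Primary path plus a backup path sharing no links with it."""
    primary = shortest_path(graph, src, dst)
    if primary is None:
        return None, None
    # Remove the primary path's links (both directions), then reroute.
    pruned = {u: set(vs) for u, vs in graph.items()}
    for a, b in zip(primary, primary[1:]):
        pruned.get(a, set()).discard(b)
        pruned.get(b, set()).discard(a)
    return primary, shortest_path(pruned, src, dst)
```

If both returned paths exist, the connection survives any single link failure, since no link belongs to both.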
Enabling Disaster Resilient 4G Mobile Communication Networks
The 4G Long Term Evolution (LTE) is the cellular technology expected to
outperform the previous generations and to some extent revolutionize the
experience of the users by taking advantage of the most advanced radio access
techniques (i.e. OFDMA, SC-FDMA, MIMO). However, the strong dependencies
between user equipments (UEs), base stations (eNBs) and the Evolved Packet Core
(EPC) limit the flexibility, manageability and resiliency in such networks. In
case the communication links between UEs-eNB or eNB-EPC are disrupted, UEs are
in fact unable to communicate. In this article, we reshape the 4G mobile
network to move towards more virtual and distributed architectures for
improving disaster resilience, drastically reducing the dependency between UEs,
eNBs and EPC. The contribution of this work is twofold. We firstly present the
Flexible Management Entity (FME), a distributed entity which leverages on
virtualized EPC functionalities in 4G cellular systems. Second, we introduce a
simple and novel device-to-device (D2D) communication scheme allowing the UEs in
physical proximity to communicate directly without coordination with an eNB.

Comment: Submitted to IEEE Communications Magazine
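The failover behaviour the abstract describes can be caricatured as a path-selection rule: use the normal eNB/EPC path while it is intact, fall back to virtualized core functions when only the backhaul fails, and fall back to direct D2D when the eNB itself is unreachable. All names below are hypothetical illustrations, not from the paper:

```python
def choose_path(enb_link_up: bool, epc_link_up: bool,
                peer_in_proximity: bool) -> str:
    """Pick a delivery path for UE traffic under link failures
    (illustrative sketch of the FME/D2D fallback idea)."""
    if enb_link_up and epc_link_up:
        return "via-eNB-EPC"   # normal 4G operation
    if enb_link_up:
        return "via-FME"       # eNB reachable, EPC cut off: use local virtualized core
    if peer_in_proximity:
        return "d2d-direct"    # eNB down: direct device-to-device delivery
    return "unreachable"
```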