Exploiting Traffic Balancing and Multicast Efficiency in Distributed Video-on-Demand Architectures
Distributed Video-on-Demand (DVoD) systems have been proposed as a
solution to the limited streaming capacity and lack of scalability of centralized
systems. In a previous work, we proposed a fully distributed large-scale VoD
architecture, called Double P-Tree, which has shown itself to be a good approach
to the design of flexible and scalable DVoD systems. In this paper, we
present relevant design aspects related to video mapping and traffic balancing in
order to improve Double P-Tree architecture performance. Our simulation results
demonstrate that these techniques yield a more efficient system and considerably
increase its streaming capacity. The results also show the crucial importance
of topology connectivity in improving multicasting performance in
DVoD systems. Finally, a comparison among several DVoD architectures was
performed using simulation, and the results show that the Double P-Tree architecture
incorporating mapping and load balancing policies outperforms similar
DVoD architectures. This work was supported by the MCyT-Spain under contract TIC 2001-2592 and partially supported by the Generalitat de Catalunya - Grup de Recerca Consolidat 2001SGR-00218.
Enabling scalability by partitioning virtual environments using frontier sets
We present a class of partitioning schemes that we call frontier sets. Frontier sets build on the notion of a potentially visible set (PVS). In a PVS, a world is subdivided into cells, and for each cell, all the other cells that can be seen from it are computed. In contrast, a frontier set considers pairs of cells, A and B. For each pair, it lists two sets of cells (two frontiers), FAB and FBA. By definition, no cell in FBA is visible from any cell in FAB, and vice versa.
Our initial use of frontier sets has been to enable scalability in distributed networking. This is possible because, for example, if at time t0 Player1 is in cell A and Player2 is in cell B, as long as they stay in their respective frontiers, they do not need to send update information to each other.
In this paper we describe two strategies for building frontier sets. Both strategies are dynamic, computing frontiers only as necessary at runtime. The first is distance-based frontiers, which requires precomputation of an enhanced potentially visible set. The second is greedy frontiers, which is more expensive to compute at runtime but leads to larger and thus more efficient frontiers.
Network simulations using code based on the Quake II engine show that frontiers have significant promise and may allow a new class of scalable peer-to-peer game infrastructures to emerge.
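The update-suppression idea behind frontier sets can be sketched in a few lines. This is a minimal illustration only, assuming a precomputed frontier table; the `frontiers` dict, the cell names, and `needs_update` are hypothetical and not part of the paper's implementation.

```python
# Hypothetical precomputed frontier table for the cell pair (A, B):
# frontiers[(X, Y)] is F_XY, the set of cells near X from which no cell
# in F_YX is visible. While each player stays inside its own frontier,
# neither can see the other, so state updates can be suppressed.
frontiers = {
    ("A", "B"): {"A", "C"},  # F_AB
    ("B", "A"): {"B", "D"},  # F_BA
}

def needs_update(cell1, cell2, pair=("A", "B")):
    """Return True if the two players must exchange state updates."""
    f_ab = frontiers.get(pair, set())
    f_ba = frontiers.get((pair[1], pair[0]), set())
    # Updates are unnecessary only while both players remain inside
    # their respective frontiers; any exit forces an update.
    return not (cell1 in f_ab and cell2 in f_ba)

# Player1 in "C" (inside F_AB), Player2 in "D" (inside F_BA):
print(needs_update("C", "D"))  # False: mutually invisible, no traffic
print(needs_update("C", "E"))  # True: Player2 has left its frontier
```

In a networked game, each peer would run this check against its partner's last known cell and resume sending updates the moment either player leaves its frontier.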
LIBER's involvement in supporting digital preservation in member libraries
Digital curation and preservation represent new challenges for universities. LIBER
has invested considerable effort to engage with the new agendas of digital preservation
and digital curation. Through two successful phases of the LIFE project, LIBER
is breaking new ground in identifying innovative models for costing digital curation
and preservation. Through LIFE’s input into the US-UK Blue Ribbon Task Force on
Sustainable Digital Preservation and Access, LIBER is aligned with major international
work in the economics of digital preservation. In its emerging new strategy and
structures, LIBER will continue to make substantial contributions in this area, mindful
of the needs of European research libraries.
Crux: Locality-Preserving Distributed Services
Distributed systems achieve scalability by distributing load across many
machines, but wide-area deployments can introduce worst-case response latencies
proportional to the network's diameter. Crux is a general framework to build
locality-preserving distributed systems, by transforming an existing scalable
distributed algorithm A into a new locality-preserving algorithm ALP, which
guarantees for any two clients u and v interacting via ALP that their
interactions exhibit worst-case response latencies proportional to the network
latency between u and v. Crux builds on compact-routing theory, but generalizes
these techniques beyond routing applications. Crux provides weak and strong
consistency flavors, and shows latency improvements for localized interactions
in both cases, specifically up to several orders of magnitude for
weakly-consistent Crux (from roughly 900ms to 1ms). We deployed
locality-preserving versions of a Memcached distributed cache, a Bamboo
distributed hash table, and a Redis publish/subscribe service on PlanetLab. Our results indicate
that Crux is effective and applicable to a variety of existing distributed
algorithms.