Mode-Suppression: A Simple, Stable and Scalable Chunk-Sharing Algorithm for P2P Networks
The ability of a P2P network to scale its throughput up in proportion to the
arrival rate of peers has recently been shown to be crucially dependent on the
chunk sharing policy employed. Some policies can result in low frequencies of a
particular chunk, known as the missing chunk syndrome, which can dramatically
reduce throughput and lead to instability of the system. For instance, commonly used policies that nominally "boost" the sharing of infrequent chunks, such as the well-known rarest-first algorithm, have been shown to be unstable. Recent
efforts have largely focused on the careful design of boosting policies to
mitigate this issue. We take a complementary viewpoint, and instead consider a
policy that simply prevents the sharing of the most frequent chunk(s).
Following terminology from statistics wherein the most frequent value in a data
set is called the mode, we refer to this policy as mode-suppression. We also
consider a more general version that suppresses the mode only if the mode
frequency is larger than the lowest frequency by a fixed threshold. We prove
the stability of mode-suppression using Lyapunov techniques, and use a Kingman
bound argument to show that the total download time does not increase with peer
arrival rate. We then design versions of mode-suppression that sample a small
number of peers at each time, and construct noisy mode estimates by aggregating
these samples over time. We show numerically that the variants of
mode-suppression yield near-optimal download times, and outperform all other
recently proposed chunk sharing algorithms.
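The threshold rule described above admits a compact sketch. The snippet below is illustrative only: the function name, the counts, and the threshold value are hypothetical, and the papers' actual policy additionally covers sampling-based noisy mode estimates, which are omitted here.

```python
from collections import Counter

def shareable_chunks(chunk_counts, threshold=0):
    """Return the chunk ids a peer may share under threshold
    mode-suppression: the most frequent chunk(s) (the mode) are
    withheld whenever the mode frequency exceeds the lowest
    frequency by more than `threshold`."""
    max_freq = max(chunk_counts.values())
    min_freq = min(chunk_counts.values())
    if max_freq - min_freq <= threshold:
        return set(chunk_counts)  # frequencies balanced: no suppression
    # Suppress every chunk whose frequency equals the mode frequency.
    return {c for c, f in chunk_counts.items() if f < max_freq}

# Example: chunk 0 is far more replicated than chunk 2, so it is withheld.
counts = Counter({0: 40, 1: 25, 2: 5})
print(shareable_chunks(counts, threshold=10))  # {1, 2}
```

With `threshold=0` this reduces to plain mode-suppression, where the mode is always withheld unless all chunk frequencies are equal.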
A New Stable Peer-to-Peer Protocol with Non-persistent Peers
Recent studies have suggested that the stability of peer-to-peer networks may
rely on persistent peers, who dwell on the network after they obtain the entire
file. In the absence of such peers, one piece becomes extremely rare in the
network, which leads to instability. Technological developments, however, are
poised to reduce the incidence of persistent peers, giving rise to a need for a
protocol that guarantees stability with non-persistent peers. We propose a
novel peer-to-peer protocol, the group suppression protocol, to ensure the
stability of peer-to-peer networks under the scenario that all the peers adopt
non-persistent behavior. Using a suitable Lyapunov potential function, the
group suppression protocol is proven to be stable when the file is broken into
two pieces, and detailed experiments demonstrate the stability of the protocol
for an arbitrary number of pieces. We define and simulate a decentralized version
of this protocol for practical applications. Straightforward incorporation of
the group suppression protocol into BitTorrent while retaining most of
BitTorrent's core mechanisms is also presented. Subsequent simulations show
that under certain assumptions, BitTorrent with the official protocol cannot
escape from the missing piece syndrome, but BitTorrent with group suppression
does.
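One way to picture the mechanism, sketched for illustration only: the grouping rule and the suppression decision below are our assumptions, not the paper's exact protocol. Peers are grouped by the set of pieces they hold, and uploads to members of the single largest group are suppressed so that rarer pieces can catch up.

```python
from collections import Counter

def largest_group(peer_pieces):
    """Group peers by their piece profile (the frozenset of pieces they
    hold) and return the profile of the largest such group -- the group
    whose members would be denied service under a group-suppression-style
    rule (illustrative reading, not the paper's precise definition)."""
    groups = Counter(frozenset(p) for p in peer_pieces.values())
    profile, _size = groups.most_common(1)[0]
    return profile

def may_upload_to(receiver_pieces, peer_pieces):
    """A peer may upload to a receiver unless the receiver belongs to
    the largest group (hypothetical suppression decision)."""
    return frozenset(receiver_pieces) != largest_group(peer_pieces)

# Three peers hold only piece 0; they form the largest group and are suppressed.
peers = {"a": {0}, "b": {0}, "c": {0}, "d": {0, 1}}
print(may_upload_to({0}, peers))     # False
print(may_upload_to({0, 1}, peers))  # True
```

The intent of such a rule is to starve the over-represented profile of service, pushing its members to diversify before the rare piece disappears.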
Spatial Interactions of Peers and Performance of File Sharing Systems
We propose a new model for peer-to-peer networking that takes network bottlenecks beyond the access links into account. This model allows one to cope with
key features of P2P networking like degree or locality constraints or the fact
that distant peers often have a smaller rate than nearby peers. We show that
the spatial point process describing peers in their steady state then exhibits
an interesting repulsion phenomenon. We analyze two asymptotic regimes of the
peer-to-peer network: the fluid regime and the hard-core regime. We get closed
form expressions for the mean (and in some cases the law) of the peer latency, the download rate obtained by a peer, and the spatial density of peers in the steady state of each regime, together with an accurate approximation that holds across all regimes. The analytical results are based on a mix of
mathematical analysis and dimensional analysis and have important design
implications. The first of them is the existence of a setting where the
equilibrium mean latency is a decreasing function of the load, a phenomenon
that we call super-scalability. (Research report no. RR-7713, 2012.)
Pricing and Equilibrium Analysis of Network Market Systems
Markets have been the most successful method of identifying the value of goods and services.
Both large and small scale markets have gradually been moving into the Internet domain,
with increasingly large numbers of diverse participants. In this dissertation, we consider several
problems pertaining to equilibria in networked marketplaces under different application
scenarios and market sizes. We approach the question of pricing and market design from two
perspectives. On the one hand, we seek to understand how self-interested market participants set prices and respond to them, and which allocations result. On the other
hand, we wish to evaluate how best to allocate resources so as to attain efficient equilibria.
There might be a gap between these viewpoints, and characterizing this gap is desirable.
Our technical approach is guided by the number of market participants and the nature of the trades in the market. In our first problem, we consider a market for communication services at the level of Internet transit. Here, the transit Internet
Service Provider (ISP) must determine billing volumes and set prices for its customers who
are firms that are content providers, sinks, or subsidiary ISPs. Demand from these customers
is variable, and they have different impacts on the resources that the transit ISP needs to
provision. Using measured data from several networks, we design a fair and flexible billing
scheme that correctly identifies the impact of each customer on the amount of provisioning
needed.
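As a concrete point of reference (an assumption on our part, not a detail taken from the dissertation): transit billing volumes are conventionally derived from a percentile rule over 5-minute traffic samples, which can be sketched as:

```python
import math

def percentile_bill(samples_mbps, q=0.95):
    """Return the billing volume under the classic q-th percentile rule:
    sort the 5-minute traffic samples, discard the top (1 - q) fraction,
    and bill at the highest remaining sample (nearest-rank method)."""
    ordered = sorted(samples_mbps)
    rank = math.ceil(q * len(ordered)) - 1  # 0-based index of the q-th percentile
    return ordered[rank]

# 10 samples; at q = 0.9 the single 900 Mbps burst is discarded.
samples = [100, 120, 110, 130, 900, 115, 105, 125, 118, 122]
print(percentile_bill(samples, q=0.9))  # 130
```

A fairness question such a rule raises, and which motivates the kind of scheme described above, is that two customers with the same percentile volume can impose very different provisioning costs depending on when their bursts occur.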
While the customer set in the first problem is finite, many marketplaces deal with a very
large number of agents that each have ephemeral lifetimes. Here, agents arrive, participate in
the market for some time, and then vanish. We consider two markets in this regime.
The first is one of apps on mobile devices that compete against each other for cellular data
service, while the second is on service marketplaces wherein many providers compete with
each other for jobs that consider both prices and provider reputations while making choices
between them. Our goal is to show that a Mean Field Game can be used to accurately
approximate these systems, determine how prices are set, and characterize the nature of
equilibria in such markets.
Finally, we consider efficiency metrics in large scale resource sharing networks in which
bilateral exchange of resources is the norm. In particular, we consider peer-to-peer (P2P)
file sharing under which peers obtain chunks of a file from each other. Here, contrary to
the intuition that chunks must be shared whenever one peer has one of value to another, we
show that a measure of suppression is needed to utilize resources efficiently. In particular, we
propose a simple and stable algorithm called mode suppression, which attains near-optimal file sharing times by disallowing the sharing of the most frequent chunks in the system.