Internet's Critical Path Horizon
The Internet is known to display a highly heterogeneous structure and complex
fluctuations in its traffic dynamics. Congestion appears to be an inevitable
result of user behavior coupled to the network dynamics, and its effects should
be minimized by choosing appropriate routing strategies. But what routing depth
is required in order to optimize the traffic flow? In this paper we analyse the
behavior of Internet traffic on a topologically realistic spatial structure, as
described in a previous study (S.-H. Yook et al., Proc. Natl Acad. Sci. USA 99
(2002) 13382). The model involves self-regulation of packet generation and
different levels of routing depth. It is shown to reproduce the key statistical
features of Internet traffic. Moreover, we also report the existence of a
critical path horizon defining a transition from low-efficiency traffic to
highly efficient flow. This transition is a direct consequence of the web's
small-world architecture exploited by the routing algorithm: once routing
tables reach the network diameter, traffic experiences a sudden transition from
low-efficiency to high-efficiency behavior. It is conjectured that routing
policies might have spontaneously reached such a compromise in a distributed
manner; the Internet would thus be operating close to this critical path
horizon.
Comment: 8 pages, 8 figures. To appear in European Journal of Physics B (2004)
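The critical path horizon idea, that routing tables covering paths up to the network diameter are enough for efficient flow, can be illustrated on a toy small-world graph. A minimal sketch (not the paper's model; the graph construction and all parameters are illustrative assumptions) computes the fraction of node pairs that a horizon-limited routing table could serve:

```python
import random
from collections import deque

def small_world_graph(n, k=2, p=0.1, seed=42):
    """Ring lattice of n nodes, each linked to its k nearest neighbors
    on one side, plus random shortcuts added with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):       # ring edges keep the graph connected
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
        if rng.random() < p:            # occasional long-range shortcut
            j = rng.randrange(n)
            if j != i:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def reachable_fraction(adj, horizon):
    """Fraction of ordered node pairs whose shortest path is <= horizon,
    i.e. the pairs a horizon-limited routing table can serve."""
    n = len(adj)
    served = 0
    for src in adj:                     # bounded BFS from every source
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            if dist[u] == horizon:      # do not expand past the horizon
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        served += len(dist) - 1
    return served / (n * (n - 1))

adj = small_world_graph(200)
for h in (1, 2, 4, 8, 16):
    print(h, round(reachable_fraction(adj, h), 3))
```

Because shortest paths in a small-world graph are short, the served fraction climbs steeply with the horizon and saturates at 1.0 once the horizon reaches the diameter, mirroring the sharp transition the abstract describes.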
Net neutrality discourses: comparing advocacy and regulatory arguments in the United States and the United Kingdom
Telecommunications policy issues rarely make news, much less mobilize thousands of people. Yet this has been occurring in the United States around efforts to introduce "Net neutrality" regulation. A similar grassroots mobilization has not developed in the United Kingdom or elsewhere in Europe. We develop a comparative analysis of U.S. and UK Net neutrality debates with an eye toward identifying the arguments for and against regulation, how those arguments differ between the countries, and what the implications of those differences are for the Internet. Drawing on mass media, advocacy, and regulatory discourses, we find that local regulatory precedents as well as cultural factors contribute to both agenda setting and framing of Net neutrality. The differences between national discourses provide a way to understand both the structural differences between regulatory cultures and the substantive differences between policy interpretations, both of which must be reconciled for the Internet to continue to thrive as a global medium.
On Time Synchronization Issues in Time-Sensitive Networks with Regulators and Nonideal Clocks
Flow reshaping is used in time-sensitive networks (as in the context of IEEE
TSN and IETF Detnet) in order to reduce burstiness inside the network and to
support the computation of guaranteed latency bounds. This is performed using
per-flow regulators (such as the Token Bucket Filter) or interleaved regulators
(as with IEEE TSN Asynchronous Traffic Shaping). Both types of regulators are
beneficial as they cancel the increase of burstiness due to multiplexing inside
the network. It was demonstrated, by using network calculus, that they do not
increase the worst-case latency. However, the properties of regulators were
established assuming that time is perfect in all network nodes. In reality,
nodes use local, imperfect clocks. Time-sensitive networks exist in two
flavours: (1) in non-synchronized networks, local clocks run independently at
every node and their deviations are not controlled and (2) in synchronized
networks, the deviations of local clocks are kept within very small bounds
using, for example, a synchronization protocol (such as PTP) or a
satellite-based geo-positioning system (such as GPS). We revisit the properties of regulators
in both cases. In non-synchronized networks, we show that ignoring the timing
inaccuracies can lead to network instability due to unbounded delay in per-flow
or interleaved regulators. We propose and analyze two methods (rate and burst
cascade, and asynchronous dual arrival-curve method) for avoiding this problem.
In synchronized networks, we show that there is no instability with per-flow
regulators but, surprisingly, interleaved regulators can lead to instability.
To establish these results, we develop a new framework that captures industrial
requirements on clocks in both non-synchronized and synchronized networks, and
we develop a toolbox that extends network calculus to account for clock
imperfections.
Comment: ACM SIGMETRICS 2020, Boston, Massachusetts, USA, June 8-12, 2020
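The per-flow token-bucket regulator discussed above can be sketched in a few lines. This is a minimal illustration (not the paper's model; the 0.1% drift and all parameters are arbitrary assumptions) of how a slow local clock delays even conformant packets, since the regulator measures elapsed time on its own imperfect clock:

```python
from collections import deque

class TokenBucketRegulator:
    """Minimal per-flow regulator: releases the head-of-line packet only
    when the bucket holds enough tokens (rate r, burst b). Time is read
    from an injected clock, so a drifting local clock can be simulated."""

    def __init__(self, rate, burst, clock):
        self.rate, self.burst, self.clock = rate, burst, clock
        self.tokens = burst
        self.last = clock()
        self.queue = deque()

    def _refill(self):
        now = self.clock()
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now

    def try_release(self):
        """Return a packet size if conformant, else None (packet waits)."""
        self._refill()
        if self.queue and self.queue[0] <= self.tokens:
            self.tokens -= self.queue[0]
            return self.queue.popleft()
        return None

t = [0.0]                                  # "true" time, advanced by hand
clock = lambda: t[0] * 0.999               # local clock runs 0.1% slow
reg = TokenBucketRegulator(rate=100.0, burst=150.0, clock=clock)
reg.queue.extend([100, 100, 100])          # three 100-byte packets

released = []
while reg.queue:
    size = reg.try_release()
    if size is None:
        t[0] += 0.1                        # wait for tokens to accrue
    else:
        released.append((round(t[0], 1), size))
print(released)
```

The slow clock makes tokens accrue slightly more slowly than the advertised rate, so each queued packet is held a little longer than an ideal-clock regulator would hold it; the paper's point is that such deviations can accumulate and, in the interleaved case, even destabilize the network.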
Survey of End-to-End Mobile Network Measurement Testbeds, Tools, and Services
Mobile (cellular) networks enable innovation, but can also stifle it and lead
to user frustration when network performance falls below expectations. As
mobile networks become the predominant method of Internet access, developer,
research, network operator, and regulatory communities have taken an increased
interest in measuring end-to-end mobile network performance to, among other
goals, minimize negative impact on application responsiveness. In this survey
we examine current approaches to end-to-end mobile network performance
measurement, diagnosis, and application prototyping. We compare available tools
and their shortcomings with respect to the needs of researchers, developers,
regulators, and the public. We intend for this survey to provide a
comprehensive view of currently active efforts and some auspicious directions
for future work in mobile network measurement and mobile application
performance evaluation.
Comment: Submitted to IEEE Communications Surveys and Tutorials. arXiv does
not format the URL references correctly. For a correctly formatted version of
this paper go to
http://www.cs.montana.edu/mwittie/publications/Goel14Survey.pdf
FAIR: Forwarding Accountability for Internet Reputability
This paper presents FAIR, a forwarding accountability mechanism that
incentivizes ISPs to apply stricter security policies to their customers. The
Autonomous System (AS) of the receiver specifies a traffic profile that the
sender AS must adhere to. Transit ASes on the path mark packets. In case of
traffic profile violations, the marked packets are used as a proof of
misbehavior.
FAIR introduces low bandwidth overhead and requires no per-packet and no
per-flow state for forwarding. We describe integration with IP and demonstrate
a software switch running on commodity hardware that can switch packets at a
line rate of 120 Gbps, and can forward 140M minimum-sized packets per second,
limited by the hardware I/O subsystem.
Moreover, this paper proposes a "suspicious bit" for packet headers - an
application that builds on top of FAIR's proofs of misbehavior and flags
packets to warn other entities in the network.
Comment: 16 pages, 12 figures
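FAIR's profile check at a transit AS can be illustrated with a small sketch. All names here are hypothetical, and modeling the receiver-specified traffic profile as a token bucket is an assumption for illustration, not the paper's exact specification: packets exceeding the profile are marked rather than dropped, and the marks serve as the proof of misbehavior.

```python
from collections import namedtuple

Packet = namedtuple("Packet", "ts size marked")

def mark_violations(packets, rate, burst):
    """Transit-AS sketch: check a packet stream against the receiver's
    traffic profile (token bucket: rate bytes/s, burst bytes) and mark
    non-conformant packets instead of dropping them."""
    tokens, last = burst, 0.0
    out = []
    for p in packets:
        tokens = min(burst, tokens + rate * (p.ts - last))
        last = p.ts
        if p.size <= tokens:
            tokens -= p.size                     # conformant: consume tokens
            out.append(p._replace(marked=False))
        else:
            out.append(p._replace(marked=True))  # violation: mark as proof
    return out

# Three 800-byte packets; the second arrives too soon after the first.
stream = [Packet(0.0, 800, False), Packet(0.01, 800, False), Packet(1.0, 800, False)]
checked = mark_violations(stream, rate=1000.0, burst=1000.0)
print([p.marked for p in checked])
```

Marking rather than dropping matches the abstract's design: forwarding stays stateless per flow, while the receiver AS accumulates marked packets as evidence of the sender AS's profile violations.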
Characterizing and Improving the Reliability of Broadband Internet Access
In this paper, we empirically demonstrate the growing importance of
reliability by measuring its effect on user behavior. We present an approach
for broadband reliability characterization using data collected by many
emerging national initiatives to study broadband and apply it to the data
gathered by the Federal Communications Commission's Measuring Broadband America
project. Motivated by our findings, we present the design, implementation, and
evaluation of a practical approach for improving the reliability of broadband
Internet access with multihoming.
Comment: 15 pages, 14 figures, 6 tables