Migrating to IPv6 - The Role of Basic Coordination
The need for a larger Internet address space was acknowledged early on, and a solution (IPv6) was standardized years ago. Its adoption has, however, been anything but easy and still faces significant challenges. The situation raises two questions: why has the migration been so difficult, and what could have been (or could still be) done to facilitate it? There has been significant recent interest in those questions, and the paper builds on a line of work based on technology adoption models to explore them. The results confirm the impact of several known factors, but also provide new insight. In particular, they highlight the destabilizing effect of Internet Service Providers (ISPs) offering competing alternatives to IPv6, and demonstrate the benefits of even minimal coordination among ISPs in offering IPv6 as an option. The findings afford additional visibility into what affects technology transitions in large systems with complex dependencies, such as the Internet.
Migrating the Internet to IPv6: An Exploration of the When and Why
The paper documents and to some extent elucidates the progress of IPv6 across major Internet stakeholders since its introduction in the mid-1990s. IPv6 offered an early solution to a well-understood and well-documented problem IPv4 was expected to encounter. In spite of early standardization and awareness of the issue, the Internet's march to IPv6 has been anything but smooth, even if recent data point to an improvement. The paper documents this progression for several key Internet stakeholders using available measurement data, and identifies changes in the IPv6 ecosystem that may be partly responsible for how it has unfolded. The paper also develops a stylized model of IPv6 adoption across those stakeholders, and validates its qualitative predictive ability by comparing it to measurement data.
Fostering IPv6 Migration Through Network Quality Differentials
Although IPv6 has been the next-generation Internet protocol for nearly 15 years, new evidence indicates that transitioning from IPv4 to IPv6 is about to become a more pressing issue. This paper attempts to quantify whether and how such a transition may unfold. The focus is on connectivity quality, e.g., as measured by users' experience when accessing content, as a possible incentive (or disincentive) for migrating to IPv6, and on the translation costs (between IPv6 and IPv4) that Internet Service Providers will incur during this transition. The paper develops a simple model that captures some of the underlying interactions, and highlights the ambiguous role of translation gateways, which can either help or discourage IPv6 adoption. The paper is an initial foray into the complex and often puzzling issue of migrating the current Internet to a new version with which it is incompatible.
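The tension the abstract describes, between quality gains on native IPv6 paths and the cost of translation gateways for IPv4-only content, can be caricatured in a few lines. The function, its parameters, and the linear form below are illustrative assumptions for exposition, not the paper's actual model:

```python
def ipv6_net_benefit(frac_v6_content, q_v6, q_v4, translation_cost):
    """Toy per-user incentive for adopting IPv6 (an illustrative
    caricature, not the paper's formulation). Native IPv6 paths yield
    higher quality on the fraction of content reachable over IPv6;
    the remaining content must traverse a translation gateway, at a cost."""
    quality_gain = frac_v6_content * (q_v6 - q_v4)
    gateway_cost = (1.0 - frac_v6_content) * translation_cost
    return quality_gain - gateway_cost

# With little IPv6 content, gateway costs dominate and migration is
# unattractive; once most content is IPv6-reachable, the sign flips.
print(ipv6_net_benefit(0.1, 1.0, 0.8, 0.3) < 0)  # True
print(ipv6_net_benefit(0.9, 1.0, 0.8, 0.3) > 0)  # True
```

Even this toy version exhibits the ambiguity the paper highlights: the same gateway that makes early adoption tolerable also weakens the incentive to ever complete the migration.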
Controlling the Growth of Internet Routing Tables Through Market Mechanisms
The growth of core Internet routing tables has been such that it is now viewed as an impediment to the continued expansion of the Internet. The main culprit is multi-homing, which stems from sites' desire for greater reliability and diversity in connectivity. These locally rational decisions have a global impact on the Internet, and there is currently no mechanism to effectively control them. A number of technical solutions are being pursued, but this paper explores the use of a market mechanism instead. It formulates a model that accounts for sites' incentives and the impact their connectivity choices have on the size of routing tables, and introduces a pricing scheme that seeks to better reapportion the resulting costs. The model is solved for two configurations that capture different deployment realizations and stages. The solutions demonstrate the scheme's effectiveness in controlling the growth of Internet routing tables while improving the welfare of both sites and Internet Service Providers.
Exploring User-Provided Connectivity
Network services often exhibit positive and negative externalities that affect users' adoption decisions. One such service is user-provided connectivity, or UPC. The service offers an alternative to traditional infrastructure-based communication services by allowing users to share their home broadband connectivity with other users, thereby increasing everyone's access to connectivity. More users mean more connectivity alternatives, i.e., a positive externality, but also greater odds of having to share one's own connectivity, i.e., a negative externality. This tug of war between positive and negative externalities, together with the fact that the externalities often depend not just on how many but also on which users adopt, makes it difficult to predict the service's eventual success. Exploring this issue is the focus of this paper, which investigates not only when and why such services may be viable, but also how pricing can be used to effectively and practically realize them.
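The tug of war between the two externalities can be illustrated with a toy utility function. The functional forms and parameter values below are assumptions made purely for illustration, not the paper's model:

```python
def upc_utility(n_adopters, p_cover=0.02, share_cost=0.004):
    """Toy adoption utility for a UPC service (an illustrative sketch,
    not the paper's model). More adopters raise the chance of finding a
    shared hotspot nearby (positive externality, with diminishing
    returns), but also linearly raise the expected load on one's own
    connection (negative externality)."""
    coverage = 1.0 - (1.0 - p_cover) ** n_adopters  # P(some hotspot nearby)
    congestion = share_cost * n_adopters            # expected sharing burden
    return coverage - congestion

# The tug of war yields an interior sweet spot: utility first rises with
# adoption, then falls once congestion dominates coverage gains.
print(upc_utility(50) > upc_utility(1))    # True
print(upc_utility(400) < upc_utility(50))  # True
```

The non-monotone shape is what makes the service's viability hard to predict: whether adoption settles near the sweet spot depends on which users join and in what order, which is exactly the complication the abstract points to.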
The Impact of Reprovisioning on the Choice of Shared versus Dedicated Networks
As new network services emerge, questions arise about how to deploy them and on which networks. Although shared networks such as the Internet offer many advantages, combining heterogeneous services on the same network need not be the right answer, as it comes at the cost of increased complexity. Moreover, deploying new services on dedicated networks is becoming increasingly viable thanks to virtualization technologies. In this work, we introduce an analytical framework that lets Internet Service Providers explore the trade-offs between shared and dedicated network infrastructures. The framework accounts for factors such as demand uncertainty for new services, (dis)economies of scope in deployment and operational costs, and the extent to which new technologies allow dynamic (re)provisioning of resources in response to excess demand. The main contribution is the identification and quantification of dynamic (re)provisioning as a key factor in determining the preferred network infrastructure, i.e., shared or dedicated.
Progressive Neural Compression for Adaptive Image Offloading under Timing Constraints
IoT devices are increasingly the source of data for machine learning (ML)
applications running on edge servers. Data transmissions from devices to
servers are often over local wireless networks whose bandwidth is not just
limited but, more importantly, variable. Furthermore, in cyber-physical systems
interacting with the physical environment, image offloading is also commonly
subject to timing constraints. It is, therefore, important to develop an
adaptive approach that maximizes the inference performance of ML applications
under timing constraints and the resource constraints of IoT devices. In this
paper, we use image classification as our target application and propose
progressive neural compression (PNC) as an efficient solution to this problem.
Although neural compression has been used to compress images for different ML
applications, existing solutions often produce fixed-size outputs that are
unsuitable for timing-constrained offloading over variable bandwidth. To
address this limitation, we train a multi-objective rateless autoencoder that
optimizes for multiple compression rates via stochastic taildrop to create a
compression solution that produces features ordered according to their
importance to inference performance. Features are then transmitted in that
order based on available bandwidth, with classification ultimately performed
using the (sub)set of features received by the deadline. We demonstrate the
benefits of PNC over state-of-the-art neural compression approaches and
traditional compression methods on a testbed comprising an IoT device and an
edge server connected over a wireless network with varying bandwidth.Comment: IEEE the 44th Real-Time System Symposium (RTSS), 202
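The transmission side of the idea, sending importance-ordered features until the deadline, can be sketched in a few lines. The function names, the drastically simplified taildrop, and the 8-bits-per-feature figure are illustrative assumptions, not the paper's implementation:

```python
import random

def stochastic_taildrop(features):
    """Training-time taildrop, grossly simplified: keep a random prefix
    of the importance-ordered feature vector so the decoder learns to
    classify from any prefix length (illustrative, not the paper's loss)."""
    k = random.randint(1, len(features))
    return features[:k]

def transmit_until_deadline(features, bandwidth_bps, deadline_s, bits_per_feature=8):
    """Send features in importance order over a link of the given
    bandwidth; return the prefix that fits within the deadline."""
    budget_bits = bandwidth_bps * deadline_s
    n = min(len(features), int(budget_bits // bits_per_feature))
    return features[:n]

# Toy usage: 16 encoder features, most important first. At 64 bit/s and
# a 1 s deadline, only the first 8 features arrive in time; the
# classifier then runs on that prefix.
received = transmit_until_deadline(list(range(16)), bandwidth_bps=64, deadline_s=1.0)
print(len(received))  # 8
```

The key property is that any prefix of the feature stream is decodable, so the sender never has to predict the bandwidth in advance; it simply transmits until the deadline expires.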