Techniques of data prefetching, replication, and consistency in the Internet
The Internet has become a major infrastructure for information sharing in our daily life, and is indispensable to critical and large applications in industry, government, business, and education. Internet bandwidth (the network speed to transfer data) has increased dramatically; however, latency (the delay to physically access data) has decreased at a much slower pace. The rich bandwidth and lagging latency can be effectively coped with in Internet systems by three data management techniques: caching, replication, and prefetching. The focus of this dissertation is to address the latency problem in the Internet by utilizing the rich bandwidth and large storage capacity to efficiently prefetch data and significantly improve Web content caching performance, and by proposing and implementing scalable data consistency maintenance methods to handle Web address caching in the distributed Domain Name System (DNS) and massive data replication in peer-to-peer systems. While the DNS service is critical in the Internet, peer-to-peer data sharing is being accepted as an important Internet activity.

We have made three contributions in developing prefetching techniques. First, we have proposed an efficient data structure for maintaining Web access information, called popularity-based Prediction by Partial Matching (PB-PPM), in which data are placed and replaced guided by the popularity of Web accesses, so that only important and useful information is stored. PB-PPM greatly reduces the required storage space and improves prediction accuracy. Second, a major weakness in existing Web servers is that prefetching activities are scheduled independently of dynamically changing server workloads. Without proper control and coordination between the two kinds of activities, prefetching can negatively affect Web services and degrade Web access performance. To address this problem, we have developed a queuing model to characterize the interactions. Guided by the model, we have designed a coordination scheme that dynamically adjusts the prefetching aggressiveness in Web servers. This scheme not only prevents the Web servers from being overloaded, but also minimizes the average server response time. Finally, we have proposed a scheme that effectively coordinates the sharing of access information between proxy and Web servers. With the support of this scheme, the accuracy of prefetching decisions is significantly improved.

Regarding data consistency support for Internet caching and data replication, we have conducted three significant studies. First, we have developed a consistency support technique to maintain data consistency among replicas in structured P2P networks. Based on Pastry, an existing and popular P2P system, we have implemented this scheme and shown that it can effectively maintain consistency while preventing hot-spot and node-failure problems. Second, we have designed and implemented a DNS cache update protocol, called DNScup, to provide strong consistency for domain/IP mappings. Finally, we have developed a dynamic lease scheme to timely update the replicas in the Internet.
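The popularity-guided placement idea behind PB-PPM can be illustrated with a minimal sketch. This is an assumption-laden order-1 predictor, not the dissertation's actual data structure: the class name, the popularity threshold, and the replacement rule are all hypothetical.

```python
from collections import defaultdict

class PopularityPPM:
    """Hypothetical sketch of a popularity-guided order-1 PPM predictor:
    a transition (prev -> page) is recorded only when the context page
    is popular enough, bounding the stored access information."""

    def __init__(self, popularity_threshold=2):
        self.threshold = popularity_threshold
        self.page_counts = defaultdict(int)   # overall popularity of each page
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def record(self, page):
        """Observe one Web access in the request stream."""
        self.page_counts[page] += 1
        # Keep the transition only if the context page is popular enough.
        if self.prev is not None and self.page_counts[self.prev] >= self.threshold:
            self.transitions[self.prev][page] += 1
        self.prev = page

    def predict(self, page):
        """Return the most likely next page after `page`, or None."""
        nxt = self.transitions.get(page)
        if not nxt:
            return None
        return max(nxt, key=nxt.get)
```

A prefetcher would call `predict` on each access and fetch the returned page speculatively; unpopular contexts never consume table space.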
Supporting Bandwidth Guarantee and Mobility for Real-Time Applications on Wireless LANs
The proliferation of IEEE 802.11-based wireless LANs opens up avenues for the
creation of several tetherless, mobility-oriented services. Most of these
services, such as voice over WLAN and media streaming, generate delay- and
bandwidth-sensitive traffic. These traffic flows require undisrupted network
connectivity with some QoS guarantees. Unfortunately, there is no adequate
support built into these wireless LANs towards QoS provisioning. Further, the
network layer handoff latency incurred by mobile nodes in these wireless LANs
is too high for real-time applications to function properly. In this paper, we
describe a QoS mechanism, called Rether, to effectively support bandwidth
guarantee on wireless LANs. Rether is designed to support the current wireless
LAN technologies like 802.11b and 802.11a with a specific capability of being
tailored for QoS oriented technology like 802.11e. We also describe a
low-latency handoff mechanism which expedites network level handoff to provide
real-time applications with an added advantage of seamless mobility.

Comment: This paper integrates the QoS scheme published in MMCN 2002 with a
low-latency mobility scheme that appeared in IEEE JSAC May 2004. This paper
deals with both issues with a fresh perspective on new networking
technologies and standards such as 802.11
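A bandwidth guarantee of the kind Rether provides rests on admission control: a new real-time flow is only admitted if its reservation fits the remaining capacity. The sketch below is a generic illustration of that principle, not Rether's actual algorithm; the capacity figure and function names are assumptions.

```python
def admit(flows, requested_kbps, capacity_kbps=5500):
    """Hypothetical admission-control check for a Rether-style bandwidth
    reservation scheme: admit a new real-time flow only if the sum of
    existing reservations plus the new request fits within the usable
    WLAN capacity (the default capacity value is an assumption, not a
    figure from the paper)."""
    reserved = sum(flows.values())   # kbps already promised to admitted flows
    return reserved + requested_kbps <= capacity_kbps
```

Once a flow is admitted, the reservation table is what the scheduler enforces against; rejected flows fall back to best-effort service.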
Internet Protocol Version 6: Dead or Alive?
Internet Protocol (IP) is the narrow waist of the multilayered Internet
protocol stack, which defines the rules for data sent across networks. IPv4,
the fourth version of IP and the first commercially deployed (set by ARPANET
in 1983), uses 32-bit addresses and can support up to 2^32 devices. In April
2017, all Regional Internet Registries (RIRs) confirmed that IPv4 addresses are
exhausted and cannot be allocated anymore, implying that any new organization
requesting a block of Internet addresses will be allocated IPv6. This creates
interoperability, migration, and deployment troubles, and organizations have
therefore hesitated to use IPv6, borrowing IPv4 addresses from other large
organizations instead. With IPv4 no longer available and IPv6 still not widely
adopted after around 20 years, the question arises whether IPv6 will still be
accepted by the computing community or will soon reach end of life, with
better alternative protocols such as ID-based networks taking its place. This
paper claims that IPv6 has lost its deployment window and can be safely skipped
once new ID-based protocols are available, which not only have simple
interoperability, deployment, and migration guidelines but also provide advanced
features compared to IPv6. The paper provides answers to these questions
with a comprehensive comparison of IPv6 with its available alternatives and
the reasons for IPv6's failure in adoption. Finally, the paper declares IPv6 a
dead protocol and suggests using newer available protocols in the future.

Comment: 16:198:553 Rutgers CS Course Paper
Let Your CyberAlter Ego Share Information and Manage Spam
Almost all of us have multiple cyberspace identities, and these {\em
cyber}alter egos are networked together to form a vast cyberspace social
network. This network is distinct from the world-wide-web (WWW), which is
queried and mined to the tune of billions of dollars every day, and until
recently it has gone largely unexplored. Empirically, cyberspace social
networks have been found to possess many of the same complex features that
characterize their real-world counterparts, including scale-free degree distributions,
low diameter, and extensive connectivity. We show that these topological
features make the latent networks particularly suitable for explorations and
management via local-only messaging protocols. {\em Cyber}alter egos can
communicate via their direct links (i.e., using only their own address books)
and set up a highly decentralized and scalable message passing network that can
allow large-scale sharing of information and data. As one particular example of
such collaborative systems, we provide a design of a spam filtering system, and
our large-scale simulations show that the system achieves a spam detection rate
close to 100%, while the false positive rate is kept around zero. This system
has several advantages over other recent proposals: (i) it uses an already
existing network, created by the same social dynamics that govern our daily
lives, and no dedicated peer-to-peer (P2P) systems or centralized server-based
systems need be constructed; (ii) it utilizes a percolation search algorithm
that makes the query-generated traffic scalable; (iii) the network has a
built-in trust system (just as in social networks) that can be used to thwart
malicious attacks; (iv) it can be implemented right now as a plugin to popular
email programs, such as MS Outlook, Eudora, and Sendmail.

Comment: 13 pages, 10 figures
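The local-only messaging idea can be sketched as a bond-percolation query: a node forwards the question "have you flagged this digest as spam?" along each address-book link independently with probability p and tallies the votes. This is a simplified illustration under assumed names and parameters, not the paper's exact protocol.

```python
import random

def percolation_query(graph, start, digest, spam_sets, p=0.7, seed=0):
    """Hypothetical sketch of a percolation-style spam query: starting
    from `start`, forward the query over each social link independently
    with probability `p` (local address books only) and count how many
    reached nodes have already flagged `digest` as spam."""
    rng = random.Random(seed)
    visited, frontier, votes = {start}, [start], 0
    while frontier:
        node = frontier.pop()
        if digest in spam_sets.get(node, set()):
            votes += 1
        for neighbor in graph.get(node, []):
            # Each edge percolates independently with probability p.
            if neighbor not in visited and rng.random() < p:
                visited.add(neighbor)
                frontier.append(neighbor)
    return votes
```

With p above the percolation threshold of the scale-free social graph, a query reaches a large fraction of nodes while each node only ever talks to its own contacts.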
Computing infrastructure issues in distributed communications systems : a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, tele-conferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected by both communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors.

This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks.

A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.
The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions
In recent years, the current Internet has experienced an unexpected paradigm
shift in the usage model, which has pushed researchers towards the design of
the Information-Centric Networking (ICN) paradigm as a possible replacement of
the existing architecture. Even though both Academia and Industry have
investigated the feasibility and effectiveness of ICN, achieving the complete
replacement of the Internet Protocol (IP) is a challenging task.
Some research groups have already addressed the coexistence problem by designing
their own architectures, but none of them is the definitive solution for moving
towards the future Internet, given the largely unaltered state of today's networking.
To design such an architecture, the research community now needs a comprehensive
overview of the existing solutions that have so far addressed the coexistence.
The purpose of this paper is to reach this goal by providing the first
comprehensive survey and classification of the coexistence architectures
according to their features (i.e., deployment approach, deployment scenarios,
addressed coexistence requirements and architecture or technology used) and
evaluation parameters (i.e., challenges emerging during the deployment and the
runtime behaviour of an architecture). We believe that this paper fills the
gap on the way towards the design of the final coexistence architecture.

Comment: 23 pages, 16 figures, 3 tables
Large-Scale Time-Shifted Streaming Delivery
An attractive new feature of connected TV systems consists in allowing users
to access past portions of the TV channel. This feature, called time-shifted
streaming, is now used by millions of TV viewers. We address in this paper the
design of a large-scale delivery system for time-shifted streaming. We
highlight the characteristics of time-shifted streaming that prevent known
video delivery systems from being used. Then, we present two proposals that meet the
demand for two radically different types of TV operator. First, the
Peer-Assisted Catch-Up Streaming system (PACUS) aims at reducing the
load on the server of a large TV broadcaster without losing control of the
TV delivery. Second, the turntable structure is an overlay of nodes that allows
an independent content delivery network or a small independent TV broadcaster
to ensure that all past TV programs are stored and as available as possible. We
show through extensive simulations that our objectives are reached, with a
reduction of up to three quarters of the traffic for PACUS and a 100\%
guaranteed availability for the turntable structure. We also compare our
proposals to the main previous works in the area.
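The server-load reduction in a peer-assisted catch-up system comes from a simple lookup order: try peers that cached a past chunk first, and fall back to the broadcaster's server only on a miss. The sketch below illustrates that principle with hypothetical names; it is not PACUS's actual protocol.

```python
def fetch_chunk(chunk_id, peer_index, server):
    """Hypothetical peer-assisted catch-up lookup: return the source for
    a past stream chunk, preferring a peer that cached it and falling
    back to the broadcaster's server on a miss."""
    holders = peer_index.get(chunk_id, [])
    if holders:
        return ("peer", holders[0])      # served peer-to-peer: no server load
    return ("server", server)            # cache miss: origin server pays
```

Every chunk answered from the first branch is traffic the broadcaster's server never sees, which is where the reported reduction of up to three quarters of the traffic would come from.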
Living in a PIT-less World: A Case Against Stateful Forwarding in Content-Centric Networking
Information-Centric Networking (ICN) is a recent paradigm that claims to
mitigate some limitations of the current IP-based Internet architecture. The
centerpiece of ICN is named and addressable content, rather than hosts or
interfaces. Content-Centric Networking (CCN) is a prominent ICN instance that
shares the fundamental architectural design with its equally popular academic
sibling Named-Data Networking (NDN). CCN eschews source addresses and creates
one-time virtual circuits for every content request (called an interest). As an
interest is forwarded, it creates state in intervening routers, and the requested
content is delivered back over the reverse path using that state.
Although a stateful forwarding plane might be beneficial in terms of
efficiency, and resilience to certain types of attacks, this has not been
decisively proven via realistic experiments. Since keeping per-interest state
complicates router operations and makes the infrastructure susceptible to
router state exhaustion attacks (e.g., there is currently no effective defense
against interest flooding attacks), the value of the stateful forwarding plane
in CCN should be re-examined.
In this paper, we explore supposed benefits and various problems of the
stateful forwarding plane. We then argue that its benefits are uncertain at
best and it should not be a mandatory CCN feature. To this end, we propose a
new stateless architecture for CCN that provides nearly all functionality of
the stateful design without its headaches. We analyze performance and resource
requirements of the proposed architecture via experiments.

Comment: 10 pages, 6 figures
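One way a stateless design can replace the PIT is to carry the reverse path in the packet itself: each hop appends its identifier to the interest, and the content object is source-routed back along the reversed record. This is a generic illustration of that idea under assumed field names, not the paper's specific architecture.

```python
def forward_interest(interest, router_id, next_hop):
    """PIT-less forwarding sketch: instead of leaving per-interest state
    in the router, each hop appends itself to a path record carried in
    the interest. Returns the updated interest and the chosen next hop."""
    interest = dict(interest)  # routers do not mutate the caller's copy
    interest["path"] = interest.get("path", []) + [router_id]
    return interest, next_hop

def reverse_route(content_name, path):
    """Build the content object the producer emits: it carries the
    recorded path reversed, so routers forward it without any state."""
    return {"name": content_name, "route": list(reversed(path))}
```

The trade-off mirrors the paper's argument: routers hold no per-interest state (so no state-exhaustion attacks), at the cost of a path record growing with hop count inside each packet.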
Towards Plugging Privacy Leaks in Domain Name System
Privacy leaks are an unfortunate and integral part of current Internet
domain name resolution. Each DNS query generated by a user reveals -- to one or
more DNS servers -- the origin and target of that query. Over time, a user's
browsing behavior might be exposed to entities with little or no trust. Current
DNS privacy leaks stem from fundamental DNS features and are not easily fixable
by simple patches. Moreover, privacy issues have been overlooked by DNS
security efforts (i.e. DNSSEC) and are thus likely to propagate into future
versions of DNS.
In order to mitigate privacy issues in current DNS, this paper proposes a
Privacy-Preserving Domain Name System (PPDNS), which maintains privacy during
domain name resolution. PPDNS is based on distributed hash tables (DHTs), an
alternative naming infrastructure, and computational private information
retrieval (cPIR), an advanced cryptographic construct. PPDNS takes advantage of
the DHT's index structure to improve name resolution query privacy, while
leveraging cPIR to reduce communication overhead for bandwidth-sensitive
clients. Our analysis shows that PPDNS is a viable approach for obtaining a
higher degree of privacy for name resolution queries. PPDNS also serves as a
demonstration of blending advanced systems techniques with their cryptographic
counterparts.
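The DHT-index trick for query privacy can be sketched as follows: the client hashes the name onto the DHT key space and requests the whole bucket sharing a short key prefix, filtering locally, so the server only learns the prefix rather than the exact name. The function names, the prefix width, and the bucket scheme here are assumptions for illustration, not PPDNS's actual design (which additionally uses cPIR to shrink the transferred bucket).

```python
import hashlib

def dht_key(name, prefix_bits=4):
    """Map a domain name onto a short DHT key prefix (assumed scheme)."""
    return hashlib.sha256(name.encode()).digest()[0] >> (8 - prefix_bits)

def private_lookup(name, records, prefix_bits=4):
    """Hypothetical PPDNS-style lookup: fetch the entire bucket of
    records whose hashed names share the query's key prefix, then filter
    locally, so the resolver cannot tell which name was wanted."""
    key = dht_key(name, prefix_bits)
    bucket = {n: r for n, r in records.items()           # server side:
              if dht_key(n, prefix_bits) == key}         # returns whole bucket
    return bucket.get(name)                              # client side: local filter
```

A shorter prefix means a bigger anonymity set but a bigger transfer, which is exactly the bandwidth tension cPIR is brought in to relieve.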
Access Control Mechanisms in Named Data Networks: A Comprehensive Survey
Information-Centric Networking (ICN) has recently emerged as a prominent
candidate for the Future Internet Architecture (FIA) that addresses existing
issues with the host-centric communication model of the current TCP/IP-based
Internet. Named Data Networking (NDN) is one of the most recent and active ICN
architectures that provides a clean slate approach for Internet communication.
NDN provides intrinsic content security, where security is applied directly to
the content instead of the communication channel. Among other security aspects,
Access Control (AC) rules specify the privileges for the entities that can
access the content. In TCP/IP-based AC systems, due to the client-server
communication model, the servers control which client can access a particular
content. In contrast, ICN-based networks use content names to drive
communication and decouple the content from its original location. This
phenomenon leads to the loss of control over the content causing different
challenges for the realization of efficient AC mechanisms. To date,
considerable efforts have been made to develop various AC mechanisms in NDN. In
this paper, we provide a detailed and comprehensive survey of the AC mechanisms
in NDN. We follow a holistic approach towards AC in NDN where we first
summarize the ICN paradigm, describe the changes from channel-based security to
content-based security and highlight different cryptographic algorithms and
security protocols in NDN. We then classify the existing AC mechanisms into two
main categories: Encryption-based AC and Encryption-independent AC. Each
category has different classes based on the working principle of AC (e.g.,
Attribute-based AC, Name-based AC, Identity-based AC, etc.). Finally, we present
the lessons learned from the existing AC mechanisms and identify the challenges
of NDN-based AC at large, highlighting future research directions for the
community.

Comment: This paper has been accepted for publication in ACM Computing
Surveys. The final version will be published by the ACM.
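The encryption-based AC category reduces to a simple principle: the producer encrypts content before publishing it, so possession of the decryption key is the access right, independent of where the content is cached. The sketch below illustrates this with a toy XOR cipher; the cipher, key distribution, and function names are all assumptions standing in for the real cryptographic schemes the survey classifies.

```python
import hashlib

def _keystream_xor(content: bytes, key: bytes) -> bytes:
    """Toy XOR cipher (self-inverse) standing in for a real symmetric
    cipher; used here only to illustrate the AC principle."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(content))

def publish(name, content, group_key):
    """Encryption-based AC sketch: the producer encrypts the content
    under a group key before publishing, so any cache may store it but
    only key holders can read it."""
    return {"name": name, "payload": _keystream_xor(content, group_key)}

def consume(packet, key):
    """An authorized consumer decrypts with the distributed group key."""
    return _keystream_xor(packet["payload"], key)
```

This is why ICN caching and access control compose: routers and caches handle only ciphertext, and revocation becomes a key-management problem rather than a server-side check.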