Content-aware services using edge-to-edge overlays
Issued as final report. Motorola, Inc.
The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena
The Internet is the most complex system ever created in human history.
Its dynamics and traffic therefore, unsurprisingly, exhibit a rich variety of complex behaviour, self-organization, and other phenomena that have been researched for years. This paper reviews the complex dynamics of Internet traffic. Departing from the usual treatments, we take a view from both the network engineering and physics perspectives, showing the strengths, weaknesses, and insights of each. In addition, we cover many less-studied phenomena such as traffic oscillations, the large-scale effects of worm traffic, and comparisons between the Internet and biological models.

Comment: 63 pages, 7 figures, 7 tables, submitted to Advances in Complex Systems
Flexible cross layer design for improved quality of service in MANETs
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.

Mobile Ad hoc Networks (MANETs) are becoming increasingly important because of their unique connectivity characteristics. Several delay-sensitive applications are starting to appear in these kinds of networks. Therefore, a key concern is how to guarantee Quality of Service (QoS) in such a constantly changing communication environment. The classical QoS-aware solutions used until now in wired and infrastructure wireless networks are unable to achieve the necessary performance in MANETs. The specialized protocols designed for multihop ad hoc networks offer basic connectivity with limited delay awareness, and the mobility factor in MANETs makes them even more unsuitable for use. Several protocols and solutions have been emerging at almost every layer of the protocol stack.
The majority of research efforts agree that, in such a dynamic environment, optimizing the performance of the protocols requires additional information about the status of the network to be available. Hence, many cross layer design approaches have appeared on the scene. Cross layer design has major advantages, and the necessity to utilize such a design is clear. However, cross layer design also conceals risks such as architectural instability and design inflexibility. Aggressive use of cross layer design results in an excessive increase in the cost of deployment and complicates both maintenance and upgrades of the network. Autonomous protocols, such as bio-inspired mechanisms and algorithms that are resilient to the unavailability of cross layer information, can reduce the dependence on cross layer design. In addition, the ability to predict the dynamic conditions and to adapt to them is a quite important characteristic.
The design of a routing decision algorithm based on Bayesian Inference for the prediction of path quality is proposed here. Its accurate prediction capabilities and its efficient use of the plethora of available cross layer information are presented.
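The thesis does not spell out the inference model here, but one minimal way to realize Bayesian path-quality prediction is a Beta-Bernoulli posterior per link. The sketch below (class and function names, priors, and the two-hop example are our own illustrative assumptions, not the thesis's actual design) updates a belief about each link's delivery probability from observed transmissions and multiplies posterior means along a path:

```python
from dataclasses import dataclass

@dataclass
class LinkBelief:
    """Beta posterior over a single link's packet-delivery probability."""
    alpha: float = 1.0  # prior pseudo-count of successful deliveries
    beta: float = 1.0   # prior pseudo-count of failed deliveries

    def update(self, delivered: bool) -> None:
        # Bayesian update: each observed transmission is a Bernoulli trial.
        if delivered:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        # Posterior mean of the link's delivery probability.
        return self.alpha / (self.alpha + self.beta)

def path_quality(links):
    """Predicted end-to-end delivery probability of a multihop path."""
    q = 1.0
    for link in links:
        q *= link.mean()
    return q

# Example: a two-hop path with one reliable and one lossy link.
good, lossy = LinkBelief(), LinkBelief()
for i in range(10):
    good.update(i < 9)    # 9 of 10 transmissions delivered
    lossy.update(i < 5)   # 5 of 10 transmissions delivered
print(round(path_quality([good, lossy]), 3))  # → 0.417
```

A routing decision can then simply prefer the candidate path with the highest predicted quality; cross layer observations (e.g. MAC-level ACKs) feed `update`.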
Furthermore, an adaptive mechanism based on a Genetic Algorithm (GA) is used to control the flow of data at the transport layer. This flow control mechanism inherits the GA's optimization capabilities without needing to know any details about the network conditions, thus reducing the dependence on cross layer information. Finally, it is illustrated how Bayesian Inference can be used to suggest configuration parameter values to the protocols at other layers in order to improve their performance.

National Foundation of Scholarships of Greece (I.K.Y.)
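The appeal of the GA approach is that it only needs a black-box fitness signal, not cross layer state. A toy sketch of that idea (the function, parameter values, and quadratic "goodput" fitness are hypothetical stand-ins, not the thesis's implementation) evolves a single sending-rate parameter from end-to-end feedback alone:

```python
import random

def evolve_rate(fitness, pop_size=8, generations=30, lo=0.1, hi=10.0, seed=42):
    """Evolve a sending-rate parameter with a tiny GA.

    `fitness` is treated as a black-box end-to-end measurement (e.g. observed
    goodput), so no cross layer knowledge of the network state is needed.
    """
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the best half of the population (elitism).
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0.0, 0.3)  # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: goodput peaks at 4.0 packets/ms and degrades under congestion.
best = evolve_rate(lambda rate: -(rate - 4.0) ** 2)
print(round(best, 2))
```

Because the fitness is evaluated as an opaque measurement, the same loop works whatever the underlying network conditions are, which is precisely what reduces the cross layer dependence.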
Delay-oriented active queue management in TCP/IP networks
PhD

Internet-based applications and services are pervading everyday life. Moreover, the growing popularity of real-time, time-critical and mission-critical applications sets new challenges for the Internet community. The requirement for reducing response time, and therefore for latency control, is increasingly emphasized.
This thesis seeks to reduce queueing delay through active queue management. While mathematical studies and research simulations reveal that complex trade-off relationships exist among performance indices such as throughput, packet loss ratio and delay, this thesis aims to find an improved active queue management algorithm that emphasizes delay control without sacrificing much in other performance indices such as throughput and packet loss ratio.
The thesis observes that in TCP/IP networks, the packet loss ratio is a major reflection of congestion severity or load. With a properly functioning active queue management algorithm, the traffic load will in general push the feedback system to an equilibrium point in terms of packet loss ratio and throughput. On the other hand, queue length is a determining factor in system delay performance while having only a slight influence on the equilibrium. This observation suggests the possibility of reducing delay while keeping throughput and packet loss ratio relatively unchanged.
The thesis also observes that queue length fluctuation reflects both load changes and the natural fluctuation of the arriving bit rate. Monitoring queue length fluctuation alone cannot distinguish between the two and identify the congestion status; yet identifying this difference is crucial for finding situations where the average queue size, and hence the queueing delay, can be properly controlled and reasonably reduced. However, many existing active queue management algorithms only monitor queue length, and their control policies are based solely on this measurement. The novel finding of our studies is that the distribution of the arriving bit rate across all sources contains information which is a better indicator of congestion status and correlates with traffic burstiness. This thesis develops a simple and scalable way to measure its two most important characteristics, namely the mean and the variance of the arriving rate distribution. The measuring mechanism is based on a
Zombie List mechanism originally proposed and deployed in Stabilized RED to estimate the
number of flows and to identify misbehaving flows. This thesis modifies the original zombie list measuring mechanism so that it can measure additional variables. Based on these additional measurements, this thesis proposes a novel modification to the RED algorithm. It utilizes a robust adaptive mechanism to ensure that the system reaches proper equilibrium operating points in terms of packet loss ratio and queueing delay under various loads. Furthermore, it identifies congestion states in which traffic is less bursty and adapts the RED parameters accordingly, in order to reduce the average queue size and hence the queueing delay.
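The core of the idea, estimating the mean and variance of the arrival rate and using low burstiness as a licence to shrink the queue, can be sketched as follows. This uses Welford's online algorithm as a simple stand-in for the thesis's modified zombie-list estimator, and the threshold-adaptation rule (`0.5 + cv` scaling) is our own illustrative assumption, not the thesis's actual control law:

```python
class RateStats:
    """Online mean/variance of the arriving bit rate (Welford's algorithm)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def add(self, rate_bps: float) -> None:
        self.n += 1
        delta = rate_bps - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (rate_bps - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

def adapt_red_thresholds(stats, base_min=5, base_max=15):
    """Shrink RED's queue thresholds when traffic is smooth (low burstiness)."""
    cv = stats.variance ** 0.5 / stats.mean if stats.mean > 0 else 1.0
    scale = min(1.0, 0.5 + cv)   # smooth traffic -> smaller target queue
    return max(1, int(base_min * scale)), max(2, int(base_max * scale))

# Smooth traffic: variance is tiny relative to the mean, so thresholds shrink.
smooth = RateStats()
for r in [1000, 1010, 990, 1005, 995]:
    smooth.add(r)
print(adapt_red_thresholds(smooth))  # → (2, 7)
```

Both the update and the adaptation are O(1) per sample, consistent with the scalability claim: no per-flow state beyond the sampled statistics is needed.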
Using the ns-2 simulation platform, this thesis runs simulations of a single bottleneck link scenario, which represents an important and popular application scenario such as a home access network or SoHo. Simulation results indicate that there are complex trade-off relationships among throughput, packet loss ratio and delay, and that within these relationships delay can be substantially reduced while the trade-offs in throughput and packet loss ratio are negligible. Simulation results show that our proposed active queue management algorithm can identify circumstances where traffic is less bursty and actively reduce queueing delay with hardly noticeable sacrifice in throughput and packet loss ratio performance.
In conclusion, our novel approach enables the application of adaptive techniques to more RED parameters, including those affecting queue occupancy and hence queueing delay. The new modification to the RED algorithm is a scalable approach and does not introduce additional protocol overhead. In general it brings the benefit of substantially reduced delay at the cost of limited processing overhead and negligible degradation in throughput and packet loss ratio. However, our new algorithm has only been tested on responsive flows and a single bottleneck scenario. Its effectiveness on a combination of responsive and non-responsive flows, as well as in more complicated network topologies, is left for future work.
Data Management and Wireless Transport for Large Scale Sensor Networks
Today many large scale sensor networks have emerged, spanning many different sensing applications. Each of these sensor networks often consists of millions of sensors collecting data and supports thousands of users with diverse data needs. Between the users and the wireless sensors there is often a group of powerful servers that collect and process data from the sensors and answer users' requests. To build such a large scale sensor network, we have to answer two fundamental research problems: i) what data to transmit from sensors to servers? ii) how to transmit the data over wireless links? Wireless sensors often cannot transmit all collected data due to energy and bandwidth constraints. Therefore sensors need to decide what data to transmit to best satisfy users' data requests. Sensor network users can often tolerate some data errors, so sensors may transmit data at lower fidelity and still satisfy users' requests. There are generally two types of requests: raw data requests and meta-data requests. To answer users' raw data requests, we propose a model-driven data collection approach, PRESTO. PRESTO splits intelligence between sensors and servers, i.e., resource-rich servers perform expensive model training and resource-poor sensors perform simple model evaluation. PRESTO can significantly reduce the data to be transmitted without sacrificing service quality. To answer users' meta-data requests, we propose a utility-driven multi-user data sharing approach, MUDS. MUDS uses a utility function to unify diverse meta-data metrics. Sensors estimate the utility value of each data packet and send the packets with the highest utility first to improve overall system utility. After deciding what data to transmit from sensors to servers, the next question is how to transmit these data over wireless links. Wireless transport often suffers from low bandwidth and unstable connectivity. In order to improve wireless transport, I propose a clean-slate re-design of wireless transport, Hop.
Hop uses reliable per-hop block transfer as a building block and builds all other components, including hidden-terminal avoidance, congestion avoidance, and end-to-end reliability, on top of it. Hop is built on three key ideas: a) hop-by-hop transfer adapts to the lossy and highly variable nature of the wireless channel significantly better than end-to-end transfer, b) blocks are a more efficient unit of control over wireless links than packets, and c) duplicated functionality in different layers of the network stack should be removed to simplify the protocol and avoid complex interactions.
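MUDS's "send the highest-utility packet first under a resource budget" policy can be sketched as a priority queue. The function name, packet tuples, and byte budget below are hypothetical illustrations of the idea, not the MUDS implementation:

```python
import heapq

def schedule(packets, budget_bytes):
    """Transmit the highest-utility packets first under a byte budget.

    `packets` is a list of (utility, size_bytes, payload) tuples; the utility
    values are assumed to come from each sensor's local estimate.
    """
    heap = [(-utility, size, payload) for utility, size, payload in packets]
    heapq.heapify(heap)                  # max-heap via negated utilities
    sent, used = [], 0
    while heap:
        _, size, payload = heapq.heappop(heap)
        if used + size <= budget_bytes:  # skip packets that no longer fit
            sent.append(payload)
            used += size
    return sent

pkts = [(0.9, 100, "temp-alert"), (0.2, 100, "routine"), (0.7, 100, "humidity")]
print(schedule(pkts, 200))  # → ['temp-alert', 'humidity']
```

Greedy highest-utility-first is the natural policy when, as here, each packet's utility is estimated independently; the budget models the sensor's energy or bandwidth constraint.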
Resource dimensioning through buffer sampling
Link dimensioning, i.e., selecting a (minimal) link capacity such that the users' performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship among the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., 'burstiness'), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulas that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic, these formulas reduce to C = M + α√V, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already at a relatively low aggregation level, the Gaussianity assumption is justified.
As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small time-scales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and 'inverts' this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
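Once M and V are in hand, applying the Gaussian rule C = M + α√V is a one-liner. The sketch below assumes the common instantiation α = √(−2 ln ε)/T (an assumption on our part; the abstract does not state the exact form of α), where ε is the tolerated probability that traffic on timescale T exceeds the capacity:

```python
import math

def required_capacity(mean_load, variance, epsilon, timescale):
    """Gaussian link-dimensioning rule C = M + alpha * sqrt(V).

    Assumed form (not stated in the abstract): alpha = sqrt(-2 ln eps) / T,
    where eps is the tolerated exceedance probability on timescale T, and V
    is the variance of the offered traffic at that timescale.
    """
    alpha = math.sqrt(-2.0 * math.log(epsilon)) / timescale
    return mean_load + alpha * math.sqrt(variance)

# Example: mean load 100 Mbit/s, variance 25 at T = 1 s, 1% exceedance.
print(round(required_capacity(100.0, 25.0, 0.01, 1.0), 2))  # → 115.17
```

As the formula suggests, tightening the performance target (smaller ε) or observing burstier traffic (larger V) both increase the required headroom above the mean load M.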
Design and Implementation of an Architectural Framework for Web Portals in a Ubiquitous Pervasive Environment
Web Portals function as a single point of access to information on the World Wide Web (WWW). A web portal always contacts the portal's gateway for the information flow, which causes network traffic over the Internet. Moreover, it provides real-time/dynamic access to stored information, but not access to real-time information. This inherent functionality of web portals limits their role for resource-constrained digital devices in the Ubiquitous era (U-era). This paper presents a framework for the web portal in the U-era. We introduce the concept of Local Regions in the proposed framework, so that local queries can be resolved locally rather than having to be routed over the Internet. Moreover, our framework enables one-to-one device communication for real-time information flow. To provide an in-depth analysis, we first give an analytical model for query processing at the servers for a web portal based on our framework. Finally, we deployed a testbed, one of the world's largest IP-based wireless sensor network testbeds, and observed real-time measurements that demonstrate the efficacy and workability of the proposed framework.