
    A testbed for developing, simulating and experimenting multipath aggregation algorithms

    Today, electronic devices often have multiple ways to communicate, through either wired or wireless interfaces. Despite this diversity, devices still fail to fully exploit the available resources, as they do not use multiple channels simultaneously to their full extent. This is especially true for wireless channels, where the efficient aggregation of multiple channels has proved to be a difficult task, as shown in recent simulation-based work. In this Work-in-Progress paper, we present a testbed suitable for the evaluation of aggregation algorithms in real network environments. The proposed testbed supports both the simulation of and experimentation with existing and new aggregation algorithms, optimized for heterogeneous wireless communication channels, that can be deployed in industrial environments. In order to illustrate the merits of the proposed testbed, we also describe its use in the performance assessment of two aggregation algorithms: the Linux Bonding Driver and Multipath TCP.
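    As a rough illustration of the kind of policy such a testbed might evaluate, the sketch below implements a toy "send on the path expected to deliver first" aggregation scheduler in Python. The class, interface names, rates and delays are hypothetical and are not taken from the paper.

```python
class MinDelayScheduler:
    """Toy multipath aggregation policy: send each packet on the path
    expected to deliver it first (per-path backlog / rate + base delay)."""

    def __init__(self, paths):
        # paths: name -> (rate in bytes/s, one-way delay in s); hypothetical values
        self.paths = paths
        self.backlog = {name: 0.0 for name in paths}   # queued bytes per path

    def pick(self, pkt_bytes):
        """Choose the path with the smallest estimated delivery time."""
        def eta(name):
            rate, delay = self.paths[name]
            return (self.backlog[name] + pkt_bytes) / rate + delay
        best = min(self.paths, key=eta)
        self.backlog[best] += pkt_bytes
        return best

    def drain(self, dt):
        """Model dt seconds of transmission on every path."""
        for name, (rate, _) in self.paths.items():
            self.backlog[name] = max(0.0, self.backlog[name] - rate * dt)


# Example: a faster wireless path and a slower but lower-latency wired path.
sched = MinDelayScheduler({"wlan0": (2.5e6, 0.020), "eth0": (1.0e6, 0.002)})
counts = {"wlan0": 0, "eth0": 0}
for _ in range(1000):
    counts[sched.pick(1500)] += 1   # 1500-byte packets
    sched.drain(0.001)              # 1 ms between packet arrivals
print(counts)                       # how many packets each interface carried
```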

    Discrete Event Simulations

    Discrete Event Simulation (DES), considered by many authors a technique for modelling stochastic, dynamic and discretely evolving systems, has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about it: each author describes how DES is understood and applied within their context of work, providing an extensive understanding of what DES is. The name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods and applications across a wide range of sectors and problem areas, which have been categorised into five groups. Beyond this variety of points of view concerning DES, one additional thing should be remarked about this book: its wealth of actual data and analysis based on actual data. While most academic areas lack application cases, roughly half of the chapters included in this book deal with actual problems or are at least based on actual data. Thus, the editor firmly believes that this book will be of interest to both beginners and practitioners in the area of DES.
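    For readers new to the technique, the core of a discrete-event simulator fits in a few lines: a clock plus a priority queue of timestamped events processed in order. The sketch below is a minimal, hypothetical illustration in Python, not an example from the book; the single-server queue, timings and names are invented.

```python
import heapq
import itertools

class Simulator:
    """Minimal discrete-event kernel: a clock plus a priority queue of
    timestamped events, processed strictly in timestamp order."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = itertools.count()   # tie-breaker for events with equal timestamps

    def schedule(self, delay, action, *args):
        heapq.heappush(self._queue, (self.now + delay, next(self._seq), action, args))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action, args = heapq.heappop(self._queue)
            action(self, *args)


# Example: packets arrive every 1.0 time unit at a single server
# whose service time (1.3 units) exceeds the inter-arrival time, so a queue builds.
busy_until = 0.0

def arrival(sim, n):
    global busy_until
    start = max(sim.now, busy_until)     # wait if the server is still busy
    busy_until = start + 1.3
    print(f"t={sim.now:4.1f}  packet {n} served {start:.1f}-{busy_until:.1f}")
    if n < 5:
        sim.schedule(1.0, arrival, n + 1)

sim = Simulator()
sim.schedule(0.0, arrival, 1)
sim.run()
```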

    Traffic Re-engineering: Extending Resource Pooling Through the Application of Re-feedback

    Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. With no holistic solution for resource pooling, each layer of the Internet architecture attempts to balance traffic according to its own needs, potentially at the expense of others. From the edges, traffic is implicitly pooled over multiple paths by retrieving content from different sources. Within the network, traffic is explicitly balanced across multiple links through the use of traffic engineering. This work explores how the current architecture can be realigned to facilitate resource pooling at both network and transport layers, where tension between stakeholders is strongest. The central theme of this thesis is that traffic engineering can be performed more efficiently, flexibly and robustly through the use of re-feedback. A cross-layer architecture is proposed for sharing the responsibility for resource pooling across both hosts and network. Building on this framework, two novel forms of traffic management are evaluated. Efficient pooling of traffic across paths is achieved through the development of an in-network congestion balancer, which can function in the absence of multipath transport. Network and transport mechanisms are then designed and implemented to facilitate path fail-over, greatly improving resilience without requiring receiver-side cooperation. These contributions are framed by a longitudinal measurement study which provides evidence for many of the design choices taken. A methodology for scalably recovering flow metrics from passive traces is developed, which in turn is systematically applied to over five years of interdomain traffic data. The resulting findings challenge traditional assumptions about the preponderance of congestion control in resource sharing, with over half of all traffic being constrained by limits other than network capacity. All of the above represent concerted attempts to rethink and reassert traffic engineering in an Internet where competing solutions for resource pooling proliferate. By delegating responsibilities currently overloading the routing architecture towards hosts and re-engineering traffic management around the core strengths of the network, the proposed architectural changes allow the tussle surrounding resource pooling to be drawn out without compromising the scalability and evolvability of the Internet.

    Improved congestion control for packet switched data networks and the Internet

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism. Complex functions are implemented by end users, keeping the core routers of the network simple and scalable. This policy also helps in updating the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked well to date, but networks have evolved and the traffic volume has increased manyfold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic would be facilitating the introduction of differentiated services or offering different qualities of service to different users. Any real congestion episode, whether due to demand greater than the available bandwidth or to congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions to congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback about the congestion state of the network can take the form of lost packets, changes in round-trip time and the rate of arrival of acknowledgements. End hosts can thus execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter depends on changes in the round-trip time, as in TCP Vegas. Protocols employing the second approach are still in their infancy, as they cannot co-exist safely with protocols employing the first approach, whereas TCP Reno and its variants, such as TCP SACK, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, inherently leading to wastage of time and network bandwidth. Active Queue Management (AQM) is an alternative approach which provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system, with a response time of one round-trip time, congestion information being delivered to the end hosts to reduce data sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control.
They are to adapt the network at its edges as well as at its core routers. We begin by introducing TCP/IP-based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, chosen for its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN based on the Random Early Detection (RED) algorithm to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalized optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1) proportionally fair algorithm and the generalized algorithm is investigated, along with conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm, which employs the instantaneous queue size along with an exponentially weighted moving average queue size for making decisions about packet marking/dropping, and which adjusts the average value during periods of low traffic. This algorithm improves link utilization and packet loss rate compared to basic RED. We further propose a control-theory-based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate control algorithms for AQM based on Proportional, Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) principles. New control-theoretic RED and frequency-response-based PI and PID control algorithms are developed and their performance is compared with that of existing algorithms. Later we transform the RED- and PI-based algorithms into their adaptive versions using the well-known square-root-of-p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
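    For context, the basic RED rule that this line of work builds on, an exponentially weighted moving average of the queue size plus a marking probability that rises between two thresholds, can be sketched as follows. This is the textbook behaviour only, with illustrative parameter values; it omits RED's count-based adjustment and does not reproduce the Hybrid or Auto-tuning variants proposed in the thesis.

```python
import random

class RedAqm:
    """Basic RED: keep an exponentially weighted moving average (EWMA) of the
    queue length and mark/drop arrivals with a probability that rises linearly
    between a minimum and a maximum threshold."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be marked/dropped."""
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False                    # below min_th: never mark
        if self.avg >= self.max_th:
            return True                     # above max_th: always mark
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p          # probabilistic early marking

red = RedAqm()
arrivals = [2, 4, 8, 12, 14, 16, 18, 20] * 250   # synthetic queue occupancies
print("marked/dropped:", sum(red.on_arrival(q) for q in arrivals), "of", len(arrivals))
```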

    Quality-oriented adaptation scheme for multimedia streaming in local broadband multi-service IP networks

    The research reported in this thesis proposes, designs and tests the Quality-Oriented Adaptation Scheme (QOAS), an application-level adaptive scheme that offers high-quality multimedia services to home residences and business premises via local broadband IP networks in the presence of other traffic of different types. QOAS uses a novel client-located grading scheme that maps the values, variations and variation patterns of some network-related parameters (e.g. delay, jitter, loss rate) to application-level scores that describe the quality of delivery. This grading scheme also involves an objective metric that estimates the end-user perceived quality, increasing its effectiveness. A server-located arbiter takes content and rate adaptation decisions based on these quality scores, which are the only feedback information sent by the clients. QOAS has been modelled, implemented and tested through simulations, and an instantiation of it has been realized in a prototype system. The performance was assessed in terms of estimated end-user perceived quality, network utilisation, loss rate and number of customers served by a fixed infrastructure. The influence of variations in the parameters used by QOAS and of the network-related characteristics was studied. The scheme's adaptive reaction was tested with background traffic of different types, sizes and variation patterns, and in the presence of concurrent multimedia streaming processes subject to user interactions. The results show that the performance of QOAS was very close to that of an ideal adaptive scheme. In comparison with other adaptive schemes, QOAS allows for a significant increase in the number of simultaneous users while maintaining a good end-user perceived quality. These results are verified by a set of subjective tests that were performed on viewers using a prototype system.
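    To make the feedback loop concrete, the sketch below shows a hypothetical client-side grading function and server-side adaptation step in the spirit of QOAS. The weights, thresholds and bitrate levels are invented for illustration and are not the grading metric or adaptation policy defined in the thesis.

```python
QUALITY_LEVELS_MBPS = [0.5, 1.0, 2.0, 4.0]      # hypothetical encodings of the same stream

def grade(loss_rate, delay_ms, jitter_ms):
    """Map network-level measurements to a 1 (bad) .. 5 (excellent) quality score.
    Weights and thresholds are invented for illustration."""
    score = 5.0
    score -= 40 * loss_rate                     # losses dominate perceived quality
    score -= max(0.0, (delay_ms - 100) / 100)   # penalise delay above 100 ms
    score -= max(0.0, (jitter_ms - 20) / 20)    # penalise jitter above 20 ms
    return max(1.0, min(5.0, score))

def adapt(level, score):
    """Server-side decision: step the bitrate down on poor scores,
    step it up cautiously while quality stays high."""
    if score < 3.0 and level > 0:
        return level - 1
    if score > 4.5 and level < len(QUALITY_LEVELS_MBPS) - 1:
        return level + 1
    return level

level = 3
for loss, delay, jitter in [(0.00, 40, 5), (0.02, 120, 30), (0.05, 200, 60), (0.00, 50, 10)]:
    level = adapt(level, grade(loss, delay, jitter))
    print(f"loss={loss:.2%} delay={delay}ms -> stream at {QUALITY_LEVELS_MBPS[level]} Mbps")
```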

    Congestion detection within multi-service TCP/IP networks using wavelets.

    Using passive observation within the multi-service TCP/IP networking domain, we have developed a methodology that associates the frequency composition of composite traffic signals with the packet transmission mechanisms of TCP. At the core of our design is the Discrete Wavelet Transform (DWT), used to temporally localise the frequency variations of a signal. Our design exploits transmission mechanisms (including Fast Retransmit/Fast Recovery, Congestion Avoidance, Slow Start, and Retransmission Timer Expiry with Exponential Backoff) that are activated in response to changes within this type of network environment. Manipulation of the DWT output, combined with the use of novel heuristics, permits shifts in the frequency spectrum of composite traffic signals to be directly associated with these transmission mechanisms. Our methodology can be adapted to accommodate composite traffic signals that contain a substantial proportion of data originating from non-rate-adaptive sources often associated with Long Range Dependence and Self-Similarity (e.g. Pareto sources). We demonstrate the methodology in two ways. Firstly, it is used to design a congestion indicator tool that can operate with network control mechanisms that dissipate congestion. Secondly, using a queue management algorithm (Random Early Detection) as a candidate protocol, we show how our methodology can be adapted to produce a performance-monitoring tool. Our approach provides a solution that has both low operational and low implementation intrusiveness with respect to existing network infrastructure. The methodology requires a single parameter (i.e. the arrival rate of traffic at a network node), which can be extracted from almost all network-forwarding devices. This simplifies implementation. Our study was performed within the context of fault management, with design requirements and constraints arising from an in-depth study of the Fault Management Systems (FMS) used by British Telecom on regional UK networks up to February 2000.
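    As an illustration of the underlying idea, temporally localising frequency shifts in a traffic signal, the sketch below applies a single level of a hand-rolled Haar DWT to a synthetic per-interval arrival-count series. The wavelet basis, the synthetic signal and the energy comparison are assumptions made for illustration; they are not the heuristics or traffic data used in the thesis.

```python
import numpy as np

def haar_dwt_level1(signal):
    """One level of the Haar DWT: split the signal into a coarse approximation
    and detail coefficients that localise high-frequency content in time."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:
        x = x[:-1]                               # drop an odd trailing sample
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

# Synthetic per-interval arrival counts: steady traffic followed by an
# oscillating phase reminiscent of synchronised congestion-window backoff.
rng = np.random.default_rng(1)
steady = 100 + rng.normal(0, 2, 128)
oscillating = 100 + 30 * np.sign(np.sin(np.arange(128))) + rng.normal(0, 2, 128)
traffic = np.concatenate([steady, oscillating])

_, detail = haar_dwt_level1(traffic)
half = len(detail) // 2
print("detail energy, steady half:     ", round(float(np.sum(detail[:half] ** 2)), 1))
print("detail energy, oscillating half:", round(float(np.sum(detail[half:] ** 2)), 1))
```

    The detail-coefficient energy jumps in the oscillating half, which is the kind of time-localised frequency shift the methodology associates with TCP's transmission mechanisms.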

    The emergence of the mobile internet in Japan and the UK: platforms, exchange models, and innovation 1999–2011

    In 1999 the Japanese mobile operator NTT DoCoMo launched arguably the world's first successful mobile Internet services portal, called "i-mode". In Europe at the same time, a series of failures diminished the opportunities to attract customers to the mobile Internet. Even though similar Internet technologies were available in Japan and the UK, very different markets for services developed during the initial years 1999–2003. While the West expected Japanese firms to become dominant players in the mobile digitalisation of services during the introduction of 3G networks, it remained instead a national affair. The dominant views of how markets for mobile services operated seemed flawed. So-called delivery platforms were used to connect mobile phones with service content that was often adapted from the PC world. Designing and operating service delivery platforms became a new niche market, and it held a pivotal role for the output of services and competition among providers. This thesis sets out to answer a set of inter-related questions: how and where did firms innovate in this new and growing part of the service economy, and how are new business models mediated by service delivery platforms? It argues that innovation in the digitalised economy is largely influenced by firms achieving platform leadership through coordination of both technological systems and the creation of multi-sided exchanges. Drawing on cases of multi-sided markets in operator-controlled portals, in mobile video and TV, and in event ticketing in Japan and the UK, this thesis demonstrates that defining the scope of the firm at the network level forms the basis for incremental innovation, the dominant form of service innovation. A parallel focus on coordinating platform technology choices forms the basis for firms to trade fees, advertisements, and user data, enabling control over profitable parts of multi-sided value networks.