113 research outputs found

    Enhancing QoS provisioning and granularity in next generation internet

    Next Generation IP technology has the potential to prevail, both in the access and in the core networks, as we move towards a multi-service, multimedia and high-speed networking environment. Many new applications, including multimedia applications, have been developed and deployed, and demand Quality of Service (QoS) support from the Internet in addition to the current best-effort service. QoS provisioning techniques that guarantee specific QoS parameters are therefore more a requirement than a desire. Due to the large number of data flows, the bandwidth demand, and the variety of QoS requirements, QoS provisioning must be both scalable and fine-grained. This dissertation studies end-to-end QoS provisioning mechanisms that provide scalable services with fine granularity, so that both users and network service providers can gain more benefit from the QoS provisioned in the network. Because end-to-end QoS guarantees require single-node QoS provisioning schemes at each router, such schemes are studied before the end-to-end mechanisms. Specifically, the effective sharing of output bandwidth among a large number of data flows is studied, so that fairness in bandwidth allocation among the flows can be achieved in a scalable fashion. A dual-rate grouping architecture is proposed, in which the granularity of rate allocation is enhanced while the scalability of the one-rate grouping architecture is maintained. It is demonstrated that the dual-rate grouping architecture approximates the ideal per-flow Packet Fair Queueing (PFQ) architecture better than the one-rate grouping architecture and provides better immunity. For end-to-end QoS provisioning, a new Endpoint Admission Control scheme for Diffserv networks, referred to as Explicit Endpoint Admission Control (EEAC), is proposed, in which the admission control decision is made by the end hosts based on the end-to-end performance of the network. A novel concept, the service vector, is introduced, by which an end host can choose different services at different routers along its data path. The proposed service provisioning paradigm thus decouples end-to-end QoS provisioning from the service provisioning at each router, enhancing the end-to-end QoS granularity of Diffserv networks while keeping the implementation complexity of the Diffserv model. Furthermore, several aspects of implementing the EEAC and service vector paradigm, referred to as EEAC-SV, in the Diffserv architecture are investigated. Performance analysis and simulation results demonstrate that the proposed EEAC-SV scheme not only increases the benefit to service users but also enhances the benefit to the network service provider in terms of network resource utilization. The study also indicates that the scheme provides a compatible and friendly networking environment for conventional TCP flows and can be deployed in the current Internet in an incremental and gradual fashion.
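
    To make the service-vector idea concrete, the sketch below shows, under invented numbers, how an end host could pick one Diffserv class per router so that the summed per-hop delay stays within an end-to-end budget at minimum cost. The hop table, the class delays and prices, and the brute-force search are illustrative assumptions, not the dissertation's EEAC-SV algorithm; a real EEAC implementation would obtain per-class performance estimates by probing rather than from a static table.

```python
# Illustrative sketch (not the dissertation's actual algorithm): choosing a
# "service vector" -- one DiffServ class per router on the path -- so that the
# summed per-hop delay stays within an end-to-end budget at minimum total cost.
# All per-hop delays and prices below are made-up example values.
from itertools import product

# Hypothetical per-hop options: service class -> (expected delay in ms, price)
HOP_OPTIONS = [
    {"EF": (2, 5), "AF": (6, 2), "BE": (15, 0)},   # router 1
    {"EF": (3, 5), "AF": (8, 2), "BE": (20, 0)},   # router 2
    {"EF": (2, 5), "AF": (5, 2), "BE": (12, 0)},   # router 3
]

def choose_service_vector(hop_options, delay_budget_ms):
    """Return the cheapest per-hop class assignment meeting the delay budget."""
    best = None
    for classes in product(*(opts.keys() for opts in hop_options)):
        delay = sum(opts[c][0] for opts, c in zip(hop_options, classes))
        cost = sum(opts[c][1] for opts, c in zip(hop_options, classes))
        if delay <= delay_budget_ms and (best is None or cost < best[1]):
            best = (classes, cost, delay)
    return best  # None means the admission request should be rejected

if __name__ == "__main__":
    decision = choose_service_vector(HOP_OPTIONS, delay_budget_ms=25)
    if decision:
        vector, cost, delay = decision
        print(f"admit: service vector={vector}, cost={cost}, delay={delay} ms")
    else:
        print("reject: no service vector meets the end-to-end delay budget")
```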

    Recent trends in IP/NGEO satellite communication systems: transport, routing, and mobility management concerns

    Paper included in a KAKENHI (Grants-in-Aid for Scientific Research) report (Project No. 17500030 / Principal Investigator: Nei Kato / Fundamental research on next-generation low-Earth-orbit satellite networks with high affinity for the Internet)

    Methods of Congestion Control for Adaptive Continuous Media

    Since the first exchange of data between machines in different locations in the early 1960s, computer networks have grown exponentially, with millions of people now using the Internet. With this growth there has also been a rapid increase in the kinds of services offered over the World Wide Web, from simple e-mail to streaming video. It is generally accepted that the commonly used TCP/IP protocol suite alone is not adequate for a number of modern applications with high bandwidth and minimal delay requirements. Many technologies are emerging, such as IPv6, Diffserv and Intserv, which aim to replace the one-size-fits-all approach of the current IPv4. There is a consensus that networks will have to be capable of multi-service operation and will have to isolate different classes of traffic through bandwidth partitioning, so that, for example, low-priority best-effort traffic does not cause delay for high-priority video traffic. However, this research identifies that even within a class there may be delays or losses due to congestion, and the problem will require different solutions in different classes. The focus of this research is on the requirements of the adaptive continuous media class: traffic flows that require good Quality of Service but are also able to adapt to network conditions by accepting some degradation in quality. It is potentially the most flexible traffic class and therefore one of the most useful for an increasing number of applications. This thesis discusses the QoS requirements of adaptive continuous media and identifies an ideal feedback-based control system suitable for this class. A number of current congestion control methods have been investigated, and two methods that have been shown to be successful with data traffic have been evaluated to ascertain whether they could be adapted for adaptive continuous media. A novel method of control based on percentile monitoring of queue occupancy is then proposed and developed. Simulation results demonstrate that the percentile-monitoring-based method is more appropriate for this type of flow. The problem of congestion control at aggregating nodes of the network hierarchy, where thousands of adaptive flows may be aggregated into a single flow, is then considered. A unique method of pricing the mean and variance is developed such that each individual flow is charged fairly for its contribution to congestion.
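
    As a rough illustration of the percentile-monitoring idea, the sketch below keeps a sliding window of queue-occupancy samples and derives a rate-adaptation signal from a high percentile of that window rather than from the instantaneous or mean occupancy. The window size, the choice of the 90th percentile, and the thresholds are assumed values for illustration, not the thesis' parameters.

```python
# A minimal sketch, not the thesis' algorithm: congestion feedback driven by a
# percentile of recent queue-occupancy samples rather than instantaneous or
# mean occupancy. Window size, percentile, and thresholds are illustrative.
from collections import deque

class PercentileMonitor:
    def __init__(self, window=200, percentile=90, high=0.8, low=0.3):
        self.samples = deque(maxlen=window)   # recent occupancy samples (0..1)
        self.percentile = percentile
        self.high = high                      # above this -> ask senders to slow down
        self.low = low                        # below this -> senders may speed up

    def record(self, occupancy):
        self.samples.append(occupancy)

    def _percentile_value(self):
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, int(len(ordered) * self.percentile / 100))
        return ordered[idx]

    def feedback(self):
        """Return 'decrease', 'increase', or 'hold' for adaptive media senders."""
        if not self.samples:
            return "hold"
        p = self._percentile_value()
        if p > self.high:
            return "decrease"
        if p < self.low:
            return "increase"
        return "hold"

if __name__ == "__main__":
    import random
    mon = PercentileMonitor()
    for _ in range(500):                      # simulate a persistently busy queue
        mon.record(min(1.0, random.gauss(0.85, 0.1)))
    print(mon.feedback())                     # expected: 'decrease'
```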

    Effective Resource and Workload Management in Data Centers

    The increasing demand for storage, computation, and business continuity has driven the growth of data centers. Managing data centers efficiently is a difficult task because of the wide variety of data center applications, their ever-changing intensities, and the fact that application performance targets may differ widely. Server virtualization has been a game-changing technology for IT, providing the possibility to support multiple virtual machines (VMs) simultaneously. This dissertation focuses on how virtualization technologies can be utilized to develop new tools for maintaining high resource utilization, achieving high application performance, and reducing the cost of data center management. For multi-tiered applications, bursty workload traffic can significantly deteriorate performance. This dissertation proposes an admission control algorithm, AWAIT, for handling overload conditions in multi-tier web services. AWAIT places requests of accepted sessions on hold and refuses to admit new sessions when the system experiences a sudden workload surge. To meet the service-level objective, AWAIT serves the requests in the blocking queue with high priority, and the size of the queue is determined dynamically according to the workload burstiness. Many admission control policies are triggered by instantaneous measurements of system resource usage, e.g., CPU utilization. This dissertation first demonstrates that directly measuring virtual machine resource utilization with standard tools cannot always lead to accurate estimates, and a directed factor graph (DFG) model is defined to capture the dependencies among multiple types of resources across physical and virtual layers. Virtualized data centers enable sharing of resources among hosted applications to achieve high resource utilization; however, it is difficult to satisfy application SLOs on a shared infrastructure, as application workload patterns change over time. AppRM, an automated management system, not only allocates the right amount of resources to applications to meet their performance targets but also adjusts to dynamic workloads using an adaptive model. Server consolidation is one of the key applications of server virtualization. This dissertation proposes a VM consolidation mechanism, first by extending the fair load balancing scheme for multi-dimensional vector scheduling, and then by using a queueing network model to capture the service contentions for a particular virtual machine placement.
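
    The consolidation step treats each VM's demand as a vector over several resource dimensions. The sketch below shows one simple way to place such vectors onto hosts (largest normalized dimension first, then the host with the most remaining headroom); it is an illustration only, the dissertation's mechanism additionally relies on a queueing network model for contention, which is not reproduced here, and all capacities and demands shown are made-up examples.

```python
# Illustrative sketch only (not the dissertation's mechanism): placing VMs on
# hosts when demand is a multi-dimensional vector (CPU, memory, I/O). VMs are
# sorted by their most constrained dimension and placed on the host with the
# most remaining headroom. Capacities and demands are made-up example numbers.
from typing import List, Optional, Tuple

def fits(demand, free):
    return all(d <= f for d, f in zip(demand, free))

def place_vms(vm_demands: List[Tuple[float, ...]],
              host_capacity: Tuple[float, ...],
              n_hosts: int) -> Optional[List[int]]:
    """Return a host index per VM, or None if some VM cannot be placed."""
    free = [list(host_capacity) for _ in range(n_hosts)]
    # Sort VMs by their largest normalized dimension, descending (most constrained first).
    order = sorted(range(len(vm_demands)),
                   key=lambda i: max(d / c for d, c in zip(vm_demands[i], host_capacity)),
                   reverse=True)
    placement = [None] * len(vm_demands)
    for i in order:
        # Prefer the host with the most total headroom that still fits the VM.
        candidates = [h for h in range(n_hosts) if fits(vm_demands[i], free[h])]
        if not candidates:
            return None
        h = max(candidates, key=lambda h: sum(free[h]))
        placement[i] = h
        free[h] = [f - d for f, d in zip(free[h], vm_demands[i])]
    return placement

if __name__ == "__main__":
    vms = [(4, 8, 1), (2, 16, 2), (8, 4, 1), (1, 2, 4)]   # (cpu cores, GB, I/O units)
    print(place_vms(vms, host_capacity=(16, 32, 8), n_hosts=2))
```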

    Cross-layer energy-efficient schemes for multimedia content delivery in heterogeneous wireless networks

    Wireless communication technology has been developed to fulfil demand in various parts of human life. In many real-life cases, this demand is directed at the most commonly used rich-media applications, which, with their diverse traffic patterns, often require high quality levels on the devices of wireless network users. Applications with different traffic patterns are delivered over heterogeneous wireless networks, which combine multiple types of wireless network structure simultaneously. Meanwhile, delivering content with assured quality increases the energy consumption of wireless network devices and severely challenges their limited power resources. As a result, much effort has been invested in recent years in high-quality, energy-efficient rich-media content delivery. The research work presented in this thesis focuses on developing energy-aware content delivery schemes in heterogeneous wireless networks. The thesis has four major contributions, outlined below:
    1. An energy-aware mesh router duty cycle management scheme (AOC-MAC) for high-quality video delivery over wireless mesh networks. AOC-MAC manages the sleep periods of mesh devices based on link-state communication conditions, reducing their energy consumption by extending their sleep periods.
    2. An energy-efficient routing algorithm (E-Mesh) for high-quality video delivery over wireless mesh networks. E-Mesh develops an innovative energy-aware OLSR-based routing algorithm that takes energy consumption, router position and network load into consideration.
    3. An energy-aware multi-flow-based traffic load balancing scheme (eMTCP) for multi-path content delivery over heterogeneous wireless networks. The scheme makes use of the MPTCP protocol at the transport layer, allowing data streams to be delivered across multiple paths concurrently, and balances this benefit against energy consumption by partially off-loading traffic from paths with higher energy cost to others (a simple split of this kind is sketched below).
    4. An MPTCP-based traffic-characteristic-aware load balancing mechanism (eMTCP-BT) for heterogeneous wireless networks. In eMTCP-BT, mobile applications are categorized according to their burstiness level, and the mechanism increases the energy efficiency of application content delivery by performing an MDP-based distribution of traffic across the available wireless network interfaces and paths based on the traffic burstiness level.
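
    As a minimal illustration of energy-aware off-loading across MPTCP subflows, the sketch below fills the most energy-efficient path first, up to its capacity, and spills the remainder onto costlier paths. The interface names, capacities, and per-megabyte energy costs are invented, and the function is a simplification rather than the eMTCP scheme itself.

```python
# A minimal sketch under stated assumptions, not the eMTCP implementation:
# splitting an application's sending rate across MPTCP subflows by filling the
# most energy-efficient path first, then spilling the rest to costlier paths.
# Interface names, capacities, and energy costs are invented example values.
def split_traffic(demand_mbps, paths):
    """paths: list of (name, capacity_mbps, energy_joule_per_mb). Returns allocation."""
    allocation = {}
    remaining = demand_mbps
    for name, capacity, _ in sorted(paths, key=lambda p: p[2]):  # cheapest energy first
        share = min(capacity, remaining)
        allocation[name] = share
        remaining -= share
        if remaining <= 0:
            break
    if remaining > 0:
        allocation["unserved"] = remaining   # demand exceeds combined path capacity
    return allocation

if __name__ == "__main__":
    paths = [("wifi", 30.0, 0.2), ("lte", 50.0, 0.9)]   # hypothetical interfaces
    print(split_traffic(60.0, paths))                   # {'wifi': 30.0, 'lte': 30.0}
```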