
    Differentiated Predictive Fair Service for TCP Flows

    The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction between short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.

    National Science Foundation (CAREER ANI-0096045, MRI EIA-9871022)
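    A minimal sketch of the class-based isolation described above, assuming hypothetical names and an illustrative size threshold (the paper does not prescribe these values): an access router keeps per-flow byte counts and marks each packet SHORT or LONG, while a core router maintains only one tail-drop queue per class, i.e. per-class rather than per-flow state.

```python
# Sketch of class-based isolation of short and long TCP flows.
# SHORT_FLOW_THRESHOLD and the buffer size are illustrative assumptions.
from collections import defaultdict, deque

SHORT_FLOW_THRESHOLD = 20_000  # bytes sent so far; hypothetical cutoff

class AccessRouter:
    """Edge device: classifies and marks packets (per-flow state lives here)."""
    def __init__(self):
        self.bytes_seen = defaultdict(int)

    def mark(self, flow_id, packet_len):
        self.bytes_seen[flow_id] += packet_len
        return "SHORT" if self.bytes_seen[flow_id] <= SHORT_FLOW_THRESHOLD else "LONG"

class CoreRouter:
    """Core device: one queue per class, so state is O(classes), not O(flows)."""
    def __init__(self, capacity=100):
        self.queues = {"SHORT": deque(), "LONG": deque()}
        self.capacity = capacity  # per-class buffer in packets (tail-drop)

    def enqueue(self, mark, packet):
        q = self.queues[mark]
        if len(q) < self.capacity:
            q.append(packet)
            return True        # accepted
        return False           # tail-drop, confined to this class

    def dequeue(self, mark):
        q = self.queues[mark]
        return q.popleft() if q else None
```

    Because each class has its own buffer, a burst of short-flow arrivals cannot evict long-flow packets (or vice versa), and the stringent delay requirements of short transfers can be met simply by serving the SHORT queue with a larger share of link capacity.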

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow, both in terms of the number of users and delivered services. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently receive an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time, as they are instead engaged in a multimedia-rich experience, comprising many different concurrent services. Given the scalability problems raised by the diversity of the users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services; either through explicit resource reservation, or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, thus highlighting a need for a QoS solution reflecting the user services. The aim of this thesis is to investigate and propose a novel QoS architecture, which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation, in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on a specific traffic type; instead, it adapts QoS policies to each individual’s Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach was to enable a QoS-optimised experience for each Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional Diffserv, and Weighted RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.

    France Telecom
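    As a rough illustration of the user-centric allocation idea (the function, weights, and the 30% cap below are assumptions made for this sketch, not the actual CAPS algorithm from the thesis), bandwidth at the aggregation point can first be split per user and then divided among that user's concurrent services, with unresponsive traffic such as Peer-to-Peer capped while interactive services are active:

```python
# Hypothetical per-user allocation sketch; not the thesis's CAPS scheduler.
def user_service_shares(user_rate_bps, active_services, p2p_cap=0.3):
    """Split one user's bandwidth among their active services.

    active_services: dict mapping service name -> priority weight.
    p2p_cap: max fraction of the user's rate that unresponsive bulk
             traffic may take while other services are active.
    """
    total = sum(active_services.values())
    shares = {s: user_rate_bps * w / total for s, w in active_services.items()}

    # Cap unresponsive bulk traffic and hand the excess to the user's
    # other services, preserving their aggregate QoS.
    if "p2p" in shares and len(shares) > 1:
        cap = user_rate_bps * p2p_cap
        excess = max(0.0, shares["p2p"] - cap)
        shares["p2p"] -= excess
        others = [s for s in shares if s != "p2p"]
        for s in others:
            shares[s] += excess / len(others)
    return shares

# Example: a user running VoIP, video streaming, and a P2P client at once.
print(user_service_shares(10e6, {"voip": 3, "video": 5, "p2p": 4}))
```

    The point of the sketch is the inversion of priorities: no service class is privileged network-wide; instead, each user's current traffic mix determines how their share is divided, which is what distinguishes the approach from static Diffserv-style classification.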

    Effect of temperature and genetic structure on adaptive evolution at a dynamic range edge in the North American gypsy moth (Lymantria dispar L.)

    The study of biological invasions is not only essential for mitigating their vast potential for ecological and economic harm; invasions also offer a unique opportunity to study adaptive evolution in the context of recent range expansions into novel environments. Since its introduction to Massachusetts in 1869, the North American gypsy moth, Lymantria dispar L., has expanded westward to Minnesota, northward to Canada, and southward to North Carolina. Fluctuating range dynamics at the southern invasive edge are heavily influenced by heat exposure above the optimal temperature (supraoptimal) during the larval stage of development. We coupled genomic sequencing with reciprocal transplant and laboratory-rearing experiments to examine the interactions of phenotypic, genetic, and environmental variation under selective supraoptimal regimes. We demonstrate that while there is no evidence to support local adaptation in the fitness-related physiological traits we measured, there are clear genomic patterns of adaptation due to differential survival at higher temperatures. Mapping of loci identified as contributing to local adaptation in a selective environment, together with those associated with phenotypic variation, highlighted that variation in larval development time is partly driven by pleiotropic loci that also affect survival. Overall, we highlight the necessity and inferential power gained through replicating environmental conditions using both phenotypic and genome-wide analyses.

    Human Resource and Employment Practices in Telecommunications Services, 1980-1998

    [Excerpt] In the academic literature on manufacturing, much research and debate have focused on whether firms are adopting some form of “high-performance” or “high-involvement” work organization based on such practices as employee participation, teams, and increased discretion, skills, and training for frontline workers (Ichniowski et al., 1996; Kochan and Osterman, 1994; MacDuffie, 1995). Whereas many firms in the telecommunications industry flirted with these ideas in the 1980s, they did not prove to be a lasting source of inspiration for the redesign of work and employment practices. Rather, work restructuring in telecommunications services has been driven by the ability of firms to leverage network and information technologies to reduce labor costs and create customer segmentation strategies. “Good jobs” versus “bad jobs,” or higher versus lower wage jobs, do not vary according to whether firms adopt a high-involvement model. They vary along two other dimensions: (1) within firms and occupations, by the value-added of the customer segment that an employee group serves; and (2) across firms, by union and nonunion status. We believe that this customer segmentation strategy is becoming a more general model for employment practices in large-scale service operations; telecommunications services firms may be somewhat more advanced than other service firms in adopting this strategy because of certain unique industry characteristics. The scale economies of network technology are such that once a company builds the network infrastructure to a customer’s specifications, the cost of additional services is essentially zero. As a result, and notwithstanding technological uncertainty, all of the industry’s major players are attempting to take advantage of system economies inherent in the nature of the product market and technology to provide customized packages of multimedia products to identified market segments. They have organized into market-driven business units providing differentiated services to large businesses and institutions, small businesses, and residential customers. They have used information technologies and process reengineering to customize specific services to different segments according to customer needs and ability to pay. Variation in work and employment practices, or labor market segmentation, follows product market segmentation. As a result, much of the variation in employment practices in this industry is within firms and within occupations according to market segment rather than across firms. In addition, despite market deregulation beginning in 1984 and opportunities for new entrants, a tight oligopoly structure is replacing the regulated Bell System monopoly. Former Bell System companies, the giants of the regulated period, continue to dominate market share in the post-1984 period. Older players and new entrants alike are merging and consolidating in order to have access to multimedia markets. What is striking in this industry, therefore, is the relative lack of variation in management and employment practices across firms after more than a decade of experience with deregulation. We attribute this lack of variation to three major sources: (1) technological advances and network economics provide incentives for mergers, organizational consolidation, and, as indicated above, similar business strategies; (2) the former Bell System companies have deep institutional ties, and they continue to benchmark against and imitate each other, so that ideas about restructuring have diffused quickly among them; and (3) despite overall deunionization in the industry, they continue to have high unionization rates, and de facto pattern bargaining within the Bell System has remained quite strong. Therefore, similar employment practices based on inherited collective bargaining agreements continue to exist across former Bell System firms.

    ABE: providing a low-delay service within best effort

    Alternative Best Effort (ABE) is a novel service for IP networks, built on the idea of providing low delay at the possible expense of throughput. The objective is to retain the simplicity of the original Internet single-class best-effort service while providing low delay to interactive adaptive applications.
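    A simplified sketch of the trade-off ABE embodies (class names follow the ABE paper's green/blue terminology; the 10 ms bound and queue structure are illustrative assumptions, and this toy scheduler omits ABE's guarantee that blue traffic is not penalized relative to flat best effort):

```python
# Toy two-class queue: "green" trades potential loss for low delay,
# "blue" keeps plain best-effort service. Not the actual ABE algorithm.
import time
from collections import deque

GREEN_DELAY_BOUND_S = 0.010  # illustrative per-hop bound, not from the paper

class AbeLikeQueue:
    def __init__(self):
        self.green = deque()  # entries: (enqueue_time, packet)
        self.blue = deque()

    def enqueue(self, color, packet):
        q = self.green if color == "green" else self.blue
        q.append((time.monotonic(), packet))

    def dequeue(self):
        # Green packets that already missed the delay bound are dropped:
        # serving them late would break the low-delay promise.
        now = time.monotonic()
        while self.green and now - self.green[0][0] > GREEN_DELAY_BOUND_S:
            self.green.popleft()
        if self.green:
            return self.green.popleft()[1]  # low delay for green
        if self.blue:
            return self.blue.popleft()[1]   # blue gets the residual service
        return None
```

    The sketch shows only the "low delay at the expense of possibly less throughput" half of the bargain; the full ABE scheme also schedules so that blue flows receive at least the throughput they would see in a single-class best-effort network.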