2,000 research outputs found

    A novel network architecture for train-to-wayside communication with quality of service over heterogeneous wireless networks

    In the railway industry, a variety of actors nowadays want to send or receive data between the wayside and an onboard device: the train operating company, the train manufacturer, content providers, and so on. This requires a communication module on each train and at the wayside; these modules interact with each other over heterogeneous wireless links. This system is referred to as the Train-to-Wayside Communication System (TWCS). While there are already many TWCS deployments, quality of service, performance-enhancing proxies (PEPs) and network mobility functions have not yet been fully integrated into them. We therefore propose a novel, modular IPv6-enabled TWCS architecture that jointly tackles these functions and considers their mutual dependencies and relationships. DiffServ is used to differentiate between service classes and priorities, and virtual local area networks are used to differentiate between service level agreements. In the PEP, we propose a distributed TCP accelerator to optimize bandwidth usage. For network mobility, we propose the SCTP protocol (with the Dynamic Address Reconfiguration and PR-SCTP extensions) to create one tunnel per wireless link, supporting reliable transmission of data between the accelerators. We have analyzed different design choices, pinpointed the main implementation challenges and identified candidate solutions for the different modules in the TWCS. As such, we present an elaborated framework that can be used for prototyping a fully featured TWCS.
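The DiffServ-based class differentiation described in this abstract boils down to marking packets with a DiffServ code point so that routers can prioritize them. A minimal sketch in Python, assuming an illustrative mapping from hypothetical TWCS service classes to standard DSCP values (the class names are not from the paper):

```python
import socket

# Hypothetical TWCS service classes mapped to standard DSCP values
# (EF = 46 and AF41 = 34 per RFC 4594; names are illustrative only).
DSCP = {
    "train-control": 46,   # EF: expedited forwarding for critical traffic
    "passenger-info": 34,  # AF41: assured forwarding, high priority
    "bulk-data": 0,        # default / best effort
}

def tos_byte(service_class: str) -> int:
    """The DSCP occupies the upper 6 bits of the IP TOS byte."""
    return DSCP[service_class] << 2

def mark_socket(sock: socket.socket, service_class: str) -> None:
    """Set the TOS byte so DiffServ-aware routers can classify the flow."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(service_class))
```

Marking happens per socket, so each onboard application (or the TWCS gateway on its behalf) can tag its traffic before it reaches the wireless links.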

    A Priority-Based Admission Control Scheme for Commercial Web Servers

    This paper investigates the performance and load management of web servers deployed in commercial websites. Such websites offer services such as flight and hotel booking, online banking, stock trading, and product purchases. Customers increasingly rely on these round-the-clock services, which are easier and (generally) cheaper to use; however, the growing number of customer requests places a greater demand on the web servers, leading to overload and the consequent provision of an inadequate level of service. This paper addresses these issues and proposes an admission control scheme based on a class-based priority scheme that classifies customer requests into different classes. The proposed scheme is formally specified using the π-calculus and implemented as a Java-based prototype system, which is used to simulate the behaviour of commercial website servers and to evaluate their performance in terms of response time, throughput, arrival rate, and the percentage of dropped requests. Experimental results demonstrate that the proposed scheme significantly improves the performance of high-priority requests without adversely affecting low-priority requests.
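The core idea of class-based admission control can be sketched in a few lines: under load, low-priority requests are refused before capacity runs out, keeping headroom for high-priority ones. This is a minimal illustration under assumed parameters, not the paper's actual scheme:

```python
class PriorityAdmissionControl:
    """Sketch of class-based admission control: low-priority requests may
    not use the last few capacity slots, which are held back for priority
    class 0 (the capacity model here is illustrative, not the paper's)."""

    def __init__(self, capacity: int, reserved_for_high: int = 2):
        self.capacity = capacity
        self.reserved = reserved_for_high
        self.load = 0  # requests currently being served

    def admit(self, priority: int) -> bool:
        # Priority 0 may fill the server; lower classes stop earlier.
        limit = self.capacity if priority == 0 else self.capacity - self.reserved
        if self.load < limit:
            self.load += 1
            return True
        return False

    def release(self) -> None:
        self.load = max(0, self.load - 1)
```

In the paper's evaluation the interesting metrics are exactly the ones such a policy trades off: dropped low-priority requests against the response time of high-priority ones.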

    Feedback Control-based Database Connection Management for Proportional Delay Differentiation-enabled Web Application Servers

    Abstract. As an important differentiated service model, proportional delay differentiation (PDD) aims to maintain the queuing-delay ratio between different classes of requests or packets according to pre-specified parameters. This paper considers providing PDD service in web application servers through feedback control-based database connection management. To achieve this goal, an approximate linear time-invariant model of the database connection pool (DBCP) is identified experimentally and used to design a proportional-integral (PI) controller. The controller is invoked periodically to calculate and adjust the probabilities with which different classes of dynamic requests use database connections, according to the error between the measured delay ratio and the reference value. Three kinds of workloads, following deterministic, uniform and heavy-tailed distributions respectively, are designed to evaluate the performance of the closed-loop system. Experimental results indicate that the controller is effective in handling varying workloads, and that PDD can be achieved in the DBCP even if the number of concurrent dynamic requests changes abruptly under different kinds of workloads.
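The feedback loop described above can be sketched as a textbook PI controller acting on the delay-ratio error. The gains, the initial probability and the sign convention (ratio = low-priority delay over high-priority delay) are assumptions for illustration, not the paper's identified model:

```python
class DelayRatioPIController:
    """PI controller sketch: each control period, adjust the probability
    that low-priority requests obtain a database connection so that the
    measured delay ratio tracks a reference. Gains are illustrative."""

    def __init__(self, kp: float = 0.1, ki: float = 0.02, reference: float = 2.0):
        self.kp, self.ki = kp, ki
        self.reference = reference       # target (low/high) delay ratio
        self.integral = 0.0
        self.probability = 0.5           # low-priority connection probability

    def update(self, measured_ratio: float) -> float:
        # If low-priority delay is smaller than the reference allows,
        # the error is negative and the probability is reduced, which
        # delays low-priority requests more (and vice versa).
        error = measured_ratio - self.reference
        self.integral += error
        self.probability += self.kp * error + self.ki * self.integral
        self.probability = min(1.0, max(0.0, self.probability))  # anti-windup clamp
        return self.probability
```

The integral term is what lets the loop reject sustained disturbances such as the abrupt changes in concurrency mentioned in the evaluation.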

    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

    Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs and increases scalability, interoperability and openness. SOA endorses the reuse of existing services, aggregating them into new service layers for future recycling. At the same time, the complexity of large service-oriented systems reflects negatively on their behaviour in terms of the exhibited quality of service. To address this problem, this thesis focuses on using request scheduling to meet Service Level Agreements (SLAs), with special focus on composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign response-time requirements to component service invocations; the local scheduling policies then perform request scheduling in order to meet these requirements. The proposed approach can be deployed without altering the code of the scheduled services, does not require a central point of control and is platform independent. Simulation experiments were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements; the validity of the simulation was confirmed by comparing its results to those obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
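The two-level scheme above separates two concerns that are easy to sketch independently: a global policy that splits a composite service's SLA deadline among its component invocations, and a local policy that orders pending requests against those deadlines. The proportional split and the earliest-deadline-first queue below are simple stand-ins, not necessarily the thesis's exact policies:

```python
import heapq

def split_sla_budget(sla_deadline: float, mean_service_times: list[float]) -> list[float]:
    """Global policy sketch: divide the composite SLA deadline among
    component invocations in proportion to their mean service times."""
    total = sum(mean_service_times)
    return [sla_deadline * t / total for t in mean_service_times]

class EDFQueue:
    """Local policy sketch: serve pending requests earliest-deadline-first,
    so invocations with tight component budgets are not starved."""

    def __init__(self):
        self._heap: list[tuple[float, str]] = []

    def enqueue(self, deadline: float, request_id: str) -> None:
        heapq.heappush(self._heap, (deadline, request_id))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[1]
```

Because the local scheduler only needs per-request deadlines, it can sit in front of a service without modifying its code, matching the deployment constraint stated in the abstract.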

    Revenue maximization problems in commercial data centers

    PhD Thesis. As IT systems become more important every day, a main concern is that users may face major problems, and eventually incur major costs, if computing systems do not meet the expected performance requirements: customers expect reliability and performance guarantees, while underperforming systems lose revenue. Even with the adoption of data centers as the hub of IT organizations and the provider of business efficiencies, the problems are not over, because it is extremely difficult for service providers to meet the promised performance guarantees in the face of unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs): contracts that specify a level of performance that must be met, and compensations in case of failure. In this thesis I address some of the performance problems that arise when IT companies sell the service of running 'jobs' subject to Quality of Service (QoS) constraints. In particular, the aim is to improve the efficiency of service provisioning systems by allowing them to adapt to changing demand conditions. First, I define the problem in terms of a utility function to maximize. Two different models are analyzed: one for single jobs and another suited to session-based traffic. Then, I introduce an autonomic model for service provision. The architecture consists of a set of hosted applications that share a certain number of servers. The system collects demand and performance statistics and estimates traffic parameters; these estimates are used by management policies that implement dynamic resource allocation and admission algorithms. Results from a number of experiments show that the performance of these heuristics is close to optimal. QoSP (Quality of Service Provisioning); British Telecom.
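Dynamic resource allocation of the kind described, servers shared among hosted applications with revenue as the objective, is often illustrated with a greedy marginal-revenue rule. The sketch below assumes each application reports the revenue gained by its next server and that returns are diminishing; the figures and the rule itself are illustrative, not the thesis's policies:

```python
def allocate_servers(n_servers: int,
                     marginal_revenue: dict[str, list[float]]) -> dict[str, int]:
    """Greedy sketch: hand out servers one at a time to the application
    whose next server yields the highest marginal revenue. With
    diminishing returns this greedy rule maximizes total revenue."""
    alloc = {app: 0 for app in marginal_revenue}
    for _ in range(n_servers):
        # Applications that can still absorb a server, with the gain
        # their next server would bring.
        candidates = [(gains[alloc[app]], app)
                      for app, gains in marginal_revenue.items()
                      if alloc[app] < len(gains)]
        if not candidates:
            break
        _, best = max(candidates)
        alloc[best] += 1
    return alloc
```

In an autonomic setting the `marginal_revenue` estimates would come from the measured demand statistics and would be refreshed each control period, so the allocation tracks changing traffic.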

    Project Extranets: a strategic necessity or a tool for competitive advantage?

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Architecture, 2000. Vitae. Includes bibliographical references (leaves 78-80). An exploratory study was conducted to determine the strategic advantage that firms may gain by using project extranets on real estate development projects. Eight organizations were interviewed to determine their priorities, risk preferences, and needs regarding project communication technologies. Interviews were conducted with Corporate Owner/Occupiers, Owner/Non-Occupiers, and Institutional Owner/Occupiers. The hypothesis tested was that owners and developers of real estate were looking to use project extranets to gain a competitive advantage. Research results indicated a resounding 'no' to this hypothesis: no owners or developers currently look at extranets as a source of competitive advantage. However, the research data did provide insights into what the technology must deliver for organizations to view a project extranet as a source of competitive advantage in the future. Owners were segmented into categories based on risk profile and needs regarding project extranets. Corporate Owner/Occupiers with real estate support needed assistance with predictability and execution; Corporate Owner/Occupiers of manufacturing operations needed increases in speed; Institutional Owner/Occupiers needed certainty; and Owner/Non-Occupiers needed mitigation of market risks. By Ryan Carley and Matthew Robinson. S.M.

    Quality-of-service management in IP networks

    Quality of Service (QoS) in Internet Protocol (IP) networks has been the subject of active research over the past two decades. The Integrated Services (IntServ) and Differentiated Services (DiffServ) QoS architectures have emerged as proposed standards for resource allocation in IP networks. These two QoS architectures support the need for multiple traffic queuing systems, allowing resource partitioning for the heterogeneous applications making use of the networks. There have been a number of specifications and proposals for the number of traffic queuing classes (Classes of Service, CoS) that would support integrated services in IP networks, but none has provided verification, in the form of analytical or empirical investigation, that its proposal is optimal. Despite the existence of the two standard QoS architectures and the large volume of research work that has been carried out on IP QoS, its deployment in the Internet remains elusive. This is due in part to the complexities associated with some aspects of the standard QoS architectures. [Continues.]

    Effective Resource and Workload Management in Data Centers

    The increasing demand for storage, computation, and business continuity has driven the growth of data centers. Managing data centers efficiently is difficult because of the wide variety of data-center applications, their ever-changing intensities, and the fact that application performance targets may differ widely. Server virtualization has been a game-changing technology for IT, making it possible to support multiple virtual machines (VMs) simultaneously. This dissertation focuses on how virtualization technologies can be used to develop new tools for maintaining high resource utilization, achieving high application performance, and reducing the cost of data center management. For multi-tiered applications, bursty workload traffic can significantly deteriorate performance. This dissertation proposes AWAIT, an admission control algorithm for handling overload conditions in multi-tier web services. AWAIT places requests of accepted sessions on hold and refuses to admit new sessions when the system experiences a sudden workload surge. To meet the service-level objective, AWAIT serves the requests in the blocking queue with high priority; the size of the queue is determined dynamically according to the workload burstiness. Many admission control policies are triggered by instantaneous measurements of system resource usage, e.g., CPU utilization. This dissertation first demonstrates that directly measuring virtual machine resource utilizations with standard tools cannot always produce accurate estimates; a directed factor graph (DFG) model is defined to capture the dependencies among multiple types of resources across the physical and virtual layers. Virtualized data centers enable the sharing of resources among hosted applications to achieve high resource utilization. However, it is difficult to satisfy application SLOs on a shared infrastructure, as application workload patterns change over time. AppRM, an automated management system, not only allocates the right amount of resources to applications to meet their performance targets but also adjusts to dynamic workloads using an adaptive model. Finally, server consolidation is one of the key applications of server virtualization. This dissertation proposes a VM consolidation mechanism, first by extending a fair load-balancing scheme to multi-dimensional vector scheduling, and then by using a queueing network model to capture service contention for a particular virtual machine placement.
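Multi-dimensional vector scheduling for consolidation, as mentioned in the last abstract, is commonly illustrated with vector bin packing: each VM is a resource-demand vector, and no host dimension may overflow. The first-fit-decreasing heuristic below is a standard sketch of the idea, not the dissertation's actual mechanism:

```python
def consolidate(vms: list[tuple[float, float]],
                cap: tuple[float, float] = (1.0, 1.0)) -> list[list[int]]:
    """First-fit-decreasing vector bin packing: each VM is a (cpu, memory)
    demand vector, normalized against host capacity `cap`. VMs are placed
    largest-first onto the first host that can hold them in every dimension."""
    order = sorted(range(len(vms)), key=lambda i: max(vms[i]), reverse=True)
    hosts: list[list[int]] = []   # indices of VMs placed on each host
    used: list[list[float]] = []  # per-host usage, one entry per dimension
    for i in order:
        for h, u in enumerate(used):
            if all(u[d] + vms[i][d] <= cap[d] for d in range(len(cap))):
                used[h] = [u[d] + vms[i][d] for d in range(len(cap))]
                hosts[h].append(i)
                break
        else:
            hosts.append([i])       # no existing host fits: open a new one
            used.append(list(vms[i]))
    return hosts
```

A placement produced this way is what the dissertation's queueing network model would then evaluate for service contention, since packing tightly by capacity alone ignores performance interference.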