
    Dynamic Time Windows and Generalized Virtual Clocks - Combined Closed-Loop/Open-Loop Mechanisms for Congestion Control of Data Traffic in High Speed Wide Area Networks

    This paper presents a set of mechanisms for congestion control of data traffic in high speed wide area networks (HSWANs), along with preliminary performance results. The network model assumes reservation of resources based on average requirements. The mechanisms address (a) the different network time constants (short-term and medium-term), (b) admission control that allows controlled variance of traffic as a function of medium-term congestion, and (c) prioritized scheduling based on a new fairness criterion, which is perceived as the appropriate fairness measure for HSWANs. Preliminary performance studies show that the queue length statistics at switching nodes (mean, variance and maximum) are approximately proportional to the end-point 'time window' size. Further,
    * when network utilization approaches unity, the time window mechanism can protect the network from buffer overruns and excessive queueing delays, and
    * when network utilization is lower, the time window may be increased to allow a controlled amount of variance that attempts to simultaneously meet the performance goals of the end-user and those of the network.
    The prioritized scheduling algorithms proposed and studied in this paper are a generalization of the Virtual Clock algorithm [Zhang 1989]. The study investigates
    * necessary and sufficient conditions for accomplishing the desired fairness,
    * simulation and (limited) analytical results for expected waiting times,
    * the ability to protect against misbehaving users, and
    * the relationship between end-point admission control (Time-Window) and internal scheduling ('Pulse' and Virtual Clock) at the switch.
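    The abstract builds on the Virtual Clock algorithm [Zhang 1989] without restating it. As a point of reference, a minimal sketch of the standard Virtual Clock discipline (not the paper's 'Pulse' generalization or its time-window admission control) is given below; the class name, unit packet sizes, and the example flows are illustrative assumptions.

```python
import heapq

class VirtualClockScheduler:
    """Minimal sketch of Zhang's Virtual Clock scheduling discipline.

    Each flow i reserves an average rate AR_i. An arriving packet advances the
    flow's (auxiliary) virtual clock by one tick of 1/AR_i, and the link
    transmits packets in increasing order of their virtual clock stamps.
    Packet sizes are taken as one unit here for simplicity.
    """

    def __init__(self, reserved_rates):
        self.rates = reserved_rates                      # flow_id -> reserved rate AR_i
        self.auxvc = {f: 0.0 for f in reserved_rates}    # auxiliary virtual clocks
        self.queue = []                                  # min-heap of (stamp, seq, flow_id)
        self.seq = 0

    def arrive(self, flow_id, now):
        # Resynchronize to real time so an idle flow cannot bank credit,
        # then advance the clock by one virtual tick for this packet.
        self.auxvc[flow_id] = max(now, self.auxvc[flow_id]) + 1.0 / self.rates[flow_id]
        heapq.heappush(self.queue, (self.auxvc[flow_id], self.seq, flow_id))
        self.seq += 1

    def transmit(self):
        # Serve the queued packet with the smallest virtual clock stamp.
        return heapq.heappop(self.queue) if self.queue else None


# Illustrative use: both flows queue four packets at time 0; flow "a" reserves
# twice flow "b"'s rate, so its packets carry smaller stamps and drain earlier.
sched = VirtualClockScheduler({"a": 2.0, "b": 1.0})
for _ in range(4):
    sched.arrive("a", now=0.0)
    sched.arrive("b", now=0.0)
while (pkt := sched.transmit()) is not None:
    print(pkt)   # (stamp, seq, flow_id)
```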

    Scheduling for today’s computer systems: bridging theory and practice

    Scheduling is a fundamental technique for improving performance in computer systems. From web servers to routers to operating systems, how the bottleneck device is scheduled has an enormous impact on the performance of the system as a whole. Given the immense literature studying scheduling, it is easy to think that we already understand enough about scheduling. But modern computer system designs have highlighted a number of disconnects between traditional analytic results and the needs of system designers. In particular, the idealized policies, metrics, and models used by analytic researchers do not match the policies, metrics, and scenarios that appear in real systems. The goal of this thesis is to take a step towards modernizing the theory of scheduling in order to provide results that apply to today’s computer systems, and thus ease the burden on system designers. To accomplish this goal, we provide new results that help to bridge each of the disconnects mentioned above. We will move beyond the study of idealized policies by introducing a new analytic framework where the focus is on scheduling heuristics and techniques rather than individual policies. By moving beyond the study of individual policies, our results apply to the complex hybrid policies that are often used in practice. For example, our results enable designers to understand how the policies that favor small job sizes are affected by the fact that real systems only have estimates of job sizes. In addition, we move beyond the study of mean response time and provide results characterizing the distribution of response time and the fairness of scheduling policies. These results allow us to understand how scheduling affects QoS guarantees and whether favoring small job sizes results in large job sizes being treated unfairly. Finally, we move beyond the simplified models traditionally used in scheduling research and provide results characterizing the effectiveness of scheduling in multiserver systems and when users are interactive. These results allow us to answer questions about how to design multiserver systems and how to choose a workload generator when evaluating new scheduling designs.
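    The abstract repeatedly refers to policies that favor small job sizes; the canonical example is Shortest-Remaining-Processing-Time (SRPT). The toy simulation below, with invented workload parameters, is only meant to make that notion concrete and is not the analytic machinery the thesis develops.

```python
import heapq, random

def srpt_mean_response(jobs):
    """Mean response time under preemptive Shortest-Remaining-Processing-Time.

    jobs: list of (arrival_time, size) pairs served by a single server.
    """
    jobs = sorted(jobs)                          # order by arrival time
    ready = []                                   # min-heap of [remaining_size, arrival_time]
    t, i, done, total = 0.0, 0, 0, 0.0
    while done < len(jobs):
        if not ready:
            t = max(t, jobs[i][0])               # server idle: jump to next arrival
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, [jobs[i][1], jobs[i][0]])
            i += 1
        rem, arr = heapq.heappop(ready)          # job with the least remaining work runs
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_arrival - t)         # run until it finishes or is preempted
        t += run
        if rem - run > 1e-12:
            heapq.heappush(ready, [rem - run, arr])
        else:
            done += 1
            total += t - arr                     # this job's response time
    return total / len(jobs)

# Poisson arrivals with exponential job sizes at load ~0.8 (illustrative numbers).
rng, t, workload = random.Random(0), 0.0, []
for _ in range(2000):
    t += rng.expovariate(1.0)
    workload.append((t, rng.expovariate(1.25)))
print(srpt_mean_response(workload))
```

    Swapping the heap key from remaining size to arrival time turns the same loop into FCFS, which is one quick way to see how much favoring small jobs reduces mean response time.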

    State dependent heuristic method of job shop scheduling


    On the Modelling of the Mobile WiMAX (IEEE 802.16e) Uplink Scheduler

    Packet scheduling has drawn a great deal of attention in the field of wireless networks as it plays an important role in distributing shared resources in a network. The process involves allocating the bandwidth among users and determining their transmission order. In this paper an uplink (UL) scheduling algorithm for the Mobile Worldwide Interoperability for Microwave Access (WiMAX) network based on the cyclic polling model is proposed. The model in this study consists of five queues (UGS, ertPS, rtPS, nrtPS, and BE) visited by a single server. A threshold policy is imposed on the nrtPS queue to ensure that the delay constraint of real-time traffic (UGS, ertPS, and rtPS) is not violated, which distinguishes this approach from existing contributions. A mathematical model is formulated for the weighted sum of the mean waiting times of the individual queues based on the pseudo-conservation law. The results of the analysis are useful in obtaining or testing approximations for individual mean waiting times, especially when the queues are asymmetric (where each queue may have different stochastic characteristics such as arrival rate and service time distribution) and when their number is large (more than two queues).
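    The abstract relies on the pseudo-conservation law without stating it. For orientation, the standard Boxma-Groenendijk form for a cyclic polling system with exhaustive service at every queue is sketched below; the paper's five-queue model with its nrtPS threshold policy would modify the right-hand side, so this should be read only as the baseline such analyses build on.

```latex
\sum_{i=1}^{N} \rho_i \,\mathbb{E}[W_i]
  \;=\; \frac{\rho}{2(1-\rho)} \sum_{i=1}^{N} \lambda_i \,\mathbb{E}[B_i^{2}]
  \;+\; \rho\,\frac{\mathbb{E}[S^{2}]}{2\,\mathbb{E}[S]}
  \;+\; \frac{\mathbb{E}[S]}{2(1-\rho)} \Bigl(\rho^{2} - \sum_{i=1}^{N} \rho_i^{2}\Bigr)
```

    Here \lambda_i, B_i and \rho_i = \lambda_i \mathbb{E}[B_i] are the arrival rate, service time and load of queue i, \rho = \sum_i \rho_i, W_i is the waiting time at queue i, and S is the total switchover time accumulated over one cycle of the server.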

    Departure operations at Boston Logan International Airport

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001. Includes bibliographical references (p. 199-203). In order to support the development of improved methods for departure operations, the flow constraints primarily responsible for inefficiencies and delays, and their causes, need to be identified. This thesis is an effort to identify such flow constraints and gain a deep understanding of the underlying dynamics of the departure process, based on field observations and analysis conducted at Boston Logan International Airport. It was observed that the departure process forms a complex interactive queuing system and is highly controlled by the air traffic controllers. Flow constraints were therefore identified with airport resources (runways, taxiways, ramp and gates) and with air traffic controllers, due to their workload and control strategies. While departure delays were observed in all airport components, flow constraints manifested mainly at the runway system, where the longest delays and queues concentrated. Major delays and inefficiencies were also observed due to flow constraints at National Air Space locations downstream of the airport, which propagate back and block the departure flow from the airport. The air traffic controllers' main strategies in managing the traffic and dealing with the flow constraints were also identified. Based on these observations, a core departure process abstraction was posed, consisting of a queuing element (representing the delays) and a control element (representing the air traffic controller actions). The control element represents blocking the aircraft flow, both to maintain safe airport operation according to Air Traffic Control procedures and to regulate the outbound flow to constrained downstream resources. Based on this physical abstraction, an analytical queuing framework was developed and used to analyze the departure process dynamics under three different scenarios: the overall process between pushback and takeoff, departure sub-processes between controller/pilot communication events, and the process under downstream restrictions. Passing, which results mainly from aircraft sequencing, and its suspension under special circumstances (such as downstream restrictions), were used as a manifestation of the control behavior. It was observed that Logan Airport exhibits high uncertainty and limited sequencing, hindering the air traffic controllers' ability to efficiently manage the traffic and comply with restrictions. In conclusion, implications for improved methods for departure operations are inferred from the observations and analysis. By Husni Rifat Idris. Ph.D.

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigate the efficiency of near-optimal deployment of large-scale CPU-intensive Bag-of-Tasks (BoT) applications running on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions work both when the running time of the given application is known and when it is unknown, and aim to optimize the user's utility by choosing the most desirable tradeoff between the makespan and the total incurred expense. We propose a schema to provide a near-optimal deployment of a BoT application with respect to the user's preferences. Our approach is to provide the user with a set of Pareto-optimal solutions, from which she may select one of the possible scheduling points based on her internal utility function. Our framework can also cope with uncertainty in the tasks' execution times, using two methods. First, an estimation method based on Monte Carlo sampling, called the AA algorithm, is presented; it uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code analyzers, code profiling, or estimation tools, a hybrid method is presented that evaluates the accuracy of each estimation tool over certain time intervals in order to improve resource allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema based on the information provided by the user and environmental parameters. Then, we exploit dynamic methods to assign tasks to resources so as to approach the optimal schema as closely as possible, using two methods: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset sum problem. Extensive experimental results conducted on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution respecting the user's utility function.
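    The abstract names First Fit Decreasing as the simpler of its two task-assignment methods. A generic sketch of FFD applied to packing tasks onto VMs under a makespan budget is shown below; the representation of tasks and VMs and the capacity figure are illustrative assumptions, and the paper's actual strategy also weighs monetary cost against makespan.

```python
def first_fit_decreasing(task_runtimes, vm_capacity):
    """Pack tasks onto VMs with First Fit Decreasing.

    task_runtimes: estimated runtime of each task (the 'size' being packed).
    vm_capacity:   makespan budget per VM; a new VM is opened whenever no
                   existing VM can take the task without exceeding the budget.
    Returns a list of VMs, each given as the list of task runtimes it holds.
    """
    vms = []                                         # each entry: [used_time, [tasks...]]
    for t in sorted(task_runtimes, reverse=True):    # consider the largest tasks first
        for vm in vms:
            if vm[0] + t <= vm_capacity:             # first VM with enough slack
                vm[0] += t
                vm[1].append(t)
                break
        else:
            vms.append([t, [t]])                     # no VM fits: open a new one
    return [tasks for _, tasks in vms]

# Example: pack 8 tasks under a 10-unit makespan budget per VM.
print(first_fit_decreasing([7, 5, 4, 4, 3, 2, 2, 1], vm_capacity=10))
# -> [[7, 3], [5, 1], [4, 2, 2]]
```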

    Medium access control mechanisms for high speed metropolitan area networks

    In this dissertation, novel Medium Access Control mechanisms for High Speed Metropolitan Area Networks are proposed and their performance is investigated in the presence of single and multiple priority classes of traffic. The proposed mechanisms are based on the Distributed Queue Dual Bus network, which has been adopted by the IEEE standardization committee as the 802.6 standard for Metropolitan Area Networks, and address most of its performance limitations. First, the Rotating Slot Generator scheme is introduced, which uses the looped bus architecture that has been proposed for the 802.6 network. According to this scheme, the responsibility for generating slots moves periodically from station to station around the loop. In this way, the positions of the stations relative to the slot generator change continuously, and therefore there are no favorable locations on the busses. Then, two variations of a new bandwidth balancing mechanism, NSW_BWB and ITU_NSW, are introduced. Their main advantage is that their operation does not require the wastage of channel slots, and for this reason they can converge very fast to the steady state, where the fair bandwidth allocation is achieved. Their performance and their ability to support multiple priority classes of traffic are thoroughly investigated. Analytic estimates for the stations' throughputs and average segment delays are provided. Moreover, a novel, very effective priority mechanism is introduced which can guarantee almost immediate access for high priority traffic, regardless of the presence of lower priority traffic. Its performance is thoroughly investigated and its ability to support real time traffic, such as voice and video, is demonstrated. Finally, the performance of the various mechanisms proposed in this dissertation is examined in the presence of erasure nodes and compared to the corresponding performance of the most prominent existing mechanisms.

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous volumes of data to centralized cloud servers, fog/edge computing can reduce processing delay and network traffic significantly. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, and can help the reader grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.

    LBSim: A simulation system for dynamic load-balancing algorithms for distributed systems.

    In a distributed system consisting of autonomous computational units, the total computational power of all the units needs to be utilized efficiently by applying suitable load-balancing policies. To accomplish this, a large number of load-balancing algorithms have been proposed in the literature. Simulation has been widely used to facilitate the performance study of these load-balancing strategies. However, comparison of the load-balancing algorithms becomes difficult if a different simulator is used for each case. There have been few studies on generalized simulation of load-balancing algorithms in distributed systems: most simulation systems address experiments for particular load-balancing algorithms, whereas this thesis aims to support simulation of a broad range of algorithms. After characterizing distributed systems and extracting the common components of load-balancing algorithms, a simulation system called LBSim has been built. LBSim is a generalized event-driven simulator for studying load-balancing algorithms with coarse-grained applications running on distributed networks of autonomous processing nodes. In order to verify that the simulation model can represent actual systems reasonably well, we have validated LBSim both qualitatively and quantitatively. As a simulation toolkit, LBSim's programming libraries can be reused to implement load-balancing algorithms for the purpose of performance measurement and analysis from different perspectives. As a framework for algorithm simulation, it can be extended with moderate effort by following object-oriented methodology to meet any new requirements that may arise in the future. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .D8. Source: Masters Abstracts International, Volume 43-05, page 1747. Adviser: A. K. Aggarwal. Thesis (M.Sc.)--University of Windsor (Canada), 2004.
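    LBSim's own programming interface is not reproduced in the abstract, so the toy sketch below only illustrates the kind of load-balancing experiment such a simulator generalizes: jobs arrive at a dispatcher and are sent to the node expected to become free soonest. All names, the policy, and the workload parameters are invented for illustration.

```python
import random

def simulate_least_loaded(num_nodes=4, num_jobs=2000, arrival_rate=3.0,
                          mean_service=1.0, seed=1):
    """Toy simulation of a 'send to the node that frees up soonest' policy.

    Each node serves its own FIFO queue; the dispatcher assigns every arriving
    job to the node with the earliest predicted idle time.
    """
    rng = random.Random(seed)
    free_at = [0.0] * num_nodes            # time at which each node drains its queue
    clock, total_response = 0.0, 0.0
    for _ in range(num_jobs):
        clock += rng.expovariate(arrival_rate)           # Poisson job arrivals
        service = rng.expovariate(1.0 / mean_service)    # exponential job sizes
        node = min(range(num_nodes), key=lambda i: free_at[i])
        start = max(clock, free_at[node])
        free_at[node] = start + service
        total_response += free_at[node] - clock          # queueing + service time
    return total_response / num_jobs

print(simulate_least_loaded())    # mean response time under this toy policy
```

    Comparing such a policy against, say, random assignment under the same workload is the sort of experiment a generalized simulator like LBSim is meant to make repeatable.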

    Scalable and Accurate Memory System Simulation

    Memory systems today possess more complexity than ever. On one hand, main memory technology has a much more diverse portfolio: other than the mainstream DDR DRAMs, a variety of DRAM protocols have been proliferating in certain domains, and Non-Volatile Memory (NVM) finally has commodity main memory products, introducing more heterogeneity to the main memory media. On the other hand, the scale of computer systems, from personal computers and servers to high performance computing systems, has been growing in response to increasing computing demand, and memory systems have to keep scaling to avoid bottlenecking the whole system. However, current memory simulators cannot accurately or efficiently model these developments, making it hard for researchers and developers to evaluate or optimize memory system designs. In this study, we attack these issues from multiple angles. First, we develop a fast and validated cycle-accurate main memory simulator that can accurately model almost all existing DRAM protocols and some NVM protocols, and that can be easily extended to support upcoming protocols. We showcase this simulator by conducting a thorough characterization of existing DRAM protocols and provide insights on memory system design. Second, to efficiently simulate increasingly parallel memory systems, we propose a lax synchronization model that allows efficient parallel DRAM simulation. We build the first practical parallel DRAM simulator, which can speed up simulation by up to a factor of three with a single-digit percentage loss in accuracy compared to cycle-accurate simulation, and we develop mitigation schemes that further improve accuracy at no additional performance cost. Moreover, we discuss the limitations of cycle-accurate models and explore alternative ways of modeling DRAM. We propose a novel approach that converts DRAM timing simulation into a classification problem: by doing so we can predict the DRAM latency of each memory request upon first sight, which makes the approach compatible with scalable architecture simulation frameworks. We develop prototypes based on various machine learning models, and they demonstrate excellent performance and accuracy results that make them a promising alternative to cycle-accurate models. Finally, for large-scale memory systems where data movement is often the performance-limiting factor, we propose a set of interconnect topologies and implement them in a parallel discrete event simulation framework. We evaluate the proposed topologies through simulation and show that their scalability and performance exceed those of existing topologies as system size or workload increases.
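    The abstract's most distinctive idea is recasting DRAM timing simulation as a classification problem. The sketch below shows that idea in miniature, with entirely synthetic features and labels standing in for what a cycle-accurate model would produce; the choice of a decision tree (via scikit-learn) is an assumption, as the thesis evaluates several machine learning models.

```python
import random
from sklearn.tree import DecisionTreeClassifier

def synth_request(rng):
    """Generate one hypothetical memory request with features knowable 'at first sight'."""
    row_hit = rng.random() < 0.6           # is the target row likely already open?
    bank_queue = rng.randint(0, 8)         # requests already queued at the bank
    channel_busy = rng.random() < 0.3      # is the channel currently occupied?
    # Invented ground-truth rule standing in for a cycle-accurate simulator's timing.
    if row_hit and bank_queue == 0 and not channel_busy:
        label = "fast"                     # e.g. row-buffer hit on an idle bank
    elif bank_queue >= 5 or channel_busy:
        label = "slow"                     # queued behind other traffic
    else:
        label = "medium"
    return [int(row_hit), bank_queue, int(channel_busy)], label

rng = random.Random(0)
data = [synth_request(rng) for _ in range(5000)]
X, y = [f for f, _ in data], [l for _, l in data]

# Train on "simulated" requests, then predict a latency class for unseen requests
# instead of replaying detailed DRAM timing for each one.
clf = DecisionTreeClassifier(max_depth=4).fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```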