
    On-line load balancing

    Abstract: The setup for our problem consists of n servers that must complete a set of tasks. Each task can be handled only by a subset of the servers, requires a different level of service, and once assigned cannot be reassigned. We make the natural assumption that the level of service is known at arrival time, but that the duration of service is not. The on-line load balancing problem is to assign each task to an appropriate server in such a way that the maximum load on the servers is minimized. In this paper we derive matching upper and lower bounds for the competitive ratio of the on-line greedy algorithm for this problem, namely (3n)^(2/3)/2 · (1 + o(1)), and derive a lower bound, Ω(n^(1/2)), for any other deterministic or randomized on-line algorithm.
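    The greedy rule analyzed in this abstract can be sketched as follows. This is a minimal illustration, assuming tasks arrive as (weight, eligible-servers) pairs; the function and variable names are ours, not the paper's:

```python
def greedy_assign(tasks, n_servers):
    """Greedy on-line assignment: each task goes to the currently
    least-loaded server among those able to handle it.

    tasks: iterable of (weight, eligible_server_ids) pairs.
    Returns the per-task assignment and the final server loads."""
    load = [0.0] * n_servers
    assignment = []
    for weight, eligible in tasks:
        # pick the eligible server with minimum current load
        target = min(eligible, key=lambda s: load[s])
        load[target] += weight
        assignment.append(target)
    return assignment, load
```

For example, with two servers and tasks [(1, [0, 1]), (2, [0]), (1, [1])], greedy sends the first task to server 0, is then forced to overload server 0 with the second task, and sends the third to server 1; the irrevocability of assignments is exactly what drives the competitive ratio above.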

    On-Line Load Balancing with Task Buffer

    On-line load balancing is one of the most important problems for applications involving resource allocation. It aims to assign tasks to suitable machines and balance the load among all of the machines, where each task must normally be assigned to a machine upon arrival. In practice, however, tasks are not always required to be assigned to machines immediately. In this paper, we propose a novel on-line load balancing model with a task buffer, where the buffer can temporarily store as many tasks as possible. Three algorithms, namely LPTCP1_α, LPTCP2_α, and LPTCP3_β, are proposed based on the Longest Processing Time (LPT) algorithm and a variety of planarization algorithms. The planarization algorithms are proposed to reduce the difference among the elements of a set. Experimental results show that our proposed algorithms can effectively solve the on-line load balancing problem and perform well in large-scale experiments.
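    The LPTCP variants themselves are not specified in this abstract; the sketch below shows only the underlying Longest Processing Time step, applied to a buffer of temporarily stored task lengths (an illustration under our own assumptions, not the paper's algorithm):

```python
import heapq

def lpt_schedule(buffered_tasks, n_machines):
    """Longest Processing Time rule over a buffer: sort the buffered
    task lengths in decreasing order, then repeatedly place the next
    longest task on the currently least-loaded machine."""
    heap = [(0.0, m) for m in range(n_machines)]  # (load, machine id)
    heapq.heapify(heap)
    assignment = {}
    for length in sorted(buffered_tasks, reverse=True):
        load, m = heapq.heappop(heap)       # least-loaded machine
        assignment.setdefault(m, []).append(length)
        heapq.heappush(heap, (load + length, m))
    return assignment
```

Buffering is what makes the sort possible: a strictly on-line scheduler never sees the whole set of pending tasks at once, so it cannot order them by length.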

    A CASE STUDY ON IMPROVING THE PRODUCTIVITY USING IE TOOLS

    Assembly line balancing has been a focus of interest in industrial engineering for the last few years. It is the problem of assigning tasks to workstations by optimizing a performance measure while satisfying precedence relations between tasks and cycle time restrictions. Line balancing is important in ensuring that a production line is efficient and producing at its optimum; it attempts to equalize the workload on each workstation of the production line. Mixed-model assembly lines are increasingly used in many industries to achieve higher production rates. This study deals with mixed-model assembly line balancing and uses a Yamazumi chart to break down the work elements into value-added and non-value-added parts, in order to reduce waste and increase productivity.
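    The station-assignment step described above can be sketched as follows. This is a simplified illustration only: it assumes the work elements are already listed in a precedence-feasible order, and the function name is hypothetical:

```python
def balance_line(task_times, cycle_time):
    """Group work-element times into workstations, in the given
    precedence-feasible order, so that no station's total time
    exceeds the cycle time."""
    stations, current, used = [], [], 0.0
    for t in task_times:
        if used + t > cycle_time and current:
            # current station is full: close it and open a new one
            stations.append(current)
            current, used = [], 0.0
        current.append(t)
        used += t
    if current:
        stations.append(current)
    return stations
```

With element times [4, 3, 2, 5, 1] and a cycle time of 7, this yields stations [4, 3], [2, 5], and [1]; a Yamazumi chart would then stack each station's value-added and non-value-added times against the cycle-time line.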

    Islanded house operation using a micro CHP

    The µCHP is expected to be the successor of the conventional high-efficiency boiler, producing electricity in addition to heat with a comparable overall efficiency. A µCHP appliance saves money and reduces greenhouse gas emissions. An additional functionality of the µCHP is using the appliance as a backup generator in case of a power outage. The µCHP could supply the essential loads and the heating, and reduce the discomfort up to a certain level. This requires modifications to the µCHP appliance itself as well as to the domestic electricity infrastructure. Furthermore, some extra hardware and a control algorithm for load balancing are necessary. Our load balancing algorithm is supposed to start and stop the µCHP and switch off loads if necessary. The first simulation results show that most of the electricity usage is under the maximum generation line, but to reduce the discomfort an electricity buffer is required.
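    The load-shedding half of such a control loop might look like the following sketch. All names, units, and the priority scheme are our assumptions, not the paper's design:

```python
def control_step(demand_w, generation_w, sheddable_loads):
    """One control step: while demand exceeds generation, switch off
    sheddable loads, lowest priority first.

    sheddable_loads: list of (load_w, priority) pairs; lower
    priority numbers are shed first. Returns the remaining demand
    and the list of loads that were switched off."""
    switched_off = []
    for load_w, prio in sorted(sheddable_loads, key=lambda l: l[1]):
        if demand_w <= generation_w:
            break
        demand_w -= load_w
        switched_off.append((load_w, prio))
    return demand_w, switched_off
```

A full controller would also decide when to start and stop the µCHP itself, and an electricity buffer (as the abstract notes) would absorb the short mismatches that load shedding alone cannot.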

    Penerapan Teknik Load Balancing pada Web Server Lokal dengan Metode Nth Menggunakan Mikrotik

    The development of web technologies means that servers providing network-based services, both local and public, must be able to cope with greater demand and workloads than ever before. Meeting these demands requires load balancing technology, which divides the load of a service across a set of servers or network devices. In this research, load balancing is implemented with the Nth method on a MikroTik router by dividing traffic over two line interfaces. The test results show balanced access when files are downloaded from the web server on each client, with the bandwidth adapted so that the total download speed on a client is 68 Kbps, in accordance with the predetermined bandwidth of 512 Kbps. Keywords: Bandwidth, Load balancing, Mikrotik, Nth, Web Server.
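    The Nth method marks every Nth new connection for a given line. A rough Python analogue of that round-robin behavior (our illustration of the idea, not RouterOS configuration) is:

```python
from itertools import cycle

def mark_connections(connections, lines=("line1", "line2")):
    """Assign successive connections to interfaces in round-robin
    fashion, mimicking Nth-style connection marking across two
    line interfaces."""
    rr = cycle(lines)
    return {conn: next(rr) for conn in connections}
```

With two lines, odd-numbered connections go out one interface and even-numbered connections the other, which is why each client sees a roughly equal share of the total bandwidth.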

    On the Load Balancing of Edge Computing Resources for On-Line Video Delivery

    Online video broadcasting platforms are distributed, complex, cloud-oriented, scalable, micro-service-based systems intended to provide over-the-top and live content to audiences in scattered geographic locations. Due to the nature of cloud VM hosting costs, subscribers are usually served with limited resources in order to minimize the delivery budget. However, operations such as transcoding require high computational capacity, and any disturbance in supplying the requested demand might result in quality-of-experience (QoE) deterioration. For any online delivery deployment, understanding the user's QoE plays a crucial role in rebalancing cloud resources. In this paper, a methodology for estimating QoE is provided for a scalable cloud-based online video platform. The model provides a guideline for operating under limited cloud resources, relating the computational capacity, memory, transcoding and throughput capability, and latency competence of the cloud service to QoE. Scalability and efficiency of the system are optimized by determining the number of VMs and containers sufficient to satisfy user requests, even during peak demand, with the minimum number of VMs. Both horizontal and vertical scaling strategies (including VM migration) are modeled to ensure the availability and reliability of intermediate and edge content delivery network cache nodes.
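    The "minimum number of VMs for peak demand" calculation can be illustrated with a toy sizing function. The headroom factor and all names are our assumptions, not the paper's model:

```python
import math

def vms_needed(peak_requests_per_s, per_vm_capacity, headroom=0.2):
    """Smallest VM count that covers peak request rate plus a safety
    headroom, given each VM's transcoding/serving capacity."""
    return math.ceil(peak_requests_per_s * (1 + headroom) / per_vm_capacity)
```

For instance, a peak of 100 requests/s with VMs that each handle 30 requests/s requires 4 VMs after a 20% headroom; the paper's model additionally weighs memory, throughput, and latency into the QoE estimate before scaling horizontally or vertically.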

    Optimizing Cloud Computing Applications with a Data Center Load Balancing Algorithm

    Delivering scalable and on-demand computing resources to users through the cloud has become a common paradigm. The issues of effective resource utilisation and application performance optimisation, however, become more pressing as the demand for cloud services rises. To ensure efficient resource allocation and improve application performance, load balancing techniques are essential for distributing incoming network traffic over several servers. Workload balancing in the context of cloud computing, particularly in the Infrastructure as a Service (IaaS) model, remains difficult. Given the available virtual machines and limited resources, efficient job allocation is essential. To prevent prolonged execution delays or machine breakdowns, cloud service providers must maintain high performance and avoid overloading or underloading hosts. The importance of task scheduling in load balancing necessitates compliance with the Service Level Agreement (SLA) standards established by cloud developers for consumers. The suggested technique takes into account Quality of Service (QoS) job parameters, VM priorities, and resource allocation in order to maximise resource utilisation and improve load balancing. By resolving these problems and the current research gap, the proposed load balancing method is in line with results in the existing literature. According to experimental findings, the proposed algorithm outperforms the Dynamic LBA algorithm currently in use, achieving an average resource utilisation of 78%. The suggested algorithm also exhibits excellent performance in terms of reduced makespan and decreased execution time.
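    A QoS- and priority-aware placement step of the kind described could be sketched as follows. The tuple layout and the "most free capacity first" tie-break are our assumptions, not the paper's algorithm:

```python
def schedule(tasks, vm_capacity):
    """Place tasks on VMs: higher-QoS-priority tasks are placed
    first, each on the VM with the most remaining capacity, which
    keeps the load spread across hosts.

    tasks: list of (name, capacity_needed, qos_priority) tuples.
    vm_capacity: dict mapping VM name to total capacity."""
    free = dict(vm_capacity)                 # VM -> remaining capacity
    placement = {}
    for name, need, prio in sorted(tasks, key=lambda t: -t[2]):
        vm = max(free, key=free.get)         # least-loaded VM first
        if free[vm] >= need:
            free[vm] -= need
            placement[name] = vm
    return placement
```

Tasks that fit nowhere are simply skipped here; a real scheduler would queue them or trigger scaling rather than drop them, to stay within SLA bounds.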

    Packet Transactions: High-level Programming for Line-Rate Switches

    Many algorithms for congestion control, scheduling, network measurement, active queue management, security, and load balancing require custom processing of packets as they traverse the data plane of a network switch. To run at line rate, these data-plane algorithms must be implemented in hardware. With today's switch hardware, algorithms cannot be changed, nor new algorithms installed, after a switch has been built. This paper shows how to program data-plane algorithms in a high-level language and compile those programs into low-level microcode that can run on emerging programmable line-rate switching chipsets. The key challenge is that these algorithms create and modify algorithmic state. The key idea to achieve line-rate programmability for stateful algorithms is the notion of a packet transaction: a sequential code block that is atomic and isolated from other such code blocks. We have developed this idea in Domino, a C-like imperative language for expressing data-plane algorithms. We show with many examples that Domino provides a convenient and natural way to express sophisticated data-plane algorithms, and show that these algorithms can be run at line rate with modest estimated die-area overhead.
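    The packet-transaction semantics can be mimicked in a few lines of Python (this is not Domino, only an analogy): the state update for one packet runs to completion before the next packet is processed, so each block appears atomic and isolated:

```python
class Transaction:
    """Toy model of a packet transaction: a stateful, sequential
    code block applied to one packet at a time. Class and field
    names are hypothetical."""

    def __init__(self):
        self.count = 0            # persistent switch state

    def process(self, pkt):
        # The whole block reads and writes state with no other
        # packet interleaved, mirroring atomic, isolated semantics.
        self.count += 1
        pkt["seq"] = self.count
        return pkt
```

Domino's compiler is what makes this illusion hold at line rate in hardware, mapping each transaction onto the pipelined stages of a programmable switching chip.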