
    Load Balancing in Cloud Computing: A Survey on Popular Techniques and Comparative Analysis

    Cloud Computing is widely regarded as one of the fastest-growing fields in web technologies today. With the increasing popularity of the cloud, popular websites' servers are becoming overloaded by high request volumes from users. One of the main challenges in cloud computing is load balancing on servers. Load balancing is the procedure of sharing load between multiple processors in a distributed environment to minimize the turnaround time taken by servers to serve requests and to make better use of the available resources. It greatly helps in scenarios where workload is unevenly distributed, as some machines may become heavily loaded while others remain under-loaded or idle. Load balancing methods ensure that every VM or server in the network maintains workload equilibrium and carries load according to its capacity at any instant of time. Static and dynamic load balancing are the main techniques for balancing load on servers. This paper presents a brief discussion of different load balancing schemes and a comparison of the prime techniques.
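    As a hedged illustration of the dynamic policy this abstract alludes to, the minimal sketch below sends each incoming request to the currently least-utilized VM. The VM class, capacities, and request sizes are invented for the example and are not drawn from the survey.

```python
# Minimal sketch of dynamic load balancing: route each request to the
# VM with the lowest utilization relative to its capacity.
from dataclasses import dataclass


@dataclass
class VM:
    name: str
    capacity: float          # relative processing capacity (illustrative)
    load: float = 0.0        # work currently assigned

    def utilization(self) -> float:
        return self.load / self.capacity


def assign(vms: list[VM], work: float) -> VM:
    """Dynamic policy: pick the VM with the lowest current utilization."""
    target = min(vms, key=VM.utilization)
    target.load += work
    return target


if __name__ == "__main__":
    cluster = [VM("vm1", capacity=2.0), VM("vm2", capacity=1.0), VM("vm3", capacity=1.0)]
    for req in [0.5, 0.3, 0.8, 0.2, 0.6]:
        vm = assign(cluster, req)
        print(f"request {req} -> {vm.name} (utilization {vm.utilization():.2f})")
```

    A static policy would instead fix the assignment in advance (for example, capacity-weighted round robin) and would not react to the load each VM actually accumulates.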

    PVW: Designing Virtual World Server Infrastructure

    This paper presents a high-level overview of PVW (Partitioned Virtual Worlds), a distributed system architecture for the management of virtual worlds. PVW is designed to support arbitrarily large and complex virtual worlds while accommodating dynamic and highly variable user populations and content distribution density. The PVW approach enables the task of simulating and managing the virtual world to be distributed over many servers by spatially partitioning the environment into a hierarchical structure. This structure is useful both for balancing the simulation load across many nodes and for features such as geometric simplification and distribution of dynamic content.
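    To make the hierarchical partitioning idea concrete, here is a minimal sketch assuming a quadtree-style split and a round-robin mapping of leaf regions to servers. The class names, split threshold, and server list are assumptions for the example, not details of the PVW implementation.

```python
# Illustrative sketch of hierarchical spatial partitioning: a 2-D region
# splits into quadrants when it holds too many entities, and each leaf
# region can then be assigned to a separate simulation server.
import random

SPLIT_THRESHOLD = 4  # assumed, for illustration


class Region:
    def __init__(self, x: float, y: float, size: float):
        self.x, self.y, self.size = x, y, size
        self.entities = []       # (x, y) positions held by this leaf
        self.children = None     # four sub-regions once split

    def insert(self, ex: float, ey: float) -> None:
        if self.children is not None:
            self._child_for(ex, ey).insert(ex, ey)
            return
        self.entities.append((ex, ey))
        if len(self.entities) > SPLIT_THRESHOLD:
            self._split()

    def _split(self) -> None:
        half = self.size / 2
        self.children = [Region(self.x + dx * half, self.y + dy * half, half)
                         for dy in (0, 1) for dx in (0, 1)]
        for ex, ey in self.entities:
            self._child_for(ex, ey).insert(ex, ey)
        self.entities = []

    def _child_for(self, ex: float, ey: float) -> "Region":
        half = self.size / 2
        col = int(ex >= self.x + half)
        row = int(ey >= self.y + half)
        return self.children[row * 2 + col]

    def leaves(self) -> list["Region"]:
        if self.children is None:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]


if __name__ == "__main__":
    world = Region(0, 0, 1024)
    for _ in range(40):
        world.insert(random.uniform(0, 1024), random.uniform(0, 1024))
    servers = ["sim-0", "sim-1", "sim-2"]       # hypothetical server pool
    for i, leaf in enumerate(world.leaves()):
        print(f"region ({leaf.x:.0f},{leaf.y:.0f}) size {leaf.size:.0f} "
              f"-> {servers[i % len(servers)]} ({len(leaf.entities)} entities)")
```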

    Minimization of cloud task execution length with workload prediction errors

    In cloud systems, it is non-trivial to optimize a task's execution performance under a user's affordable budget, especially in the presence of workload prediction errors. Based on an optimal algorithm that minimizes a cloud task's execution length given a predicted workload and budget, we theoretically derive an upper bound on the task execution length that takes possible workload prediction errors into account. With such a bound, the worst-case performance of a task execution under a given workload prediction error becomes predictable. We also build a close-to-practice cloud prototype over a real cluster environment deployed with 56 virtual machines and evaluate our solution under different degrees of resource contention. Experiments show that task execution lengths under our solution, together with their worst-case estimates, are close to the theoretical ideal values, both in the non-competitive situation with adequate resources and in the competitive situation with limited available resources. We also observe fair treatment in the resource allocation among all tasks.
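    As a rough illustration of how a worst-case estimate can be formed from a predicted workload, a budget, and a bounded prediction error, consider the sketch below. The linear pricing model, the execution-length model (workload divided by purchased rate), and the simple inflation bound are all assumptions made for this example; the paper's actual algorithm and derived bound are not reproduced here.

```python
# Illustrative model: the budget buys a fixed compute rate, the actual
# execution length is actual_workload / rate, and a worst-case length is
# estimated by inflating the predicted workload by the maximum relative
# prediction error. All formulas here are example assumptions.

def allocated_rate(budget: float, price_per_unit_rate: float) -> float:
    """Compute rate affordable under the budget (assumed linear pricing)."""
    return budget / price_per_unit_rate


def execution_length(actual_workload: float, rate: float) -> float:
    """Actual execution length once the true workload is known."""
    return actual_workload / rate


def worst_case_length(predicted_workload: float, rate: float, max_rel_error: float) -> float:
    """Upper bound if the true workload exceeds the prediction by at most max_rel_error."""
    return predicted_workload * (1 + max_rel_error) / rate


if __name__ == "__main__":
    predicted, budget, price = 100.0, 20.0, 0.5   # arbitrary illustrative numbers
    rate = allocated_rate(budget, price)
    actual = 115.0                                # the prediction was 15% low
    print("achieved length :", execution_length(actual, rate))
    print("worst-case bound:", worst_case_length(predicted, rate, max_rel_error=0.20))
```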

    Taking It With You: Platform Barriers to Entry and the Limits of Data Portability

    Policymakers are faced with a vexing problem: how to increase competition in a tech sector dominated by a few giants. One answer proposed and adopted by regulators in the United States and abroad is to require large platforms to allow consumers to move their data from one platform to another, an approach known as data portability. Facebook, Google, Apple, and other major tech companies have enthusiastically supported data portability through their own technical and political initiatives. Today, data portability has taken hold as one of the go-to solutions to address the tech industry's competition concerns. This Article argues that despite the regulatory and industry alliance around data portability, today's public and private data portability efforts are unlikely to meaningfully improve competition. This is because current portability efforts focus solely on mitigating switching costs, ignoring other barriers to entry that may preclude new platforms from entering the market. The technical implementations of data portability encouraged by existing regulation, namely one-off exports and API interoperability, address switching costs but not the barriers of network effects, unique data access, and economies of scale. This Article proposes a new approach, called collective portability, to better alleviate these other barriers; it would allow groups of users to coordinate to transfer data they share to a new platform, all at once. Although not a panacea, collective portability would provide a meaningful alternative to existing approaches while avoiding both the privacy/competitive-utility trade-off of one-off exports and the hard-to-regulate power dynamics of APIs.

    Dynamic computation migration in distributed shared memory systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. By Wilson Cheng-Yi Hsieh. Includes vita and bibliographical references (p. 123-131).