
    Dynamically adaptive partition-based interest management in distributed simulation

    The performance and scalability of distributed simulations depend primarily on the effectiveness of the employed interest management (IM) scheme, which aims to reduce the overall computational and messaging effort on the shared data to a necessary minimum. Existing IM approaches, which are based on variations or combinations of two principal data distribution techniques, namely region-based and grid-based techniques, perform poorly if the simulation develops an overloaded host. To facilitate distributing the processing load from overloaded areas of the shared data to less loaded hosts, a partition-based technique is introduced that allows for variable-size partitioning of the shared data. Based on this data distribution technique, an IM approach is sketched that dynamically adapts to the access latencies of simulation objects on the shared data as well as to the physical location of the objects. Since this redistribution is decided according to the messaging effort the simulation objects expend to update data partitions, any load-balanced configuration has the additional advantage of minimizing the overall messaging effort. Hence, the IM scheme dynamically resolves both messaging overload and the overloading of hosts with simulation objects, and thereby facilitates dynamic system scalability.
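    The core idea is a shared data space divided into variable-size partitions whose boundaries shift away from overloaded hosts, guided by the messaging effort recorded per partition. The sketch below is only an illustration of that idea under simplifying assumptions (a one-dimensional data space, two hosts, a fixed 10% boundary shift); the class and function names are not from the paper.

```python
# Minimal sketch of variable-size, partition-based interest management.
# Partition, record_update, and rebalance are illustrative names, not the paper's API.
from dataclasses import dataclass

@dataclass
class Partition:
    lo: float                  # lower bound of the shared-data region this host owns
    hi: float                  # upper bound
    host: str
    update_cost: float = 0.0   # accumulated messaging effort for objects in this region

def record_update(partitions, x, cost):
    """Charge the messaging effort of an object update to the partition that owns it."""
    for p in partitions:
        if p.lo <= x < p.hi:
            p.update_cost += cost
            return p
    raise ValueError("position outside the shared data space")

def rebalance(partitions):
    """Shrink the most loaded partition and grow its least loaded neighbour so the
    boundary moves away from the overloaded host (variable-size partitioning)."""
    hot = max(partitions, key=lambda p: p.update_cost)
    cold = min(partitions, key=lambda p: p.update_cost)
    if hot is cold:
        return
    shift = 0.1 * (hot.hi - hot.lo)    # illustrative policy: move 10% of the hot extent
    if cold.lo == hot.hi:              # cold partition is the right neighbour
        hot.hi -= shift
        cold.lo -= shift
    elif cold.hi == hot.lo:            # cold partition is the left neighbour
        hot.lo += shift
        cold.hi += shift

if __name__ == "__main__":
    parts = [Partition(0.0, 50.0, "host-A"), Partition(50.0, 100.0, "host-B")]
    for x in (10, 20, 30, 40, 45):     # host-A's region receives most of the updates
        record_update(parts, x, cost=1.0)
    record_update(parts, 80, cost=1.0)
    rebalance(parts)
    print([(p.host, p.lo, p.hi) for p in parts])
```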

    Prediction-based virtual instance migration for balanced workload in the cloud datacenters

    Datacenters in the cloud today provide virtualized resources of CPU, memory, disk, and networks so that millions of users can use the services at the same time in an efficient and scalable way. One of the major challenges in these datacenters is load balancing and shifting. When a huge number of requests is sent to a particular datacenter, or a group of servers is asked to process more than its fair share, some servers become overloaded and slow down, hot spots form, and hardware failures may even occur. Such unbalanced load ultimately degrades the performance of the entire system. In this paper, we propose a load balancer that aims at alleviating hot spots and distributing load from overloaded servers to underutilized servers. Our load balancer monitors the loads of the servers, detects indications of overloading, and then migrates virtual instances from overloaded servers to target servers. We have implemented the load balancer in a real system using the Xen hypervisor. We have also conducted an event-driven simulation to evaluate the performance of our system at a large scale. Our results indicate that our reactive-predictive load balancing algorithm balances the load among cloud servers nearly as well as the best case found by exhaustive search, with much less overhead.
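    A minimal sketch of such a monitor-detect-migrate loop is shown below. The threshold, the moving-average predictor, and the migrate() placeholder are assumptions made for illustration; the paper's implementation runs on the Xen hypervisor and is not reproduced here.

```python
# Sketch of a reactive-predictive load-balancing loop: monitor, detect hot spots,
# migrate a virtual instance to an underutilized server. Thresholds and policies
# are illustrative assumptions, not the authors' implementation.
from collections import deque
from statistics import mean

HOT_THRESHOLD = 0.80   # fraction of CPU above which a server counts as a hot spot
HISTORY = 5            # number of recent samples used for the prediction

class Server:
    def __init__(self, name):
        self.name = name
        self.vms = {}                       # vm name -> CPU share it contributes
        self.samples = deque(maxlen=HISTORY)

    def load(self):
        return sum(self.vms.values())

    def predicted_load(self):
        # reactive-predictive: smooth the current load with a short moving average
        self.samples.append(self.load())
        return mean(self.samples)

def migrate(vm, src, dst):
    dst.vms[vm] = src.vms.pop(vm)           # placeholder for a real live migration

def balance(servers):
    for src in servers:
        if src.predicted_load() <= HOT_THRESHOLD:
            continue
        # try the smallest VM first; move it to the least loaded other server
        # if that server can accept it without itself becoming hot
        for vm, share in sorted(src.vms.items(), key=lambda kv: kv[1]):
            dst = min((s for s in servers if s is not src), key=lambda s: s.load())
            if dst.load() + share <= HOT_THRESHOLD:
                migrate(vm, src, dst)
                break

if __name__ == "__main__":
    a, b = Server("a"), Server("b")
    a.vms = {"vm1": 0.5, "vm2": 0.4}        # server a is overloaded (0.9)
    b.vms = {"vm3": 0.2}
    balance([a, b])
    print(a.vms, b.vms)
```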

    Optimal and scalable management of smart power grids with electric vehicles


    RA2: predicting simulation execution time for cloud-based design space explorations

    Design space exploration refers to the evaluation of implementation alternatives for many engineering and design problems. A popular exploration approach is to run a large number of simulations of the actual system with varying sets of configuration parameters to search for the optimal ones. Due to the potentially huge resource requirements, cloud-based simulation execution strategies should be considered in many cases. In this paper, we look at the issue of running large-scale simulation-based design space exploration problems on commercial Infrastructure-as-a-Service clouds, namely Amazon EC2, Microsoft Azure and Google Compute Engine. To manage the cloud resources used for execution efficiently, the key problem is to accurately predict the running time of each simulation instance in advance. This is not trivial because of the wide range of cloud resource types currently on offer, which provide varying levels of performance. In addition, the widespread use of virtualization techniques by most cloud providers often introduces unpredictable performance interference. In this paper, we propose a resource- and application-aware (RA2) prediction approach to combat performance variability on clouds. In particular, we employ neural-network-based techniques coupled with non-intrusive monitoring of resource availability to obtain more accurate predictions. We conducted extensive experiments on commercial cloud platforms using an evacuation planning design problem over a month-long period. The results demonstrate that it is possible to predict simulation execution times with high accuracy in most cases. The experiments also provide some interesting insights into how we should run similar simulation problems on various commercially available clouds.
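    To make the approach concrete, the sketch below regresses execution time on resource-availability and application features with a small neural network. The feature set, the synthetic data, and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the paper's actual RA2 model or training setup.

```python
# Illustrative sketch of the idea behind RA2: predict simulation execution time from
# monitored resource availability plus application features using a small neural net.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Features per run: [available CPU fraction, free memory GB, instance-type code,
#                    simulation input size]; target: execution time in seconds.
n = 500
X = np.column_stack([
    rng.uniform(0.3, 1.0, n),      # CPU availability from non-intrusive monitoring
    rng.uniform(1.0, 16.0, n),     # free memory
    rng.integers(0, 3, n),         # cloud instance type (encoded)
    rng.uniform(10, 100, n),       # workload size
])
# Synthetic ground truth: time grows with workload and shrinks with available CPU.
y = 50 + 4.0 * X[:, 3] / X[:, 0] + rng.normal(0, 5, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0),
)
model.fit(X[:400], y[:400])        # train on the first 400 runs

pred = model.predict(X[400:])      # predict the held-out runs
mae = np.mean(np.abs(pred - y[400:]))
print(f"mean absolute prediction error: {mae:.1f} s")
```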

    A study of event traffic during the shared manipulation of objects within a collaborative virtual environment

    Event management must balance consistency and responsiveness against the requirements of shared object interaction within a Collaborative Virtual Environment (CVE) system. An understanding of the event traffic during collaborative tasks helps in the design of all aspects of a CVE system. The application, user activity, the display interface, and the network resources all play a part in determining the characteristics of event management. Linked cubic displays lend themselves well to supporting natural social human communication between remote users. To allow users to communicate naturally and subconsciously, continuous and detailed tracking is necessary. This, however, is hard to balance with the real-time consistency constraints of general shared object interaction. This paper aims to explain these issues through a detailed examination of the event traffic produced by a typical CVE, using both immersive and desktop displays, while supporting a variety of collaborative activities. We analyze event traffic during a highly collaborative task requiring various forms of shared object manipulation, including the concurrent manipulation of a shared object. Event sources are categorized, and the influence of both the form of object sharing and the display device interface is detailed. The presented findings are intended to aid the design of future systems.
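    The kind of analysis described, categorizing events by source and comparing their rates, can be illustrated with a short sketch; the categories and the in-memory event log below are assumptions, not the paper's actual instrumentation or data.

```python
# Tiny sketch: tag CVE events with a source category and count them per second so
# that tracking traffic can be compared with shared-object manipulation traffic.
from collections import Counter, defaultdict

# (timestamp in seconds, source category) -- e.g. head/hand tracking vs. object updates
events = [
    (0.10, "tracking"), (0.15, "tracking"), (0.20, "object_update"),
    (0.90, "tracking"), (1.10, "object_update"), (1.20, "concurrent_manipulation"),
    (1.40, "tracking"), (1.95, "tracking"),
]

totals = Counter(cat for _, cat in events)       # overall traffic by event source

per_second = defaultdict(Counter)                # event rate per one-second bucket
for t, cat in events:
    per_second[int(t)][cat] += 1

print("events by source:", dict(totals))
for sec in sorted(per_second):
    print(f"second {sec}:", dict(per_second[sec]))
```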

    An Architectural Framework for Performance Analysis: Supporting the Design, Configuration, and Control of DIS/HLA Simulations

    Technology advances are providing greater capabilities for most distributed computing environments. However, the advances in capabilities are paralleled by progressively increasing amounts of system complexity. In many instances, this complexity can lead to a lack of understanding regarding bottlenecks in the run-time performance of distributed applications. This is especially true in the domain of distributed simulations, where a myriad of enabling technologies are used as building blocks to provide large-scale, geographically dispersed, dynamic virtual worlds. Persons responsible for the design, configuration, and control of distributed simulations need to understand the impact of decisions made regarding the allocation and use of the logical and physical resources that comprise a distributed simulation environment, and how those decisions affect run-time performance. Distributed Interactive Simulation (DIS) and High Level Architecture (HLA) simulation applications have historically provided some of the most demanding distributed computing environments in terms of performance, and as such have a justified need for performance information sufficient to support decision-makers trying to improve system behavior. This research addresses two fundamental questions: (1) Is there an analysis framework suitable for characterizing DIS and HLA simulation performance? and (2) What kind of mechanism can be used to adequately monitor, measure, and collect performance data to support different performance analysis objectives for DIS and HLA simulations? This thesis presents a unified, architectural framework for DIS and HLA simulations, provides details on a performance monitoring system, and shows its effectiveness through a series of use cases that include practical applications of the framework to support real-world U.S. Department of Defense (DoD) programs. The thesis also discusses the robustness of the constructed framework and its applicability to the performance analysis of more general distributed computing applications.
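    As a rough illustration of the monitoring side of such a framework, the sketch below has each federate or host report metric samples to a collector that aggregates them for later analysis. The metric names and the reporting interface are illustrative assumptions, not the thesis's actual monitoring system.

```python
# Sketch of a performance-data collector: hosts report samples, the collector
# aggregates them so bottlenecks in run-time performance can be examined.
import time
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Sample:
    host: str
    timestamp: float
    metrics: Dict[str, float]     # e.g. entity updates/s, CPU %

@dataclass
class Collector:
    samples: List[Sample] = field(default_factory=list)

    def report(self, host, **metrics):
        """Record one metric sample from a simulation host or federate."""
        self.samples.append(Sample(host, time.time(), metrics))

    def average(self, host, metric):
        """Average a metric over all samples reported by one host."""
        vals = [s.metrics[metric] for s in self.samples
                if s.host == host and metric in s.metrics]
        return sum(vals) / len(vals) if vals else None

if __name__ == "__main__":
    collector = Collector()
    collector.report("federate-1", entity_updates_per_s=1200, cpu_percent=63.0)
    collector.report("federate-1", entity_updates_per_s=1500, cpu_percent=71.0)
    print(collector.average("federate-1", "entity_updates_per_s"))
```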