    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varying specifications and fluctuating resource usage, which can leave resource utilization imbalanced across servers and lead to performance degradation and service level agreement (SLA) violations. Efficient scheduling therefore requires load balancing strategies, and the underlying placement problem has been proven NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. It develops a detailed classification of load balancing algorithms for VM placement in cloud data centers and categorizes the surveyed algorithms accordingly. The goal of this paper is to provide a comprehensive, comparative understanding of the existing literature and to aid researchers by pointing out potential future enhancements.
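
    To make the family of heuristics this survey covers concrete, here is a minimal sketch of one classic load-balancing placement rule: greedy worst-fit, which assigns each VM to the feasible PM with the lowest current load. All names and data structures are illustrative, not taken from the paper.

```python
# Greedy worst-fit VM placement: keep per-PM load as even as possible.
from dataclasses import dataclass

@dataclass
class PM:
    cpu: float                 # total CPU capacity
    mem: float                 # total memory capacity
    used_cpu: float = 0.0
    used_mem: float = 0.0

    def fits(self, vm):
        return (self.used_cpu + vm.cpu <= self.cpu and
                self.used_mem + vm.mem <= self.mem)

    def load(self):
        # Dominant-share load: the more stressed of the two resources.
        return max(self.used_cpu / self.cpu, self.used_mem / self.mem)

@dataclass
class VM:
    cpu: float
    mem: float

def place(vms, pms):
    """Assign each VM to the least-loaded PM that can still host it."""
    mapping = {}
    for i, vm in enumerate(vms):
        candidates = [pm for pm in pms if pm.fits(vm)]
        if not candidates:
            raise RuntimeError(f"no PM can host VM {i}")
        target = min(candidates, key=PM.load)
        target.used_cpu += vm.cpu
        target.used_mem += vm.mem
        mapping[i] = target
    return mapping
```

    Worst-fit is only one point in the design space the survey classifies; other families (e.g., best-fit variants or migration-based rebalancing) trade tighter consolidation against balance.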

    Hybrid Simulation and Test of Vessel Traffic Systems on the Cloud

    This paper presents a cloud-based hybrid simulation platform for testing the large-scale distributed Systems-of-Systems (SoS) used for the management and control of maritime traffic, the so-called Vessel Traffic Systems (VTS). A VTS consists of multiple heterogeneous, distributed, interoperating systems, including radars, automatic identification systems, direction finders, electro-optical sensors, gateways to external VTSs, and information systems; identifying, representing and analyzing their interactions is a challenge when evaluating the real risks to the safety and security of the marine environment. Reproducing in the laboratory the system behaviors that would occur in situ demands the ability to integrate emulated and simulated environments, both to cope with the different testability requirements of the systems involved and to keep testing costs sustainable. The platform exploits hybrid simulation and virtualization technologies and is deployable on a private cloud, reducing the cost of setting up realistic and effective testing scenarios.
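
    As a hedged illustration of the hybrid idea (not the paper's actual platform), the sketch below couples a *simulated* vessel-track source with an *emulated* VTS component that runs elsewhere, e.g., in a VM on a private cloud, over a plain TCP socket. The port, message format and kinematics are all invented for illustration.

```python
# A simulated AIS-like source feeding an emulated consumer over TCP.
import json
import socket

def simulated_ais_source(steps, dt=1.0):
    """Toy kinematic model: one vessel moving east at 10 m/s."""
    x, y = 0.0, 0.0
    for t in range(steps):
        x += 10.0 * dt
        yield {"t": t * dt, "mmsi": 123456789, "x": x, "y": y}

def feed_emulated_component(host="127.0.0.1", port=9000, steps=60):
    """Stream newline-delimited JSON tracks to the emulated VTS service."""
    with socket.create_connection((host, port)) as sock:
        for track in simulated_ais_source(steps):
            sock.sendall((json.dumps(track) + "\n").encode())
```

    The point of the pattern is that the emulated side sees realistic traffic without any real sensor hardware, which is what keeps testing costs sustainable.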

    Toward Customizable Multi-tenant SaaS Applications

    Nowadays, computing is so pervasive that it has indeed become the fifth utility (after water, electricity, gas and telephony), as Leonard Kleinrock once envisioned. Evolved from utility computing, cloud computing has emerged as a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamically scalable, virtualized manner. However, current industrial cloud computing implementations promote segregation among cloud providers, which locks users in because of prohibitive migration costs. Service-Oriented Computing (SOC), on the other hand, including service-oriented architecture (SOA) and Web Services (WS), promotes standardization and openness through its enabling standards and communication protocols. This dissertation proposes a Service-Oriented Cloud Computing Architecture (SOCCA) that combines the best attributes of the two paradigms to promote an open, interoperable environment for cloud computing development. Multi-tenant SaaS applications built on top of SOCCA are more flexible and are not locked to a particular platform. Each tenant of a multi-tenant application appears to be the sole owner of the application and is unaware of the existence of the others. A multi-tenant SaaS application accommodates each tenant's unique requirements by allowing tenant-level customization. A complex SaaS application that supports hundreds or even thousands of tenants can have hundreds of customization points, each offering multiple options, which results in a huge number of ways to customize the application. The dissertation therefore also proposes innovative customization approaches that study similar tenants' customization choices and each individual user's behavior, and then provide a guided, semi-automated customization process that enables future tenants to quickly implement the customization that best suits their business needs.
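
    A minimal sketch of the guided-customization idea, under assumptions of my own: each tenant's profile maps customization points to chosen options, and a new tenant is nudged toward the option favoured by its most similar peers. The similarity measure and data layout are illustrative, not the dissertation's actual design.

```python
# Recommend a customization option from the choices of similar tenants.
from collections import Counter

# Customization profile: point name -> chosen option (sample data).
TENANTS = {
    "acme":    {"theme": "dark",  "billing": "invoice", "locale": "en"},
    "globex":  {"theme": "dark",  "billing": "card",    "locale": "en"},
    "initech": {"theme": "light", "billing": "card",    "locale": "de"},
}

def similarity(a, b):
    """Fraction of shared customization points on which two tenants agree."""
    shared = set(a) & set(b)
    return sum(a[k] == b[k] for k in shared) / len(shared) if shared else 0.0

def recommend(partial_profile, point, k=2):
    """Suggest an option for `point` based on the k most similar tenants."""
    ranked = sorted(TENANTS.values(),
                    key=lambda t: similarity(partial_profile, t),
                    reverse=True)[:k]
    votes = Counter(t[point] for t in ranked if point in t)
    return votes.most_common(1)[0][0] if votes else None

# A new tenant that already picked a dark theme is steered toward the
# billing option chosen by other dark-theme tenants.
print(recommend({"theme": "dark"}, "billing"))
```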

    Scheduling of a Cyber-Physical System Simulation

    The work carried out in this Ph.D. thesis is part of a broader effort to automate industrial simulation systems. In the aeronautics industry, and more specifically within Airbus, the historical application of simulation is pilot training. More recently, simulation has also been used in the design of systems and in their integration. These latter applications require a very high degree of representativeness, whereas historically the most important factor was the pilot's feeling. Systems are now divided into several subsystems that are designed, implemented and validated independently, in order to keep them under control despite their growing complexity and shrinking time-to-market. Airbus already has expertise in simulating these subsystems, as well as in integrating them into a simulation. This expertise is empirical: simulation specialists reuse the schedules of previous integrations and adapt them to a new integration, a process that can be time-consuming and can introduce errors. Current industry trends are toward flexible production methods, the integration of logistics tools for tracking, the use of simulation tools in production, and resource optimization. Products are increasingly iterations of older, improved products, and tests and simulations are increasingly integrated into their life cycles. Working empirically in an industry that requires flexibility is a constraint, and it has become essential to make simulations easy to modify. The problem, therefore, is to devise methods and tools for generating representative simulation schedules a priori. To solve this problem, we developed a method to describe the elements of a simulation and how the simulation can be executed, together with functions to generate schedules. We then implemented a tool that automates the search for schedules, based on heuristics. Finally, we tested and verified our method and tools on academic and industrial case studies.
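
    For a flavour of the kind of heuristic such a tool can automate, here is a sketch of classic list scheduling: simulation models with precedence constraints are assigned, in topological order, to the earliest-available processor. The model graph and durations are invented, and this is a generic heuristic rather than the thesis's actual algorithm.

```python
# List scheduling of dependent simulation models onto n processors.
import heapq
from graphlib import TopologicalSorter

DURATION = {"sensors": 2, "flight_ctrl": 3, "hydraulics": 2, "display": 1}
DEPS = {"flight_ctrl": {"sensors"}, "hydraulics": {"sensors"},
        "display": {"flight_ctrl", "hydraulics"}}

def list_schedule(n_procs=2):
    order = TopologicalSorter(DEPS).static_order()   # predecessors first
    procs = [(0.0, p) for p in range(n_procs)]       # (available_at, proc id)
    heapq.heapify(procs)
    finish, schedule = {}, {}
    for model in order:
        ready = max((finish[d] for d in DEPS.get(model, ())), default=0.0)
        avail, p = heapq.heappop(procs)              # earliest-free processor
        start = max(avail, ready)
        finish[model] = start + DURATION[model]
        schedule[model] = (p, start)
        heapq.heappush(procs, (finish[model], p))
    return schedule

print(list_schedule())   # model -> (processor, start time)
```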

    Distributed simulation optimization and parameter exploration framework for the cloud

    Simulation models are becoming an increasingly popular tool for the analysis and optimization of complex real systems in different fields. Finding an optimal system design requires performing a large sweep over the parameter space in an organized way, so the model optimization process is extremely demanding from a computational point of view, requiring careful, time-consuming, complex orchestration of coordinated executions. In this paper, we present the design of SOF (Simulation Optimization and exploration Framework in the cloud), a framework which exploits the computing power of a cloud environment to carry out effective and efficient simulation optimization strategies. SOF offers several attractive features. First, SOF requires "zero configuration": it does not need any additional software on the remote nodes; standard Apache Hadoop and SSH access are sufficient. Second, SOF is transparent to the user, who is entirely unaware that the system operates on a distributed environment. Finally, SOF is highly customizable and programmable: it can run different simulation optimization scenarios written in diverse programming languages, provided that the hosting platform supports them, and with different simulation toolkits, as chosen by the modeler. The tool has been fully developed and is available in a public repository under the terms of the open-source Apache License. It has been tested and validated on several private platforms, such as a dedicated cluster of workstations, as well as on public platforms, including the Hortonworks Data Platform and the Amazon Web Services Elastic MapReduce solution.
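
    The orchestration pattern SOF automates can be sketched in a few lines: launch one simulation run per parameter point on remote nodes and collect the objective values. The sketch below uses plain SSH fan-out rather than SOF's Hadoop machinery, and the host names, simulator command and output format are all assumptions for illustration.

```python
# Distribute a parameter sweep across remote nodes over SSH.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from itertools import product

NODES = ["node1.example.org", "node2.example.org"]  # hypothetical hosts

def run_point(job):
    """Run one simulation remotely; it prints the objective on stdout."""
    (alpha, beta), node = job
    cmd = ["ssh", node, f"./simulate --alpha {alpha} --beta {beta}"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return (alpha, beta), float(out.stdout.strip())

def sweep(alphas, betas):
    """Round-robin the parameter grid over the nodes, in parallel."""
    points = list(product(alphas, betas))
    jobs = [(pt, NODES[i % len(NODES)]) for i, pt in enumerate(points)]
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        return dict(pool.map(run_point, jobs))

# best = min(sweep([0.1, 0.5], [1, 2]).items(), key=lambda kv: kv[1])
```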

    A European Platform for Distributed Real Time Modelling & Simulation of Emerging Electricity Systems

    This report presents a proposal for the constitution of a European platform consisting of a federation of real-time modelling and simulation facilities for the analysis of emerging electricity systems. Such a platform can be understood as a pan-European distributed laboratory that makes use of the best available resources and knowledge to support industry and policy makers and to conduct advanced scientific research. The report describes the need for such a platform, with reference to the current status of power systems; the state of the art of the relevant technologies; and the character and format that the platform might take. This integrated distributed laboratory will facilitate the modelling, testing and assessment of power systems beyond the capacities of any single entity, enabling remote access to software and equipment anywhere in the EU through real-time interconnection of the facilities and capabilities available within the Member States. Such an infrastructure will support the remote testing of devices and enhance simulation capabilities for large multi-scale, multi-layer systems, while also achieving a soft sharing of expertise in a large knowledge-based virtual environment. Furthermore, the platform should make it possible to keep all sensitive data, models and algorithms confidential, letting each participant determine which specific data are shared with other actors. This kind of simulation platform will benefit all actors that need to take decisions in the power system area, including national and local authorities, regulators, network operators and utilities, manufacturers, and consumers/prosumers. The federation of labs is created through real-time remote access to high-performance computing, data infrastructure, and hardware and software components (electrical, electronic, ICT), assured by interconnecting the labs in a server-cloud architecture in which local computers and machines interact with other labs through dedicated VPNs (Virtual Private Networks) over the GEANT network, the pan-European research and education network that interconnects Europe's National Research and Education Networks. The local VPN servers bridge the local simulation platform at each site and the cloud, ensuring the security of the data exchange while offering better coordination of the communication and the multi-point connection. It then becomes possible to integrate the different sub-systems (distribution grid, transmission grid, generation, market, and consumer behaviour) in a holistic approach.
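
    As a hedged sketch of the data-exchange pattern such a federation implies (not anything specified in the report), two labs already joined by a VPN tunnel could swap simulator state at a fixed real-time step over UDP. The peer address, the 100 ms step and the payload are illustrative assumptions.

```python
# Fixed-step real-time state exchange between two federated labs over UDP.
import json
import socket
import time

PEER = ("10.8.0.2", 5005)   # peer lab's VPN address (hypothetical)
STEP = 0.1                  # 100 ms real-time exchange interval

def exchange_loop(local_state, steps=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))
    sock.settimeout(STEP)
    for _ in range(steps):
        deadline = time.monotonic() + STEP
        sock.sendto(json.dumps(local_state).encode(), PEER)
        try:
            data, _ = sock.recvfrom(65536)
            remote_state = json.loads(data)
            # e.g., couple the two grids through a tie-line power value.
            local_state["tie_line_mw"] = remote_state.get("tie_line_mw", 0.0)
        except socket.timeout:
            pass   # tolerate a missed exchange; real time keeps moving
        time.sleep(max(0.0, deadline - time.monotonic()))
```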

    Modeling, Design And Evaluation Of Networking Systems And Protocols Through Simulation

    Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits across many domains: it reduces the cost of building prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the modeling of physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework for creating new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols through a common underlying simulation infrastructure and to reduce the time a developer must spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms, along with the desire for a common infrastructure on which to model them. The first simulator, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect; performance results show that k-ary n-cube topologies can sustain higher traffic loads than currently used interconnects. The second, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol; the CLL algorithm achieves up to 45% power savings and up to 25% lower queuing delay compared to GPS-QHRA. The third simulator models a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data; results show that, even in the worst case, 99.43% of discovery messages find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations, and results demonstrate successful HLA functions, including creating, joining and resigning from a federation, time management, and event publication and subscription.
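
    The HLA lifecycle the engine wraps can be sketched as follows. This is illustrative pseudocode against a *hypothetical* Python RTI binding (`rti`): the service names mirror the IEEE 1516 operations the abstract lists (create/join/resign, publish/subscribe, time management), but no real library's API is being claimed here.

```python
import rti  # hypothetical Python binding to an HLA runtime infrastructure

amb = rti.RTIambassador()
try:
    amb.createFederationExecution("NetSimFederation", "netsim.fed")
except rti.FederationExecutionAlreadyExists:
    pass  # another federate created it first; just join below
amb.joinFederationExecution("node-sim", "NetSimFederation")

# Declaration management: state what this federate sends and listens for.
packet_class = amb.getObjectClassHandle("Packet")
attrs = [amb.getAttributeHandle(packet_class, a)
         for a in ("src", "dst", "size")]
amb.publishObjectClassAttributes(packet_class, attrs)
amb.subscribeObjectClassAttributes(packet_class, attrs)

# Time management: advance logical time in lockstep with the federation.
t = 0.0
while t < 100.0:
    amb.timeAdvanceRequest(t + 1.0)
    amb.evokeCallbacks()  # deliver attribute reflections and time grants
    t += 1.0

amb.resignFederationExecution()
```

    Hiding this boilerplate behind a common engine is exactly what frees simulator developers to focus on protocol logic and data collection.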