1,969 research outputs found

    Towards a Deadline-Based Simulation Experimentation Framework Using Micro-Services Auto-Scaling Approach

    There is a growing number of research efforts in developing auto-scaling algorithms and tools for cloud resources. Traditional performance metrics, such as CPU, memory and bandwidth usage, are not sufficient for scaling resources up or down in all applications. For example, modeling and simulation experimentation is usually expected to yield results within a specific timeframe. To achieve this, the quality of experiments is often compromised, either by restricting the parameter space to be explored or by limiting the number of replications required to give statistical confidence. In this paper, we present the early stages of a deadline-based simulation experimentation framework using a micro-services auto-scaling approach. A case study of an agent-based simulation of population physical activity behavior is used to demonstrate our framework.
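    As a minimal sketch of the kind of deadline-based scaling decision such a framework might make (not the paper's actual algorithm), the number of parallel micro-service workers can be derived from the replication count, an estimated per-replication runtime and the deadline; all names below are illustrative:

```python
import math

def workers_needed(replications: int, seconds_per_replication: float,
                   deadline_seconds: float) -> int:
    """Minimum number of parallel workers needed to finish all replications
    before the deadline, assuming each worker runs replications sequentially
    at a roughly constant rate."""
    total_work = replications * seconds_per_replication
    return max(1, math.ceil(total_work / deadline_seconds))

# e.g. 100 replications of 90 s each under a 15-minute deadline
print(workers_needed(100, 90.0, 15 * 60))  # -> 10 workers
```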

    Innovations in Simulation: Experiences with Cloud-based Simulation Experimentation

    The amount of simulation experimentation that can be performed in a project can be restricted by time, especially if a model takes a long time to simulate and many replications are required. Cloud Computing presents an attractive proposition for speeding up, or extending, simulation experimentation, as computing resources can be hired on demand rather than having to invest in costly infrastructure. However, it is not common practice for simulation users to take advantage of this; arguably, rather than speeding up simulation experimentation, users tend to compromise by using unnecessary model simplification techniques. This may be due to a lack of awareness of what Cloud Computing can offer. Based on several years’ experience of innovation in this area, this article presents our experiences in developing Cloud Computing applications for simulation experimentation and discusses what future innovations might be created for the widespread benefit of our simulation community.
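    To illustrate the speed-up the article discusses, the following sketch runs independent replications in parallel, with local processes standing in for on-demand cloud instances; run_replication is a placeholder for a real model:

```python
import random
import statistics
from concurrent.futures import ProcessPoolExecutor

def run_replication(seed: int) -> float:
    """Stand-in for one simulation replication; returns one output metric."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1_000_000))

if __name__ == "__main__":
    seeds = range(30)  # 30 independent replications
    with ProcessPoolExecutor() as pool:  # one worker per CPU core
        results = list(pool.map(run_replication, seeds))
    print(f"mean={statistics.mean(results):.2f}  "
          f"stdev={statistics.stdev(results):.2f}")
```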

    Containerisation and Container Orchestration as a High-Performance Infrastructure for Simulation Experiments

    The execution of simulation experiments is becoming increasingly resource-intensive, due either to the increasing scale and complexity of the simulation model or to the nature of the experiment itself. Popular approaches, like simulation-based optimisation and data farming, require extensive computational resources. Therefore, an appropriate methodology is needed to distribute simulation workloads efficiently onto complex computing infrastructures. Containerisation and container orchestration are promising approaches toward this goal. This paper discusses the requirements for utilising containerisation for simulation, gives an example of a container-based computational infrastructure for experimentation, and shows how containerisation and container orchestration benefit simulation execution.
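    As a rough sketch of what a containerised replication launch could look like (assuming Docker is available; the image name and --seed flag are hypothetical stand-ins for a packaged simulation model):

```python
import subprocess

# Hypothetical image name; in practice it would package the model and runtime.
IMAGE = "sim-model:latest"

def run_containerised_replication(seed: int) -> str:
    """Launch one replication in an isolated container and capture its output."""
    result = subprocess.run(
        ["docker", "run", "--rm", IMAGE, "--seed", str(seed)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# An orchestrator (e.g. Kubernetes) would schedule many such containers
# across a cluster rather than relying on single local calls like this one.
```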

    Resource provisioning and scheduling algorithms for hybrid workflows in edge cloud computing

    In recent years, Internet of Things (IoT) technology has been applied in a wide range of application domains to provide real-time monitoring, tracking and analysis services. The worldwide number of IoT-connected devices is projected to reach 43 billion by 2023, and IoT technologies are expected to be involved in 25% of the business sector. Latency-sensitive applications such as intelligent video surveillance, smart homes, autonomous vehicles and augmented reality are all emergent research directions in industry and academia. These applications require connecting large numbers of sensing devices to attain the desired level of service quality for decision accuracy in a time-sensitive manner. Moreover, continuous data streams impose the processing of large amounts of data, which adds a huge overhead on computing and network resources. Thus, latency-sensitive and resource-intensive applications introduce new challenges for the current computing models, i.e., batch and stream. In this thesis, we refer to the integrated application model of stream and batch applications as a hybrid workflow model. The main challenge of the hybrid model is achieving the quality of service (QoS) requirements of the two computation systems. This thesis provides a systematic and detailed model for hybrid workflows which describes the internal structure of each application type for the purposes of resource estimation, system tuning and cost modeling. To optimise the execution of hybrid workflows, this thesis proposes algorithms, techniques and frameworks for resource provisioning and task scheduling on various computing systems, including cloud, edge cloud and cooperative edge cloud. Overall, the experimental results in this thesis provide strong evidence for the proposed understanding and vision of integrating stream and batch applications, and for how edge computing and other emergent technologies, like 5G networks and IoT, will contribute to more sophisticated and intelligent solutions across many disciplines, towards a safer, more secure, healthy, smart and sustainable society.
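    As a toy illustration of the edge/cloud placement problem the thesis addresses, and not its actual provisioning or scheduling algorithms, a trivial rule might route latency-sensitive stream tasks to the edge and heavy batch tasks to the cloud; every name and threshold here is invented:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_sensitive: bool  # stream-style tasks need low latency
    cpu_demand: float        # normalised CPU requirement, 0..1

def place(task: Task) -> str:
    """Toy placement rule: light, latency-sensitive tasks go to the edge;
    everything else goes to the cloud."""
    if task.latency_sensitive and task.cpu_demand <= 0.5:
        return "edge"
    return "cloud"

for t in [Task("video-frame-filter", True, 0.2),
          Task("daily-aggregation", False, 0.9)]:
    print(t.name, "->", place(t))
```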

    Neural Adaptive Admission Control Framework: SLA-driven action termination for real-time application service management

    Although most modern cloud-based enterprise systems, and operating systems, do not commonly allow configurable or automatic termination of processes, tasks or actions, it is common practice for systems administrators to manually terminate or stop tasks or actions at any level of the system. This paper investigates the potential of automatic adaptive control with action termination as a method for adapting a system to more appropriate conditions in environments with established goals for both the system’s performance and its economics. A machine-learning-driven control mechanism, employing neural networks, is derived and applied within data-intensive systems. Control policies designed following this approach are evaluated under different load patterns and service level requirements. The experimental results demonstrate the performance characteristics and benefits, as well as the implications, of termination control when applied to different action types with distinct run-time characteristics. An automatic termination approach may be eminently suitable for systems with strict execution-time Service Level Agreements, or systems running under hard constraints on power supply or other resources. The proposed control mechanisms can be combined with other available toolkits to support the deployment of autonomous controllers in high-dimensional enterprise information systems.
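    As a minimal sketch of action termination, with a fixed SLA timeout standing in for the paper's learned (neural) termination policy; all names are illustrative:

```python
import multiprocessing as mp
import time

def action(duration: float) -> None:
    """Stand-in for a data-intensive action with an unknown run time."""
    time.sleep(duration)

def run_with_sla(duration: float, sla_seconds: float) -> str:
    """Run the action, terminating it if it exceeds the SLA deadline."""
    proc = mp.Process(target=action, args=(duration,))
    proc.start()
    proc.join(timeout=sla_seconds)
    if proc.is_alive():
        proc.terminate()  # the termination decision the paper studies
        proc.join()
        return "terminated (SLA breach)"
    return "completed"

if __name__ == "__main__":
    print(run_with_sla(0.1, sla_seconds=1.0))  # completed
    print(run_with_sla(5.0, sla_seconds=1.0))  # terminated (SLA breach)
```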

    High Speed Simulation Analytics

    Simulation, especially Discrete-Event Simulation (DES) and Agent-Based Simulation (ABS), is widely used in industry to support decision making. It is used to create predictive models, or Digital Twins, of systems that are used to analyse what-if scenarios, perform sensitivity analytics on data and decisions, and even optimise the impact of decisions. Simulation-based Analytics, or just Simulation Analytics, therefore has a major role to play in Industry 4.0. However, a major issue in Simulation Analytics is speed. The extensive, continuous experimentation demanded by Industry 4.0 can take a significant amount of time, especially if many replications are required. This is compounded by detailed models, as these can take a long time to simulate. Distributed Simulation (DS) techniques use multiple computers either to speed up the simulation of a single model by splitting it across the computers and/or to speed up experimentation by running experiments across multiple computers in parallel. This chapter discusses how DS and Simulation Analytics, as well as concepts from contemporary e-Science, can be combined to address the speed problem through a new approach called High Speed Simulation Analytics. We present a vision of High Speed Simulation Analytics to show how this might be integrated with the future of Industry 4.0.
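    As a minimal sketch of the experimentation side of DS, a full design of scenarios and replications can be mapped over a pool of workers; here local processes stand in for multiple computers, and simulate is a placeholder model:

```python
import itertools
import random
from concurrent.futures import ProcessPoolExecutor

def simulate(args: tuple[int, float]) -> tuple[tuple[int, float], float]:
    """Stand-in model: one run for a (seed, arrival_rate) design point."""
    seed, rate = args
    rng = random.Random(seed)
    total_wait = sum(rng.expovariate(rate) for _ in range(100_000))
    return (seed, rate), total_wait

if __name__ == "__main__":
    # Full experimental design: 10 replications x 3 scenarios.
    design = list(itertools.product(range(10), [0.5, 1.0, 2.0]))
    with ProcessPoolExecutor() as pool:  # cores stand in for cluster nodes
        for point, metric in pool.map(simulate, design):
            print(point, round(metric, 1))
```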
