16,949 research outputs found

    Adaptation of System Dynamics Model Execution Algorithms for Cloud-based Environment

    This paper presents the adaptation of system dynamics model execution algorithms to a cloud-based environment. System dynamics is an aspect of systems theory used as a method to understand the dynamic behaviour of complex systems. The execution algorithms used in popular modeling tools are either not freely available or have shortcomings that prevent their use in a distributed cloud environment. The adaptation aims not only to move execution to distributed parallel environments with higher reliability and a wider range of possible applications, but also to improve model execution performance. For example, existing execution algorithms that are not designed for distributed environments fail to complete a modeling task in case of hardware failure, whereas the adapted ones can transfer the execution process from one node to another with minimal impact on overall progress. Such capabilities save considerable resources and, especially, time spent on execution re-runs. The paper describes the algorithms and approaches designed for the sdCloud solution, which focuses on moving the execution of system dynamics models into a distributed cloud-based environment, and shows the extra benefits the shift to the cloud brings to the modeling process.
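
    For illustration, the sketch below shows one way the fault tolerance described above can be achieved: a stock-and-flow model integrated with explicit Euler steps whose state is periodically checkpointed, so that a second node can resume from the last checkpoint after a hardware failure. The model, function names, and file-based checkpoint format are illustrative assumptions for this sketch, not the actual sdCloud algorithms.

    # Hypothetical sketch: checkpointable Euler integration of a stock-and-flow model.
    # Names and the JSON checkpoint format are illustrative, not the sdCloud API.
    import json

    def flows(stocks, t):
        """Toy system dynamics model: one population stock with birth and death flows."""
        population = stocks["population"]
        return {"population": 0.02 * population - 0.01 * population}

    def run(stocks, t, t_end, dt, checkpoint_path, checkpoint_every=100):
        step = 0
        while t < t_end:
            rates = flows(stocks, t)
            for name, rate in rates.items():
                stocks[name] += rate * dt              # explicit Euler update
            t += dt
            step += 1
            if step % checkpoint_every == 0:           # persist state so another node can resume
                with open(checkpoint_path, "w") as f:
                    json.dump({"t": t, "stocks": stocks}, f)
        return stocks

    def resume(checkpoint_path, t_end, dt):
        with open(checkpoint_path) as f:               # a replacement node picks up where the failed one stopped
            state = json.load(f)
        return run(state["stocks"], state["t"], t_end, dt, checkpoint_path)

    if __name__ == "__main__":
        final = run({"population": 1000.0}, t=0.0, t_end=100.0, dt=0.25,
                    checkpoint_path="model_state.json")
        print(final)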

    Models of everywhere revisited: a technological perspective

    The concept ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place, changing the nature of the underlying modelling process from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of that place. At one level this is a straightforward concept, but at another it is a rich multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything and models at all times, constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has not yet been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. It argues that, when first proposed, technology was a limiting factor, but that advances in areas such as the Internet of Things, cloud computing and data analytics have since lowered many of the barriers. Consequently, it is timely to look again at the concept of models of everywhere in practical conditions as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    The Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics partially due to human interaction. These call for foundational innovations in network design and management. Ideally, the design should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone that leads to systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on.
    Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Network
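
    As a concrete, hedged illustration of the "blind or bandit" online approaches mentioned above, the sketch below uses an epsilon-greedy bandit with a constant step size to choose an execution site (device, fog node, or cloud) from observed latency alone; the constant step size lets the value estimates track nonstationary conditions. Site names, the latency model, and all parameter values are illustrative assumptions, not the paper's framework.

    # Hypothetical sketch: epsilon-greedy offloading under bandit (latency-only) feedback.
    # Sites, latencies, and parameters are illustrative assumptions.
    import random

    SITES = ["device", "fog_node", "cloud"]

    class OffloadBandit:
        def __init__(self, epsilon=0.1, step=0.1):
            self.epsilon = epsilon
            self.step = step                           # constant step size tracks nonstationarity
            self.value = {s: 0.0 for s in SITES}       # running estimate of (negative) latency per site

        def choose(self):
            if random.random() < self.epsilon:         # explore occasionally
                return random.choice(SITES)
            return max(SITES, key=lambda s: self.value[s])  # otherwise exploit the current estimate

        def update(self, site, latency_ms):
            reward = -latency_ms                       # lower latency means higher reward
            self.value[site] += self.step * (reward - self.value[site])

    def simulated_latency(site):
        # Toy environment: the device is slow, the cloud adds network overhead, the fog node is best.
        base = {"device": 120.0, "fog_node": 40.0, "cloud": 80.0}[site]
        return random.gauss(base, 10.0)

    if __name__ == "__main__":
        bandit = OffloadBandit()
        for _ in range(2000):
            site = bandit.choose()
            bandit.update(site, simulated_latency(site))
        print(bandit.value)                            # the fog node should end up with the best estimate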

    Robust resource management for time-critical tasks in the cloud-edge continuum

    As an emerging distributed computing paradigm, the cloud-edge continuum (CEC) leverages the strengths of both cloud computing and edge computing to provide efficient and effective services to end-users. The CEC enables faster processing of data and provides multiple benefits, including scalability, data security, and improved quality of service. With the increasing demand for real-time data processing, the proliferation of Internet of Things (IoT) devices, and the growing need for data privacy and security, the CEC has been developing, evolving, and adapting quickly. Cloud computing provides scalable and flexible computing infrastructure, while edge computing offers low latency and location-awareness capabilities. How to schedule tasks in the CEC across its rapidly growing pool of resources is a challenge for both service providers and users. QoS (quality of service) and QoE (quality of experience) are metrics that describe this process and are often adopted as the optimization objective. Among the many resource management optimization approaches, learning-based task scheduling and offloading have gained popularity in recent years, as researchers have turned to machine learning techniques to develop more intelligent and adaptive resource management algorithms. However, machine learning-based methods in the CEC also face several challenges: 1. the performance of learning-based resource management is difficult to maintain when the pattern of time-critical tasks changes dynamically; 2. learning-based resource management strategies are difficult to adapt when continuum resources are highly heterogeneous; 3. learning-based resource management suffers from low robustness when optimizing multiple objectives. My thesis tackles these challenges: we propose a meta-learning-based resource management framework to handle time-critical requests, spanning from independent tasks to complex workflows, in a dynamic cloud-edge continuum. Our goal is to improve the robustness and adaptivity of the resource management framework in highly changing environments.
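
    To make the scheduling problem concrete, the sketch below implements a simple deadline-aware baseline for a heterogeneous cloud-edge continuum: tasks are taken earliest-deadline-first and assigned to whichever resource would finish them soonest. Resource speeds, task sizes, and names are illustrative assumptions; this shows only the kind of time-critical scheduling the thesis targets, not its meta-learning framework.

    # Hypothetical sketch: earliest-deadline-first scheduling over heterogeneous resources.
    # All resources, tasks, and numbers are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        speed: float                 # work units per second
        available_at: float = 0.0    # time at which the resource becomes free

    @dataclass
    class Task:
        name: str
        work: float                  # work units
        deadline: float              # seconds from now

    def schedule(tasks, resources):
        plan, missed = [], []
        for task in sorted(tasks, key=lambda t: t.deadline):       # earliest deadline first
            best = min(resources, key=lambda r: r.available_at + task.work / r.speed)
            finish = best.available_at + task.work / best.speed
            best.available_at = finish                             # reserve the resource
            (plan if finish <= task.deadline else missed).append((task.name, best.name, finish))
        return plan, missed

    if __name__ == "__main__":
        resources = [Resource("edge_gpu", 4.0), Resource("edge_cpu", 1.0), Resource("cloud_vm", 8.0)]
        tasks = [Task("video_frame", 2.0, 1.0), Task("sensor_batch", 1.0, 2.0), Task("ml_infer", 8.0, 3.0)]
        plan, missed = schedule(tasks, resources)
        print("scheduled:", plan)
        print("missed deadlines:", missed)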
