74 research outputs found

    A Self-Tuning procedure for resource management in InterCloud Computing

    Get PDF
Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, College of Computer Science, North China University of Technology, Beijing, China. InterCloud Computing is a new cloud paradigm designed to guarantee service quality, performance, and availability of on-demand resources. InterCloud enables cloud interoperability by promoting the interworking of cloud systems from different providers through standard interfacing. Resource management in InterCloud, although an important functional requirement, has not attracted commensurate research attention. This paper proposes a Software Cybernetic approach, in the form of an adaptive control framework, for efficient management of shared resources in peer-to-peer InterCloud computing. The work adopts cooperative game theory to model resource management in InterCloud. The space of cooperative arrangements (resource sharing) between the participant cloud systems is represented using Integer Partitioning to characterise the worst-case communication complexity in peer-to-peer InterCloud. Essentially, the paper presents an Integer Partition based anytime algorithm as an optimal-cost solution to the bi-objective optimisation problem in resource management, anchored principally on a practical trade-off between the desired performance (quality of service) and the communication complexity of collaborating resource clouds.
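To make the role of Integer Partitioning concrete: the partitions of n enumerate the possible coalition-size configurations of n peer clouds, which is what bounds the search space an anytime algorithm must cover. A minimal sketch (not the paper's implementation; the number of clouds is a hypothetical example):

```python
# Illustrative sketch: integer partitions of n enumerate the coalition-size
# configurations of n peer clouds, bounding the anytime CSG search space.

def integer_partitions(n, max_part=None):
    """Yield the integer partitions of n with parts in non-increasing order."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in integer_partitions(n - part, part):
            yield [part] + rest

if __name__ == "__main__":
    n_clouds = 5  # hypothetical number of participant clouds
    for sizes in integer_partitions(n_clouds):
        print(sizes)  # e.g. [5], [4, 1], [3, 2], ...
```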

    A Game-Theoretic Based QoS-Aware Capacity Management for Real-Time EdgeIoT Applications

    Get PDF
More and more real-time IoT applications, such as smart cities or autonomous vehicles, require big data analytics with reduced latencies. However, data streams produced by distributed sensing devices cannot always be processed in the traditional way, in a remote cloud, because of (i) long Wide Area Network (WAN) latencies and (ii) the limited resources held by any single cloud. To address this problem, a novel Software-Defined Network (SDN) based InterCloud architecture, known as EdgeIoT, is presented for mobile edge computing environments. An adaptive resource capacity management approach is proposed that employs a policy-based QoS control framework built on principles from coalition games with externalities. To optimise resource capacity policy, the proposed QoS management technique adaptively solves a lexicographic-ordering bi-criteria Coalition Structure Generation (CSG) problem, since it is an onerous task to guarantee deterministically that a real-time EdgeIoT application satisfies the low-latency requirements specified in Service Level Agreements (SLAs). The CloudSim 4.0 toolkit is used to simulate an SDN-based InterCloud scenario, and the empirical results suggest that the proposed approach can adapt, from an operational perspective, to ensure low-latency QoS for real-time EdgeIoT application instances.
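A lexicographic bi-criteria CSG problem can be sketched by brute force on a small instance: enumerate all partitions of a set of edge nodes and pick the structure that minimises a (latency, cost) tuple. The latency and cost functions below are placeholders, not the paper's valuation model:

```python
# Minimal sketch of lexicographic bi-criteria coalition structure generation:
# enumerate all partitions of a small node set, rank by (latency, cost).

def set_partitions(items):
    """Yield all partitions of a list (Bell-number many; small inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        yield [[first]] + partition  # `first` as its own coalition
        for i in range(len(partition)):  # or joined to an existing coalition
            yield partition[:i] + [partition[i] + [first]] + partition[i + 1:]

def latency(coalition):   # placeholder QoS model: bigger pools, lower latency
    return 10.0 / len(coalition)

def cost(coalition):      # placeholder communication-cost model
    return len(coalition) ** 2

def score(structure):
    return (max(latency(c) for c in structure),   # primary criterion
            sum(cost(c) for c in structure))      # tie-breaking criterion

nodes = ["e1", "e2", "e3", "e4"]
best = min(set_partitions(nodes), key=score)
print(best, score(best))
```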

    QoS-Aware Resource Management in SDN-Based InterClouds: A Software Cybernetics Perspective

    Get PDF
The Software Defined Networking (SDN) paradigm is introducing novel approaches to many unresolved issues in networking. These new outlooks are imperative in emerging scenarios where user requirements keep growing, the required bandwidth keeps increasing, and the variety of applications (e.g. big data analytics) demands that the network play a more prominent role. Our research therefore aims to provide a novel adaptive (self-tuning) resource management technique for SDN-based InterCloud environments. A quality of service (QoS) policy control framework is presented by modelling QoS-aware resource management as an adaptive control problem and using principles from coalition games with externalities, or partition form games (PFGs), as the control mechanism. This ongoing work outlines a proposed dynamic programming, anytime (Integer Partition based) approach to solving the multi-criteria optimisation problem of a Markov decision process (MDP) model of system dynamics in SDN-based InterClouds.
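The abstract does not give the MDP's transition or reward structure, but a standard value-iteration solver of the kind such a model would feed is easy to sketch; the probabilities and rewards below are toy values:

```python
# Hedged sketch: value iteration on a toy MDP standing in for the paper's
# system-dynamics model. P and R are random placeholders, not from the paper.

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> dist over s'
R = np.random.rand(n_states, n_actions)                                 # immediate QoS reward

V = np.zeros(n_states)
for _ in range(500):                       # iterate to a fixed point
    Q = R + gamma * P @ V                  # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)                  # greedy resource-management policy
print(policy)
```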

    Adaptable Service Oriented Infrastructure Provisioning with Lightweight Containers Virtualization Technology

    Get PDF
Modern computing infrastructures should enable converged provisioning and governance operations on the virtualized computing, storage, and network resources used on behalf of users' workloads. These workloads must be guaranteed sufficient access to resources to satisfy the required QoS. This calls for flexible platforms providing functionality for the construction, activation, and governance of a Runtime Infrastructure, which can be realized according to the Service Oriented Infrastructure (SOI) paradigm. Implementing an SOI management framework requires the definition of a flexible architecture and the use of advanced software engineering and policy-based techniques. The paper presents an Adaptable SOI Provisioning Platform which supports adaptable SOI provisioning with lightweight virtualization, compliant with a structured process model suitable for the construction, activation, and governance of IT environments. The requirements, architecture, and implementation of the platform are all discussed. Practical usage of the platform is presented through a complex case study of provisioning JEE middleware on top of the Solaris 10 lightweight virtualization platform.
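The platform's internals are not detailed in the abstract, but the governance side of such a policy-based platform reduces to a monitoring loop that compares observed QoS against policy and triggers corrective actions. A hypothetical sketch (all names and thresholds are illustrative, not the platform's API):

```python
# Hypothetical sketch of a policy-driven governance cycle: compare observed
# QoS of a provisioned instance against its policy, return an action.

from dataclasses import dataclass

@dataclass
class QoSPolicy:
    max_latency_ms: float
    min_throughput_rps: float

def govern(instance_id: str, observed: dict, policy: QoSPolicy) -> str:
    """Return the governance action for one monitoring cycle."""
    if observed["latency_ms"] > policy.max_latency_ms:
        return f"scale-out {instance_id}"      # add an instance
    if observed["throughput_rps"] < policy.min_throughput_rps:
        return f"rebalance {instance_id}"      # move to a less loaded host
    return "no-op"

print(govern("jee-zone-1", {"latency_ms": 120, "throughput_rps": 800},
             QoSPolicy(max_latency_ms=100, min_throughput_rps=500)))
```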

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    Get PDF
This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with SON requirements. It also exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend routing capability by calculating the "Most Reliable" path rather than the shortest one. The research identifies the potential key areas and candidate techniques for meeting these objectives by surveying the state of the art in the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. The work now looks toward the development of 6G standards and their compliance with Industry 5.0, to enhance the present outcomes in the light of Deep Reinforcement Learning and Quantum Computing.
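STEN's exact formula is not given in the abstract, but its core idea, fusing link and node utilisation into a single edge cost and then routing on that cost, can be illustrated with an ordinary shortest-path search over the fused weights. A minimal sketch, assuming a toy linear fusion:

```python
# Hedged sketch: a toy fusion of link and node utilisation into one edge cost
# (standing in for STEN), with Dijkstra selecting the lowest-cost path.

import heapq

def edge_cost(link_util: float, node_util: float, alpha: float = 0.5) -> float:
    """Assumed linear fusion of link and next-hop utilisation, both in [0, 1]."""
    return alpha * link_util + (1 - alpha) * node_util

def best_path(graph, src, dst):
    """graph: {u: [(v, link_util, node_util), ...]}; returns (cost, path)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, lu, nu in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (cost + edge_cost(lu, nu), v, path + [v]))
    return float("inf"), []

g = {"a": [("b", 0.2, 0.9), ("c", 0.4, 0.1)], "b": [("d", 0.1, 0.1)],
     "c": [("d", 0.3, 0.2)], "d": []}
print(best_path(g, "a", "d"))
```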

    Optimizing the cloud? Don't train models. Build oracles!

    Full text link
We propose cloud oracles, an alternative to machine learning for online optimization of cloud configurations. Our cloud oracle approach guarantees complete accuracy and explainability of decisions for problems that can be formulated as parametric convex optimizations. We give experimental evidence of this technique's efficacy and share a vision of research directions for expanding its applicability. Comment: initial conference submission limited to 6 pages.
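A toy instance of the problem class the abstract names, a parametric convex program, helps see why exact answers are attainable: the optimum is a well-defined function of the query parameter, not a learned approximation. This sketch uses cvxpy for illustration only; the problem, costs, and variables are assumptions, not the paper's formulation:

```python
# Toy parametric convex program: choose VM capacity x for demand parameter d.
# Solving exactly per query illustrates the provably optimal answers that a
# cloud oracle over the parametric solution would return.

import cvxpy as cp

d = cp.Parameter(nonneg=True)          # workload demand (the query parameter)
x = cp.Variable(nonneg=True)           # provisioned capacity
cost = 2.0 * x + 5.0 * cp.pos(d - x)   # provisioning cost + under-provision penalty
problem = cp.Problem(cp.Minimize(cost))

for demand in (1.0, 4.0, 10.0):
    d.value = demand
    problem.solve()
    print(f"demand={demand}: optimal capacity={x.value:.2f}")
```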

    A Game-Theoretic Approach to Strategic Resource Allocation Mechanisms in Edge and Fog Computing

    Get PDF
With the rapid growth of the Internet of Things (IoT), cloud-centric application management raises questions about quality of service for real-time applications. Fog and edge computing (FEC) complement the cloud by filling the gap between cloud and IoT. Resource management over multiple resources from distributed, separately administered FEC nodes is a key challenge in ensuring the quality of the end-user's experience. To improve resource utilisation and system performance, researchers have proposed many fair allocation mechanisms for resource management. Dominant Resource Fairness (DRF), a resource allocation policy for multiple resource types, meets most of the required fair allocation characteristics. However, DRF is suited to centralised resource allocation and does not consider the effects (or feedbacks) of large-scale distributed environments such as multi-controller software defined networking (SDN). Nash bargaining from microeconomic theory and competitive equilibrium from equal incomes (CEEI) are well suited to dynamic optimisation problems that 'proportionately' share resources among distributed participants. Although CEEI's decentralised policy guarantees load balancing for performance isolation, it is not fault-proof for computation offloading. The thesis proposes a hybrid and fair allocation mechanism for the rejuvenation of decentralised SDN controller deployment. We apply multi-agent reinforcement learning (MARL), robust against adversarial controllers, to enable efficient priority scheduling for FEC. Motivated by software cybernetics and homeostasis, weighted DRF is generalised by applying the principles of feedback (positive and/or negative network effects) in reverse game theory (GT) to design hybrid scheduling schemes for joint multi-resource and multi-task offloading/forwarding in FEC environments. In the first study, monotonic scheduling for joint offloading at the federated edge is addressed by proposing a truthful (algorithmic) mechanism to neutralise harmful negative and positive distributive bargaining externalities respectively. The IP-DRF scheme is a MARL approach applying partition form games (PFGs) to guarantee second-best Pareto optimality (SBPO) in the allocation of multiple resources under a deterministic policy, in both population and resource non-monotonicity settings. In the second study, we propose the DFog-DRF scheme to address truthful fog scheduling with bottleneck fairness in fault-probable wireless hierarchical networks, applying constrained coalition formation (CCF) games to implement MARL. The multi-objective optimisation problem for fog throughput maximisation is solved via a constraint dimensionality reduction methodology, using fairness constraints for efficient placement of gateways and low-level controllers. For evaluation, we develop an agent-based framework to implement fair allocation policies in distributed data centre environments. Empirically, the deterministic policy of the IP-DRF scheme provides SBPO and reduces the average execution and turnaround times by 19% and 11.52% respectively, compared to the Nash bargaining or CEEI deterministic policy, for 57,445 cloudlets in population non-monotonic settings. The processing cost of tasks shows significant improvement (6.89% and 9.03% for fixed and variable pricing) in the resource non-monotonic setting, using 38,000 cloudlets.
The DFog-DRF scheme, when benchmarked against an asset-fair (MIP) policy, shows superior performance (less than 1% in time complexity) for up to 30 FEC nodes. Furthermore, empirical results using 210 mobiles and 420 applications demonstrate the efficacy of our hybrid scheduling scheme for hierarchical clustering, considering latency and network usage for throughput maximisation. Abubakar Tafawa Balewa University, Bauchi (TETFund, Nigeria).
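For reference, baseline DRF (Ghodsi et al.) is the progressive-filling algorithm the thesis generalises: repeatedly grant a task to the user with the lowest dominant share. A compact implementation of that baseline only; the thesis's IP-DRF/DFog-DRF extensions (MARL, partition form games) are not reproduced here:

```python
# Baseline Dominant Resource Fairness via progressive filling: at each step,
# schedule one task for the user whose dominant share is currently lowest.

def drf(capacity, demands, steps=10**4):
    """capacity: {res: total}; demands: {user: {res: per-task}} -> task counts."""
    allocated = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}
    for _ in range(steps):
        # dominant share = max over resources of a user's consumed fraction
        dom = {u: max(tasks[u] * d[r] / capacity[r] for r in d)
               for u, d in demands.items()}
        u = min(dom, key=dom.get)                     # lowest dominant share goes next
        d = demands[u]
        if any(allocated[r] + d[r] > capacity[r] for r in d):
            break                                     # a resource is saturated
        for r in d:
            allocated[r] += d[r]
        tasks[u] += 1
    return tasks

# Classic example from the DRF paper: yields {'A': 3, 'B': 2}
print(drf({"cpu": 9, "mem": 18},
          {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}))
```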

    Containerization in Cloud Computing: performance analysis of virtualization architectures

    Get PDF
The growing adoption of the cloud is strongly influenced by the emergence of technologies that aim to improve the development and deployment processes of enterprise-grade applications. The goal of this thesis is to analyse one of these solutions, known as "containerization", and to evaluate in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. To date, the traditional virtual machine model has been the predominant solution in the market. The significant architectural difference that containers offer has led to the rapid adoption of this technology, since it greatly improves resource management and sharing and delivers significant gains in the provisioning time of individual instances. The thesis examines containerization from both the infrastructure and the application points of view. For the first aspect, performance is analysed by comparing LXD, Docker, and KVM as hypervisors for the OpenStack cloud infrastructure; the second concerns the development of enterprise-grade applications that must be deployed on a set of distributed servers. In that case, high-level services such as orchestration are needed, so the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos, and Cattle.
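A minimal timing harness in the spirit of such benchmarks is sketched below: measure container start+exit latency over repeated runs. It assumes a local Docker daemon and the alpine image (the first run may include an image pull); the equivalent LXD or KVM measurements would swap in the corresponding commands:

```python
# Minimal benchmark sketch: time `docker run` cold/warm starts. Assumes a
# local Docker daemon with the alpine image available (first run may pull it).

import subprocess
import time

def timed(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

runs = [timed(["docker", "run", "--rm", "alpine", "true"]) for _ in range(5)]
print(f"mean container start+exit: {sum(runs) / len(runs):.3f}s, runs: {runs}")
```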

GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book

    Get PDF

    Virtual Cluster Management for Analysis of Geographically Distributed and Immovable Data

    Get PDF
Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2015. Scenarios exist in the era of Big Data where computational analysis needs to utilize widely distributed and remote compute clusters, especially when the data sources are sensitive or extremely large and thus unable to move. A large dataset in Malaysia could be ecologically sensitive, for instance, and unable to be moved outside the country's boundaries. Controlling an analysis experiment in this virtual cluster setting can be difficult on multiple levels: with setup and control, with managing the behavior of the virtual cluster, and with interoperability issues across the compute clusters. Further, datasets can be distributed among clusters, or even across data centers, so it becomes critical to utilize data-locality information to optimize the performance of data-intensive jobs. Finally, datasets are increasingly sensitive and tied to certain administrative boundaries, though once the data has been processed, the aggregated or statistical results can be shared across those boundaries. This dissertation addresses the management and control of a widely distributed virtual cluster holding sensitive or otherwise immovable data sets through a controller. The Virtual Cluster Controller (VCC) gives control back to the researcher. It creates virtual clusters across multiple cloud platforms. In recognition of sensitive data, it can establish a single network overlay over widely distributed clusters. We define a novel class of data, notably immovable data that we call "pinned data", where the data is treated as a first-class citizen instead of being moved to where it is needed. We draw on our earlier work with a hierarchical data processing model, Hierarchical MapReduce (HMR), to process geographically distributed data, some of which are pinned data. The applications implemented in HMR use an extended MapReduce model in which computations are expressed as three functions: Map, Reduce, and GlobalReduce. Further, by facilitating information sharing among resources, applications, and data, the overall performance is improved. Experimental results show that the overhead of VCC is minimal, and that HMR outperforms the traditional MapReduce model while processing a particular class of applications. The evaluations also show that information sharing between resources and applications through the VCC shortens the hierarchical data processing time while satisfying the constraints on the pinned data.
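The three-function model named in the abstract can be sketched directly: each cluster runs Map and Reduce locally over its (possibly pinned) data, and only the small local reductions cross administrative boundaries for GlobalReduce. A minimal word-count illustration, not the dissertation's HMR implementation:

```python
# Sketch of the extended MapReduce model: Map and Reduce run inside each
# cluster; only the aggregated partials leave a site for GlobalReduce.

from collections import Counter

def local_map(records):                 # Map: record -> (key, value) pairs
    for rec in records:
        for word in rec.split():
            yield word, 1

def local_reduce(pairs):                # Reduce: aggregate within one cluster
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return counts

def global_reduce(partials):            # GlobalReduce: merge cluster results
    total = Counter()
    for partial in partials:            # only aggregates cross boundaries
        total.update(partial)
    return total

cluster_data = [["big data big"], ["data stays pinned", "big"]]  # two sites
partials = [local_reduce(local_map(site)) for site in cluster_data]
print(global_reduce(partials))
```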