
    CloudBench: an integrated evaluation of VM placement algorithms in clouds

    A complex and important task in cloud resource management is the efficient allocation of virtual machines (VMs), or containers, to physical machines (PMs). Evaluating VM placement techniques in real-world clouds can be tedious, complex, and time-consuming, which has motivated the increasing use of cloud simulators that facilitate this type of evaluation. However, most reported simulation-based VM placement techniques have been evaluated with respect to one specific cloud resource (e.g., CPU), while often unrealistic values are assumed for other resources (e.g., RAM, waiting times, application workloads, etc.). This generates uncertainty and discourages their implementation in real-world clouds. This paper introduces CloudBench, a methodology to facilitate the evaluation and deployment of VM placement strategies in private clouds. CloudBench integrates a cloud simulator with a real-world private cloud. Two main tools were developed to support this methodology: a specialized multi-resource cloud simulator (CloudBalanSim), which is in charge of evaluating VM placement techniques, and a distributed resource manager (Balancer), which deploys and tests in a real-world private cloud the best VM placement configurations that satisfy the user requirements defined in the simulator. Both tools generate feedback information from the evaluation scenarios and their results, which is used as a learning asset to carry out intelligent and faster evaluations. Experiments with the CloudBench methodology showed encouraging results as a new strategy to evaluate and deploy VM placement algorithms in the cloud. This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness under Grant TIN2016-79637-P “Towards Unification of HPC and Big Data Paradigms” and by the Mexican Council of Science and Technology (CONACYT) through a Ph.D. Grant (No. 212677).
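
    To make concrete the kind of multi-resource placement decision such a simulator has to evaluate, the sketch below shows a plain first-fit-decreasing heuristic over CPU and RAM. It is an illustrative toy under assumed resource models, not CloudBench's or CloudBalanSim's algorithm; the PM and VM classes and the first_fit_decreasing function are hypothetical names.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PM:
    """Physical machine with multi-resource capacity (hypothetical model)."""
    cpu: float
    ram: float
    used_cpu: float = 0.0
    used_ram: float = 0.0

    def fits(self, vm: "VM") -> bool:
        return (self.used_cpu + vm.cpu <= self.cpu
                and self.used_ram + vm.ram <= self.ram)

    def place(self, vm: "VM") -> None:
        self.used_cpu += vm.cpu
        self.used_ram += vm.ram

@dataclass
class VM:
    cpu: float
    ram: float

def first_fit_decreasing(vms: List[VM], pms: List[PM]) -> List[Optional[int]]:
    """Place each VM on the first PM with enough CPU *and* RAM, largest VMs first.
    Returns the chosen PM index per VM (None if a VM could not be placed)."""
    order = sorted(range(len(vms)), key=lambda i: (vms[i].cpu, vms[i].ram), reverse=True)
    assignment: List[Optional[int]] = [None] * len(vms)
    for i in order:
        for j, pm in enumerate(pms):
            if pm.fits(vms[i]):
                pm.place(vms[i])
                assignment[i] = j
                break
    return assignment

if __name__ == "__main__":
    pms = [PM(cpu=16, ram=64), PM(cpu=8, ram=32)]
    vms = [VM(4, 8), VM(8, 16), VM(2, 30), VM(6, 12)]
    print(first_fit_decreasing(vms, pms))  # -> [1, 0, 0, 0] for these capacities
```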

    TRACTOR: Traffic‐aware and power‐efficient virtual machine placement in edge‐cloud data centers using artificial bee colony optimization

    Technology providers heavily exploit edge‐cloud data centers (ECDCs) to meet user demand, while ECDCs are large energy consumers. To reduce the energy expenditure of ECDCs, task placement is one of the most prominent solutions for the effective allocation and consolidation of such tasks onto physical machines (PMs). Such allocation must also consider optimizations beyond power and include other objectives, such as network‐traffic effectiveness. In this study, we present a multi‐objective virtual machine (VM) placement scheme (considering VMs as fog tasks) for ECDCs called TRACTOR, which utilizes an artificial bee colony optimization algorithm for power‐ and network‐aware assignment of VMs onto PMs. The proposed scheme aims to minimize the network traffic of the interacting VMs and the power dissipation of the data center's switches and PMs. To evaluate the proposed VM placement solution, the Virtual Layer 2 (VL2) and three‐tier network topologies are modeled and integrated into the CloudSim toolkit to justify the effectiveness of the proposed solution in mitigating the network traffic and power consumption of the ECDC. Results indicate that the proposed method is able to reduce energy consumption by 3.5% while decreasing network traffic and power by 15% and 30%, respectively, without affecting other QoS parameters.
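
    As a rough illustration of how an artificial bee colony search over VM-to-PM assignments can be structured, the following sketch minimizes a toy cost that mixes a power proxy (number of active PMs) with a traffic proxy (chatty VM pairs split across PMs). The cost model and the abc_placement function are invented for illustration and are not TRACTOR's formulation.

```python
import random
from typing import Callable, List

def abc_placement(num_vms: int, num_pms: int, cost: Callable[[List[int]], float],
                  colony: int = 20, limit: int = 10, iters: int = 200) -> List[int]:
    """Minimal artificial bee colony over VM-to-PM assignments.
    Each food source is an assignment list; a neighbour move reassigns one VM."""
    def random_sol(): return [random.randrange(num_pms) for _ in range(num_vms)]
    def neighbour(sol):
        s = sol[:]
        s[random.randrange(num_vms)] = random.randrange(num_pms)
        return s

    sources = [random_sol() for _ in range(colony)]
    costs = [cost(s) for s in sources]
    trials = [0] * colony
    best = min(range(colony), key=lambda i: costs[i])
    best_sol, best_cost = sources[best][:], costs[best]

    for _ in range(iters):
        # Employed bees: local search around each food source.
        for i in range(colony):
            cand = neighbour(sources[i])
            c = cost(cand)
            if c < costs[i]:
                sources[i], costs[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        # Onlooker bees: bias extra search toward cheaper sources.
        weights = [1.0 / (1e-9 + c) for c in costs]
        for _ in range(colony):
            i = random.choices(range(colony), weights=weights)[0]
            cand = neighbour(sources[i])
            c = cost(cand)
            if c < costs[i]:
                sources[i], costs[i], trials[i] = cand, c, 0
        # Scout bees: abandon stagnant sources.
        for i in range(colony):
            if trials[i] > limit:
                sources[i] = random_sol()
                costs[i] = cost(sources[i])
                trials[i] = 0
        i = min(range(colony), key=lambda k: costs[k])
        if costs[i] < best_cost:
            best_sol, best_cost = sources[i][:], costs[i]
    return best_sol

if __name__ == "__main__":
    chatty_pairs = [(0, 1), (2, 3)]   # VM pairs assumed to exchange heavy traffic
    def cost(assign):
        power = len(set(assign))                        # proxy: number of active PMs
        traffic = sum(assign[a] != assign[b] for a, b in chatty_pairs)
        return power + 2.0 * traffic
    print(abc_placement(num_vms=6, num_pms=3, cost=cost))
```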

    AI augmented Edge and Fog computing: trends and challenges

    In recent years, the landscape of computing paradigms has witnessed a gradual yet remarkable shift from monolithic computing to distributed and decentralized paradigms such as Internet of Things (IoT), Edge, Fog, Cloud, and Serverless. The frontiers of these computing technologies have been boosted by a shift from manually encoded algorithms to Artificial Intelligence (AI)-driven autonomous systems for optimum and reliable management of distributed computing resources. Prior work focuses on improving existing systems using AI across a wide range of domains, such as efficient resource provisioning, application deployment, task placement, and service management. This survey reviews the evolution of data-driven AI-augmented technologies and their impact on computing systems. We demystify new techniques and draw key insights into Edge, Fog, and Cloud resource-management uses of AI methods, and also look at how AI can innovate traditional applications for enhanced Quality of Service (QoS) in the presence of a continuum of resources. We present the latest trends and impact areas, such as optimizing AI models that are deployed on or for computing systems. We lay out a roadmap for future research directions in areas such as resource management for QoS optimization and service reliability. Finally, we discuss blue-sky ideas and envision this work as an anchor point for future research on AI-driven computing systems.

    Power aware resource allocation and virtualization algorithms for 5G core networks

    Most algorithms that solve the resource allocation problem apply greedy heuristics to select the physical nodes and shortest-path methods to select the physical edges, without sufficient coordination between node and edge selection. This lack of coordination may degrade the overall acceptance ratios and network performance as a whole; in addition, it may include unnecessary physical resources, which consume more power and computational processing capacity and cause additional delays. Therefore, the main objective of this PhD thesis is to develop power-aware resource allocation and virtualization algorithms for 5G core networks. This is achieved through a virtualization resource allocation technique that allocates virtual nodes and edges in full coordination, using the fewest physical resources. The algorithms are general and solve the resource allocation problem for virtual network embedding and network function virtualization frameworks, while minimizing the total power consumed in the physical network and considering end-to-end delay and migration as new optional features. This thesis proposes to solve the power-aware resource allocation problem through new algorithms adopting a technique called segmentation, which fully coordinates the allocation of virtual nodes and edges and guarantees the use of the fewest physical resources to minimize total power consumption, by consolidating the virtual machines onto as few nodes as possible. The proposed algorithms solve the virtual network embedding problem for off-line and on-line scenarios, and solve resource allocation for the network function virtualization environment in off-line, on-line, and migration scenarios. The evaluation of the proposed off-line virtual network embedding algorithm, PaCoVNE, showed that it saved physical network power consumption by 57% on average, and the on-line algorithm, oPaCoVNE, minimized the average power consumption in the physical network by 24% on average. The allocation times of PaCoVNE and oPaCoVNE were in the range of 20-40 ms. For the network function virtualization environment, the evaluation of the proposed off-line NFV power-aware algorithm, PaNFV, showed that on average it had lower total cost and lower migration cost, by 32% and 65.5% respectively, compared to state-of-the-art algorithms, while the on-line algorithm, oPaNFV, allocated Network Services in average times of 60 ms with very negligible migrations. Nevertheless, this thesis suggests that future enhancements of the proposed algorithms should focus on modifying the proposed segmentation technique to solve the resource allocation problem for multiple paths, on power-aware network slicing, especially for mobile edge computing, and on adapting the algorithms for application-aware resource allocation in very large-scale networks.
Moreover, future work can modify the segmentation technique and the proposed algorithms by integrating machine learning techniques for smart traffic and optimal path prediction, as well as applying machine learning for better energy efficiency, faster load balancing, and more accurate resource allocation based on a variety of quality-of-service metrics.
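
    The following is a minimal sketch of the coordinated node-and-link mapping idea, assuming a simple substrate graph and ignoring CPU and bandwidth capacities: virtual link endpoints are preferentially mapped onto physical nodes that are already in use, so that fewer machines stay powered on. The coordinated_embed function below is a hypothetical illustration, not the thesis's PaCoVNE or PaNFV algorithm.

```python
from collections import deque
from typing import Dict, List, Optional, Tuple

def bfs_path(adj: Dict[str, List[str]], src: str, dst: str) -> Optional[List[str]]:
    """Shortest hop path in the physical substrate, or None if disconnected."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path, node = [], u
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def coordinated_embed(virtual_links: List[Tuple[str, str]],
                      adj: Dict[str, List[str]]) -> Dict[str, str]:
    """Map virtual nodes and their links together, preferring physical nodes
    that are already in use so that fewer machines need to stay powered on."""
    mapping: Dict[str, str] = {}
    used: List[str] = []
    for a, b in virtual_links:
        # Candidate hosts: already-used nodes first, then the rest.
        candidates = used + [n for n in adj if n not in used]
        host_a = mapping.get(a) or candidates[0]
        host_b = mapping.get(b) or next(n for n in candidates if n != host_a)
        if bfs_path(adj, host_a, host_b) is None:
            raise ValueError(f"no substrate path for virtual link ({a}, {b})")
        mapping[a], mapping[b] = host_a, host_b
        for h in (host_a, host_b):
            if h not in used:
                used.append(h)
    return mapping

if __name__ == "__main__":
    substrate = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2"]}
    # Two virtual links end up consolidated on p1 and p2 only.
    print(coordinated_embed([("v1", "v2"), ("v2", "v3")], substrate))
```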

    Exposed Nerves and Archival Impulses: Digital Ruination and the Death of Adobe Flash

    Senior Project submitted to The Division of Social Studies of Bard College

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm to provide access to computing capacity in end devices, yet it suffers from problems such as load imbalance, long scheduling times, and the limited power of its edge nodes. Therefore, intelligent task scheduling in CDCs and edge nodes is critically important for constructing energy-efficient cloud and edge computing systems. Current approaches cannot smartly minimize the total cost of CDCs, maximize their profit, and improve the quality of service (QoS) of tasks because of the aperiodic arrival and heterogeneity of tasks. This dissertation proposes a class of energy- and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. The dissertation consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11). 1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs where the bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations in CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm considering temporal variations in revenue, electricity prices, green energy, and the prices of public clouds. 2) New contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss possibility of tasks by determining task allocation among Internet service providers and the task service rate of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto-optimal set, and a knee solution is selected to schedule tasks in a high-profit and high-QoS way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method to cope with energy cost reduction and QoS improvement; it jointly minimizes the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and changing the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of the revenue and energy cost of CDCs. It is solved with an improved multi-objective evolutionary algorithm based on decomposition, which determines a high-quality trade-off between revenue maximization and energy cost minimization by considering CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution; a fine-grained spatial task scheduling algorithm is then designed to minimize the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers.
Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of systems and guarantee that the response time limits of tasks are met in cloud-edge computing systems. A single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization algorithm. This dissertation evaluates these algorithms, models, and software with real-life data and proves that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
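
    Several of the chapters rely on simulated annealing as the search engine. The sketch below shows a generic simulated-annealing loop over task-to-CDC assignments with a single weighted objective (an invented electricity-price plus response-time proxy); it is a toy under stated assumptions, not the dissertation's bi-objective algorithms or knee-point selection.

```python
import math
import random
from typing import Callable, List

def simulated_annealing(num_tasks: int, num_cdcs: int,
                        cost: Callable[[List[int]], float],
                        t0: float = 10.0, t_min: float = 1e-3,
                        alpha: float = 0.95, moves_per_temp: int = 50) -> List[int]:
    """Generic simulated annealing over task-to-CDC assignments.
    A move reassigns one task; worse moves are accepted with
    probability exp(-delta / T), which shrinks as T cools."""
    current = [random.randrange(num_cdcs) for _ in range(num_tasks)]
    cur_cost = cost(current)
    best, best_cost = current[:], cur_cost
    t = t0
    while t > t_min:
        for _ in range(moves_per_temp):
            cand = current[:]
            cand[random.randrange(num_tasks)] = random.randrange(num_cdcs)
            delta = cost(cand) - cur_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, cur_cost = cand, cur_cost + delta
                if cur_cost < best_cost:
                    best, best_cost = current[:], cur_cost
        t *= alpha
    return best

if __name__ == "__main__":
    # Hypothetical per-CDC electricity price and service rate.
    price = [0.10, 0.07, 0.12]
    mu = [5.0, 3.0, 6.0]          # tasks/s each CDC can serve
    def cost(assign):
        energy = sum(price[c] for c in assign)
        # crude response-time proxy: load divided by service rate
        load = [assign.count(c) / mu[c] for c in range(len(mu))]
        return energy + 0.5 * max(load)
    print(simulated_annealing(num_tasks=12, num_cdcs=3, cost=cost))
```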

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and the desire to make the "better" choice drives the decision. An objective function or performance index describes the assessment of an alternative's goodness. The theory and methods of optimization are concerned with picking the best option. There are two types of optimization methods: deterministic and stochastic. The first is a traditional approach that works well for small and linear problems; however, it struggles to address most real-world problems, which have a highly dimensional, nonlinear, and complex nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposes two stochastic, robust, swarm-based metaheuristic optimization methods. They are both hybrid algorithms, formulated by combining the Particle Swarm Optimization and Salp Swarm Optimization algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling in multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments as they occupy the fog's resources and keep them busy. Thus, fog environments would generally have fewer resources available during these types of attacks, and the scheduling of submitted Internet of Things (IoT) workflows would be affected. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing the number of workflows that miss their deadlines as well as the number of tasks that are offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm as a solution to the workflow scheduling issue across various fog computing locations. The proposed algorithm comprises the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). To deal with the effects of DDoS attacks on fog computing locations, two discrete-time Markov-chain schemes are used: one calculates the average network bandwidth available in each fog, while the other determines the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts the DDoS attack's influence on fog environments. Based on the simulation results, the proposed method can significantly lessen the number of offloaded tasks transferred to cloud data centers and decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing in fog computing environments, in which energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. The implementation of efficient scheduling methods has the potential to mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To mitigate these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors.
The experimental findings demonstrate that the proposed method, combined with the DVFS technique, yields improved outcomes, including reduced energy consumption, making it a more environmentally friendly and sustainable solution for fog computing environments.
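
    For intuition, the sketch below shows a plain particle swarm optimizer over task-to-fog-node assignments whose objective adds a simplified DVFS energy term (dynamic power taken as proportional to the square of the operating frequency). It is a hypothetical illustration and does not reproduce the proposed SSA-PSO hybrid or the Markov-chain attack models.

```python
import random
from typing import Callable, List

def pso_schedule(num_tasks: int, num_nodes: int, cost: Callable[[List[int]], float],
                 swarm: int = 20, iters: int = 100,
                 w: float = 0.7, c1: float = 1.5, c2: float = 1.5) -> List[int]:
    """Basic particle swarm optimization over task-to-fog-node assignments.
    Continuous positions are rounded into node indices when evaluated."""
    def decode(pos): return [min(num_nodes - 1, max(0, int(round(x)))) for x in pos]

    positions = [[random.uniform(0, num_nodes - 1) for _ in range(num_tasks)]
                 for _ in range(swarm)]
    velocities = [[0.0] * num_tasks for _ in range(swarm)]
    pbest = [p[:] for p in positions]
    pbest_cost = [cost(decode(p)) for p in positions]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(iters):
        for i in range(swarm):
            for d in range(num_tasks):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
            c = cost(decode(positions[i]))
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = positions[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = positions[i][:], c
    return decode(gbest)

if __name__ == "__main__":
    work = [4.0, 2.0, 6.0, 3.0]       # task sizes (arbitrary units, assumed)
    freq = [1.0, 1.5, 2.0]            # DVFS frequency assumed per fog node
    def cost(assign):
        finish = [0.0] * len(freq)
        energy = 0.0
        for t, n in enumerate(assign):
            runtime = work[t] / freq[n]          # lower frequency -> slower execution
            finish[n] += runtime
            energy += (freq[n] ** 2) * runtime   # dynamic power ~ f^2 (simplified)
        return max(finish) + 0.3 * energy        # weighted makespan + energy
    print(pso_schedule(num_tasks=len(work), num_nodes=len(freq), cost=cost))
```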

    Situating Data

    Taking up the challenges of the datafication of culture, as well as of the scholarship of cultural inquiry itself, this collection contributes to the critical debate about data and algorithms. How can we understand the quality and significance of current socio-technical transformations that result from datafication and algorithmization? How can we explore the changing conditions and contours for living within such new and changing frameworks? How can, or should we, think and act within, but also in response to these conditions? This collection brings together various perspectives on the datafication and algorithmization of culture from debates and disciplines within the field of cultural inquiry, specifically (new) media studies, game studies, urban studies, screen studies, and gender and postcolonial studies. It proposes conceptual and methodological directions for exploring where, when, and how data and algorithms (re)shape cultural practices, create (in)justice, and (co)produce knowledge

    Reducing Internet Latency: A Survey of Techniques and their Merit

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint