
    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology that is fundamentally transforming the way computing services are delivered, cloud computing gives information and communication technology users convenient access to resources as services over the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding research topic, and a trend of applying Evolutionary Computation (EC) algorithms is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive and systematic survey of state-of-the-art approaches is presented. Looking forward, challenges and potential future research directions are discussed, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
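    As a concrete illustration of the kind of evolutionary scheduler this survey covers, the sketch below applies a minimal genetic algorithm to the task-to-VM mapping problem with makespan as the fitness. The task lengths, VM speeds, and GA parameters are hypothetical and are not taken from the survey.

```python
# Illustrative sketch (not from the survey): a minimal genetic algorithm that
# maps independent tasks onto virtual machines to minimize makespan.
import random

TASK_LEN = [40, 12, 55, 23, 8, 70, 31, 19]      # hypothetical task lengths (MI)
VM_SPEED = [10, 25, 15]                          # hypothetical VM speeds (MIPS)

def makespan(assignment):
    """Finish time of the busiest VM under a task->VM assignment."""
    load = [0.0] * len(VM_SPEED)
    for task, vm in enumerate(assignment):
        load[vm] += TASK_LEN[task] / VM_SPEED[vm]
    return max(load)

def evolve(pop_size=30, generations=200, mutation_rate=0.1):
    pop = [[random.randrange(len(VM_SPEED)) for _ in TASK_LEN] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                   # smaller makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_LEN))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation_rate:  # mutate one gene
                child[random.randrange(len(TASK_LEN))] = random.randrange(len(VM_SPEED))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, round(makespan(best), 2))
```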

    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet increasingly complex user needs. The composition process handles services with disparate functionalities well; however, the number of web services that exhibit similar functionality but varying Quality of Service (QoS) has increased significantly over the years. The problem therefore becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, for some attributes, minimized. This constitutes an NP-hard problem and is correspondingly difficult to solve. In this paper, a discussion of the concepts of web service composition and a holistic review of the composition techniques proposed in the literature are presented. Our review spans numerous publications in the field and can serve as a road map for future research.
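    To make the selection problem concrete, the following sketch aggregates QoS attributes for a sequential composition and enumerates candidate combinations exhaustively. The candidate services, their QoS values, and the utility weights are hypothetical; the exponential growth of this enumeration is what makes the general selection problem NP-hard.

```python
# Hedged sketch (not a method from the survey): aggregating QoS for a sequential
# composition and picking one candidate per abstract task by exhaustive search.
from itertools import product

# Hypothetical abstract tasks, each with two candidate services and their QoS values.
candidates = {
    "search": [{"rt": 120, "cost": 0.02, "avail": 0.99},
               {"rt": 200, "cost": 0.01, "avail": 0.97}],
    "book":   [{"rt": 300, "cost": 0.05, "avail": 0.995},
               {"rt": 180, "cost": 0.08, "avail": 0.98}],
    "pay":    [{"rt": 250, "cost": 0.03, "avail": 0.999},
               {"rt": 150, "cost": 0.06, "avail": 0.99}],
}

def aggregate(plan):
    """Sequential composition: response time and cost add, availability multiplies."""
    rt = sum(s["rt"] for s in plan)
    cost = sum(s["cost"] for s in plan)
    avail = 1.0
    for s in plan:
        avail *= s["avail"]
    return rt, cost, avail

def utility(plan, w_rt=0.5, w_cost=0.3, w_avail=0.2):
    rt, cost, avail = aggregate(plan)
    # Lower response time/cost and higher availability are better; weights are arbitrary.
    return -w_rt * rt - w_cost * cost * 1000 + w_avail * avail * 100

# Exhaustive enumeration grows with the product of the candidate-set sizes.
best = max(product(*candidates.values()), key=utility)
print(aggregate(best), round(utility(best), 2))
```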

    Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments

    A fully autonomous robot is defined by its capability to sense, understand, and move within the environment to perform a specific task. These qualities fall under the concept of navigation; among them, a basic and transcendent one is localization, the capacity of the system to know its position with respect to its surroundings. The localization problem can therefore be defined as searching for the robot's coordinates and rotation angles within a known environment. This thesis addresses the particular case of global localization, in which no information about the initial position is available and the robot relies only on its sensors. The work aims to develop several tools that allow the system to localize itself in the two most common geometric map representations: occupancy maps and point clouds. The former divides the space into equally sized cells coded with a binary value distinguishing free from occupied space; point clouds define obstacles and environment features as a sparse set of points in space, commonly measured with a laser sensor. Various algorithms are presented to search for that position using laser measurements only, in contrast with more usual methods that combine external information with the robot's own motion information (odometry). The system is thus capable of finding its position in indoor environments with no need for external positioning and without the influence of the uncertainty that motion sensors typically induce. The solution is addressed by implementing several stochastic optimization algorithms, or metaheuristics, specifically the bio-inspired ones commonly known as Evolutionary Algorithms. Inspired by natural phenomena, these algorithms are based on the evolution of a population of particles towards a solution through the optimization of a cost or fitness function that defines the problem. The implemented algorithms are Differential Evolution, Particle Swarm Optimization, and Invasive Weed Optimization, which mimic, respectively, evolution through mutation, the movement of swarms or flocks of animals, and the colonizing behavior of invasive plant species. The different implementations address the need to parameterize these algorithms for a search space as wide as a complete three-dimensional map, with exploratory behavior and with convergence conditions that terminate the search; because the process is a recursive estimation of an optimum, the solution is not known in advance. These implementations address the optimum localization search by comparing the laser measurements taken at the real position with those expected at each candidate particle in the known map. The cost function evaluates this similarity between real and estimated measurements and is therefore the function that defines the problem to be optimized. The common approach in localization or mapping with laser sensors is to use the mean square error or the absolute error between laser measurements as the optimization function. In this work, a different perspective is introduced by exploiting statistical distances, or divergences, which describe the similarity between probability distributions. By modeling the laser sensor as a probability distribution over the measured distance, the algorithm can benefit from the asymmetries of these divergences to favor or penalize different situations.
Hence, not only how much the laser scans differ but also how they differ can be evaluated. The results obtained in different maps, both simulated and real, show that the global localization problem is successfully solved by these methods, in both position and orientation. The implementation of divergence-based weighted cost functions provides great robustness and accuracy to the localization filters and an optimal response to different sources and levels of noise, whether from the sensor measurements, the environment, or obstacles that are not registered in the map.
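    The sketch below illustrates one way such a divergence-based cost could be formed: each beam is modelled as a Gaussian over its range, and a candidate pose is scored with a per-beam Kullback-Leibler divergence, which becomes asymmetric when the measured and expected beams are given different standard deviations. This is an illustrative reading of the idea, not the thesis' exact cost functions, and the scan values are synthetic.

```python
# Minimal sketch, not the thesis' exact cost function: score a candidate pose by the
# summed KL divergence between the real scan and the scan expected from the map.
import numpy as np

SIGMA_REAL = 0.05      # assumed sensor noise (m)
SIGMA_EXPECTED = 0.10  # assumed map/ray-casting uncertainty (m)

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """KL(P || Q) for 1-D Gaussians; asymmetric when sigma_p != sigma_q."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q ** 2) - 0.5)

def scan_cost(real_scan, expected_scan):
    """Fitness of a candidate pose, given the scan expected (ray-cast) from it in the map."""
    real = np.asarray(real_scan, dtype=float)
    expected = np.asarray(expected_scan, dtype=float)
    return float(np.sum(kl_gaussian(real, SIGMA_REAL, expected, SIGMA_EXPECTED)))

# Synthetic example: beam 2 sees an unmapped obstacle (shorter reading than the map predicts).
real = [2.10, 3.95, 1.20, 5.02]
expected = [2.00, 4.00, 3.00, 5.00]
print(scan_cost(real, expected))   # cost of this candidate pose
print(scan_cost(expected, real))   # note the asymmetry of the divergence
```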

    Stochastic Greedy-Based Particle Swarm Optimization for Workflow Application in Grid

    The workflow application is a common grid application. Its objective is to complete all tasks within the shortest time, i.e., with minimal makespan. A job scheduler with a highly efficient scheduling algorithm is required to solve workflow scheduling based on grid information. Scheduling problems are NP-complete and have been addressed well by metaheuristic algorithms. To attain effective solutions for workflow applications, an algorithm named stochastic greedy PSO (SGPSO) is proposed to solve workflow scheduling; a new velocity update rule based on a stochastic greedy strategy is suggested. Restated, stochastic greedy-driven search guidance is provided to the particles. Meanwhile, a stochastic greedy probability (SGP) parameter is designed to control whether the search behavior of the particles is exploitative or exploratory, improving search efficiency. The advantages of the proposed scheme are that it retains exploration capability during the search, reduces complexity and computation time, and is easy to implement. Retaining exploration capability during the search prevents particles from getting trapped in local optima. Additionally, the diversity of the proposed SGPSO is verified and analyzed. The experimental results demonstrate that the proposed SGPSO can effectively solve the workflow-class problems encountered in grid environments.
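    A hedged sketch of the stochastic-greedy idea follows: with probability SGP a dimension of a particle is copied from its personal or the global best (exploitation), otherwise it is re-sampled at random (exploration). This is one plausible interpretation for illustration, not the authors' exact SGPSO velocity update, and the task lengths and node speeds are hypothetical.

```python
# Illustrative stochastic-greedy PSO for task-to-node assignment (makespan fitness).
import random

TASK_LEN = [12, 30, 7, 22, 18, 9]   # hypothetical workflow task lengths
NODE_SPEED = [5, 8, 3]              # hypothetical grid node speeds

def makespan(position):
    load = [0.0] * len(NODE_SPEED)
    for task, node in enumerate(position):
        load[node] += TASK_LEN[task] / NODE_SPEED[node]
    return max(load)

def sgpso(num_particles=20, iterations=300, sgp=0.7):
    dims, nodes = len(TASK_LEN), len(NODE_SPEED)
    swarm = [[random.randrange(nodes) for _ in range(dims)] for _ in range(num_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=makespan)
    for _ in range(iterations):
        for i, p in enumerate(swarm):
            for d in range(dims):
                if random.random() < sgp:                     # exploitation: follow a best
                    p[d] = random.choice((pbest[i][d], gbest[d]))
                else:                                         # exploration: random move
                    p[d] = random.randrange(nodes)
            if makespan(p) < makespan(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest + [gbest], key=makespan)
    return gbest, makespan(gbest)

print(sgpso())
```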

    Dynamic Task Scheduling in Remote Sensing Data Acquisition from Open-Access Data Using CloudSim

    With the rapid development of cloud computing and network technologies, large-scale remote sensing data collection tasks are receiving growing interest from individuals and small and medium-sized enterprises. Large-scale remote sensing data collection has its own challenges, including limited available node resources, short collection windows, and low collection efficiency. Moreover, public remote sensing data sources impose restrictions on users, such as limits on IP access, request frequency, and bandwidth. To satisfy users' demand for accessing public remote sensing data collection nodes and to effectively increase the data collection speed, this paper proposes TSCD-TSA, a dynamic task scheduling algorithm that combines a BP neural network prediction algorithm with PSO-based task scheduling algorithms. Comparative experiments were carried out with the proposed task scheduling algorithms on an acquisition task using Sentinel-2 data. The experimental results show that the MAX-MAX-PSO dynamic task scheduling algorithm achieves a smaller fitness value and a faster convergence speed.
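    For illustration only, the snippet below shows the kind of fitness such a PSO-based scheduler would minimise: the completion time of the slowest collection node, given per-node download rates that in the paper would come from the BP neural network prediction but are stand-in constants here. File sizes, rates, and the example assignment are hypothetical.

```python
# Illustrative fitness for a PSO particle: makespan of a task-to-node download assignment.
def completion_time(assignment, file_sizes_mb, predicted_rate_mbps):
    """Finish time of the slowest node under a task->node assignment."""
    finish = [0.0] * len(predicted_rate_mbps)
    for task, node in enumerate(assignment):
        finish[node] += file_sizes_mb[task] / predicted_rate_mbps[node]
    return max(finish)

sizes = [800, 1200, 650, 900, 400]   # granule sizes in MB (hypothetical)
rates = [40.0, 25.0, 60.0]           # predicted node throughputs in MB/s (hypothetical)
print(completion_time([0, 2, 1, 2, 0], sizes, rates))
```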

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features, such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand and pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth, very many users may require services or need to execute their tasks simultaneously on the resources provided by service providers. To deliver these services with the best performance, minimum cost, response time, and makespan, and effective use of resources, an intelligent and efficient task scheduling technique is required; task scheduling is considered one of the main and essential issues in the cloud computing environment, since it allocates tasks to the proper cloud resources and optimizes overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suited to the various computing environments and to the needs of different types of individuals and organizations. This article provides a classification of the scheduling strategies and algorithms proposed for the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given, and the future research work in the reviewed articles (where available) is pointed out. The work reviews 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each reviewed article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
    Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective.
    DOI: 10.7176/IKM/12-5-03
    Publication date: September 30th 2022
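    As a small illustration of two of the metrics on which the reviewed algorithms are compared, the sketch below computes the makespan and average response time of a given schedule; the execution-time matrix and assignment are hypothetical.

```python
# Hedged illustration of schedule metrics: makespan and average response time.
def schedule_metrics(exec_time, assignment):
    """exec_time[t][v]: runtime of task t on VM v; assignment[t]: chosen VM for task t."""
    vm_clock = {}
    finish_times = []
    for task, vm in enumerate(assignment):
        start = vm_clock.get(vm, 0.0)        # tasks on one VM run back to back
        end = start + exec_time[task][vm]
        vm_clock[vm] = end
        finish_times.append(end)
    makespan = max(vm_clock.values())
    avg_response = sum(finish_times) / len(finish_times)  # all tasks assumed to arrive at t=0
    return makespan, avg_response

exec_time = [[4, 8], [6, 3], [2, 5], [7, 4]]   # 4 tasks x 2 VMs, hypothetical runtimes
print(schedule_metrics(exec_time, [0, 1, 0, 1]))
```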

    Storage Capacity Estimation of Commercial Scale Injection and Storage of CO2 in the Jacksonburg-Stringtown Oil Field, West Virginia

    Geological carbon capture, utilization and storage (CCUS) of carbon dioxide (CO2) in depleted oil and gas reservoirs is one method to reduce greenhouse gas emissions while enhancing oil recovery (EOR) and extending the life of the field. CCUS coupled with EOR is therefore considered an economical approach to demonstrating commercial-scale injection and storage of anthropogenic CO2. Several critical issues should be taken into account prior to injecting large volumes of CO2, such as storage capacity, project duration, and long-term containment. Reservoir characterization and 3D geological modeling are the best way to estimate the theoretical CO2 storage capacity in mature oil fields.
    The Jacksonburg-Stringtown field, located in northwestern West Virginia, has produced over 22 million barrels of oil (MMBO) since 1895; the sandstone of the Late Devonian Gordon Stray is the primary reservoir. These Upper Devonian fluvial sandstone reservoirs are an ideal candidate for CO2 sequestration coupled with EOR. Supercritical depth (>2500 ft), minimum miscibility pressure (941 psi), favorable API gravity (46.5°), and good waterflood response are indicators that favor CO2-EOR operations. Moreover, the Jacksonburg-Stringtown oil field is adjacent to a large concentration of CO2 sources located along the Ohio River that could potentially supply enough CO2 for sequestration and EOR without constructing new pipeline facilities.
    Permeability is a critical parameter for understanding subsurface fluid flow and for reservoir management in primary and enhanced hydrocarbon recovery and efficient carbon storage. In this study, a rapid, robust, and cost-effective artificial neural network (ANN) model is constructed to predict permeability, exploiting the model's strong ability to recognize possible interrelationships between input and output variables. Two commonly available conventional well logs, gamma ray and bulk density, and three log-derived variables, the slope of GR, the slope of bulk density, and Vsh, were selected as input parameters, with permeability as the desired output, to train and test the network. The results indicate that the ANN model can be applied effectively to permeability prediction.
    Porosity is another fundamental property that characterizes the storage capability of fluid- and gas-bearing formations in a reservoir. In this study, a support vector machine (SVM) with a mixed kernel function (MKF) is used to construct the relationship between limited conventional well log suites and sparse core data. The input parameters for the SVM model consist of core porosity values and the same log suite as the ANN's inputs, with porosity as the desired output. Compared with an SVM model using a single kernel function, the mixed kernel function based SVM model provides more accurate porosity predictions.
    Based on the well log analysis, four reservoir subunits within a marine-dominated estuarine depositional system are defined: barrier sand, central bay shale, tidal channel, and fluvial channel subunits. A 3-D geological model, used to estimate the theoretical CO2 sequestration capacity, is constructed by integrating core data, wireline log data, and geological background knowledge. Based on this 3-D geological model, the best regions for coupled CCUS-EOR are located in the southern portions of the field, and the estimated theoretical CO2 storage capacity for the Jacksonburg-Stringtown oil field varies between 24 and 383 million metric tons. The estimates of CO2 sequestration and EOR potential indicate that the Jacksonburg-Stringtown oil field has significant potential for CO2 storage and value-added EOR.
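    As an illustration of the log-to-permeability mapping described above, the sketch below fits a small multilayer perceptron to synthetic data with the same five inputs (gamma ray, bulk density, their slopes, and Vsh). It uses scikit-learn's MLPRegressor as a stand-in; it is not the authors' trained network or their data.

```python
# Sketch only, with synthetic data standing in for the well-log inputs and permeability target.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 200
# Columns: GR, bulk density, slope(GR), slope(density), Vsh -- all synthetic.
X = rng.normal(size=(n, 5))
# Synthetic target loosely tying "permeability" to the inputs, just to have something to fit.
y = np.exp(0.5 * X[:, 1] - 0.3 * X[:, 4] + 0.1 * rng.normal(size=n))

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:150], y[:150])              # train on the first 150 synthetic samples
print(model.score(X[150:], y[150:]))     # R^2 on the held-out synthetic samples
```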

    Investigation of service selection algorithms for grid services

    Grid computing has emerged as a global platform that supports organizations in the coordinated sharing of distributed data, applications, and processes. Grid computing has also leveraged web services to define standard interfaces for Grid services, adopting the service-oriented view. Consequently, there have been significant efforts to offer applications that tackle computationally intensive problems as services on the Grid. To ensure that the available services are assigned efficiently to the high volume of incoming requests, a robust service selection algorithm is required. The selection algorithm should not only increase access to the distributed services, promoting operational flexibility and collaboration, but should also allow service providers to scale efficiently to meet a variety of demands while adhering to current Quality of Service (QoS) standards. In this research, two service selection algorithms are proposed: the Particle Swarm Intelligence based Service Selection Algorithm (PSI Selection Algorithm), which builds on the Multiple Objective Particle Swarm Optimization algorithm with the crowding distance technique, and the Constraint Satisfaction based Selection (CSS) algorithm. The proposed selection algorithms are designed to achieve the following goals: handling a large number of incoming requests simultaneously; achieving high match scores in the case of competitive matching of similar types of incoming requests; assigning each service efficiently to the incoming requests; giving service requesters the flexibility to provide multiple selection criteria based on a QoS metric; and selecting appropriate services for the incoming requests within a reasonable time. The two algorithms are then verified against a standard assignment problem algorithm, the Munkres algorithm. The feasibility and accuracy of the proposed algorithms are tested using various evaluation methods based on real-world scenarios, primarily assessing how closely the requests are matched to the available services given the QoS parameters provided by the requesters.
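    The Munkres verification step can be reproduced in miniature with SciPy's Hungarian-algorithm solver, as sketched below; the request-to-service QoS match scores are hypothetical.

```python
# Hedged sketch: an optimal request-to-service assignment via the Munkres (Hungarian) algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# score[r][s]: QoS match score of request r against service s (higher is better).
score = np.array([
    [0.9, 0.4, 0.7],
    [0.3, 0.8, 0.5],
    [0.6, 0.2, 0.9],
])

# linear_sum_assignment minimises total cost, so negate the scores to maximise them.
rows, cols = linear_sum_assignment(-score)
for r, s in zip(rows, cols):
    print(f"request {r} -> service {s} (score {score[r, s]})")
print("total match score:", score[rows, cols].sum())
```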
