
    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience, delivering resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms for it is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
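
    As a concrete illustration of the EC-based scheduling the survey discusses, the sketch below applies a plain genetic algorithm to the simplest variant of the problem: mapping independent tasks onto virtual machines to minimize makespan. The task lengths, VM speeds, and GA parameters are hypothetical stand-ins, not taken from the survey.

```python
# A minimal sketch of an EC scheduler: a genetic algorithm assigning
# independent tasks to VMs to minimize makespan. All data is illustrative.
import random

TASK_LEN = [8, 3, 5, 9, 2, 7, 4, 6]   # hypothetical task workloads
VM_SPEED = [2.0, 1.0, 1.5]            # hypothetical VM processing speeds

def makespan(assign):
    # Finish time of the busiest VM under a task -> VM assignment.
    load = [0.0] * len(VM_SPEED)
    for task, vm in enumerate(assign):
        load[vm] += TASK_LEN[task] / VM_SPEED[vm]
    return max(load)

def evolve(pop_size=30, generations=200, mutation=0.1):
    pop = [[random.randrange(len(VM_SPEED)) for _ in TASK_LEN]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                 # lower makespan = fitter
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_LEN))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mutation:     # point mutation
                child[random.randrange(len(TASK_LEN))] = \
                    random.randrange(len(VM_SPEED))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```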

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and our desire to make the "better" decision drives it. An objective function or performance index describes how the goodness of an alternative is assessed, and the theory and methods of optimization are concerned with picking the best option. There are two types of optimization methods: deterministic and stochastic. The first is the traditional approach, which works well for small and linear problems; however, it struggles to address most real-world problems, which are high-dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposes two stochastic, robust swarm-based metaheuristic optimization methods. Both are hybrid algorithms, formulated by combining the Particle Swarm Optimization (PSO) and Salp Swarm Algorithm (SSA). These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling in multiple fog environments.

    Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments because they occupy the fog's resources and keep them busy. Fog environments therefore generally have fewer resources available during such attacks, which in turn affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing both the number of workflows that miss their deadlines and the number of tasks that are offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm, comprising SSA and PSO, as a solution for the workflow scheduling problem across fog computing locations. To deal with the effects of DDoS attacks on fog locations, two discrete-time Markov-chain schemes were used: one estimates the average network bandwidth available in each fog, while the other estimates the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts their influence on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of tasks offloaded to cloud data centers, and it can also decrease the number of workflows with missed deadlines.

    Moreover, green fog computing is growing in importance, as energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling methods can mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To address these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to improve the energy efficiency of processors. The experimental findings demonstrate that the proposed method, combined with DVFS, yields improved outcomes, including lower energy consumption. Consequently, this approach emerges as a more environmentally friendly and sustainable solution for fog computing environments.
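
    To make the hybridization idea tangible, here is a minimal sketch of one plausible way to combine the SSA and PSO update rules on a continuous test function. The thesis's actual workflow encoding, its DDoS-aware resource estimates, and its DVFS integration are not reproduced; the objective, bounds, and coefficients below are illustrative assumptions.

```python
# A sketch of one plausible SSA/PSO hybrid: the better half of the swarm
# moves with the SSA leader/follower rule, the rest with a PSO velocity
# update. Sphere function and all parameters are illustrative stand-ins.
import math
import random

DIM, LB, UB = 10, -5.0, 5.0          # hypothetical search space

def fitness(x):
    # Sphere function as a stand-in objective (lower is better).
    return sum(v * v for v in x)

def clamp(x):
    return [min(UB, max(LB, v)) for v in x]

def hybrid_ssa_pso(pop_size=30, iters=300, w=0.7, c=1.5):
    X = [[random.uniform(LB, UB) for _ in range(DIM)] for _ in range(pop_size)]
    V = [[0.0] * DIM for _ in range(pop_size)]   # velocities for the PSO half
    best = min(X, key=fitness)[:]
    for t in range(1, iters + 1):
        X.sort(key=fitness)   # velocities stay per slot; a simplification
        c1 = 2 * math.exp(-((4 * t / iters) ** 2))   # SSA exploration factor
        for i in range(pop_size):
            if i == 0:
                # SSA leader: move around the food source (global best).
                X[i] = clamp([best[j] + random.choice((-1, 1)) * c1 *
                              ((UB - LB) * random.random() + LB)
                              for j in range(DIM)])
            elif i < pop_size // 2:
                # SSA followers: midpoint of self and chain predecessor.
                X[i] = [(X[i][j] + X[i - 1][j]) / 2 for j in range(DIM)]
            else:
                # PSO half: velocity pulled toward the global best
                # (personal bests omitted for brevity).
                for j in range(DIM):
                    V[i][j] = w * V[i][j] + c * random.random() * (best[j] - X[i][j])
                X[i] = clamp([X[i][j] + V[i][j] for j in range(DIM)])
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand[:]
    return best

print(fitness(hybrid_ssa_pso()))   # should approach 0 on the sphere function
```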

    Automated processing for map generalization using web services

    In map generalization, various operators are applied to the features of a map in order to maintain and improve its legibility after the scale has been changed. These operators must be applied in the proper sequence, and the quality of the results must be continuously evaluated. Cartographic constraints can be used to define the conditions that must be met for a map to be legible and compliant with user needs. The combinatorial optimization approaches shown in this paper use cartographic constraints to control and restrict the selection and application of a variety of independent generalization operators into an optimal sequence. Different optimization techniques, including hill climbing, simulated annealing, and genetic deep search, are presented and evaluated experimentally on the generalization of buildings in blocks. All algorithms used in this paper have been implemented in a web services framework, which allows distributed and parallel processing to speed up the search for an optimized generalization operator sequence.
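
    The sketch below illustrates the simulated-annealing variant of this search: candidate states are sequences of generalization operators, and the cost counts violated cartographic constraints. The operator names and the constraint evaluation are illustrative stand-ins; in the real system each sequence would be executed via the web services and the resulting map measured.

```python
# Simulated annealing over operator sequences, as a hedged sketch.
import math
import random

OPERATORS = ["simplify", "displace", "aggregate", "enlarge", "typify"]

def violated_constraints(seq):
    # Stand-in cost: the real system would execute the sequence and count
    # the cartographic constraints still violated on the generalized map.
    rng = random.Random(hash(tuple(seq)))
    return rng.randint(0, 10)

def neighbour(seq):
    # Swap one operator in the sequence for a random alternative.
    s = list(seq)
    s[random.randrange(len(s))] = random.choice(OPERATORS)
    return s

def anneal(length=6, t0=5.0, cooling=0.995, steps=2000):
    cur = [random.choice(OPERATORS) for _ in range(length)]
    cur_cost = violated_constraints(cur)
    best, best_cost, t = cur, cur_cost, t0
    for _ in range(steps):
        cand = neighbour(cur)
        cand_cost = violated_constraints(cand)
        # Always accept improvements; accept worse sequences with
        # Boltzmann probability so the search can escape local optima.
        if cand_cost <= cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling
    return best, best_cost

print(anneal())
```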

    Human-Centered Automation for Resilience in Acquiring Construction Field Information

    Resilient acquisition of timely, detailed job-site information plays a pivotal role in maintaining the productivity and safety of construction projects that have busy schedules, dynamic workspaces, and unexpected events. In the field, construction information acquisition often involves three types of activities: sensor-based inspection, manual inspection, and communication. Human interventions play critical roles in all three, so a resilient information acquisition system is needed for safer and more productive construction. Various automation technologies could help improve human performance by proactively providing the knowledge needed to use equipment, improving situation awareness in multi-person collaborations, and reducing the mental workload of operators and inspectors. Unfortunately, few studies consider human factors in automation techniques for construction field information acquisition. Fully utilizing automation techniques requires a systematic synthesis of the interactions between humans, tasks, and the construction workspace to reduce the complexity of information acquisition tasks so that humans can complete them reliably. Such a synthesis of human factors in field data collection and analysis paves the path towards "Human-Centered Automation" (HCA) in construction management. HCA could form a computational framework that supports resilient field data collection considering human factors and unexpected events on dynamic job sites.

    This dissertation presents an HCA framework for resilient construction field information acquisition and the results of examining three HCA approaches that support three use cases of construction field data collection and analysis. The first is an automated data collection planning method that assists the 3D laser scan planning of construction inspectors to achieve comprehensive and efficient data collection. The second is a Bayesian model-based approach that automatically aggregates the common sense of people from the internet to identify job-site risks from a large number of job-site pictures. The third is an automatic communication protocol optimization approach that maximizes the team situation awareness of construction workers and leads to early detection of workflow delays and critical path changes. Data collection and simulation experiments extensively validate these three HCA approaches.
    Doctoral Dissertation, Civil, Environmental and Sustainable Engineering, 201
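
    As a rough illustration of the second HCA approach, the sketch below fuses independent crowd judgments about whether a job-site photo shows a hazard using Bayes' rule. The annotator accuracies, the prior, and the binary-label setting are hypothetical simplifications, not the dissertation's actual Bayesian model.

```python
# Bayesian aggregation of crowd labels for hazard detection, as a sketch.
def posterior_risk(votes, accuracies, prior=0.5):
    """votes[i] is True if annotator i flagged a hazard; accuracies[i] is
    the assumed probability that annotator i labels correctly."""
    p_risk, p_safe = prior, 1.0 - prior
    for vote, acc in zip(votes, accuracies):
        # Likelihood of each vote under each hypothesis, assuming
        # annotators judge independently.
        p_risk *= acc if vote else (1.0 - acc)
        p_safe *= (1.0 - acc) if vote else acc
    return p_risk / (p_risk + p_safe)

# Three annotators, two flag a hazard: posterior hazard probability.
print(posterior_risk([True, True, False], [0.9, 0.8, 0.7]))
```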

    Steady-State for Batches of Identical Task Graphs

    In this paper, we focus on the problem of scheduling batches of identical task graphs on a heterogeneous platform, where each task graph is a tree. We rely on steady-state scheduling and aim at reaching the optimal throughput of the system. Contrary to previous studies, we concentrate on the scheduling of batches of limited size, trying to reduce the processing time of each instance and thus making steady-state scheduling applicable to smaller batches. The problem is proven NP-complete, and a mixed integer program is presented to solve it. Then, different solutions, using steady-state scheduling or not, are evaluated through comprehensive simulations.
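
    For intuition, the sketch below states a relaxed steady-state formulation as a linear program in PuLP: choose the rate at which each task type is processed on each processor so as to maximize batch throughput, subject to processor capacity. Task costs and processor speeds are made up, and the paper's communication constraints and integer variables are omitted.

```python
# A relaxed steady-state throughput LP, sketched with PuLP's CBC solver.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

COST = {"t1": 4.0, "t2": 2.0, "t3": 3.0}   # hypothetical work per task type
SPEED = {"p1": 2.0, "p2": 1.0}             # hypothetical processor speeds

prob = LpProblem("steady_state", LpMaximize)
rho = LpVariable("rho", lowBound=0)        # throughput: instances per time unit
a = {(t, p): LpVariable(f"a_{t}_{p}", lowBound=0)   # rate of task t on proc p
     for t in COST for p in SPEED}

prob += rho                                # objective: maximize throughput
for t in COST:
    # Every instance of each task must be fully processed somewhere.
    prob += lpSum(a[t, p] for p in SPEED) == rho
for p in SPEED:
    # Processor capacity: total busy fraction cannot exceed 1.
    prob += lpSum(a[t, p] * COST[t] / SPEED[p] for t in COST) <= 1.0

prob.solve(PULP_CBC_CMD(msg=False))
print(rho.value(), {k: v.value() for k, v in a.items()})
```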

    Database Workload Management (Dagstuhl Seminar 12282)

    This report documents the program and the outcomes of Dagstuhl Seminar 12282, "Database Workload Management". The seminar was designed to provide a venue where researchers could engage in dialogue with industrial participants for an in-depth exploration of challenging industrial workloads, where industrial participants could challenge researchers to apply the lessons learned from their large-scale experiments to multiple real systems, and which would facilitate the release of real workloads to drive future research, along with concrete measures for evaluating and comparing workload management techniques in the context of these workloads.

    2022 Review of Data-Driven Plasma Science

    Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections, in addition to the introduction and summary. Following an overview of fundamental data-driven science, five other sections cover widely studied topics of plasma science and technologies: basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents a summary and outlook. Despite recent impressive progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and to identify where further innovations are required.