
    Data-aware workflow scheduling in heterogeneous distributed systems

    Data transfer in scientific workflows is attracting growing attention because the large amounts of data generated by complex scientific workflows can significantly increase the turnaround time of the whole workflow. It is almost impossible to produce an optimal, or approximately optimal, schedule for an end-to-end workflow without considering intermediate data movement. To reduce the complexity of the workflow-scheduling problem, most research done so far is constrained by many unrealistic assumptions, which result in non-optimal scheduling in practice. One constraint imposed by most researchers in their algorithms is that a computation site can start executing other tasks only after it has completed the current task and delivered the data that task generated. We relax this constraint and allow execution to overlap with data movement in order to improve task parallelism in the workflow. Furthermore, we generalize the conventional workflow model to allow data to be staged in(out) from(to) remote data centers, and we design and implement an efficient data-aware scheduling strategy. Experimental results show that our scheduling strategy significantly reduces turnaround time in heterogeneous distributed systems.

    To reduce end-to-end workflow turnaround time, it is also crucial to deliver the input, output, and intermediate data as fast as possible. However, a single TCP stream often achieves much lower throughput than expected, leaving network bandwidth underutilized. Multiple TCP streams improve throughput, but throughput does not increase monotonically with the number of parallel streams. Based on this observation, we improve existing throughput prediction models and design and implement a TCP throughput estimation and optimization service for distributed systems that determines the optimal configuration of parallel TCP streams. Experimental results show that the proposed service predicts throughput dynamically with high accuracy and increases throughput significantly. Throughput optimization combined with data-aware workflow scheduling allows us to minimize end-to-end workflow turnaround time
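    To make the stream-tuning idea concrete, the sketch below fits the widely used throughput model Th(n) = n / sqrt(a·n² + b·n + c) from three probe transfers and then picks the stream count that maximizes predicted throughput. The model form and the sample measurements are illustrative assumptions, not the paper's exact service.

```python
# A minimal sketch of a parallel-stream throughput model, assuming the
# common form Th(n) = n / sqrt(a*n^2 + b*n + c), whose coefficients can
# be fitted from three sample transfers. The probe numbers are hypothetical.
import numpy as np

def fit_throughput_model(samples):
    """Fit (a, b, c) from three (n_streams, throughput) measurements.

    Rearranging Th(n) = n / sqrt(a*n^2 + b*n + c) gives the linear
    relation n^2 / Th(n)^2 = a*n^2 + b*n + c.
    """
    A = np.array([[n**2, n, 1.0] for n, _ in samples])
    y = np.array([n**2 / th**2 for n, th in samples])
    return np.linalg.solve(A, y)  # coefficients (a, b, c)

def predicted_throughput(n, coeffs):
    a, b, c = coeffs
    return n / np.sqrt(a * n**2 + b * n + c)

def optimal_stream_count(coeffs, max_streams=64):
    """Pick the stream count that maximizes predicted throughput."""
    return max(range(1, max_streams + 1),
               key=lambda n: predicted_throughput(n, coeffs))

# Hypothetical probe transfers: (number of streams, measured Mb/s).
samples = [(1, 52.4), (4, 229.4), (16, 571.4)]
coeffs = fit_throughput_model(samples)
print("optimal number of parallel streams:", optimal_stream_count(coeffs))
```

    With these sample points the fitted curve peaks at an interior stream count (around 20 here), reflecting the abstract's observation that throughput does not grow monotonically with the number of streams.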

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly thanks to attractive features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand and pay-per-use services, access from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on the resources provided by service providers. To deliver these services with the best performance, minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; it is considered one of the main and essential issues in the cloud computing environment. Task scheduling is necessary for allocating tasks to the proper cloud resources and for optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms to suit various computing environments and to satisfy the needs of different types of individuals and organizations. This article provides a classification of the scheduling strategies and algorithms proposed for the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given, and the future research directions in the reviewed articles (where available) are pointed out. The review covers 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each reviewed article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
    Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective.
    DOI: 10.7176/IKM/12-5-03
    Publication date: September 30th 2022
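    As a concrete instance of the heuristic classes such a survey covers, here is a minimal sketch of the classic min-min batch scheduling heuristic; the ETC (expected time to compute) matrix values are hypothetical.

```python
# A minimal, illustrative sketch of the min-min heuristic, a representative
# of the batch-mode scheduling class. The ETC matrix below is hypothetical.
def min_min(etc):
    """etc[t][m] = expected run time of task t on machine m.
    Returns (assignment, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # time each machine becomes free
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # Among all (task, machine) pairs, take the globally earliest
        # completion time: the task with the smallest minimum completion
        # time is scheduled first, on its best machine.
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unscheduled for m in range(n_machines)),
            key=lambda x: x[2],
        )
        assignment[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return assignment, max(ready)

etc = [[14.0, 16.0], [5.0, 11.0], [20.0, 7.0], [9.0, 12.0]]
print(min_min(etc))
```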

    Hybrid ant colony system and genetic algorithm approach for scheduling of jobs in computational grid

    Metaheuristic algorithms have been used to solve scheduling problems in grid computing. However, stand-alone metaheuristic algorithms do not always perform well on every problem instance. This study proposes a high-level hybrid of the ant colony system and a genetic algorithm for job scheduling in grid computing. The proposed hybrid approach is evaluated on the static benchmark problems known as the ETC matrix. Experimental results show that the proposed hybridization of the two algorithms outperforms the stand-alone algorithms in terms of best and average makespan values
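    A minimal sketch of the high-level hybridization pattern described here, under the assumption that the ant-colony phase runs first and its best schedules seed the GA's initial population; the tiny ETC matrix, the simplified ant step, and all parameters are illustrative rather than the authors' exact method.

```python
# High-level ACS -> GA hybrid sketch: an ant-colony-style constructor
# produces good schedules first; those schedules seed the GA population.
import random

ETC = [[14, 16, 9], [5, 11, 13], [20, 7, 6], [9, 12, 18], [4, 10, 8]]
N_TASKS, N_MACHINES = len(ETC), len(ETC[0])

def makespan(schedule):                  # schedule[t] = machine for task t
    load = [0.0] * N_MACHINES
    for t, m in enumerate(schedule):
        load[m] += ETC[t][m]
    return max(load)

def ant_construct(pheromone, alpha=1.0, beta=2.0):
    """Build one schedule, biased by pheromone and a 1/ETC heuristic."""
    schedule = []
    for t in range(N_TASKS):
        weights = [pheromone[t][m] ** alpha * (1.0 / ETC[t][m]) ** beta
                   for m in range(N_MACHINES)]
        schedule.append(random.choices(range(N_MACHINES), weights)[0])
    return schedule

def acs_phase(n_iter=30, n_ants=10, rho=0.1):
    pheromone = [[1.0] * N_MACHINES for _ in range(N_TASKS)]
    elites = []
    for _ in range(n_iter):
        ants = [ant_construct(pheromone) for _ in range(n_ants)]
        best = min(ants, key=makespan)
        elites.append(best)
        for t, m in enumerate(best):     # reinforce the iteration best
            pheromone[t][m] = (1 - rho) * pheromone[t][m] + rho / makespan(best)
    return sorted(elites, key=makespan)[:10]

def ga_phase(seed_population, n_gen=50, p_mut=0.1):
    pop = list(seed_population)          # GA starts from ACS elites
    for _ in range(n_gen):
        a, b = random.sample(pop, 2)
        cut = random.randrange(1, N_TASKS)        # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < p_mut:               # reassign one task
            child[random.randrange(N_TASKS)] = random.randrange(N_MACHINES)
        pop.append(child)
        pop = sorted(pop, key=makespan)[:len(seed_population)]
    return pop[0]

best = ga_phase(acs_phase())
print(best, makespan(best))
```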

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The huge and unpredictable volumes of data generated nowadays by smart devices in IoT and mobile crowdsensing applications (sensors, smartphones, Wi-Fi routers) need processing power and storage. The cloud provides these capabilities to serve organizations and customers, but its use exposes some limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that assigns virtual machines when multiple applications require various resources such as CPU, I/O, and memory, whereas scheduling is the process of determining the sequence in which tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing currently faces and present a comprehensive review of resource allocation and scheduling techniques that address these limitations. Finally, the reviewed allocation and scheduling techniques and strategies are compared in a table, together with their drawbacks
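    To make the two concepts concrete, the sketch below pairs a first-fit allocator (mapping a VM request to a host with enough CPU and memory) with a shortest-job-first ordering of the tasks queued on that host; the capacities and policy choices are illustrative assumptions, not the paper's taxonomy.

```python
# Minimal sketch: allocation picks a host with enough free CPU/memory
# for a VM request; scheduling orders the tasks queued on that host.
from dataclasses import dataclass, field

@dataclass
class Host:
    cpu: int
    mem: int
    queue: list = field(default_factory=list)

def allocate(hosts, req_cpu, req_mem):
    """First-fit allocation: return the first host that can hold the VM."""
    for h in hosts:
        if h.cpu >= req_cpu and h.mem >= req_mem:
            h.cpu -= req_cpu
            h.mem -= req_mem
            return h
    return None  # no capacity: reject, or offload to another tier

def schedule(host):
    """Shortest-job-first ordering of the (name, runtime) tasks queued."""
    host.queue.sort(key=lambda task: task[1])
    return host.queue

hosts = [Host(cpu=4, mem=8), Host(cpu=8, mem=16)]
h = allocate(hosts, req_cpu=6, req_mem=8)        # lands on the second host
h.queue += [("sense", 5.0), ("aggregate", 1.0), ("upload", 2.0)]
print(schedule(h))                               # aggregate, upload, sense
```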

    Dimensioning and workload-distribution algorithms for lambda Grids

    Grids consist of a collection of computing and storage elements that may be geographically dispersed but whose combined capacity one wishes to exploit. To that end, these elements must be interconnected by a network. Since many scientific applications use a Grid, and these applications typically process large volumes of data, a network is needed that can transport such large data streams reliably. Optical transport networks are ideally suited to this task. Grids that use such a network are called lambda Grids. This thesis describes a framework for the design and dimensioning of optical networks for lambda Grids. It also discusses how workload can be distributed across a Grid once it has been dimensioned. A large part of the results was obtained through simulation, using a custom Grid simulation package that focuses specifically on network and Grid elements. The design of this simulator, and the accompanying implementation choices, are therefore discussed at length in this work

    RSCCGA: Resource Scheduling for Cloud Computing by Genetic Algorithm

    Cloud computing, also known as on-line computing, is a kind of Internet-based computing that provides shared processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in third-party data centers. It relies on sharing resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. Scheduling is an important issue in the management of cloud resources, because a data center receives far too many requests for scheduling to be done manually. Scheduling algorithms therefore play an important role in cloud computing: the goal of scheduling is to reduce response times and improve resource utilization. Computing resources, whether software or hardware, are virtualized and allocated as services from providers to users, and can be allocated dynamically according to the requirements and preferences of consumers. Traditional system-centric resource management architectures cannot handle resource assignment and dynamically allocate the available resources in a cloud computing environment. This paper proposes a resource scheduling model for cloud computing based on the genetic algorithm. Experiments show that the proposed method outperforms other methods.
    Keywords: Cloud Computing, Resource Management, Scheduling, Bandwidth Consumption, Waiting Time, Genetic algorithm
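    A minimal sketch of the GA encoding such a scheduler typically uses: a chromosome maps each task to a VM, and fitness combines makespan with mean waiting time, echoing the paper's waiting-time objective. The runtimes, weights, and tournament selection are illustrative assumptions rather than the authors' exact design.

```python
# GA sketch for task-to-VM scheduling: chromosome[t] = VM for task t;
# fitness blends makespan and mean waiting time. All numbers hypothetical.
import random

RUNTIME = [8.0, 3.0, 6.0, 2.0, 9.0, 4.0]     # task runtimes (hypothetical)
N_VMS = 3

def fitness(chrom, w=0.7):
    finish, waits = [0.0] * N_VMS, []
    for task, vm in enumerate(chrom):
        waits.append(finish[vm])             # time spent queued on the VM
        finish[vm] += RUNTIME[task]
    return w * max(finish) + (1 - w) * (sum(waits) / len(waits))

def evolve(pop_size=30, n_gen=100, p_mut=0.15):
    pop = [[random.randrange(N_VMS) for _ in RUNTIME] for _ in range(pop_size)]
    for _ in range(n_gen):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection: best of 3 random individuals, twice.
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(RUNTIME))          # crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                      # mutation
                child[random.randrange(len(RUNTIME))] = random.randrange(N_VMS)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```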

    Improved time quantum length estimation for round robin scheduling algorithm using neural network

    In most applications that use the Round Robin (RR) scheduling algorithm, the quantum length is taken to be fixed. Many attempts aim to determine the optimal quantum length that yields a small average turnaround time, but the unknown nature of the tasks in the ready queue makes the problem more complicated: a large quantum makes the RR algorithm behave like a First Come First Served (FCFS) scheduling algorithm, while a small quantum causes a high number of context switches. In this paper we propose an RR scheduling algorithm based on neural network models that predict the optimal quantum length, leading to a minimum average turnaround time. The predicted quantum length depends on the burst times of the tasks available in the ready queue. Unlike conventional methods that use a fixed quantum length, this approach gives better results by minimizing the average turnaround time for almost any set of jobs in the ready queue
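    The sketch below shows the objective such a predictor learns: given the burst times in the ready queue, choose the quantum that minimizes average turnaround time. A brute-force search stands in here for the trained neural network, and the burst times and context-switch cost are hypothetical.

```python
# Round Robin simulation: average turnaround time as a function of the
# quantum, with a per-preemption context-switch cost so that both very
# small and very large quanta are penalized. Numbers are hypothetical.
from collections import deque

def rr_avg_turnaround(bursts, quantum, switch_cost=0.5):
    """Simulate RR (all jobs arrive at t=0); return average turnaround."""
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    t, total = 0.0, 0.0
    while queue:
        job = queue.popleft()
        run = min(quantum, remaining[job])
        t += run
        remaining[job] -= run
        if remaining[job] > 0:
            queue.append(job)
            t += switch_cost            # preemption costs a context switch
        else:
            total += t                  # job finishes at time t
    return total / len(bursts)

def best_quantum(bursts, candidates=range(1, 51)):
    """Exhaustive search over candidate quanta; in the paper a neural
    network predicts this value from the burst times instead."""
    return min(candidates, key=lambda q: rr_avg_turnaround(bursts, q))

bursts = [24, 3, 3, 12, 7]
q = best_quantum(bursts)
print(q, rr_avg_turnaround(bursts, q))
```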