
    Task scheduling mechanisms for fog computing: A systematic survey

    In the Internet of Things (IoT) ecosystem, combining Fog Computing (FC) with cloud computing allows some processing to be performed near the data production sites, at higher speed and without the need for high bandwidth. Fog computing therefore offers advantages for real-time systems that require high-speed connectivity. Because fog nodes have limited resources, one of the most important challenges of FC is meeting dynamic demands in real time, and a key issue in the fog environment is the optimal assignment of tasks to fog nodes. An efficient scheduling algorithm should reduce quality parameters such as cost and energy consumption while taking into account the heterogeneity of fog nodes and the commitment to complete tasks within their deadlines. This study provides a detailed taxonomy to give a better understanding of the research issues and to distinguish the important challenges in existing work. A systematic overview of existing task scheduling techniques for the cloud-fog environment, together with their benefits and drawbacks, is presented in this article. Four main categories are introduced to study these techniques: machine learning-based, heuristic-based, metaheuristic-based, and deterministic mechanisms, with a number of papers studied in each category. The survey also compares the task scheduling techniques in terms of execution time, resource utilization, delay, network bandwidth, energy consumption, execution deadline, response time, cost, uncertainty, and complexity. The outcomes reveal that 38% of the scheduling algorithms use metaheuristic-based mechanisms, 30% heuristic-based, 23% machine learning, and the remaining 9% deterministic methods. Energy consumption is the parameter addressed most often, with a share of 19%. Finally, a number of important areas for future improvement of task scheduling methods in FC are presented.
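    To make the heuristic-based category concrete, the following is a minimal sketch (not drawn from any surveyed paper) of a greedy scheduler that orders tasks by deadline and assigns each to the heterogeneous fog node with the earliest estimated completion time, deferring to the cloud when a deadline cannot be met. The node capacities, task lengths, and cost model are illustrative assumptions.

    # Minimal heuristic fog task scheduler (illustrative sketch, Python).
    # Greedy earliest-completion-time assignment with an EDF task ordering;
    # all names and numbers here are assumptions, not from a surveyed paper.
    from dataclasses import dataclass

    @dataclass
    class FogNode:
        name: str
        mips: float                # processing capacity (million instructions/s)
        available_at: float = 0.0  # time at which the node becomes free

    @dataclass
    class Task:
        name: str
        length: float    # workload in million instructions
        deadline: float  # seconds from now

    def schedule(tasks, nodes):
        """Assign each task to the node with the earliest estimated finish time."""
        plan = []
        for task in sorted(tasks, key=lambda t: t.deadline):  # earliest deadline first
            best_node, best_finish = None, float("inf")
            for node in nodes:
                finish = node.available_at + task.length / node.mips
                if finish < best_finish:
                    best_node, best_finish = node, finish
            if best_finish <= task.deadline:
                best_node.available_at = best_finish
                plan.append((task.name, best_node.name, best_finish))
            else:
                plan.append((task.name, "cloud", None))  # offload if deadline would be missed
        return plan

    if __name__ == "__main__":
        nodes = [FogNode("fog-1", mips=500), FogNode("fog-2", mips=200)]
        tasks = [Task("t1", length=1000, deadline=3.0), Task("t2", length=400, deadline=1.0)]
        for entry in schedule(tasks, nodes):
            print(entry)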

    Performance Evaluation of Query Plan Recommendation with Apache Hadoop and Apache Spark

    Access plan recommendation is a query optimization approach that executes new queries using previously created query execution plans (QEPs). In this approach, the query optimizer divides the query space into clusters. However, traditional clustering algorithms take a significant amount of execution time to cluster such large datasets. The MapReduce distributed computing model provides efficient solutions for storing and processing vast quantities of data. In the present investigation, the Apache Spark and Apache Hadoop frameworks are used to cluster query datasets of different sizes in the MapReduce-based access plan recommendation method. Performance is evaluated in terms of execution time. The experimental results demonstrate the effectiveness of parallel query clustering in achieving high scalability. Furthermore, Apache Spark outperformed Apache Hadoop, reaching an average speedup of 2x.
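    As an illustration of the kind of parallel query clustering evaluated here, the sketch below groups toy query-plan feature vectors with k-means on Apache Spark. The feature encoding, the number of clusters, and the column names are assumptions made for illustration; the paper's actual MapReduce pipeline may differ.

    # Illustrative sketch: clustering query-plan feature vectors with Spark ML k-means.
    from pyspark.sql import SparkSession
    from pyspark.ml.clustering import KMeans
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.appName("query-plan-clustering").getOrCreate()

    # Toy query feature vectors (e.g., operator counts, estimated costs) -- assumed encoding.
    rows = [
        (1, Vectors.dense([2.0, 1.0, 0.0])),
        (2, Vectors.dense([2.0, 1.0, 1.0])),
        (3, Vectors.dense([8.0, 4.0, 3.0])),
        (4, Vectors.dense([9.0, 4.0, 2.0])),
    ]
    queries = spark.createDataFrame(rows, ["query_id", "features"])

    # Partition the query space into clusters; a new query would reuse the
    # representative QEP of the cluster it falls into.
    model = KMeans(k=2, seed=42, featuresCol="features").fit(queries)
    model.transform(queries).select("query_id", "prediction").show()

    spark.stop()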
