8 research outputs found

    A Survey Paper on Optimization Based SDN Powered by Fog Computing

    The demand for cloud computing is increasing day by day because of its wide range of applications. However, cloud computing suffers from several drawbacks, such as a lack of mobility support, unreliable latency, and missing location awareness. These drawbacks are addressed by fog computing (also called edge computing), which provides elastic resources and reliable latency close to the end devices. In this paper we study various research works related to cloud computing and fog computing for different applications. Several challenges that arise when introducing edge computing into a network are also discussed, and the opportunities that fog computing offers for future work are elaborated. Different applications are presented together with their advantages and the outcomes achieved with fog computing; for real-time applications such as the Industrial Internet of Things (IIoT), fog computing provides better computational time. All the characteristics and key features of fog computing are discussed in this work, giving an idea of how fog computing combined with optimization algorithms can be used for IIoT applications

    Scheduling in the industry 4.0: a systematic literature review

    Industry 4.0 is characterised as a new way of organising supply chains, coordinating smart factories that should be capable of higher adaptivity, making them more responsive to continuously changing demand. This paper presents a Systematic Literature Review (SLR) with three main objectives. First, to identify in the literature on Industry 4.0 the need for new job scheduling methods for the factories of the digital era. Second, to identify in the scheduling literature which of these needs have already been addressed and what the most critical gaps are. Third, to propose a new research agenda on scheduling methodology that fulfils the needs of scheduling in the field of Industry 4.0. The results show that the literature related to the subject of study is growing rapidly and that the need for new job scheduling methods in digital factories concerns two main ideas: first, the need to create and implement a digital architecture in which data can be appropriately processed, and second, the need to provide a decentralised machine scheduling solution within such a framework. Although some studies on small production lines can be found, research with practical results remains scarce in the literature to date

    Using an HSV-based approach for detecting and grasping an object by the industrial manipulator system

    In the context of the industrialization era, robots are gradually replacing workers in some production stages, and there is an irreversible trend toward incorporating image processing techniques into robot control. In recent years, vision-based techniques have achieved significant milestones. However, most of these techniques require complex setups, specialized cameras, and skilled operators, and they impose a heavy computational burden. This paper presents an efficient vision-based solution for object detection and grasping in indoor environments. The framework of the system, encompassing geometrical constraints, robot control theories, and the hardware platform, is described, and the proposed method, covering the steps from calibration to visual estimation, is detailed for handling the detection and grasping task. The efficiency, feasibility, and applicability of our approach are evident from the results of both theoretical simulations and experiments
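
    The general idea behind HSV-based colour detection can be illustrated with a short OpenCV sketch (not the authors' actual pipeline): convert the image to HSV, threshold it against a colour range, clean up the mask, and take the centroid of the largest blob as the grasp target in pixel coordinates. The colour bounds and the OpenCV 4 contour API used below are assumptions made for the sketch.

        import cv2
        import numpy as np

        def detect_object_centroid(frame_bgr,
                                   lower_hsv=(20, 100, 100),   # assumed bounds for a yellow-ish object
                                   upper_hsv=(35, 255, 255)):
            """Threshold a BGR frame in HSV space and return the centroid of the largest blob."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
            # Remove small speckles before looking for contours.
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # pixel coordinates (u, v)

    In a complete system, the returned pixel coordinates would still have to be mapped to robot coordinates through the calibration step mentioned in the abstract.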

    Quality of Service Aware Orchestration for Cloud-Edge Continuum Applications

    The fast growth in the number of connected devices with computing capabilities in recent years has enabled the emergence of a new computing layer at the Edge. Despite being resource-constrained compared with cloud servers, edge devices offer lower latencies than those achievable by Cloud computing. The combination of the Cloud and Edge computing paradigms can provide a suitable infrastructure for complex applications' quality of service requirements that cannot easily be met with either paradigm alone. These requirements can be very different for each application, from achieving time sensitivity or assuring data privacy to storing and processing large amounts of data. Therefore, orchestrating these applications in the Cloud–Edge continuum raises new challenges that need to be solved in order to take full advantage of this layered infrastructure. This paper proposes an architecture that enables the dynamic orchestration of applications in the Cloud–Edge continuum. It focuses on the application's quality of service by providing the scheduler with input that is commonly used by modern scheduling algorithms. The architecture uses a distributed scheduling approach that can be customized on a per-application basis, which ensures that it can scale properly even in setups with a high number of nodes and complex scheduling algorithms. This architecture has been implemented on top of Kubernetes and evaluated in order to assess its viability to enable more complex scheduling algorithms that take into account the quality of service of applications.
    This work has been financially supported by the European Commission through the ELASTIC project (H2020 grant agreement 825473), by the Spanish Ministry of Science, Innovation and Universities (project RTI2018-096116-B-I00 (MCIU/AEI/FEDER, UE)), and by the Basque Government through the Qualyfamm project (Elkartek KK-2020/00042). It has also been financed by the Basque Government under Grant IT1324-19
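
    As a rough illustration of per-application, QoS-aware node scoring (not the architecture's actual scheduler interface), a custom scheduler could rank feasible nodes with a weighted combination of latency and spare capacity, with the weights chosen per application. The node attributes, weights, and scoring rule below are assumptions.

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            latency_ms: float    # measured latency from this node to the application's users
            free_cpu: float      # available CPU cores
            free_mem_gb: float   # available memory

        @dataclass
        class AppQoS:
            max_latency_ms: float
            cpu_request: float
            mem_request_gb: float
            latency_weight: float = 0.7    # per-application weighting of the QoS criteria
            capacity_weight: float = 0.3

        def score(node, qos):
            """Higher is better; -inf marks nodes that cannot satisfy the resource request."""
            if node.free_cpu < qos.cpu_request or node.free_mem_gb < qos.mem_request_gb:
                return float("-inf")
            latency_score = max(0.0, 1.0 - node.latency_ms / qos.max_latency_ms)
            headroom = (node.free_cpu - qos.cpu_request) / max(node.free_cpu, 1e-9)
            return qos.latency_weight * latency_score + qos.capacity_weight * headroom

        def pick_node(nodes, qos):
            """Return the best feasible node, or None when no node meets the request."""
            best = max(nodes, key=lambda n: score(n, qos), default=None)
            return best if best is not None and score(best, qos) != float("-inf") else None

    In a Kubernetes-based implementation such as the one described, a scheduler of this kind would typically be registered as an additional scheduler and selected per application, which matches the per-application customization highlighted in the abstract.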

    An Adaptive Task Scheduling in Fog Computing

    Internet applications generate massive amounts of data, which are transmitted to the cloud for processing. Time-sensitive applications require faster access; however, a limitation of the cloud is its connectivity with the end devices. Fog computing was introduced by Cisco to overcome this limitation. Fog has better connectivity with the end devices, albeit with some limitations of its own, and works as an intermediate layer between the end devices and the cloud. When providing quality of service to end users, scheduling plays an important role, and scheduling a task according to the end users' requirements is a challenging problem. In this paper, we propose a cloud-fog task scheduling model that provides quality of service to end devices together with proper security
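
    A minimal sketch of the cloud-fog split described above (not the proposed model itself) could keep a task on a fog node only when its deadline is tighter than the cloud round-trip time and the fog node still has capacity, and send everything else to the cloud; the deadlines, capacities, and round-trip times below are assumed values.

        def assign_tier(deadline_ms, task_cpu, fog_free_cpu, cloud_rtt_ms=100.0):
            """Tiny routing rule: fog for tight deadlines that fit, cloud otherwise."""
            if deadline_ms < cloud_rtt_ms and task_cpu <= fog_free_cpu:
                return "fog"
            return "cloud"

        tasks = [
            {"id": 1, "deadline_ms": 30, "cpu": 0.5},    # time-sensitive -> fog
            {"id": 2, "deadline_ms": 500, "cpu": 2.0},   # relaxed deadline -> cloud
        ]
        fog_free_cpu = 1.0
        for t in tasks:
            tier = assign_tier(t["deadline_ms"], t["cpu"], fog_free_cpu)
            if tier == "fog":
                fog_free_cpu -= t["cpu"]   # reserve the fog capacity the task consumes
            print(t["id"], "->", tier)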

    Task Scheduling Based on a Hybrid Heuristic Algorithm for Smart Production Line with Fog Computing

    Fog computing provides computation, storage and network services for smart manufacturing. However, in a smart factory the task requests, terminal devices and fog nodes are highly heterogeneous. The terminal equipment has very different task characteristics: fault detection tasks have strict real-time demands, production scheduling tasks require a large amount of computation, inventory management tasks require a vast amount of storage space, and so on. In addition, the fog nodes have different processing abilities, so that powerful fog nodes with considerable computing resources can help the terminal equipment to complete complex task processing, such as manufacturing inspection, fault detection, device state analysis, and so on. In this setting a new problem appears: how to schedule tasks among the different fog nodes so as to minimize delay and energy consumption while improving smart manufacturing performance metrics such as production efficiency, product quality and equipment utilization rate. Therefore, this paper studies the task scheduling strategy in the fog computing scenario. A task scheduling strategy based on a hybrid heuristic (HH) algorithm is proposed that mainly addresses the limited computing resources and high energy consumption of terminal devices and makes real-time, efficient processing of terminal-device tasks feasible. Finally, the experimental results show that the proposed strategy achieves superior performance compared to other strategies
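
    A simplified stand-in for the delay-and-energy trade-off targeted by such a strategy (not the paper's hybrid heuristic itself) is a greedy assignment that scores every task/fog-node pair with a weighted sum of transmission delay, computation delay, and computation energy; all parameters below are assumptions.

        def cost(task_cycles, data_bits, node_speed_hz, node_power_w, link_rate_bps, alpha=0.5):
            """Weighted sum of completion delay and computation energy for one task on one node."""
            transmit_delay = data_bits / link_rate_bps
            compute_delay = task_cycles / node_speed_hz
            energy = node_power_w * compute_delay
            return alpha * (transmit_delay + compute_delay) + (1.0 - alpha) * energy

        def greedy_assign(tasks, nodes, alpha=0.5):
            """Assign each task to the fog node with the lowest weighted cost."""
            plan = {}
            for t in tasks:
                best = min(nodes, key=lambda n: cost(t["cycles"], t["bits"],
                                                     n["speed_hz"], n["power_w"],
                                                     n["rate_bps"], alpha))
                plan[t["id"]] = best["id"]
            return plan

    A hybrid heuristic such as the one proposed would search over complete assignments rather than choosing greedily, but the delay-plus-energy objective it minimizes is of this general kind.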

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Get PDF
    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and our desire to make the "better" choice drives that decision. An objective function or performance index describes how the goodness of an alternative is assessed, and the theory and methods of optimization are concerned with picking the best option. There are two types of optimization methods: deterministic and stochastic. The former is the traditional approach, which works well for small and linear problems; however, it struggles to address most real-world problems, which are high-dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposes two robust, stochastic, swarm-based metaheuristic optimization methods. Both are hybrid algorithms formulated by combining the Particle Swarm Optimization and Salp Swarm Optimization algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling across multiple fog environments. Many computing environments, including fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments because they occupy the fog's resources and keep them busy. Fog environments therefore generally have fewer resources available during such attacks, which affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, which increases both the number of workflows that miss their deadlines and the number of tasks that are offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm for the workflow scheduling problem across various fog computing locations. The proposed algorithm combines the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). To deal with the effects of DDoS attacks on fog computing locations, two discrete-time Markov chain schemes are used: one calculates the average network bandwidth available in each fog, while the other determines the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts the influence of a DDoS attack on the fog environments. Based on the simulation results, the proposed method can significantly reduce the number of offloaded tasks transferred to cloud data centers and can also decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing, since energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling methods can mitigate energy usage by allocating tasks to the most appropriate resources, taking the energy efficiency of each individual resource into account. To address these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to improve the energy efficiency of processors. The experimental findings demonstrate that the proposed method, combined with DVFS, yields improved outcomes, including a reduction in energy consumption. Consequently, this approach emerges as a more environmentally friendly and sustainable solution for fog computing environments
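
    One common way to hybridise PSO and SSA, sketched below with NumPy, is to move half of the population with the PSO velocity rule and the other half with the SSA leader/follower rule around the shared global best. This is only an illustrative hybrid under assumed parameters, not necessarily the exact formulation proposed in this work, and the sphere function in the example is a placeholder fitness.

        import numpy as np

        def hybrid_pso_ssa(fitness, dim, lb, ub, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimise `fitness`: the first half of the swarm follows the PSO velocity rule,
            the second half follows the SSA leader/follower chain around the global best."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(lb, ub, (n, dim))
            v = np.zeros((n, dim))
            pbest = x.copy()
            pbest_f = np.array([fitness(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for t in range(iters):
                ssa_c1 = 2 * np.exp(-((4 * (t + 1) / iters) ** 2))   # SSA exploration factor
                for i in range(n):
                    if i < n // 2:                  # PSO particles
                        r1, r2 = rng.random(dim), rng.random(dim)
                        v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
                        x[i] = x[i] + v[i]
                    elif i == n // 2:               # SSA leader moves around the global best
                        step = ssa_c1 * ((ub - lb) * rng.random(dim) + lb)
                        x[i] = g + np.where(rng.random(dim) < 0.5, step, -step)
                    else:                           # SSA followers average with the salp ahead
                        x[i] = 0.5 * (x[i] + x[i - 1])
                    x[i] = np.clip(x[i], lb, ub)
                    f = fitness(x[i])
                    if f < pbest_f[i]:
                        pbest[i], pbest_f[i] = x[i].copy(), f
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Example: minimise the sphere function in 5 dimensions.
        best, best_f = hybrid_pso_ssa(lambda z: float(np.sum(z ** 2)), dim=5, lb=-10.0, ub=10.0)

    On top of such a hybrid, the DVFS step described in the abstract would scale processor frequency and voltage down on lightly loaded resources, trading a longer execution time for lower energy consumption.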