
    Towards a Benchmark for Fog Data Processing

    Fog data processing systems provide key abstractions to manage data and event processing in the geo-distributed, heterogeneous fog environment. The lack of standardized benchmarks for such systems, however, hinders their development and deployment, as different approaches cannot be compared quantitatively. Existing cloud data benchmarks are inadequate for fog computing, as their focus on workload specification ignores the tight integration of application and infrastructure inherent in fog computing. In this paper, we outline an approach to a fog-native data processing benchmark that combines workload specifications with infrastructure specifications. This holistic approach allows researchers and engineers to quantify how a software approach performs for a given workload on given infrastructure. Further, by basing our benchmark on a realistic IoT sensor network scenario, we can combine paradigms such as low-latency event processing, machine learning inference, and offline data analytics, and analyze the performance impact of their interplay in a fog data processing system.
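The combined workload-plus-infrastructure specification could be sketched, purely illustratively, as a pair of declarative descriptions. All class and field names below are assumptions for the sketch, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class InfrastructureSpec:
    fog_nodes: int            # geo-distributed fog nodes
    cloud_nodes: int          # back-end cloud machines
    uplink_latency_ms: float  # sensor-to-fog network latency

@dataclass
class WorkloadSpec:
    sensors: int          # IoT sensors emitting events
    event_rate_hz: float  # events per sensor per second
    paradigms: list = field(default_factory=lambda: [
        "event_processing", "ml_inference", "offline_analytics"])

@dataclass
class BenchmarkSpec:
    # The paper's core idea: the benchmark couples both specifications,
    # so results are always relative to a concrete infrastructure.
    workload: WorkloadSpec
    infrastructure: InfrastructureSpec

    def describe(self) -> str:
        total = self.workload.sensors * self.workload.event_rate_hz
        return (f"{total:.0f} events/s over "
                f"{self.infrastructure.fog_nodes} fog nodes")

spec = BenchmarkSpec(
    WorkloadSpec(sensors=1000, event_rate_hz=2.0),
    InfrastructureSpec(fog_nodes=8, cloud_nodes=2, uplink_latency_ms=15.0),
)
print(spec.describe())  # 2000 events/s over 8 fog nodes
```

Because the infrastructure is part of the specification, two systems can only be compared when both their workload and their deployment target match.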

    Load Balancer using Whale-Earthworm Optimization for Efficient Resource Scheduling in the IoT-Fog-Cloud Framework

    The Cloud-Fog environment is useful in offering optimized services to customers in their daily routine tasks. With the exponential growth of IoT devices, data is generated at huge scale. Service providers use optimization-based scheduling approaches to allocate scarce Fog computing resources so that job deadlines are met. This study introduces the Whale-Earthworm Optimization method (WEOA), a powerful hybrid optimization method for improving resource management in the Cloud-Fog environment. When the Earthworm or Whale optimization method is used alone, striking a balance between exploration and exploitation is difficult: the Earthworm technique can be inefficient because its exploration incurs additional overhead, whereas the Whale algorithm's exploitation may fail to find the optimal solutions. This research introduces an efficient task allocation method as a novel load balancer. It combines an enhanced exploration phase inspired by the Earthworm algorithm with an improved exploitation phase inspired by the Whale algorithm to manage the optimization process. The proposed method shows notable performance gains: a 6% reduction in response time, a 2% decrease in cost, and a 2% improvement in makespan over EEOA. Furthermore, compared with other approaches such as h-DEWOA, CSDEO, CSPSO, and BLEMO, it achieves remarkable results, with response time reductions of up to 82%, cost reductions of up to 75%, and makespan improvements of up to 80%.
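A minimal sketch of such a hybrid exploration/exploitation loop, assuming a toy one-dimensional objective and simplified Earthworm-style and Whale-style update rules (the actual WEOA operators differ):

```python
import random

def objective(x):
    # Toy cost: minimise distance from 3.0 (stand-in for makespan/cost).
    return (x - 3.0) ** 2

def explore(pop, rng):
    # Earthworm-inspired exploration: offspring scattered around each
    # individual to search new regions of the solution space.
    return [x + rng.uniform(-1.0, 1.0) for x in pop]

def exploit(pop, best, t, T, rng):
    # Whale-inspired exploitation: contract toward the current best,
    # with the contraction radius shrinking as iterations progress.
    a = 1.0 - t / T
    return [best + a * rng.uniform(-1.0, 1.0) * (best - x) for x in pop]

def weoa_like(pop_size=20, iters=50, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    best = min(pop, key=objective)
    for t in range(iters):
        if t % 2 == 0:
            candidates = explore(pop, rng)
        else:
            candidates = exploit(pop, best, t, iters, rng)
        # Greedy survivor selection: keep the better of old vs. new.
        pop = [min(o, n, key=objective) for o, n in zip(pop, candidates)]
        best = min(best, min(pop, key=objective), key=objective)
    return best

print(round(weoa_like(), 2))
```

The alternation is the point: exploration keeps the search from stagnating, while the shrinking exploitation radius drives convergence near the best solution found so far.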

    Cooperative scheduling and load balancing techniques in fog and edge computing

    Fog and Edge Computing are two models that reached maturity in the last decade. Today they are solid concepts, and a large body of literature has developed them. Supported by the development of technologies such as 5G, they can now be considered de facto standards for building low- and ultra-low-latency applications, privacy-oriented solutions, Industry 4.0, and smart city infrastructures. The common trait of Fog and Edge computing environments is their inherently distributed and heterogeneous nature, in which the multiple (Fog or Edge) nodes interact with each other with the essential purpose of pre-processing data gathered by the countless sensors to which they are connected, even by running significant ML models and relying upon specialized processors (TPUs). However, nodes are often placed in a geographic domain, such as a smart city, and the dynamics of traffic during the day may cause some nodes to be overwhelmed by requests while others become completely idle. To achieve optimal usage of the system and to guarantee the best possible QoS for all users connected to the Fog or Edge nodes, load balancing and scheduling algorithms must be designed. In particular, a reasonable solution is to enable nodes to cooperate. This capability is the main objective of this thesis: the design of fully distributed algorithms and solutions whose purpose is to balance the load across all nodes, while also following, where possible, QoS requirements in terms of latency, or imposing constraints on power consumption when the nodes are powered by green energy sources. Unfortunately, when a central orchestrator is missing, a crucial element that makes the design of such algorithms difficult is that nodes need to know the state of the others in order to make the best possible scheduling decision.
However, it is not possible to retrieve that state without introducing further latency while serving the request, and the retrieved state information is always stale, so decisions always rely on imprecise data. In this thesis, the problem is circumvented in two main ways. The first considers randomised algorithms that avoid probing all of the neighbour nodes in favour of at most two nodes picked at random; this is proven to bring an exponential improvement in performance with respect to probing a single node. The second approach considers Reinforcement Learning as a technique for inferring the state of the other nodes from the reward the agents receive when requests are forwarded. The thesis also focuses on the energy aspect of Edge devices: in particular, it analyses a Green Edge Computing scenario, where devices are powered only by photovoltaic panels, and a mobile offloading scenario targeting ML image inference applications. Lastly, it closes with a series of infrastructural studies that lay the foundations for implementing the proposed algorithms on real devices, in particular Single Board Computers (SBCs). It presents a structural scheme of a testbed of Raspberry Pi boards, and a fully-fledged framework called ``P2PFaaS'' which allows the implementation of load balancing and scheduling algorithms based on the Function-as-a-Service (FaaS) paradigm.
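The randomised policy described above is in the spirit of the classic "power of two choices" scheme. A minimal simulation (with illustrative names, and none of the thesis's latency or energy terms) shows why probing two random nodes beats purely random placement:

```python
import random

def assign_tasks(n_nodes, n_tasks, choices, rng):
    # Each incoming task probes `choices` nodes picked uniformly at
    # random and is placed on the least-loaded one among them.
    loads = [0] * n_nodes
    for _ in range(n_tasks):
        probed = rng.sample(range(n_nodes), choices)
        target = min(probed, key=loads.__getitem__)
        loads[target] += 1
    return loads

rng = random.Random(42)
one = assign_tasks(100, 10_000, 1, rng)  # purely random placement
two = assign_tasks(100, 10_000, 2, rng)  # power of two choices
print(max(one), max(two))  # two-choice max load is noticeably lower
```

The classical result is that the maximum load drops from roughly O(log n / log log n) above the mean to O(log log n) when a second random probe is added, which is the "exponential improvement" the abstract refers to.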

    An Empirical Study of Inter-cluster Resource Orchestration within Federated Cloud Clusters

    Federated clusters are composed of multiple independent clusters of machines interconnected by a resource management system, and possess several advantages over centralized cloud datacenter clusters, including seamless provisioning of applications across large geographic regions, greater fault tolerance, and increased cluster resource utilization. However, while existing resource management systems for federated clusters are capable of improving application intra-cluster performance, they do not capture inter-cluster performance in their decision making. This is important given that federated clusters must execute a wide variety of applications possessing heterogeneous system architectures, which are impacted by unique inter-cluster performance conditions such as network latency and localized cluster resource contention. In this work we present an empirical study demonstrating how inter-cluster performance conditions negatively impact federated cluster orchestration systems. We conduct a series of micro-benchmarks under various cluster operational scenarios showing the critical importance of capturing inter-cluster performance for resource orchestration in federated clusters. From this benchmark, we determine precise limitations in existing federated orchestration, and highlight key insights for designing future orchestration systems. Findings of notable interest include that different application types exhibit innate performance affinities across various federated cluster operational conditions, and that applications experience substantial performance degradation from even minor increases in latency (8.7x) and resource contention (12.0x) in comparison to centralized cluster architectures.
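A toy cost model (not the paper's benchmark; the numbers are illustrative and do not reproduce the reported 8.7x/12.0x figures) illustrates why chatty, fine-grained applications suffer disproportionately from added inter-cluster latency:

```python
def completion_time_ms(calls, per_call_ms, rtt_ms):
    # Each remote call pays one network round trip on top of compute.
    return calls * (per_call_ms + rtt_ms)

# Chatty app: many small cross-cluster calls.
chatty_local = completion_time_ms(calls=1000, per_call_ms=1.0, rtt_ms=0.1)
chatty_wan = completion_time_ms(calls=1000, per_call_ms=1.0, rtt_ms=20.0)

# Batch app: few large calls doing the same total compute.
batch_local = completion_time_ms(calls=10, per_call_ms=100.0, rtt_ms=0.1)
batch_wan = completion_time_ms(calls=10, per_call_ms=100.0, rtt_ms=20.0)

print(round(chatty_wan / chatty_local, 1))  # slowdown for the chatty app
print(round(batch_wan / batch_local, 1))    # far smaller for the batch app
```

This is the "innate performance affinity" effect in miniature: the same added round-trip time hits architectures with fine-grained inter-cluster communication far harder than batch-oriented ones.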