5 research outputs found

    Site Sonar - A Flexible and Extensible Infrastructure Monitoring Tool for ALICE Computing Grid

    The ALICE experiment at the CERN Large Hadron Collider relies on a massive, distributed Computing Grid for its data processing. The ALICE Computing Grid is built by combining a large number of individual computing sites distributed globally. These Grid sites are maintained by different institutions across the world and contribute thousands of worker nodes possessing different capabilities and configurations. Developing software for Grid operations that works on all nodes while harnessing the maximum capabilities offered by any given Grid site is challenging without advance knowledge of what capabilities each site offers. Site Sonar is an architecture-independent Grid infrastructure monitoring framework developed by the ALICE Grid team to monitor the infrastructure capabilities and configurations of worker nodes at sites across the ALICE Grid without the need to contact local site administrators. Site Sonar is a highly flexible and extensible framework that offers infrastructure metric collection without local agent installations at Grid sites. This paper introduces the Site Sonar Grid infrastructure monitoring framework and reports significant findings acquired about the ALICE Computing Grid using Site Sonar.
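
    To give a flavour of agentless probing, here is a hypothetical sketch (not the actual Site Sonar implementation): a self-contained script submitted as an ordinary Grid job can report a node's capabilities through the job's output channel, so nothing has to be installed on the site. All names below are assumptions of this sketch.

```python
# Hypothetical sketch of an agentless probe in the spirit of Site Sonar:
# a self-contained script submitted as a normal Grid job, so no software
# is installed on the site itself. All names are illustrative assumptions,
# not the actual Site Sonar code.
import json
import os
import platform
import shutil

def collect_node_metrics() -> dict:
    """Gather a few architecture-independent facts about this worker node."""
    return {
        "hostname": platform.node(),
        "kernel": platform.release(),
        "arch": platform.machine(),
        "cpu_count": os.cpu_count(),
        # container runtimes are a typical capability of interest
        "has_singularity": shutil.which("singularity") is not None,
        "has_apptainer": shutil.which("apptainer") is not None,
    }

if __name__ == "__main__":
    # The real framework aggregates results centrally; printing to stdout
    # lets the Grid job's output channel carry the report back.
    print(json.dumps(collect_node_metrics(), indent=2))
```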

    Distributed data stream processing and task placement on edge-cloud infrastructure

    © 2021 Gayashan Niroshana Amarasinghe. The indubitable growth of smart and connected edge devices with substantial processing power has made ubiquitous computing possible. These edge devices either produce streams of information related to the environment in which they are deployed, or are located in proximity to such information producers. Distributed data stream processing is a programming paradigm introduced to process these event streams and acquire relevant insights in order to make informed decisions. While deploying data stream processing frameworks on distributed cloud infrastructure has been the convention, for latency-critical real-time applications that rely on data streams produced outside the cloud on edge devices, the communication overhead between the cloud and the edge is detrimental. Privacy concerns about where data streams are processed also contribute to the move towards utilising edge devices for processing user-specific data. The emergence of edge computing has helped to mitigate these challenges by enabling processes to execute on edge devices and utilise their unused potential. Distributed data stream processing that shares edge and cloud computing infrastructure is a nascent field which we believe to have many practical applications in the real world, such as federated learning, augmented/virtual reality and healthcare applications.

    In this thesis, we investigate novel modelling techniques and solutions for sharing the workload of distributed data stream processing applications that utilise edge and cloud computing infrastructure. The outcome of this study is a series of research works that emanates from a comprehensive model, and a simulation framework developed using this model, which we utilise to develop workload sharing strategies that consider the intrinsic characteristics of data stream processing applications executed on edge and cloud resources.

    First, we focus on developing a comprehensive model for representing the inherent characteristics of data stream processing applications: the event generation rate and the distribution of event sizes at the sources, the selectivity and productivity distributions at the operators, the placement of tasks onto the resources, and metrics such as end-to-end latency, processing latency, networking latency and power consumption. We also incorporate the processing, networking, power consumption, and curating characteristics of edge and cloud computing infrastructure into the model from the perspective of data stream processing. Based on our model, we develop a simulation tool, which we call ECSNeT++, and verify its accuracy by comparing the latency and power consumption metrics acquired from the calibrated simulator and a real test bed, both of which execute identical applications. We show that ECSNeT++ can model a real deployment, with proper calibration. With the public availability of ECSNeT++ as open source software, and the verified accuracy of our results, ECSNeT++ can be used effectively for predicting the behaviour and performance of stream processing applications running on large-scale, heterogeneous edge and cloud computing infrastructure.
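
The application model described above lends itself to a compact illustration. Below is a minimal sketch, in Python and purely for illustration, of a DSP application as a DAG of sources, operators and sinks, where each operator carries a selectivity and a productivity; every class and field name here is an assumption of the sketch, not ECSNeT++'s actual API.

```python
# A toy model of a DSP application as a DAG of tasks. Selectivity is the
# number of events an operator emits per event consumed; productivity is
# the output size per unit of input size. Purely illustrative names.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    kind: str                     # "source", "operator" or "sink"
    selectivity: float = 1.0      # events out per event in
    productivity: float = 1.0     # bytes out per byte in
    downstream: list = field(default_factory=list)

# source -> filter (drops half of the events) -> sink
sink = Task("sink", "sink")
flt = Task("filter", "operator", selectivity=0.5, downstream=[sink])
src = Task("source", "source", downstream=[flt])

rate, task = 100.0, src           # 100 events/s generated at the source
while task.kind != "sink":
    rate *= task.selectivity      # event rate leaving this task
    task = task.downstream[0]     # follow the single outgoing edge
print(f"event rate arriving at the sink: {rate:.1f} events/s")  # 50.0
```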
Next, we investigate how to optimally share the application workload between edge and cloud computing resources while upholding quality of service requirements. A typical data stream processing application is formed as a directed acyclic graph of tasks, consisting of sources that generate events, operators that process incoming events, and sinks that act as destinations for event streams. In order to share the workload of such an application, these tasks need to be placed onto the available computing resources. To this end, we devise an optimisation framework, consisting of a constraint satisfaction formulation and a system model, that aims to minimise end-to-end latency through appropriate placement of tasks on either cloud or edge devices. We test our optimisation framework using ECSNeT++, with realistic topologies and calibration, and show that it achieves 8-14% latency reduction and 14-15% energy reduction compared to the conventional cloud-only placement, and 14-16% latency reduction compared to a naive edge-only placement while also reducing the energy consumption per event by 1-5%.

Finally, in order to cater to the multitude of applications that operate under dynamic conditions, we propose a semi-dynamic task switching methodology that optimises the end-to-end latency of the application. Here, we approach the task placement problem for changing environment conditions in two phases: in the first phase, locally optimal task placements are acquired for discrete environment conditions; these are then fed to the second phase, where the problem is modelled as an Infinite Horizon Markov Decision Process with discounted rewards. By solving this problem, an optimal policy can be obtained, and we show that this optimal policy improves the performance of distributed data stream processing applications when compared with a dynamic greedy task placement approach as well as static task placement. For real-world applications executed on ECSNeT++, our approach improves latency by as much as 10-17% on average when compared to a fully dynamic greedy approach.
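
To make the second phase concrete, here is a minimal value-iteration sketch for an infinite horizon discounted MDP, in Python with NumPy. The states, transition probabilities and rewards are random placeholders standing in for (environment condition, placement) pairs and negative expected latencies; nothing below reproduces the thesis's actual formulation.

```python
# A minimal value-iteration sketch for an infinite horizon discounted MDP.
# States stand in for (environment condition, active placement) pairs and
# actions for "switch to precomputed placement a". All numbers are random
# placeholders, not figures or structure from the thesis.
import numpy as np

S, A = 4, 2                                   # states, actions
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))    # P[s, a] = next-state distribution
R = -rng.uniform(5.0, 20.0, size=(S, A))      # reward = negative expected latency
gamma = 0.9                                   # discount factor

V = np.zeros(S)
for _ in range(1000):                         # iterate the Bellman optimality operator
    Q = R + gamma * (P @ V)                   # Q[s, a] = r(s, a) + gamma * E[V(s')]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:      # converged to the fixed point
        break
    V = V_new

policy = Q.argmax(axis=1)                     # optimal placement to switch to, per state
print("optimal policy:", policy)
```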

    A Data Stream Processing Optimisation Framework for Edge Computing Applications

    Data Stream Processing (DSP) is a widely used programming paradigm for processing unbounded event streams. Often, DSP frameworks are deployed on the cloud with a scalable resource model. One of the key requirements of DSP is to produce results with low latency. With the emergence of IoT, many event sources are located outside the cloud, which can result in higher end-to-end latency due to communication overhead. However, due to the abundance of resources at the IoT layer, edge computing has emerged as a viable computational paradigm. In this paper, we devise an optimisation framework, consisting of a constraint satisfaction formulation and a system model, that aims to minimise end-to-end latency through appropriate placement of DSP operators on either cloud nodes or edge devices, i.e. deployed in an edge-cloud integrated environment. We test our optimisation framework using OMNeT++, with realistic topologies and power consumption data, and show that it is capable of achieving ≈1.65 times lower latency compared to edge-only and cloud-only placements, which in turn also reduces the energy consumption per event by up to ≈4% at the edge layer. To the best of our knowledge, our optimisation framework is the first of its kind to integrate power, bandwidth and CPU constraints with latency minimisation.
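
As a rough illustration of this style of formulation (not the paper's actual model), the toy Python sketch below enumerates placements of a three-operator pipeline across an edge device and a cloud node, discards placements that violate a CPU capacity constraint, and keeps the placement with the lowest end-to-end latency. All costs, capacities and names are invented for the example.

```python
# Toy constraint-satisfaction placement: brute-force search over {edge, cloud}
# assignments for a small operator pipeline, subject to CPU capacity, with
# end-to-end latency as the objective. Values are illustrative assumptions.
from itertools import product

ops = ["parse", "filter", "aggregate"]
cpu_cost = {"parse": 1, "filter": 1, "aggregate": 2}   # CPU units per operator
proc_latency = {"edge": 5.0, "cloud": 1.0}             # ms of processing per operator
uplink_latency = 40.0                                  # ms per edge -> cloud crossing
cpu_capacity = {"edge": 3, "cloud": 100}

best = None
for placement in product(["edge", "cloud"], repeat=len(ops)):
    # reject placements that exceed a node's CPU capacity
    load = {"edge": 0, "cloud": 0}
    for op, node in zip(ops, placement):
        load[node] += cpu_cost[op]
    if any(load[n] > cpu_capacity[n] for n in load):
        continue
    # latency = per-operator processing + network cost of edge -> cloud crossings,
    # assuming the source sits at the edge and the sink in the cloud
    latency = sum(proc_latency[node] for node in placement)
    path = ["edge"] + list(placement) + ["cloud"]
    latency += uplink_latency * sum(
        1 for a, b in zip(path, path[1:]) if (a, b) == ("edge", "cloud"))
    if best is None or latency < best[0]:
        best = (latency, placement)

print("best placement:", best)
```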

    A set of essentials for online learning: CSE-SET

    Distance learning is not a novel concept. Education or learning conducted online is a form of distance education, and online learning presents a convenient alternative to traditional learning. Numerous researchers have investigated the use of online education in educational institutions and across nations. This study elaborates a set of essentials for effective online learning, to ensure that stakeholders do not become demotivated in the online learning process. The study also lists a set of factors that motivate students and other stakeholders to engage in online learning with enthusiasm.

    ECSNeT++: A simulator for distributed stream processing on edge and cloud environments

    The objective of the Internet of Things (IoT) is ubiquitous computing. As a result, many computing-enabled, connected devices are deployed in various environments, where they keep generating unbounded event streams related to the deployed environment. The common paradigm is to process these event streams in the cloud using the available Distributed Stream Processing (DSP) frameworks. However, with the emergence of edge computing, another convenient computing paradigm has been presented for executing such applications. Edge computing introduces the concept of using the underutilised potential of a large number of computing-enabled connected devices, such as IoT devices, located outside the cloud. In order to develop optimal strategies to utilise this vast number of potential resources, a realistic test bed is required. However, due to the overwhelming scale and heterogeneity of edge computing device deployments, the effort and investment required to set up such an environment is high. Therefore, a realistic simulation environment that can accurately predict the behaviour and performance of a large-scale, real deployment is extremely valuable. While state-of-the-art simulation tools consider different aspects of executing applications on edge or cloud computing environments, we found that no simulator considers all the key characteristics needed to perform a realistic simulation of the execution of DSP applications on edge and cloud computing environments. To the best of our knowledge, the publicly available simulators have not been verified against real-world experimental measurements, i.e. calibrated to obtain accurate estimates of, for example, latency and power consumption. In this paper, we present our ECSNeT++ simulation toolkit, which has been verified using real-world experimental measurements for executing DSP applications on edge and cloud computing environments. ECSNeT++ models the deployment and processing of DSP applications on edge-cloud environments and is built on top of OMNeT++/INET. Using multiple configurations of two real DSP applications, we show that ECSNeT++ is able to model a real deployment, with proper calibration. We believe that with the public availability of ECSNeT++ as an open source framework, and the verified accuracy of our results, ECSNeT++ can be used effectively for predicting the behaviour and performance of DSP applications running on large-scale, heterogeneous edge and cloud computing device deployments.
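
A small illustration of the calibration check described above: comparing per-configuration latency measured on a test bed against the calibrated simulator's output. The figures are invented, and the sketch is plain Python, independent of the actual C++/OMNeT++ toolkit.

```python
# Hypothetical calibration check: relative error between test-bed
# measurements and calibrated-simulator predictions. All numbers invented.
measured = {"configA": 12.4, "configB": 18.9, "configC": 25.1}   # ms, test bed
simulated = {"configA": 11.9, "configB": 19.6, "configC": 24.3}  # ms, simulator

for cfg in measured:
    err = abs(simulated[cfg] - measured[cfg]) / measured[cfg]
    print(f"{cfg}: relative error {err:.1%}")
```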