
    ENORM: A Framework For Edge NOde Resource Management

    Current computing techniques using the cloud as a centralised server will become untenable as billions of devices get connected to the Internet. This raises the need for fog computing, which leverages computing at the edge of the network on nodes such as routers, base stations and switches, along with the cloud. However, to realise fog computing, the challenge of managing edge nodes will need to be addressed. This paper is motivated by the resource management challenge. We develop the first framework to manage edge nodes, namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning and auto-scaling edge node resources are proposed. The feasibility of the framework is demonstrated on a Pokémon Go-like online game use case. The benefits of using ENORM are observed as a reduction in application latency of between 20% and 80%, and a reduction in data transfer and communication frequency between the edge node and the cloud of up to 95%. These results highlight the potential of fog computing for improving the quality of service and experience. Comment: 14 pages; accepted to IEEE Transactions on Services Computing on 12 September 201
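
    The provisioning and auto-scaling mechanisms mentioned in the abstract can be pictured as a feedback loop running on the edge node. The Python sketch below only illustrates that idea under assumed thresholds; the latency probe, scaling bounds and container handling are hypothetical and are not taken from ENORM itself.

```python
import random
import time

# Hypothetical policy parameters; ENORM's actual thresholds are not
# reproduced here, this only sketches the shape of an edge auto-scaler.
LATENCY_HIGH_MS = 80.0    # scale out above this observed latency
LATENCY_LOW_MS = 20.0     # scale in below this observed latency
MIN_REPLICAS, MAX_REPLICAS = 1, 8


def measure_latency_ms() -> float:
    """Placeholder for a real probe of application latency at the edge."""
    return random.uniform(10.0, 120.0)


def apply_scaling(replicas: int) -> None:
    """Placeholder for provisioning or releasing edge containers."""
    print(f"edge node now running {replicas} application replica(s)")


def autoscale_loop(iterations: int = 5, period_s: float = 1.0) -> None:
    replicas = MIN_REPLICAS
    for _ in range(iterations):
        latency = measure_latency_ms()
        if latency > LATENCY_HIGH_MS and replicas < MAX_REPLICAS:
            replicas += 1   # add capacity at the edge
        elif latency < LATENCY_LOW_MS and replicas > MIN_REPLICAS:
            replicas -= 1   # hand resources back
        apply_scaling(replicas)
        time.sleep(period_s)


if __name__ == "__main__":
    autoscale_loop()
```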

    VIoLET: A Large-scale Virtual Environment for Internet of Things

    IoT deployments have been growing manifold, encompassing sensors, networks, edge, fog and cloud resources. Despite the intense interest from researchers and practitioners, most do not have access to large-scale IoT testbeds for validation. Simulation environments that allow analytical modeling are a poor substitute for evaluating software platforms or application workloads in realistic computing environments. Here, we propose VIoLET, a virtual environment for defining and launching large-scale IoT deployments within cloud VMs. It offers a declarative model to specify container-based compute resources that match the performance of native edge, fog and cloud devices using Docker. These can be inter-connected in complex topologies on which private/public networks and bandwidth and latency rules are enforced. Users can also configure synthetic sensors for data generation on these devices. We validate VIoLET for deployments with > 400 devices and > 1500 device-cores, and show that the virtual IoT environment closely matches the expected compute and network performance at modest costs. This fills an important gap between IoT simulators and real deployments. Comment: To appear in the Proceedings of the 24th International European Conference on Parallel and Distributed Computing (Euro-Par), August 27-31, 2018, Turin, Italy, europar2018.org. Selected as a Distinguished Paper for presentation at the Plenary Session of the conference
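
    The declarative model described above specifies container-backed devices, the network topology between them, and synthetic sensors. The Python sketch below shows one way such a specification could look and be sanity-checked; the dictionary layout, image names and field names are invented for illustration and are not VIoLET's actual schema.

```python
# Hypothetical topology specification: VIoLET's real declarative format is
# not reproduced here, this only illustrates declaring container-backed
# devices plus the network rules and sensors attached to them.
TOPOLOGY = {
    "devices": {
        "edge-1": {"image": "arm32v7/python:3", "cores": 4, "memory_mb": 1024},
        "fog-1": {"image": "python:3", "cores": 8, "memory_mb": 8192},
    },
    "links": [
        {"src": "edge-1", "dst": "fog-1", "bandwidth_mbps": 50, "latency_ms": 20},
    ],
    "sensors": [
        {"on": "edge-1", "kind": "temperature", "rate_hz": 1.0},
    ],
}


def validate(spec: dict) -> None:
    """Check that every link and sensor refers to a declared device."""
    devices = set(spec["devices"])
    for link in spec["links"]:
        assert {link["src"], link["dst"]} <= devices, f"unknown device in {link}"
    for sensor in spec["sensors"]:
        assert sensor["on"] in devices, f"unknown device in {sensor}"


if __name__ == "__main__":
    validate(TOPOLOGY)
    print(f"{len(TOPOLOGY['devices'])} devices, {len(TOPOLOGY['links'])} link(s) OK")
```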

    Distributed Computing Framework Based on Software Containers for Heterogeneous Embedded Devices

    The Internet of Things (IoT) is represented by millions of everyday objects enhanced with sensing and actuation capabilities that are connected to the Internet. Traditional approaches for IoT applications involve sending data to cloud servers for processing and storage, and then relaying commands back to devices. However, this approach is no longer feasible due to the rapid growth of IoT in the network: the vast number of devices causes congestion; latency and security requirements demand that data is processed close to the devices that produce and consume it; and the processing and storage resources of devices remain underutilized. Fog Computing has emerged as a new paradigm in which multiple end-devices form a shared pool of resources on which distributed applications are deployed, taking advantage of local capabilities. These devices are highly heterogeneous, with varying hardware and software platforms. They are also resource-constrained, with limited availability of processing and storage resources. Realizing the Fog requires a software framework that simplifies the deployment of distributed applications while at the same time overcoming these constraints. In Cloud-based deployments, software containers provide a lightweight solution to simplify the deployment of distributed applications. However, Cloud hardware is mostly homogeneous and abundant in resources. This work establishes the feasibility of using Docker Swarm -- an existing container-based software framework -- for the deployment of distributed applications on IoT devices. This is realized with custom tools that enable minimal-size applications compatible with heterogeneous devices; automatic configuration and formation of the device Fog; and remote management and provisioning of devices. The proposed framework has significant advantages over the state of the art: namely, it supports Fog-based distributed applications, overcomes device heterogeneity, and simplifies device initialization.
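
    Because the work builds on Docker Swarm, deployments can be driven through the standard Docker tooling. The snippet below uses the official Docker SDK for Python to sketch how a service might be constrained to ARM-based fog devices; it assumes a Swarm has already been formed (docker swarm init / docker swarm join) and that the devices carry a hypothetical node label device=edge-arm, neither of which comes from the paper.

```python
import docker  # official Docker SDK for Python (docker-py)

# Minimal sketch: create a replicated Swarm service pinned to nodes that
# carry the (assumed) label device=edge-arm. The image and names are
# placeholders, not the tooling described in the paper.
client = docker.from_env()

service = client.services.create(
    image="alpine:latest",                 # multi-arch placeholder image
    command=["sleep", "3600"],
    name="fog-demo",
    constraints=["node.labels.device == edge-arm"],
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
print(f"created service {service.name} ({service.id})")
```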

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention from academia, industry, and government bodies. It has now emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, the Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses these challenges by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges

    If the last decade viewed computational services as a utility, then surely this decade has transformed computation into a commodity. Computation is now progressively integrated into physical networks in a seamless way that enables cyber-physical systems (CPS) and the Internet of Things (IoT) to meet their latency requirements. Similar to the concepts of "platform as a service" or "software as a service", both cloudlets and fog computing have found their own use cases. Edge devices (which we call end or user devices for disambiguation) play the role of personal computers, dedicated to a user and to a set of correlated applications. In this new scenario, the boundaries between the network node, the sensor, and the actuator are blurring, driven primarily by the computation power of IoT nodes like single-board computers and smartphones. The larger volumes of data generated in these networks need clever, scalable, and possibly decentralized computing solutions that can scale independently as required. Any node can be seen as part of a graph, with the capacity to serve as a computing node, a network router node, or both. Complex applications can be distributed over this graph or network of nodes to improve overall performance, such as the amount of data processed over time. In this paper, we identify this new computing paradigm, which we call Social Dispersed Computing, analyzing key themes in it, including a new outlook on its relation to agent-based applications. We architect this new paradigm by providing supportive application examples that include next-generation electrical energy distribution networks, next-generation mobility services for transportation, and applications for distributed analysis and identification of non-recurring traffic congestion in cities. The paper analyzes the existing computing paradigms (e.g., cloud, fog, edge, mobile edge, social, etc.), resolving the ambiguity of their definitions, and analyzes and discusses the relevant foundational software technologies, the remaining challenges, and research opportunities. Garcia Valls, M.S.; Dubey, A.; Botti, V. (2018). Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges. Journal of Systems Architecture. 91:83-102. https://doi.org/10.1016/j.sysarc.2018.05.007
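
    The graph view sketched in the abstract, where every node may compute, route traffic, or both, can be illustrated with a toy placement routine. The node names, capacities, task demands and the greedy policy below are invented for illustration and are not taken from the paper.

```python
# Toy model: each node in the dispersed-computing graph advertises spare
# compute capacity, and tasks are placed greedily on the node with the
# most remaining capacity. All numbers here are illustrative only.
NODES = {"router-a": 2, "phone-b": 1, "sbc-c": 4, "cloudlet-d": 8}
TASKS = {"ingest": 1, "analytics": 3, "archive": 2}


def place(tasks: dict, capacity: dict) -> dict:
    """Assign each task (largest demand first) to the least-loaded node."""
    remaining = dict(capacity)
    placement = {}
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)
        if remaining[node] < demand:
            raise RuntimeError(f"no node can host {task!r}")
        remaining[node] -= demand
        placement[task] = node
    return placement


if __name__ == "__main__":
    print(place(TASKS, NODES))
```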

    Leveraging Kubernetes in Edge-Native Cable Access Convergence

    Public clouds provide infrastructure services and deployment frameworks for modern cloud-native applications. As the cloud-native paradigm has matured, containerization, orchestration and Kubernetes have become its fundamental building blocks. As the next step for cloud-native, interest in extending it to edge computing is emerging. The primary reasons for this are low-latency use cases and the desire for uniformity across the cloud-edge continuum. Cable access networks, as a specialized type of edge network, are no exception here. As the cable industry transitions to distributed architectures and plans the next steps to virtualize its on-premises network functions, there are opportunities to achieve synergy advantages from the convergence of access technologies and services. Distributed cable networks deploy resource-constrained devices such as Remote PHY Devices (RPDs) and Remote MACPHY Devices (RMDs) deep in the edge networks. These devices can be redesigned to support more than one access technology and to provide computing services for other edge tenants with MEC-like architectures. Both of these cases benefit from virtualization. It is here that cable access convergence and the cloud-native transition to edge-native intersect. However, adopting cloud-native at the edge presents a challenge, since cloud-native container runtimes and native Kubernetes are not optimal solutions in diverse edge environments. This thesis therefore sets out to describe the current landscape of lightweight cloud-native runtimes and tools targeting the edge. While edge-native as a concept is taking its first steps, tools like KubeEdge, K3s and Virtual Kubelet can be seen as the most mature reference projects for edge-compatible solution types. Furthermore, as container runtimes are not yet fully edge-ready, WebAssembly seems like a promising alternative runtime for lightweight, portable and secure Kubernetes-compatible workloads.
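
    KubeEdge, K3s and Virtual Kubelet all expose the standard Kubernetes API, so edge workloads are still described with ordinary Kubernetes objects. As a rough illustration, the sketch below uses the official Kubernetes Python client to pin a small deployment onto nodes labelled as edge devices; the label key, image and names are assumptions made here, not details from the thesis.

```python
from kubernetes import client, config  # official Kubernetes Python client

# Minimal sketch: schedule a workload only onto nodes carrying an assumed
# edge label. K3s and KubeEdge clusters accept the same API objects as
# upstream Kubernetes, so the manifest shape is unchanged at the edge.
config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-worker"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "edge-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-worker"}),
            spec=client.V1PodSpec(
                node_selector={"node-role.example.com/edge": "true"},
                containers=[client.V1Container(name="worker", image="nginx:alpine")],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
print("deployment edge-worker submitted")
```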