
    ElfStore: A Resilient Data Storage Service for Federated Edge and Fog Resources

    Edge and fog computing have grown popular as IoT deployments become widespread. While application composition and scheduling on such resources are being explored, there is a gap: distributed data storage services for the edge and fog layers are lacking, and deployments instead depend solely on the cloud for data persistence. Such a service should reliably store and manage data on fog and edge devices, even in the presence of failures, and offer transparent discovery and access to data for use by edge computing applications. Here, we present ElfStore, a first-of-its-kind edge-local federated store for streams of data blocks. It uses reliable fog devices as a super-peer overlay to monitor the edge resources, offers federated metadata indexing using Bloom filters, locates data within two hops, and maintains approximate global statistics about the reliability and storage capacity of the edges. Edges host the actual data blocks, and we use a unique differential replication scheme to select the edges on which to replicate blocks, to guarantee a minimum reliability and to balance storage utilization. Our experiments on two virtual IoT deployments with 20 and 272 devices show that ElfStore has low overheads, is bound only by the network bandwidth, has scalable performance, and offers tunable resilience.
    Comment: 24 pages, 14 figures. To appear in the IEEE International Conference on Web Services (ICWS), Milan, Italy, 2019.
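
    The federated metadata index described above rests on a standard Bloom-filter membership test. Below is a minimal, illustrative sketch of that idea in Python; the class, key format, and per-edge filters are assumptions for exposition, not ElfStore's actual API or data layout.

```python
# Illustrative Bloom-filter-based metadata index. Names and key format are
# hypothetical, not ElfStore's actual interfaces.
import hashlib


class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k bit positions from a single SHA-256 digest of the key.
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.num_hashes):
            chunk = int.from_bytes(digest[i * 4:(i + 1) * 4], "big")
            yield chunk % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means definitely absent; True may be a false positive.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))


# A fog node keeps one filter per edge device it supervises, so a metadata
# query can be routed to candidate edges without contacting every device.
edge_filters = {"edge-01": BloomFilter(), "edge-02": BloomFilter()}
edge_filters["edge-01"].add("stream:camera-7/block:42")

candidates = [e for e, f in edge_filters.items()
              if f.might_contain("stream:camera-7/block:42")]
print(candidates)  # ['edge-01'] (possibly with false positives)
```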

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and methodology, which helps evaluate their applicability for solving similar problems. The taxonomy also provides a "gap analysis" of this area, through which researchers can identify new issues for investigation. We also hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report

    Addressing the Node Discovery Problem in Fog Computing

    In recent years, the Internet of Things (IoT) has gained a lot of attention for connecting various sensor devices with the cloud in order to enable smart applications such as smart traffic management, smart houses, and smart grids, among others. Due to the growing popularity of the IoT, the number of Internet-connected devices has increased significantly. As a result, these devices generate a huge amount of network traffic, which may lead to bottlenecks and eventually increase the communication latency with the cloud. To cope with such issues, a new computing paradigm has emerged, namely fog computing. Fog computing enables computing that spans from the cloud to the edge of the network in order to distribute the computation on IoT data and to reduce the communication latency. However, fog computing is still in its infancy, and open problems remain. In this paper, we focus on the node discovery problem, i.e., how to add new compute nodes to a fog computing system. Moreover, we discuss how addressing this problem can have a positive impact on various aspects of fog computing, such as fault tolerance, resource heterogeneity, proximity awareness, and scalability. Finally, based on experimental results produced by simulating various distributed compute nodes, we show how addressing the node discovery problem can improve the fault tolerance of a fog computing system.
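
    As a rough illustration of what node discovery involves, the sketch below models a registry in which new compute nodes register and send heartbeats, and nodes that fall silent are pruned; this is a generic pattern assumed for exposition, not the protocol proposed in the paper.

```python
# Illustrative registry-based node discovery with heartbeats (not the
# paper's mechanism). Pruning silent nodes is where the fault-tolerance
# benefit comes from.
import time


class DiscoveryRegistry:
    def __init__(self, heartbeat_timeout=5.0):
        self.timeout = heartbeat_timeout
        self.nodes = {}  # node_id -> timestamp of last heartbeat

    def register(self, node_id):
        self.nodes[node_id] = time.monotonic()

    def heartbeat(self, node_id):
        if node_id in self.nodes:
            self.nodes[node_id] = time.monotonic()

    def alive_nodes(self):
        now = time.monotonic()
        return [n for n, t in self.nodes.items() if now - t <= self.timeout]


registry = DiscoveryRegistry(heartbeat_timeout=2.0)
registry.register("fog-node-a")
registry.register("fog-node-b")
registry.heartbeat("fog-node-a")
print(registry.alive_nodes())  # both nodes, until one stops heartbeating
```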

    MicroFog: A Framework for Scalable Placement of Microservices-based IoT Applications in Federated Fog Environments

    MicroService Architecture (MSA) is rapidly gaining popularity for developing large-scale IoT applications for deployment within distributed and resource-constrained Fog computing environments. As a cloud-native application architecture, the true power of microservices comes from their loosely coupled, independently deployable and scalable nature, enabling distributed placement and dynamic composition across federated Fog and Cloud clusters. Thus, it is necessary to develop novel microservice placement algorithms that utilise these microservice characteristics to improve the performance of the applications. However, existing Fog computing frameworks lack support for integrating such placement policies due to their shortcomings in multiple areas, including MSA application placement and deployment across multi-fog multi-cloud environments, dynamic microservice composition across multiple distributed clusters, scalability of the framework, and support for deploying heterogeneous microservice applications. To this end, we design and implement MicroFog, a Fog computing framework providing a scalable, easy-to-configure control engine that executes placement algorithms and deploys applications across federated Fog environments. Furthermore, MicroFog provides a sufficient abstraction over container orchestration and dynamic microservice composition. The framework is evaluated using multiple use cases. The results demonstrate that MicroFog is a scalable, extensible and easy-to-configure framework that can integrate and evaluate novel placement policies for deploying microservice-based applications within multi-fog multi-cloud environments. We integrate multiple microservice placement policies to demonstrate MicroFog's ability to support horizontally scaled placement, reducing the application service response time by up to 54%.
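
    To make the notion of a placement policy concrete, the sketch below shows a naive first-fit policy that prefers Fog clusters over Cloud clusters; the data structures and heuristic are illustrative assumptions, not MicroFog's actual interfaces or the policies evaluated in the paper.

```python
# Illustrative first-fit placement of microservices across federated fog and
# cloud clusters, trying fog clusters first to keep latency low.
from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    tier: str          # "fog" or "cloud"
    free_cpu: float    # available vCPUs
    free_mem: float    # available memory in GB


@dataclass
class Microservice:
    name: str
    cpu: float
    mem: float


def place(services, clusters):
    """Place each microservice on the first cluster with enough capacity,
    trying fog clusters before cloud clusters."""
    ordered = sorted(clusters, key=lambda c: 0 if c.tier == "fog" else 1)
    placement = {}
    for svc in services:
        for cluster in ordered:
            if cluster.free_cpu >= svc.cpu and cluster.free_mem >= svc.mem:
                cluster.free_cpu -= svc.cpu
                cluster.free_mem -= svc.mem
                placement[svc.name] = cluster.name
                break
        else:
            placement[svc.name] = None  # no cluster has capacity
    return placement


clusters = [Cluster("fog-west", "fog", 4, 8), Cluster("cloud-1", "cloud", 64, 256)]
services = [Microservice("gateway", 1, 2), Microservice("analytics", 8, 16)]
print(place(services, clusters))  # gateway -> fog-west, analytics -> cloud-1
```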

    Development and Performance Evaluation of a Connected Vehicle Application Development Platform (CVDeP)

    Connected vehicle (CV) application developers need a development platform to build, test and debug real-world CV applications, such as safety, mobility, and environmental applications, in edge-centric cyber-physical systems. Our objective is to develop and evaluate a scalable and secure CV application development platform (CVDeP) that enables application developers to build, test and debug CV applications in real time, while ensuring that the applications meet the functional requirements imposed by their specific application types. We evaluated the efficacy of CVDeP using two CV applications (one safety and one mobility application) and validated them through a field experiment at the Clemson University Connected Vehicle Testbed (CU-CVT). Our analyses demonstrate the efficacy of CVDeP, which satisfies the functional requirements (i.e., latency and throughput) of a CV application while maintaining the scalability and security of the platform and applications.
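
    As a purely illustrative example of the kind of functional-requirement check implied above, the snippet below compares measured latency and throughput against hypothetical thresholds; the numbers and function are assumptions and not part of CVDeP.

```python
# Hypothetical functional-requirement check for a CV application:
# latency must stay below a bound, throughput above a floor.
def meets_requirements(latency_ms, throughput_msgs_per_s,
                       max_latency_ms=100.0, min_throughput=50.0):
    """Return True if measured latency and throughput satisfy the
    (hypothetical) application-specific requirements."""
    return latency_ms <= max_latency_ms and throughput_msgs_per_s >= min_throughput


print(meets_requirements(latency_ms=42.0, throughput_msgs_per_s=120.0))  # True
```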

    Dynamic data placement and discovery in wide-area networks

    The workloads of online services and applications such as social networks, sensor data platforms and web search engines have become increasingly global and dynamic, posing new challenges for providing users with low latency access to data. To achieve this, these services typically leverage a multi-site wide-area networked infrastructure. Data access latency in such an infrastructure depends on the network paths between users and data, which is determined by the data placement and discovery strategies. Current strategies are static: they offer low latency upon deployment but perform worse under dynamic workloads. We propose dynamic data placement and discovery strategies for wide-area networked infrastructures, which adapt to the data access workload. We achieve this with data activity correlation (DAC), an application-agnostic approach for determining the correlations between data items based on access pattern similarities. By dynamically clustering data according to DAC, network traffic within clusters is kept local. We utilise DAC as a key component in reducing access latencies for two application scenarios, emphasising different aspects of the problem. The first scenario assumes the fixed placement of data at sites, and thus focusses on data discovery. This is the case for a global sensor discovery platform, which aims to provide low latency discovery of sensor metadata. We present a self-organising hierarchical infrastructure consisting of multiple DAC clusters, maintained with an online and distributed split-and-merge algorithm. This reduces the number of sites visited, and thus latency, during discovery for a variety of workloads. The second scenario focusses on data placement. This is the case for global online services that leverage a multi-data centre deployment to provide users with low latency access to data. We present a geo-dynamic partitioning middleware, which maintains DAC clusters with an online elastic partition algorithm. It supports the geo-aware placement of partitions across data centres according to the workload. This provides globally distributed users with low latency access to data for static and dynamic workloads.
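
    The core of DAC is scoring pairs of data items by the similarity of their access patterns. The sketch below uses Jaccard similarity over the sets of clients observed accessing each item; this is an assumed, simplified measure for illustration, and the actual correlation metric and clustering algorithm in this work may differ.

```python
# Illustrative access-pattern correlation: items accessed by similar sets of
# clients are candidates for the same cluster, keeping their traffic local.
from itertools import combinations


def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


# item -> clients (or sites) observed accessing it
access_log = {
    "item1": ["c1", "c2", "c3"],
    "item2": ["c2", "c3"],
    "item3": ["c7", "c8"],
}

threshold = 0.5
correlated_pairs = [
    (x, y) for x, y in combinations(access_log, 2)
    if jaccard(access_log[x], access_log[y]) >= threshold
]
print(correlated_pairs)  # [('item1', 'item2')]
```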

    SoK: Distributed Computing in ICN

    Information-Centric Networking (ICN), with its data-oriented operation and generally more powerful forwarding layer, provides an attractive platform for distributed computing. This paper provides a systematic overview and categorization of different distributed computing approaches in ICN, encompassing fundamental design principles, frameworks and orchestration, protocols, enablers, and applications. We discuss current pain points in legacy distributed computing, attractive ICN features, and how different systems use them. This paper also provides a discussion of potential future work for distributed computing in ICN.
    Comment: 10 pages, 3 figures, 1 table. Accepted by ACM ICN 202