
    Microservices-based IoT Applications Scheduling in Edge and Fog Computing: A Taxonomy and Future Directions

    Edge and Fog computing paradigms utilise distributed, heterogeneous and resource-constrained devices at the edge of the network for efficient deployment of latency-critical and bandwidth-hungry IoT application services. Moreover, MicroService Architecture (MSA) is increasingly adopted to keep up with the rapid development and deployment needs of the fast-evolving IoT applications. Due to the fine-grained modularity of the microservices along with their independently deployable and scalable nature, MSA exhibits great potential in harnessing both Fog and Cloud resources to meet diverse QoS requirements of the IoT application services, thus giving rise to novel paradigms like Osmotic computing. However, efficient and scalable scheduling algorithms are required to utilise the said characteristics of the MSA while overcoming novel challenges introduced by the architecture. To this end, we present a comprehensive taxonomy of recent literature on microservices-based IoT applications scheduling in Edge and Fog computing environments. Furthermore, we organise multiple taxonomies to capture the main aspects of the scheduling problem, analyse and classify related works, identify research gaps within each category, and discuss future research directions. Comment: 35 pages, 10 figures, submitted to ACM Computing Surveys
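    To make the scheduling problem surveyed above concrete, the sketch below shows one possible latency-aware greedy placement of microservices onto fog and cloud nodes. It is only an illustration under assumed data structures (Node, Microservice) and an assumed scoring rule, not an algorithm taken from the surveyed literature.

```python
# Illustrative sketch only: greedy, latency-aware placement of microservices
# onto fog/cloud nodes. All classes, fields and the selection rule are
# hypothetical, not drawn from any surveyed scheduling algorithm.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float        # available CPU (cores)
    latency_ms: float      # latency from the IoT data source

@dataclass
class Microservice:
    name: str
    cpu: float             # requested CPU (cores)
    max_latency_ms: float  # QoS bound

def place(services, nodes):
    """Assign each microservice to the lowest-latency node that satisfies
    its CPU demand and latency bound; record None if nothing fits."""
    plan = {}
    for s in sorted(services, key=lambda s: s.max_latency_ms):
        candidates = [n for n in nodes
                      if n.free_cpu >= s.cpu and n.latency_ms <= s.max_latency_ms]
        chosen = min(candidates, key=lambda n: n.latency_ms, default=None)
        if chosen:
            chosen.free_cpu -= s.cpu
        plan[s.name] = chosen.name if chosen else None
    return plan

if __name__ == "__main__":
    nodes = [Node("fog-1", 2.0, 5.0), Node("cloud-1", 32.0, 60.0)]
    services = [Microservice("sensor-ingest", 0.5, 10.0),
                Microservice("analytics", 4.0, 200.0)]
    # Expected: {'sensor-ingest': 'fog-1', 'analytics': 'cloud-1'}
    print(place(services, nodes))
```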

    Developing applications in large scale, dynamic fog computing: A case study

    In recent years, fog computing has emerged as a new distributed system model for a large class of applications that are data-intensive or delay-sensitive. By exploiting widely distributed computing infrastructure that is located closer to the network edge, communication cost and service response time can be significantly reduced. However, developing this class of applications is not straightforward and requires addressing three key challenges, i.e., supporting the dynamic nature of the edge network, managing the context-dependent characteristics of application logic, and dealing with the large scale of the system. In this paper, we present a case study in building fog computing applications using our open source platform Distributed Node-RED (DNR). In particular, we show how applications can be decomposed and deployed to a geographically distributed infrastructure using DNR, and how existing software components can be adapted and reused to participate in fog applications. We present a lab-based implementation of a fog application built using DNR that addresses the first two of the issues highlighted earlier. To validate that our approach also deals with large scale, we augment our live trial with a large-scale simulation of the application model, conducted in OMNeT++, which shows the scalability of the model and how it supports the dynamic nature of fog applications. © 2019 John Wiley & Sons, Ltd.
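    As a rough illustration of the decomposition the paper describes, the following sketch partitions a Node-RED-style flow between edge and cloud tiers using a per-node placement constraint. The flow structure and field names are invented here and are not taken from DNR itself.

```python
# Minimal sketch (not the DNR implementation): split a dataflow's nodes across
# tiers according to a per-node placement constraint, the way a Node-RED-style
# flow might be decomposed between edge and cloud. Field names are hypothetical.
flow = [
    {"id": "read-sensor", "wires": ["filter"],    "constraint": "edge"},
    {"id": "filter",      "wires": ["aggregate"], "constraint": "edge"},
    {"id": "aggregate",   "wires": ["dashboard"], "constraint": "cloud"},
    {"id": "dashboard",   "wires": [],            "constraint": "cloud"},
]

def partition(flow):
    """Group flow nodes by target tier; links crossing a tier boundary would be
    replaced by network communication (e.g. pub/sub) at deployment time."""
    placement = {}
    for node in flow:
        placement.setdefault(node["constraint"], []).append(node["id"])
    cross_links = [(n["id"], w) for n in flow for w in n["wires"]
                   if next(m for m in flow if m["id"] == w)["constraint"] != n["constraint"]]
    return placement, cross_links

placement, cross_links = partition(flow)
print(placement)    # {'edge': ['read-sensor', 'filter'], 'cloud': ['aggregate', 'dashboard']}
print(cross_links)  # [('filter', 'aggregate')] -- the edge-to-cloud hop
```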

    Multi-Criteria Decision-Making Approach for Container-based Cloud Applications: The SWITCH and ENTICE Workbenches

    Many emerging smart applications rely on the Internet of Things (IoT) to provide solutions to time-critical problems. When building such applications, a software engineer must address multiple Non-Functional Requirements (NFRs), including requirements for fast response time, low communication latency, high throughput, high energy efficiency, low operational cost and similar. Existing modern container-based software engineering approaches promise to improve the software lifecycle; however, they fall short of tools and mechanisms for NFRs management and optimisation. Our work addresses this problem with a new decision-making approach based on Pareto Multi-Criteria optimisation. By using different instance configurations on various geo-locations, we demonstrate the suitability of our method, which narrows the search space to only optimal instances for the deployment of the containerised microservice. This solution is included in two advanced software engineering environments, the SWITCH workbench, which includes an Interactive Development Environment (IDE), and the ENTICE Virtual Machine and container images portal. The developed approach is particularly useful when building, deploying and orchestrating IoT applications across multiple computing tiers, from Edge-Cloudlet to Fog-Cloud data centres.
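    The Pareto-based narrowing of the search space can be pictured with a short sketch: candidate instance configurations that are dominated on every criterion are discarded, leaving only the Pareto-optimal ones. The candidate names, criteria and numbers below are made up for illustration and are not from the SWITCH or ENTICE tooling.

```python
# Illustrative Pareto filtering over candidate instance configurations, in the
# spirit of (but not copied from) the described decision-making approach.
# Every criterion is to be minimised; all values here are invented.
candidates = {
    "small-eu": {"latency_ms": 40, "cost_per_hour": 0.05, "energy_w": 30},
    "large-eu": {"latency_ms": 15, "cost_per_hour": 0.20, "energy_w": 90},
    "small-us": {"latency_ms": 95, "cost_per_hour": 0.04, "energy_w": 30},
    "large-us": {"latency_ms": 90, "cost_per_hour": 0.25, "energy_w": 95},
}

def dominates(a, b):
    """a dominates b if it is no worse on every criterion and strictly better on at least one."""
    return all(a[k] <= b[k] for k in a) and any(a[k] < b[k] for k in a)

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return {name: c for name, c in candidates.items()
            if not any(dominates(other, c)
                       for o_name, other in candidates.items() if o_name != name)}

# 'large-us' is dominated (e.g. by 'small-eu') and drops out of the search space.
print(sorted(pareto_front(candidates)))  # ['large-eu', 'small-eu', 'small-us']
```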

    Modeling and simulation of data-driven applications in SDN-aware environments

    PhD Thesis. The popularity of Software-Defined Networking (SDN) is rising as it promises to offer a window of opportunity and new features in terms of network performance, configuration, and management. As such, SDN is exploited by several emerging applications and environments, such as cloud computing, edge computing, IoT, and data-driven applications. Although SDN has demonstrated significant improvements in industry, little research has explored the adoption of SDN for cross-layer optimization in different SDN-aware environments. Each application and computing environment requires different functionalities and Quality of Service (QoS) requirements. For example, a typical MapReduce application would require data transmission at three different times, while the data transmission of stream-based applications would be unknown due to uncertainty about the number of required tasks and dependencies among stream tasks. As such, the deployment of SDN is not identical across applications, and different deployment strategies and algorithms are required to meet different QoS requirements (e.g., high bandwidth, deadline). Further, each application and environment has a unique architecture, which imposes a different form of complexity in terms of computing, storage, and network. Due to such complexities, finding optimal solutions for SDN-aware applications and environments becomes very challenging. Therefore, this thesis presents multilateral research towards optimization, modeling, and simulation of cross-layer optimization of SDN-aware applications and environments. Several tools and algorithms have been proposed, implemented, and evaluated, considering various environments and applications [1–4]. The main contributions of this thesis are as follows: • Proposing and modeling a new holistic framework that simulates MapReduce applications, big data management systems (BDMS), and SDN-aware networks in cloud-based environments. Theoretical and mathematical models of MapReduce in SDN-aware cloud datacenters are also proposed. Funding: the government of Saudi Arabia, represented by Saudi Electronic University (SEU) and the Royal Embassy of Saudi Arabia Cultural Bureau.
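    The remark that MapReduce moves data at three distinct times suggests a simple back-of-the-envelope model: with an SDN-allocated bandwidth, each phase's transfer time is its data volume divided by that bandwidth, and the total network time plus compute time must fit the deadline. The sketch below encodes this assumed model; it is not the thesis's formal model.

```python
# Assumed, simplified model (not the thesis's): estimate whether a MapReduce job
# meets its deadline given the bandwidth the SDN controller allocates to it.
def transfer_time_s(volume_gb, bandwidth_gbps):
    """Transfer time = data volume / allocated bandwidth (GB * 8 = Gb; Gb / Gbps = s)."""
    return (volume_gb * 8) / bandwidth_gbps

def job_meets_deadline(phases_gb, bandwidth_gbps, compute_time_s, deadline_s):
    """phases_gb lists the data moved at the three transmission times:
    input load, shuffle, and output write."""
    network_time = sum(transfer_time_s(v, bandwidth_gbps) for v in phases_gb)
    return network_time + compute_time_s <= deadline_s

# Example: 50 GB input, 20 GB shuffle, 5 GB output over a 10 Gbps allocation
# gives 60 s of network time; with 120 s of compute it fits a 300 s deadline.
print(job_meets_deadline([50, 20, 5], bandwidth_gbps=10, compute_time_s=120, deadline_s=300))
```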

    Cloud-Edge Orchestration for the Internet-of-Things: Architecture and AI-Powered Data Processing

    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. The Internet-of-Things (IoT) has deeply penetrated a wide range of important and critical sectors, including smart city, water, transportation, manufacturing and smart factory. Massive data are being acquired from a fast-growing number of IoT devices. Efficient data processing is a necessity to meet the diversified and stringent requirements of many emerging IoT applications. Due to their constrained computation and storage resources, IoT devices have resorted to powerful cloud computing to process their data. However, centralised and remote cloud computing may introduce unacceptable communication delay, since its physical location is far away from IoT devices. Edge cloud has been introduced to overcome this issue by moving the cloud into closer proximity to IoT devices. The orchestration and cooperation between the cloud and the edge provide a crucial computing architecture for IoT applications. Artificial intelligence (AI) is a powerful tool to enable intelligent orchestration in this architecture. This paper first introduces such a computing architecture from the perspective of IoT applications. It then investigates the state-of-the-art proposals on AI-powered cloud-edge orchestration for the IoT. Finally, a list of potential research challenges and open issues is provided and discussed, which can provide useful resources for carrying out future research in this area. Funding: Engineering and Physical Sciences Research Council (EPSRC).

    Monitoring in Hybrid Cloud-Edge Environments

    The increasing number of mobile and IoT (Internet of Things) devices accessing cloud services contributes to a surge of requests towards the Cloud and, consequently, higher latencies. This is aggravated by the possible congestion of the communication networks connecting the end devices and remote cloud datacenters, due to the large data volume generated at the Edge (e.g. in the domains of smart cities, smart cars, etc.). One solution to this problem is the creation of hybrid Cloud/Edge execution platforms composed of computational nodes located in the periphery of the system, near data producers and consumers, as a way to complement the cloud resources. These edge nodes offer computation and data storage resources to accommodate local services in order to ensure rapid responses to clients (enhancing the perceived quality of service) and to filter data, reducing the traffic volume towards the Cloud. Usually these nodes (e.g. ISP access points and on-premises servers) are heterogeneous, geographically distributed, and resource-restricted (including in communication networks), which increases the complexity of managing them. At the application level, the microservices paradigm, represented by applications composed of small, loosely coupled services, offers an adequate and flexible solution to design applications that may exploit the limited computational resources in the Edge. Nevertheless, the inherently difficult management of microservices within such a complex infrastructure demands an agile and lightweight monitoring system that takes into account the Edge's limitations, which goes beyond traditional monitoring solutions in the Cloud. Monitoring in these new domains is not a simple process, since it requires supporting the elasticity of the monitored system and the dynamic deployment of services and, moreover, doing so without overloading the infrastructure's resources with its own computational requirements and generated data. Towards this goal, this dissertation presents a hybrid monitoring architecture where the heavier (resource-wise) components reside in the Cloud while the lighter (computationally less demanding) components reside in the Edge. The architecture provides relevant monitoring functionalities such as metrics acquisition, analysis, and mechanisms for real-time alerting. The objective is the efficient use of computational resources in the infrastructure while guaranteeing an agile delivery of monitoring data where and when it is needed.
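    A minimal sketch of the hybrid split described above, under assumed components rather than the dissertation's actual ones: a lightweight agent at the Edge samples metrics, raises threshold alerts locally in real time, and forwards only aggregated summaries towards the heavier analysis components in the Cloud.

```python
# Hypothetical edge-side monitoring agent: sample metrics, alert locally on a
# threshold breach, and push only aggregated data towards the Cloud. The metric
# source, thresholds and forwarding step are all placeholders.
import time, statistics

THRESHOLDS = {"cpu_percent": 90.0}   # assumed real-time alerting rule

def sample_metrics():
    """Placeholder for real metric acquisition (e.g. reading /proc or a node agent)."""
    return {"cpu_percent": 42.0}

def edge_agent(window=5, period_s=1.0, cycles=5):
    buffer = []
    for _ in range(cycles):
        m = sample_metrics()
        buffer.append(m["cpu_percent"])
        for metric, limit in THRESHOLDS.items():
            if m[metric] > limit:
                print(f"ALERT: {metric}={m[metric]} exceeds {limit}")  # raised locally, in real time
        if len(buffer) >= window:
            summary = {"cpu_percent_avg": statistics.mean(buffer)}
            print("forward to cloud:", summary)  # heavier analysis happens cloud-side
            buffer.clear()
        time.sleep(period_s)

if __name__ == "__main__":
    edge_agent()
```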

    Orchestration from the cloud to the edge

    The effective management of complex and heterogeneous computing environments is one of the biggest challenges that service and infrastructure providers are facing in the Cloud-to-Thing continuum era. Advanced orchestration systems are required to support the resource management of large-scale cloud data centres integrated with the big data generated by IoT devices. The orchestration system should be aware of all available resources and their current status in order to perform dynamic allocations and enable rapid deployment of applications. This chapter will review the state of the art with regard to orchestration along the Cloud-to-Thing continuum, with a specific emphasis on container-based orchestration (e.g. Docker Swarm and Kubernetes) and fog-specific orchestration architectures (e.g. SORTS, SOAFI, ETSI ISG MEC, and CONCERT).
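    The resource-aware allocation step that such orchestrators perform can be sketched as a filter-then-score decision over nodes, loosely analogous to (but far simpler than) the scheduling in Kubernetes or Docker Swarm. The data structures and the spreading policy below are invented for illustration.

```python
# Hedged sketch of a resource-aware allocation step: filter nodes that can host
# a container, then score the survivors. Not Kubernetes' or Swarm's actual logic.
def schedule(container, nodes):
    """container and nodes are plain dicts with CPU (cores) and memory (GiB) figures."""
    feasible = [n for n in nodes
                if n["free_cpu"] >= container["cpu"] and n["free_mem"] >= container["mem"]]
    if not feasible:
        return None  # remains pending until resources free up

    def score(n):
        # Simple spreading policy: prefer the node with the most headroom left
        # after placement (minimum of remaining CPU and memory).
        return min(n["free_cpu"] - container["cpu"], n["free_mem"] - container["mem"])

    return max(feasible, key=score)["name"]

nodes = [
    {"name": "edge-a",  "free_cpu": 1.0, "free_mem": 2.0},
    {"name": "cloud-b", "free_cpu": 8.0, "free_mem": 32.0},
]
print(schedule({"cpu": 0.5, "mem": 1.0}, nodes))  # 'cloud-b': most headroom remains
```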

    International Conference on Science, Technology, Engineering and Economy
