
    Towards a proper service placement in combined Fog-to-Cloud (F2C) architectures

    The Internet of Things (IoT) has empowered the development of a plethora of new services, fueled by the deployment of devices located at the edge that provide multiple capabilities in terms of connectivity as well as data collection and processing. With the inception of the Fog Computing paradigm, aimed at diminishing the distance between edge devices and the IT premises running IoT services, the perceived service latency and even the security risks can be reduced, while simultaneously optimizing network usage. When put together, Fog and Cloud computing (recently coined as fog-to-cloud, F2C) can maximize the advantages of future computer systems, with the whole greater than the sum of the individual parts. However, the specifics of the cloud and fog resource models require new strategies to manage the mapping of novel IoT services onto suitable resources. Although a few proposals for service offloading between fog and cloud systems are slowly gaining momentum in the research community, many issues in service placement, both when a service is first admitted for execution and when it is offloaded from Cloud to Fog and vice versa, remain new and largely unsolved. In this paper, we provide some insights into the relevant features of service placement in F2C scenarios, highlighting the main challenges that current systems face towards the deployment of next-generation IoT services.
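    As a concrete illustration of the placement problem described above, the sketch below implements a simple latency-aware, fog-first heuristic: latency-critical services are mapped to the nearest fog node with spare capacity, and everything else falls back to the cloud. It is not the paper's algorithm; the Node/Service fields and the greedy rule are assumptions made for illustration.

```python
# A minimal latency-aware, fog-first placement sketch (illustrative only; the
# Node/Service fields and the greedy fog-first rule are assumptions, not the
# paper's algorithm).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float      # expected access latency from the edge device
    capacity: float        # abstract resource units still available

@dataclass
class Service:
    name: str
    demand: float          # abstract resource units required
    max_latency_ms: float  # latency budget of the service

def place(services, fog_nodes, cloud):
    """Latency-critical services go to the closest fog node with spare
    capacity; everything else (or overflow) goes to the cloud."""
    placement = {}
    for svc in sorted(services, key=lambda s: s.max_latency_ms):  # tightest budgets first
        candidates = [n for n in fog_nodes
                      if n.latency_ms <= svc.max_latency_ms and n.capacity >= svc.demand]
        target = min(candidates, key=lambda n: n.latency_ms) if candidates else cloud
        target.capacity -= svc.demand
        placement[svc.name] = target.name
    return placement

fog = [Node("fog-1", latency_ms=5, capacity=4.0), Node("fog-2", latency_ms=8, capacity=2.0)]
cloud = Node("cloud", latency_ms=60, capacity=float("inf"))
services = [Service("video-analytics", demand=3.0, max_latency_ms=10),
            Service("batch-report", demand=1.0, max_latency_ms=500)]
print(place(services, fog, cloud))   # {'video-analytics': 'fog-1', 'batch-report': 'fog-1'}
```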

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, a design should allow efficient adaptation to changing environments and low-cost implementation that scales to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under the limited feedback that motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone towards systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. (Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks.)
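    To make the "bandit approaches under limited feedback" idea concrete, the sketch below shows an epsilon-greedy controller that learns where to run tasks (locally, in the fog, or in the cloud) from observed latencies only. The arm names, the reward definition, and the epsilon value are illustrative assumptions, not part of the paper's framework.

```python
# An epsilon-greedy bandit that learns an offloading site from observed latency
# only (arm names, epsilon, and the reward definition are illustrative).
import random

class EpsilonGreedyOffloader:
    def __init__(self, arms=("local", "fog", "cloud"), epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}     # running mean reward per arm

    def choose(self):
        if random.random() < self.epsilon:                # explore
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)      # exploit the best estimate

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]  # incremental mean

agent = EpsilonGreedyOffloader()
for latency_ms in (120.0, 45.0, 200.0):      # feedback from executed tasks
    arm = agent.choose()
    agent.update(arm, reward=-latency_ms)    # lower latency -> higher reward
```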

    Game theory for cooperation in multi-access edge computing

    Cooperative strategies amongst network players can improve network performance and spectrum utilization in future networking environments. Game theory is well suited to these emerging scenarios, since it models highly complex interactions among distributed decision makers. It also identifies the most convenient management policies for the diverse players (e.g., content providers, cloud providers, edge providers, brokers, network providers, or users). These management policies optimize the performance of the overall network infrastructure with a fair utilization of its resources. This chapter discusses relevant theoretical models that enable cooperation amongst the players in distinct ways, namely through pricing or reputation. In addition, the authors highlight open problems, such as the lack of proper models for dynamic and incomplete-information scenarios. These upcoming scenarios are associated with computing and storage at the network edge, as well as with the deployment of large-scale IoT systems. The chapter concludes by discussing a business model for future networks.
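    A minimal sketch of one of the cooperation mechanisms mentioned above, reputation: each player cooperates only with peers whose reputation exceeds a trust threshold, and reputations are updated from observed behaviour. The threshold, the update rule, and the player roles are illustrative assumptions rather than a model taken from the chapter.

```python
# A reputation-driven cooperation rule between two players (e.g., an edge
# provider and a broker); the threshold and update rate are illustrative.
COOPERATE, DEFECT = "C", "D"

def decide(peer_reputation, threshold=0.5):
    """Cooperate only with peers whose reputation clears a trust threshold."""
    return COOPERATE if peer_reputation >= threshold else DEFECT

def update_reputation(reputation, action, alpha=0.2):
    """Exponential moving average: cooperation raises reputation, defection lowers it."""
    observed = 1.0 if action == COOPERATE else 0.0
    return (1 - alpha) * reputation + alpha * observed

rep_a, rep_b = 0.6, 0.4
for round_no in range(5):
    act_a, act_b = decide(rep_b), decide(rep_a)   # each reacts to the other's reputation
    rep_a = update_reputation(rep_a, act_a)
    rep_b = update_reputation(rep_b, act_b)
    print(round_no, act_a, act_b, round(rep_a, 2), round(rep_b, 2))
```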

    A Survey on Intrusion Detection Systems for Fog and Cloud Computing

    The rapid advancement of internet technologies has dramatically increased the number of connected devices. This has created a huge attack surface that requires effective and practical countermeasures to protect network infrastructures from the harm that cyber-attacks can cause. Hence, there is a pressing need to delineate the boundaries of personal information across cloud and fog computing globally, and to adopt specific information security policies and regulations. The goal of a security policy and framework for cloud and fog computing is to protect end-users and their information, reduce task-based operations, aid compliance, and create standards for expected user actions, all of which are based on established rules for cloud computing. Moreover, intrusion detection systems are widely adopted solutions that monitor and analyze network traffic and detect anomalies, helping to identify ongoing adversarial activities, trigger alerts, and automatically block traffic from hostile sources. This survey analyzes the factors, including the technologies and techniques applied, that enable the successful deployment of security policies on fog and cloud computing. The paper focuses on Software-as-a-Service (SaaS) and intrusion detection, which provide an effective and resilient system structure for users and organizations. Our survey aims to provide a framework for a cloud and fog computing security policy, addressing the required security tools, policies, and services, particularly for the cloud and fog environments that organizations adopt. While developing the essential linkage between requirements, legal aspects, and the analysis techniques and systems used to detect intrusions, we recommend strategies for cloud and fog computing security policies. The paper develops structured guidelines for how organizations can adopt and audit the security of their systems, since security is an essential component of those systems, and presents a current state-of-the-art review of intrusion detection systems and their principles. Functionalities and techniques for developing these defense mechanisms are considered, along with concrete products used in operational systems. Finally, we discuss evaluation criteria and open challenges in this area.
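    To ground the anomaly-detection side of the systems surveyed here, the sketch below flags a traffic source whose request rate deviates sharply from its own recent history using a simple z-score rule. The field names, window length, and threshold are illustrative assumptions, not the behaviour of any specific IDS product.

```python
# A z-score anomaly rule over per-source request rates (illustrative; field
# names, the 3-sigma threshold, and the window length are assumptions).
from collections import defaultdict
from statistics import mean, stdev

history = defaultdict(list)   # source IP -> recent requests-per-minute samples

def observe(src_ip, requests_per_min, z_threshold=3.0, window=30):
    """Return True when the new rate deviates strongly from the source's history."""
    samples = history[src_ip]
    alert = False
    if len(samples) >= 5:
        mu, sigma = mean(samples), stdev(samples)
        if sigma > 0 and (requests_per_min - mu) / sigma > z_threshold:
            alert = True          # candidate for an alert or for blocking the source
    samples.append(requests_per_min)
    del samples[:-window]         # keep only a sliding window of recent samples
    return alert

for rate in (40, 42, 41, 39, 40, 43, 400):
    if observe("10.0.0.7", rate):
        print("anomalous rate from 10.0.0.7:", rate)   # fires for 400
```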

    Scheduling the Execution of Tasks at the Edge

    The Internet of Things provides a huge infrastructure in which numerous devices produce, collect, and process data. These data are the basis for offering analytics that support novel applications. Processing huge volumes of data is demanding, thus the power of the Cloud is already utilized. However, latency, privacy, and the drawbacks of this centralized approach became the motivation for the emergence of edge computing. In edge computing, data can be processed at the edge of the network, at the IoT nodes themselves, to deliver immediate results. Due to the limited resources of IoT nodes, it is not possible to execute a high number of demanding tasks locally to support applications. In this paper, we propose a scheme for selecting the most significant tasks to be executed at the edge while the remaining ones are transferred to the Cloud. Our distributed scheme focuses on mobile IoT nodes and provides a decision-making mechanism and an optimization module for determining the tasks that will be executed locally. We take into consideration multiple characteristics of tasks and optimize the final decision. With our mechanism, IoT nodes can adapt to possibly unknown environments, evolving their decision making. We evaluate the proposed scheme through a large number of simulations and report numerical results.
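    The local-versus-Cloud split described above can be sketched as a greedy, knapsack-style selection: keep the tasks with the best significance-per-cost within the node's resource budget and offload the rest. The significance scores and capacity model below are illustrative assumptions, not the paper's optimization module.

```python
# Greedy knapsack-style split of tasks between local execution and the Cloud
# (the significance scores and capacity model are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    significance: float   # higher = more valuable to run on the IoT node
    cost: float           # resource units needed locally

def split_tasks(tasks, local_capacity):
    """Keep the best significance-per-cost tasks locally; offload the rest."""
    local, offloaded, used = [], [], 0.0
    for t in sorted(tasks, key=lambda t: t.significance / t.cost, reverse=True):
        if used + t.cost <= local_capacity:
            local.append(t.name)
            used += t.cost
        else:
            offloaded.append(t.name)
    return local, offloaded

tasks = [Task("alert-detection", significance=0.9, cost=2.0),
         Task("trend-analytics", significance=0.4, cost=3.0),
         Task("log-compression", significance=0.2, cost=1.0)]
print(split_tasks(tasks, local_capacity=3.0))
# (['alert-detection', 'log-compression'], ['trend-analytics'])
```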

    A Game-Theoretic Approach to Strategic Resource Allocation Mechanisms in Edge and Fog Computing

    With the rapid growth of the Internet of Things (IoT), cloud-centric application management raises questions related to quality of service for real-time applications. Fog and edge computing (FEC) complement the cloud by filling the gap between cloud and IoT. Managing multiple resources across distributed and administratively separate FEC nodes is a key challenge in ensuring the quality of the end-user's experience. To improve resource utilisation and system performance, researchers have proposed many fair allocation mechanisms for resource management. Dominant Resource Fairness (DRF), a resource allocation policy for multiple resource types, meets most of the required fair allocation characteristics. However, DRF is suited to centralised resource allocation and does not consider the effects (or feedbacks) of large-scale distributed environments such as multi-controller software-defined networking (SDN). Nash bargaining from micro-economic theory and competitive equilibrium from equal incomes (CEEI) are well suited to solving dynamic optimisation problems, proposing to 'proportionately' share resources among distributed participants. Although CEEI's decentralised policy guarantees load balancing for performance isolation, it is not fault-proof for computation offloading. This thesis aims to propose a hybrid and fair allocation mechanism for the rejuvenation of decentralised SDN controller deployment. We apply multi-agent reinforcement learning (MARL) with robustness against adversarial controllers to enable efficient priority scheduling for FEC. Motivated by software cybernetics and homeostasis, weighted DRF is generalised by applying the principles of feedback (positive and/or negative network effects) in reverse game theory (GT) to design hybrid scheduling schemes for joint multi-resource and multi-task offloading/forwarding in FEC environments. In the first study, monotonic scheduling for joint offloading at the federated edge is addressed by proposing a truthful (algorithmic) mechanism to neutralise harmful negative and positive distributive bargaining externalities, respectively. The IP-DRF scheme is a MARL approach applying a partition form game (PFG) to guarantee second-best Pareto optimality (SBPO) in the allocation of multiple resources from a deterministic policy in both population and resource non-monotonicity settings. In the second study, we propose the DFog-DRF scheme to address truthful fog scheduling with bottleneck fairness in fault-probable wireless hierarchical networks by applying constrained coalition formation (CCF) games to implement MARL. The multi-objective optimisation problem for fog throughput maximisation is solved via a constraint dimensionality reduction methodology, using fairness constraints for efficient gateway and low-level controller placement. For evaluation, we develop an agent-based framework to implement fair allocation policies in distributed data centre environments. In empirical results, the deterministic policy of the IP-DRF scheme provides SBPO and reduces the average execution and turnaround time by 19% and 11.52%, respectively, compared to the Nash bargaining or CEEI deterministic policy for 57,445 cloudlets in population non-monotonic settings. The processing cost of tasks shows significant improvement (6.89% and 9.03% for fixed and variable pricing) in the resource non-monotonic setting, using 38,000 cloudlets. The DFog-DRF scheme, when benchmarked against the asset-fair (MIP) policy, shows superior performance (less than 1% in time complexity) for up to 30 FEC nodes. Furthermore, empirical results using 210 mobiles and 420 applications prove the efficacy of our hybrid scheduling scheme for hierarchical clustering, considering latency and network usage for throughput maximisation.
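    Since the schemes above generalise Dominant Resource Fairness, a minimal sketch of the classic (unweighted, centralised) DRF progressive-filling loop is given below as a point of reference. The two-resource demands follow the well-known 9-CPU/18-GB example from the DRF literature; the code is illustrative and is not the thesis's IP-DRF or DFog-DRF scheme.

```python
# Classic (unweighted) Dominant Resource Fairness: repeatedly grant one task to
# the user with the smallest dominant share. Illustrative baseline only.

def drf(capacity, demands, max_rounds=10000):
    resources = list(capacity)                              # remaining capacity per resource
    allocated = {u: [0.0] * len(capacity) for u in demands}
    dominant = {u: 0.0 for u in demands}                    # each user's dominant share
    for _ in range(max_rounds):
        feasible = [u for u in demands
                    if all(d <= r for d, r in zip(demands[u], resources))]
        if not feasible:
            break
        user = min(feasible, key=lambda u: dominant[u])     # lowest dominant share first
        for i, d in enumerate(demands[user]):
            resources[i] -= d
            allocated[user][i] += d
        dominant[user] = max(a / c for a, c in zip(allocated[user], capacity))
    return allocated

# 9 CPUs and 18 GB of memory; A's tasks need <1 CPU, 4 GB>, B's need <3 CPU, 1 GB>.
# DRF settles at A = [3, 12] and B = [6, 2], equalising the dominant shares at 2/3.
print(drf(capacity=[9, 18], demands={"A": [1, 4], "B": [3, 1]}))
```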

    Middleware Technologies for Cloud of Things - a survey

    The next wave of communication and applications relies on the new services provided by the Internet of Things, which is becoming an important part of the future of both humans and machines. IoT services are a key solution for providing smart environments in homes, buildings, and cities. In an era of a massive and rapidly growing number of connected things and objects, several challenges have been raised, such as the management, aggregation, and storage of the large volumes of produced data. To tackle some of these issues, cloud computing was brought to the IoT as the Cloud of Things (CoT), which provides virtually unlimited cloud services to enhance large-scale IoT platforms. Several factors must be considered in the design and implementation of a CoT platform. One of the most important and challenging problems is the heterogeneity of the different objects, which can be addressed by deploying suitable "middleware". Middleware sits between things and applications, providing a reliable platform for communication among things with different interfaces, operating systems, and architectures. The main aim of this paper is to study middleware technologies for the CoT. Toward this end, we first present the main features and characteristics of middleware. Next, we study different architectural styles and service domains. We then present several middleware solutions that are suitable for CoT-based platforms, and lastly we discuss current challenges and issues in the design of CoT-based middleware. (Published in Digital Communications and Networks, Elsevier, 2017: http://www.sciencedirect.com/science/article/pii/S2352864817301268)
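    The interoperability role that middleware plays can be sketched as a thin adapter layer: each device-specific protocol is wrapped behind a common interface so applications see one schema. The class and method names below are illustrative assumptions, not the API of any particular CoT middleware.

```python
# A thin adapter layer hiding device-specific protocols behind one interface
# (class and method names are illustrative, not a specific middleware's API).
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Uniform interface the middleware exposes to applications."""

    @abstractmethod
    def read(self) -> dict:
        """Return the latest observation as a normalised key/value record."""

class MqttTemperatureAdapter(DeviceAdapter):
    def __init__(self, reading_celsius: float):
        self._reading = reading_celsius        # stands in for an MQTT subscription

    def read(self) -> dict:
        return {"type": "temperature", "unit": "C", "value": self._reading}

class ModbusMeterAdapter(DeviceAdapter):
    def __init__(self, raw_register: int):
        self._raw = raw_register               # stands in for a Modbus register read

    def read(self) -> dict:
        return {"type": "power", "unit": "W", "value": self._raw * 0.1}

def collect(adapters):
    """Applications see one schema regardless of the underlying protocol."""
    return [a.read() for a in adapters]

print(collect([MqttTemperatureAdapter(21.5), ModbusMeterAdapter(4230)]))
```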