
    Optimum resource allocation in optical wireless systems with energy-efficient fog and cloud architectures

    Optical wireless communication (OWC) is a promising technology that can provide high data rates while supporting multiple users. The optical wireless (OW) physical layer has been researched extensively; however, less work has been devoted to multiple access and to how the OW front end is connected to the network. In this paper, an OWC system that employs a wavelength division multiple access (WDMA) scheme is studied for the purpose of supporting multiple users. In addition, a cloud/fog architecture is proposed for the first time for OWC to provide processing capabilities. The cloud/fog-integrated architecture uses visible indoor light to create high-data-rate connections with potential mobile nodes. These OW nodes are further clustered and used as fog mini servers to provide processing services through the OW channel for other users. Additional fog-processing units are located in the room, the building, the campus and at the metro level. Further processing capabilities are provided by remote cloud sites. Two mixed-integer linear programming (MILP) models were proposed to numerically study networking and processing in OW systems. The first MILP model was developed to optimize resource allocation in indoor OWC systems, in particular the allocation of access points (APs) and wavelengths to users, while the second MILP model was developed to optimize the placement of processing tasks in the different fog and cloud nodes available. The optimization of task placement in the cloud/fog-integrated architecture was analysed using the MILP models. Multiple scenarios were considered in which the mobile node locations in the room, and the amount of processing and the data rate requested by each OW node, were varied. The results help identify the optimum colour and AP to use for communication for a given mobile node location and OWC system configuration, the optimum location at which to place processing, and the impact of the network architecture.
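    To make the first model's structure concrete, here is a minimal sketch of the kind of MILP that allocates APs and wavelengths to users, written with the open-source PuLP library. The sets, rates, demands and constraints below are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical AP/wavelength allocation MILP; all data below is made up
# for illustration and is not the paper's model.
import pulp

users = ["u1", "u2", "u3"]
aps = ["ap1", "ap2"]
wavelengths = ["red", "green", "blue", "yellow"]

# Assumed achievable data rate (Mb/s) per (user, AP, wavelength) triple.
rate = {(u, a, w): 10.0 for u in users for a in aps for w in wavelengths}
demand = {"u1": 8.0, "u2": 5.0, "u3": 6.0}

prob = pulp.LpProblem("owc_wdma_allocation", pulp.LpMaximize)

# x[u][a][w] = 1 if user u is served by AP a on wavelength w.
x = pulp.LpVariable.dicts("x", (users, aps, wavelengths), cat="Binary")

# Objective: maximize total delivered rate.
prob += pulp.lpSum(rate[u, a, w] * x[u][a][w]
                   for u in users for a in aps for w in wavelengths)

# Each user is assigned exactly one (AP, wavelength) pair.
for u in users:
    prob += pulp.lpSum(x[u][a][w] for a in aps for w in wavelengths) == 1

# A wavelength at a given AP serves at most one user (WDMA exclusivity).
for a in aps:
    for w in wavelengths:
        prob += pulp.lpSum(x[u][a][w] for u in users) <= 1

# The delivered rate must cover each user's demand.
for u in users:
    prob += pulp.lpSum(rate[u, a, w] * x[u][a][w]
                       for a in aps for w in wavelengths) >= demand[u]

prob.solve()
for u in users:
    for a in aps:
        for w in wavelengths:
            if x[u][a][w].value() == 1:
                print(u, "->", a, w)
```

    The paper's second model, for task placement across fog and cloud nodes, would follow the same pattern with binary placement variables and capacity constraints per processing node.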

    Joint Service Caching and Task Offloading for Mobile Edge Computing in Dense Networks

    Mobile Edge Computing (MEC) pushes computing functionalities away from the centralized cloud to the network edge, thereby meeting the latency requirements of many emerging mobile applications and saving backhaul network bandwidth. Although many existing works have studied computation offloading policies, service caching is an equally, if not more, important design topic of MEC, yet it receives much less attention. Service caching refers to caching application services and their related databases/libraries in the edge server (e.g., an MEC-enabled BS), thereby enabling the corresponding computation tasks to be executed. Because only a small number of application services can be cached in a resource-limited edge server at the same time, which services to cache must be judiciously decided to maximize edge computing performance. In this paper, we investigate the compelling but much less studied problem of dynamic service caching in MEC-enabled dense cellular networks. We propose an efficient online algorithm, called OREO, which jointly optimizes dynamic service caching and task offloading to address a number of key challenges in MEC systems, including service heterogeneity, unknown system dynamics, spatial demand coupling and decentralized coordination. Our algorithm is developed based on Lyapunov optimization and Gibbs sampling, works online without requiring future information, and achieves provably close-to-optimal performance. Simulation results show that our algorithm can effectively reduce computation latency for end users while keeping energy consumption low.
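    As a rough illustration of the Gibbs-sampling ingredient in this style of algorithm, the sketch below repeatedly re-samples a cache configuration under a Boltzmann-style acceptance rule, so that low-cost configurations dominate over time. The service names, cost model and temperature are assumptions made for illustration; the Lyapunov queue dynamics of OREO are omitted.

```python
# Gibbs-sampling step for service caching (illustrative, not OREO itself).
import math
import random

services = ["face_rec", "ar", "video"]   # hypothetical edge services
cache_capacity = 2                        # services fitting in the edge server
temperature = 0.5                         # lower temperature -> greedier moves


def system_cost(cached):
    """Assumed per-slot cost: uncached services pay an offloading penalty,
    cached ones pay an energy/storage penalty. Numbers are made up."""
    offload_penalty = {"face_rec": 4.0, "ar": 3.0, "video": 1.0}
    caching_cost = 0.5
    cost = sum(offload_penalty[s] for s in services if s not in cached)
    return cost + caching_cost * len(cached)


def gibbs_step(current):
    """Toggle one random service, then accept the neighboring configuration
    with a Boltzmann probability based on the cost difference."""
    candidate = set(current)
    s = random.choice(services)
    if s in candidate:
        candidate.remove(s)
    elif len(candidate) < cache_capacity:
        candidate.add(s)
    delta = system_cost(candidate) - system_cost(current)
    if random.random() < 1.0 / (1.0 + math.exp(delta / temperature)):
        return candidate
    return current


cache = set()
for _ in range(200):
    cache = gibbs_step(cache)
print("cached services:", sorted(cache))
```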

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, a design should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic design and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on.
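    As a concrete instance of the bandit approaches the paper motivates, here is a minimal UCB1 sketch of a controller choosing among management actions using only the limited reward feedback it observes. The action names and reward model are hypothetical stand-ins, not the paper's setup.

```python
# UCB1 bandit for model-free management under limited feedback (illustrative).
import math
import random

arms = ["offload_to_fog", "offload_to_cloud", "process_locally"]
counts = {a: 0 for a in arms}
means = {a: 0.0 for a in arms}


def observed_reward(arm):
    """Stand-in for measured feedback (e.g., negative latency); the base
    values below are made up for the example."""
    base = {"offload_to_fog": 0.7, "offload_to_cloud": 0.5,
            "process_locally": 0.6}
    return base[arm] + random.gauss(0, 0.1)


for t in range(1, 1001):
    # Pull each arm once, then trade off mean reward against uncertainty.
    untried = [a for a in arms if counts[a] == 0]
    if untried:
        arm = untried[0]
    else:
        arm = max(arms, key=lambda a: means[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    r = observed_reward(arm)
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]  # running-average update

print({a: round(means[a], 3) for a in arms})
```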

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The unpredictable and huge volumes of data generated nowadays by smart devices in IoT and mobile crowd sensing applications (e.g., sensors, smartphones, Wi-Fi routers) require processing power and storage. The cloud provides these capabilities to serve organizations and customers, but using the cloud exposes some limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that assigns virtual machines when multiple applications require various resources such as CPU, I/O and memory, whereas scheduling is the process of determining the sequence in which tasks arrive at and depart from these resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing currently faces and present a comprehensive review of resource allocation and scheduling techniques that address these limitations. Finally, the reviewed allocation and scheduling techniques and strategies are compared in a table, together with their drawbacks.
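    As one concrete example of the scheduling heuristics that surveys of this kind typically compare, below is a minimal sketch of the classic min-min heuristic assigning tasks to virtual machines. The task lengths and VM speeds are made-up illustrative figures.

```python
# Min-min task scheduling onto VMs (illustrative data).
tasks = {"t1": 30, "t2": 10, "t3": 50, "t4": 20}   # task length (MI)
vms = {"vm1": 2.0, "vm2": 1.0}                      # VM speed (MIPS)

ready_time = {v: 0.0 for v in vms}                  # when each VM is next free
assignment = {}

unscheduled = set(tasks)
while unscheduled:
    # Among all (task, VM) pairs, pick the one with the earliest completion
    # time, assign it, update the VM's ready time, and repeat.
    t, v = min(((t, v) for t in unscheduled for v in vms),
               key=lambda tv: ready_time[tv[1]] + tasks[tv[0]] / vms[tv[1]])
    ready_time[v] += tasks[t] / vms[v]
    assignment[t] = v
    unscheduled.remove(t)

print(assignment)   # which VM runs each task
print(ready_time)   # per-VM finishing time (makespan = max of these)
```

    Min-min favours short tasks and fast machines, which tends to minimize average completion time but can starve long tasks; max-min and genetic-algorithm variants in the literature trade this off differently.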