
    Cross-layer Dynamic Admission Control for Cloud-based Multimedia Sensor Networks

    Cloud-based communication systems are now widely used in many application fields, such as medicine, security, and environmental protection, and their use is being extended to the most demanding services, such as multimedia delivery. However, cloud-based sensor networks face many constraints when they use the standard IEEE 802.15.3 or IEEE 802.15.4 technologies. This paper proposes a channel characterization scheme combined with cross-layer admission control in dynamic cloud-based multimedia sensor networks to share network resources between any two nodes. The analysis shows the behavior of two nodes using different network access technologies and the channel effects for each technology. Moreover, it shows that optimal node arrival rates exist that improve the effectiveness of dynamic admission control when network resources are in use. An extensive simulation study was performed to evaluate and validate the efficiency of the proposed dynamic admission control for cloud-based multimedia sensor networks.

    This work has been supported in part by the Instituto de Telecomunicações, Next Generation Networks and Applications Group (NetGNA), Portugal, and in part by national funding from the Fundação para a Ciência e a Tecnologia through Pest-OE/EEI/LA0008/2011.

    Mendes, L. D. P.; Rodrigues, J. J. P. C.; Lloret, J.; Sendra Compte, S. (2014). Cross-layer Dynamic Admission Control for Cloud-based Multimedia Sensor Networks. IEEE Systems Journal, 8(1):235-246. doi:10.1109/JSYST.2013.2260653
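    The general idea of admitting flows against a channel-limited resource budget can be sketched as below. This is an invented toy, not the paper's scheme: the function, the quality factor, and all numbers are illustrative assumptions.

```python
# Toy admission-control sketch (illustrative only; not the paper's algorithm).
# A new multimedia flow is admitted only if the residual channel capacity,
# discounted by a channel-quality factor, covers its bandwidth demand.

def admit(flow_kbps, admitted_kbps, capacity_kbps, channel_quality):
    """channel_quality in (0, 1] crudely models per-technology channel effects."""
    usable = capacity_kbps * channel_quality
    return admitted_kbps + flow_kbps <= usable

admitted = 0.0
for demand in [64, 128, 256, 512]:        # arriving flow demands (kbps)
    if admit(demand, admitted, 1000, 0.8):
        admitted += demand

# With 1000 kbps capacity and quality 0.8, usable capacity is 800 kbps:
# 64 + 128 + 256 = 448 kbps is admitted; the 512 kbps flow is rejected.
print(admitted)  # 448.0
```

    Under this kind of rule, the arrival rate at which flows are offered interacts with the residual capacity, which is why optimal arrival rates can exist.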

    Cloud-assisted Distributed Nonlinear Optimal Control for Dynamics over Graph

    Dynamics over graph are large-scale systems in which the dynamic coupling among subsystems is modeled by a graph. Examples arise in spatially distributed systems (e.g., discretized PDEs), multi-agent control systems, and social dynamics. In this paper, we propose a cloud-assisted distributed algorithm to solve optimal control problems for nonlinear dynamics over graph. Inspired by Hauser's centralized projection operator approach to optimal control, our main contribution is the design of a descent method in which, at each step, agents of a network compute a local descent direction and then obtain a new system trajectory through a distributed feedback controller. Such a controller, iteratively designed by a cloud, allows agents of the network to use only information from neighboring agents, thus resulting in a distributed projection operator over graph. The main advantages of our globally convergent algorithm are dynamic feasibility at each iteration and numerical robustness (thanks to the closed-loop updates), even for unstable dynamics. To show the effectiveness of our strategy, we present numerical computations on a discretized model of the Burgers' nonlinear partial differential equation.
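    The iteration described above — take a descent step on the input trajectory, then recover a dynamically feasible trajectory through a closed-loop rollout — can be sketched on a toy linear-quadratic problem. The dynamics, feedback gain, and step size below are invented; this is a simplified, centralized stand-in for the projection operator idea, not the authors' distributed algorithm.

```python
# Toy projection-operator-style descent (illustrative; not the paper's method):
# 3 scalar subsystems coupled on a path graph, quadratic cost on states/inputs.
import numpy as np

N, T = 3, 20
A = 0.9 * np.eye(N)
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 0.05   # path-graph coupling (assumed)
K, step = 0.5, 0.01                            # feedback gain, descent step

def rollout(u_ref, x_ref):
    """Closed-loop simulation u = u_ref + K*(x_ref - x): the returned
    trajectory pair is always dynamically feasible (the 'projection')."""
    x = np.zeros((T + 1, N)); x[0] = 1.0
    u = np.zeros((T, N))
    for t in range(T):
        u[t] = u_ref[t] + K * (x_ref[t] - x[t])
        x[t + 1] = A @ x[t] + u[t]
    return x, u

def cost(x, u):
    return float((x**2).sum() + (u**2).sum())

def gradient(x, u):
    """Adjoint (backward) pass: gradient of the cost w.r.t. the inputs."""
    g = np.zeros_like(u)
    lam = 2 * x[T]
    for t in range(T - 1, -1, -1):
        g[t] = 2 * u[t] + lam
        lam = 2 * x[t] + A.T @ lam
    return g

u_ref = np.zeros((T, N))
x_ref = np.zeros((T + 1, N))
costs = []
for _ in range(30):
    x, u = rollout(u_ref, x_ref)      # project onto feasible trajectories
    costs.append(cost(x, u))
    g = gradient(x, u)
    u_ref, x_ref = u - step * g, x    # descent step; re-projected next iter

print(round(costs[0], 2), round(costs[-1], 2))
```

    The closed-loop update is what gives the numerical robustness mentioned in the abstract: every iterate is a genuine trajectory of the dynamics, even if the descent step alone would not be.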

    Reinforcement learning on computational resource allocation of cloud-based wireless networks

    Wireless networks used for the Internet of Things (IoT) are expected to rely heavily on cloud-based computing and processing. Softwarised and centralised signal processing and network switching in the cloud enable flexible network control and management. In a cloud environment, dynamic computational resource allocation is essential to save energy while maintaining the performance of the processes. The stochastic variation of the Central Processing Unit (CPU) load, as well as the potentially complex parallelisation of the cloud processes, makes dynamic resource allocation an interesting research challenge. This paper models the dynamic computational resource allocation problem as a Markov Decision Process (MDP) and designs a model-based reinforcement learning agent to optimise the dynamic allocation of CPU resources. The value iteration method is used by the reinforcement learning agent to derive the optimal policy for the MDP. To evaluate performance, we analyse two types of processes that can be used in cloud-based IoT networks with different levels of parallelisation capability, i.e., Software-Defined Radio (SDR) and Software-Defined Networking (SDN). The results show that our agent rapidly converges to the optimal policy, performs stably across different parameter settings, and matches or outperforms a baseline algorithm in energy savings across different scenarios.
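    The value-iteration step described above can be sketched on a toy MDP. All states, costs, and transition probabilities below are invented for illustration; they are not taken from the paper.

```python
# Toy value-iteration sketch (illustrative; all numbers are assumptions):
# choose how many CPU cores to power for a process with fluctuating load.

states  = ["low", "high"]   # CPU load level
actions = [1, 2]            # cores powered on
gamma   = 0.9               # discount factor

# P[s][a] -> next-state probabilities (assumed values)
P = {
    "low":  {1: {"low": 0.8, "high": 0.2}, 2: {"low": 0.8, "high": 0.2}},
    "high": {1: {"low": 0.3, "high": 0.7}, 2: {"low": 0.5, "high": 0.5}},
}

def reward(s, a):
    energy = 1.0 * a                                   # each core costs energy
    sla = 5.0 if (s == "high" and a < 2) else 0.0      # underprovisioning penalty
    return -(energy + sla)

V = {s: 0.0 for s in states}
for _ in range(200):        # value-iteration sweeps until (near) convergence
    V = {s: max(reward(s, a) + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                for a in actions)
         for s in states}

policy = {s: max(actions, key=lambda a: reward(s, a)
                 + gamma * sum(p * V[s2] for s2, p in P[s][a].items()))
          for s in states}
print(policy)  # {'low': 1, 'high': 2}
```

    The learned policy powers the minimum number of cores at low load (saving energy) and scales up only when the underprovisioning penalty would dominate, which is the energy/performance trade-off the paper targets.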

    Dynamic Cloud Network Control under Reconfiguration Delay and Cost

    Network virtualization and programmability allow operators to deploy a wide range of services over a common physical infrastructure and elastically allocate cloud and network resources according to changing requirements. While the elastic reconfiguration of virtual resources enables dynamically scaling capacity in order to support service demands with minimal operational cost, reconfiguration operations make resources unavailable during a given time period and may incur additional cost. In this paper, we address the dynamic cloud network control problem under non-negligible reconfiguration delay and cost. We show that while the capacity region remains unchanged regardless of the reconfiguration delay/cost values, a reconfiguration-agnostic policy may fail to guarantee throughput-optimality and minimum cost under nonzero reconfiguration delay/cost. We then present an adaptive dynamic cloud network control policy that allows network nodes to make local flow scheduling and resource allocation decisions while controlling the frequency of reconfiguration in order to support any input rate in the capacity region and achieve arbitrarily close to minimum cost for any finite reconfiguration delay/cost values.
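    The core trade-off above — reconfiguring often enough to track demand, but rarely enough that the reconfiguration delay does not erode throughput — can be illustrated with a toy single-server simulation. The threshold rule and every parameter below are invented for illustration; this is not the paper's policy.

```python
# Toy reconfiguration-aware scheduling sketch (illustrative assumptions only):
# one server can process one of two queues, but switching queues makes it
# unavailable for D slots. Switching only when the backlog gap exceeds a
# threshold keeps reconfigurations infrequent while queues stay bounded.
import random
random.seed(0)

D, THRESH, T = 3, 20, 10_000      # reconfig delay, switch threshold, horizon
q = [0, 0]; serving = 0; idle = 0; switches = 0

for _ in range(T):
    q[0] += random.random() < 0.3  # Bernoulli arrivals, total rate 0.6 < 1
    q[1] += random.random() < 0.3
    if idle > 0:
        idle -= 1                  # server unavailable while reconfiguring
        continue
    other = 1 - serving
    if q[other] - q[serving] > THRESH:     # switch only on a large backlog gap
        serving, idle, switches = other, D, switches + 1
        continue
    if q[serving] > 0:
        q[serving] -= 1            # serve one packet per slot

print(switches, q)
```

    A reconfiguration-agnostic policy would switch to whichever queue is longer every slot; with D = 3 idle slots per switch, that can waste most of the service capacity, which is the failure mode the abstract points to. The threshold deliberately tolerates some backlog imbalance in exchange for spending few slots reconfiguring.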