
    Forecasting Resource Usage in Cloud Environments Using Temporal Convolutional Networks

    Background: Predicting resource usage in cloud environments is crucial for optimizing costs. Recurrent neural networks and classical time-series techniques are commonly used for forecasting, but limitations such as vanishing gradients and poor long-range memory retention motivate the use of convolutional networks for modeling sequential data. Objective: This research proposes a temporal convolutional network (TCN) to forecast CPU usage and memory consumption in cloud environments. TCNs use dilated convolutions to capture temporal dependencies while maintaining a fixed-size receptive field, enabling them to handle sequences of varying lengths and capture long-term dependencies. The performance of the TCN is compared with Long Short-Term Memory (LSTM) networks, Gated Recurrent Unit (GRU) networks, and a Multilayer Perceptron (MLP). Dataset: The study employs the Google Cluster Workload Traces 2019 data, focusing on CPU and memory utilization ranging between 5% and 95% over a 24-hour period, extracted from the first ten days. Results: The TCN outperforms the other methods in predicting both CPU usage and memory consumption. For CPU usage prediction, the TCN achieves lower error metrics: Mean Squared Error (MSE) of 0.05, Root Mean Squared Error (RMSE) of 0.22, Mean Absolute Error (MAE) of 0.18, and Mean Absolute Percentage Error (MAPE) of 3.5%. The TCN also demonstrates higher forecast accuracy, with FA1 = 85%, FA5 = 95%, and FA10 = 98%. Similar improvements are observed for memory consumption prediction, where the TCN again achieves lower error metrics and higher forecast accuracy than LSTM, GRU, and MLP, and it exhibits better computational efficiency in training time, inference time, and memory usage. Conclusion: The proposed TCN performs well in forecasting CPU usage and memory consumption in cloud environments compared to LSTM, GRU, and MLP. Because TCNs can capture temporal dependencies and handle sequences of varying lengths, they are a promising approach for resource usage prediction and cost optimization in cloud computing.
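
    The abstract does not include code; the following is a minimal sketch of the core TCN idea, a stack of dilated causal convolutions, in PyTorch, applied to one-step-ahead forecasting of a utilization series. The architecture, hyperparameters, and synthetic data are illustrative assumptions, not the authors' exact model.

    import torch
    import torch.nn as nn

    class CausalConv1d(nn.Module):
        """1-D convolution with left padding only, so the output at time t
        depends only on inputs at times <= t (causality)."""
        def __init__(self, in_ch, out_ch, kernel_size, dilation):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation  # left-pad to keep length
            self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        def forward(self, x):
            x = nn.functional.pad(x, (self.pad, 0))  # pad the past only
            return self.conv(x)

    class TinyTCN(nn.Module):
        """Stack of dilated causal convolutions (dilations 1, 2, 4, ...);
        the receptive field grows exponentially with depth, which is how a
        TCN captures long-term dependencies with few layers."""
        def __init__(self, channels=32, levels=4, kernel_size=3):
            super().__init__()
            layers, in_ch = [], 1
            for i in range(levels):
                layers += [CausalConv1d(in_ch, channels, kernel_size, dilation=2 ** i),
                           nn.ReLU()]
                in_ch = channels
            self.net = nn.Sequential(*layers)
            self.head = nn.Conv1d(channels, 1, 1)  # per-step prediction
        def forward(self, x):                # x: (batch, 1, time)
            return self.head(self.net(x))

    # Illustrative one-step-ahead training loop on synthetic CPU traces.
    model = TinyTCN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    series = torch.rand(8, 1, 288)              # 8 day-long traces, 5-min bins
    x, y = series[:, :, :-1], series[:, :, 1:]  # input vs. shifted target
    for _ in range(5):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()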

    Wireless channel load stress analysis using FPGAs at the edge

    One of the key usage scenarios of fifth-generation (5G) and beyond networks is to provide mission-critical, ultra-reliable and low-latency communications (URLLC) for applications where low latency and highly reliable wireless links are of utmost importance. Such applications include industrial automation, artificial-intelligence-based solutions, vehicle-to-vehicle communication, and robotics-enabled medical systems. URLLC applications over wireless connectivity require that resource utilization, such as wireless channel utilization, not exceed the levels above which performance can degrade. Real-time radio frequency (RF) data analytics at the wireless network edge can support proactive resource allocation solutions that assign more radio resources when a particular resource is forecasted to be under stress. Such analytics typically require processing hundreds of millions of streaming samples per second, a task for which hardware-accelerated modules such as FPGAs are very well suited. In this thesis we propose an FPGA-accelerated, real-time-analytics-based resource stress forecasting method. The proposed method is low in complexity and performs forecasting in real time. We show its implementation on the FPGA of a Xilinx Zynq-7000 series System on Chip (SoC) board using the Vivado, Vivado HLS, SDK, and MATLAB tools. The method uses quantile estimation and can forecast a variety of resource utilization scenarios; as an example, we focus on forecasting stress in wireless channel utilization. We test the implemented algorithm with real wireless channel utilization data representing a block maxima series, and compare its results against a theoretical method that applies generalized extreme value (GEV) theory to the same block maxima. We show that the proposed algorithm forecasts channel utilization stress with high accuracy and low latency.
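
    The thesis compares a quantile estimate against a GEV fit on block maxima; the sketch below reproduces that comparison in Python with SciPy, using synthetic utilization data as a stand-in for the real traces. The block sizes, quantile level, and data model are assumptions for illustration only.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)

    # Synthetic stand-in: channel utilization sampled at a high rate,
    # reduced to block maxima (the maximum over each block of samples).
    samples = rng.beta(2, 5, size=(500, 1000))   # 500 blocks x 1000 samples
    block_maxima = samples.max(axis=1)

    # Empirical route (what a lightweight streaming estimator can do):
    # the utilization level exceeded in only 1% of blocks.
    p = 0.99
    empirical_q = np.quantile(block_maxima, p)

    # Theoretical route: fit a GEV distribution to the block maxima and
    # read the same quantile off the fitted model (the "return level").
    shape, loc, scale = genextreme.fit(block_maxima)
    gev_q = genextreme.ppf(p, shape, loc=loc, scale=scale)

    print(f"empirical 99% level: {empirical_q:.3f}")
    print(f"GEV 99% level:       {gev_q:.3f}")
    # A block maximum above this level flags channel-utilization stress.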

    Resource management for multimedia traffic over ATM broadband satellite networks

    PhD thesis. Abstract not available.

    Fog-supported delay-constrained energy-saving live migration of VMs over multiPath TCP/IP 5G connections

    The incoming era of fifth-generation, fog-computing-supported radio access networks (5G FOGRANs) aims to exploit computing/networking resource virtualization in order to augment the limited resources of wireless devices through the seamless live migration of virtual machines (VMs) toward nearby fog data centers. For this purpose, the bandwidths of a wireless device's multiple wireless network interface cards may be aggregated under the control of the emerging MultiPath TCP (MPTCP) protocol. However, due to fading and mobility-induced phenomena, the energy consumption of current state-of-the-art VM migration techniques may still offset their expected benefits. Motivated by these considerations, in this paper we analytically characterize, implement in software, and numerically test the optimal minimum-energy settable-complexity bandwidth manager (SCBM) for the live migration of VMs over 5G FOGRAN MPTCP connections. The key features of the proposed SCBM are that: 1) its implementation complexity is settable online according to the target tradeoff between energy consumption and implementation complexity; 2) it minimizes the network energy consumed by the wireless device for sustaining the migration process under hard constraints on the tolerated migration times and downtimes; and 3) by leveraging a suitably designed adaptive mechanism, it can quickly react to (possibly unpredicted) fading- and/or mobility-induced abrupt changes of the wireless environment without requiring forecasting. The actual effectiveness of the proposed SCBM is supported by extensive energy-versus-delay performance comparisons covering: 1) a number of heterogeneous 3G/4G/WiFi FOGRAN scenarios; 2) synthetic and real-world workloads; and 3) MPTCP and wireless connections.
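
    The SCBM itself is an analytical optimizer; as a rough illustration of the underlying energy-versus-deadline tradeoff, the sketch below uses the standard pre-copy live-migration model (memory size M, page-dirtying rate w, migration bandwidth b) to find the smallest bandwidth, and hence roughly the lowest transmit energy, that still meets migration-time and downtime deadlines. The model, the monotone-energy assumption, and all numbers are illustrative, not the authors' exact formulation.

    # Standard pre-copy model: round i retransfers V_i = M * (w/b)**i bytes,
    # so total migration time is sum(V_i)/b and downtime is the final
    # stop-and-copy round. Smaller b costs less energy but risks missing
    # the deadlines; we search for the smallest feasible b.

    def precopy_times(M, w, b, rounds):
        """Total migration time and downtime (seconds) for pre-copy."""
        if b <= w:                     # dirty rate outpaces the link
            return float("inf"), float("inf")
        volumes = [M * (w / b) ** i for i in range(rounds + 1)]
        total = sum(volumes) / b
        downtime = volumes[-1] / b     # stop-and-copy of the last round
        return total, downtime

    def min_energy_bandwidth(M, w, rounds, t_max, d_max, candidates):
        """Smallest candidate bandwidth meeting both deadlines. Energy is
        assumed monotone in b, so the smallest b ~ the least energy."""
        for b in sorted(candidates):
            total, downtime = precopy_times(M, w, b, rounds)
            if total <= t_max and downtime <= d_max:
                return b
        return None

    # Illustrative numbers: 2 GB VM, 50 MB/s dirtying, 5 pre-copy rounds,
    # 60 s migration deadline, 0.5 s downtime budget.
    b = min_energy_bandwidth(M=2e9, w=50e6, rounds=5,
                             t_max=60.0, d_max=0.5,
                             candidates=[60e6, 80e6, 100e6, 150e6, 250e6])
    print(f"chosen bandwidth: {b/1e6:.0f} MB/s" if b else "no feasible bandwidth")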

    Decisive Routing and Admission Control According to Quality of Service Constraints

    This research effort examines, models, and proposes options to enhance command and control for decision makers as applied to the communications network. My goal is to assess the viability of combining, expanding, and enhancing three students' past research efforts. The research spans predicting a snapshot of the communications network, context-aware routing between network nodes, and Quality of Service-based routing optimization, combined to create an intelligent routing protocol platform. It consolidates efforts from an Intelligent Agent Based Framework to Maximize Information Utility by Captain John Pecarina, Dialable Cryptography for Wireless Networks by Major Marnita Eaddie, and Stochastic Estimation and Control of Queues within a Computer Network by Captain Nathan Stuckey. The resulting framework is greater than the sum of its individual parts: it takes predictions about the health of the network together with the priority level of a commodity to be routed, and uses this information to route the commodity intelligently so as to optimize the flow of network traffic. Developing this framework helps ensure that the forward commander and decision makers can make sound judgments at the right time, using the most accurate information, on the proper communications network.
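
    As a toy illustration of the routing decision such a framework would make, the sketch below runs Dijkstra's algorithm over a graph whose edge costs blend link latency with predicted congestion, weighted by the commodity's priority. The graph, the blending rule, and all weights are invented for illustration and are not taken from the thesis.

    import heapq

    def qos_route(graph, src, dst, priority):
        """Dijkstra over edge costs that blend latency with predicted
        congestion; higher-priority commodities penalize congestion more,
        steering them onto healthier links. graph[u] is a list of
        (neighbor, latency, predicted_congestion); priority in [0, 1]."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue                    # stale heap entry
            for v, latency, congestion in graph.get(u, []):
                nd = d + latency * (1.0 + priority * congestion)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], dst                # assumes dst is reachable
        while node != src:
            path.append(node)
            node = prev[node]
        return [src] + path[::-1]

    # Two routes A->D: short but congested via B, longer but clean via C.
    graph = {"A": [("B", 1.0, 0.9), ("C", 1.5, 0.1)],
             "B": [("D", 1.0, 0.9)],
             "C": [("D", 1.5, 0.1)]}
    print(qos_route(graph, "A", "D", priority=0.2))  # low priority: via B
    print(qos_route(graph, "A", "D", priority=0.9))  # high priority: via C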

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. Its unique features include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management: ideally, a network should adapt efficiently to changing environments and admit low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network-architecture vantage point, the unified framework leverages a promising fog architecture that gives smart devices proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable, model-free implementation under the limited feedback that motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic design and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. Comment: submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks.
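
    The bandit approaches the paper motivates can be sketched in a few lines. Below is an epsilon-greedy learner with a constant step size, which exponentially discounts old observations and so keeps adapting when arm payoffs drift, a minimal stand-in for learning under nonstationary IoT dynamics with bandit feedback. The reward model and the edge-node-selection framing are synthetic assumptions.

    import random

    class AdaptiveBandit:
        """Epsilon-greedy with a constant step size: each Q-value tracks
        an exponentially weighted average of recent rewards, so the policy
        adapts when arm payoffs drift (nonstationarity)."""
        def __init__(self, n_arms, epsilon=0.1, step=0.1):
            self.q = [0.0] * n_arms
            self.epsilon, self.step = epsilon, step
        def select(self):
            if random.random() < self.epsilon:          # explore
                return random.randrange(len(self.q))
            return max(range(len(self.q)), key=self.q.__getitem__)
        def update(self, arm, reward):                  # bandit feedback only
            self.q[arm] += self.step * (reward - self.q[arm])

    # Synthetic nonstationary environment: picking among 3 fog/edge nodes
    # whose service quality drifts; the best node changes mid-run.
    random.seed(0)
    bandit = AdaptiveBandit(n_arms=3)
    for t in range(2000):
        best = 0 if t < 1000 else 2                     # drift at t = 1000
        arm = bandit.select()
        reward = random.gauss(1.0 if arm == best else 0.2, 0.1)
        bandit.update(arm, reward)
    print("post-drift value estimates:", [round(q, 2) for q in bandit.q])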

    Assuring virtual network reliability and resilience

    A framework is developed that uses reliability block diagrams and continuous-time Markov chains to model and analyse the reliability and availability of a Virtual Network Environment (VNE). In addition, to minimise unpredicted failures and reduce the impact of failure on a virtual network, a dynamic solution is proposed for detecting a failure before it occurs in the VNE. Moreover, a failure prediction method for the VNE can be used to establish a tolerable maintenance plan before failure occurs, minimising unpredicted failures, reducing backup redundancy, and maximising system performance.
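
    To make the modelling ingredients concrete, the sketch below shows the textbook building blocks such a framework rests on: steady-state availability of a single repairable component from the two-state continuous-time Markov chain, composed in series and in parallel as a reliability block diagram would. The failure/repair rates and the example topology are illustrative assumptions.

    def availability(mtbf, mttr):
        """Steady-state availability of one repairable component, from the
        two-state CTMC: A = mu / (lambda + mu) = MTBF / (MTBF + MTTR)."""
        return mtbf / (mtbf + mttr)

    def series(*avails):
        """RBD series composition: every block must be up."""
        a = 1.0
        for x in avails:
            a *= x
        return a

    def parallel(*avails):
        """RBD parallel composition (redundancy): up unless all are down."""
        down = 1.0
        for x in avails:
            down *= 1.0 - x
        return 1.0 - down

    # Illustrative VNE slice: two substrate nodes hosting virtual routers,
    # joined by a redundant pair of virtual links.
    node = availability(mtbf=2000.0, mttr=4.0)    # hours
    link = availability(mtbf=1000.0, mttr=8.0)
    vne = series(node, parallel(link, link), node)
    print(f"end-to-end availability: {vne:.6f}")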