
    Efficient Resources Provisioning Based on Load Forecasting in Cloud

    Cloud providers should ensure QoS while maximizing resource utilization. One optimal strategy is to allocate resources in a timely, fine-grained manner according to applications' actual resource demand. The necessary precondition of this strategy is obtaining future load information in advance. We propose a multi-step-ahead load forecasting method, KSwSVR, based on statistical learning theory, which is suited to the complex and dynamic characteristics of the cloud computing environment. It integrates an improved support vector regression algorithm and a Kalman smoother. Public trace data taken from multiple types of resources were used to verify its prediction accuracy, stability, and adaptability in comparison with AR, BPNN, and standard SVR. Subsequently, based on the predicted results, a simple and efficient strategy is proposed for resource provisioning. A CPU allocation experiment indicated that it can effectively reduce resource consumption while meeting service-level agreement requirements.
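    The abstract does not detail KSwSVR, but its Kalman-smoother component can be illustrated with a minimal scalar Rauch-Tung-Striebel smoother over a random-walk model, as one might use to denoise a load trace before regression; the function name and noise settings below are illustrative assumptions, not the paper's implementation.

```python
def kalman_smooth(z, q=1e-3, r=0.1):
    """Rauch-Tung-Striebel smoother for a scalar random-walk model.
    z: noisy observations; q: process-noise variance; r: measurement-noise variance."""
    n = len(z)
    # forward pass (Kalman filter)
    xf, pf = [z[0]], [r]          # filtered means and variances
    xp, pp = [z[0]], [r]          # one-step-ahead predictions
    for k in range(1, n):
        x_pred, p_pred = xf[-1], pf[-1] + q
        xp.append(x_pred)
        pp.append(p_pred)
        g = p_pred / (p_pred + r)  # Kalman gain
        xf.append(x_pred + g * (z[k] - x_pred))
        pf.append((1 - g) * p_pred)
    # backward pass (smoother) refines each estimate using future data
    xs = xf[:]
    for k in range(n - 2, -1, -1):
        c = pf[k] / (pf[k] + q)    # smoother gain
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs
```

    A smoothed series like this could then be fed to any regressor (e.g. SVR) as cleaner training data.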

    Kalman filter based prediction and forecasting of cloud server KPIs

    Cloud computing depends on the dynamic allocation and release of resources, on demand, to meet heterogeneous computing needs. This is challenging for cloud data centers, which process huge amounts of data characterised by high volume, velocity, variety and veracity (the 4Vs model). Managing such a workload is increasingly difficult using state-of-the-art methods for monitoring and adaptation, which typically react to service failures after the fact. To address this, we seek to develop proactive methods for predicting future resource exhaustion and cloud service failures. Our work uses a realistic test bed in the cloud, which is instrumented to monitor and analyze resource usage. In this paper, we employed the optimal Kalman filtering technique to build a predictive and analytic framework for cloud server KPIs, based on historical data. Our k-step-ahead predictions on historical data yielded a prediction accuracy of 95.59%. The information generated by the framework can best be used for optimal resource provisioning, admission control and cloud SLA management.
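    The paper's framework is not specified beyond "optimal Kalman filtering", but k-step-ahead prediction of a KPI can be sketched generically with a local-linear-trend (constant-velocity) Kalman model: filter the history, then extrapolate the estimated level along the estimated slope. The model choice and noise parameters here are assumptions for illustration.

```python
def kstep_predict(z, k, q=1e-4, r=1.0):
    """Filter a KPI series with a local-linear-trend Kalman model
    (state = [level, slope]) and extrapolate k steps ahead."""
    level, slope = z[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
    for zt in z[1:]:
        # time update: level += slope, P <- F P F' + qI with F = [[1,1],[0,1]]
        level += slope
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # measurement update with observation model H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = zt - level                 # innovation
        level += K[0] * y
        slope += K[1] * y
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return level + k * slope           # k-step-ahead extrapolation
```

    On a trending KPI (e.g. steadily rising memory usage), the slope term lets the prediction anticipate resource exhaustion k intervals ahead.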

    Nonparametric Identification of nonlinear dynamic Systems

    In this work, a nonparametric identification method for strongly nonlinear systems is developed which is able to reconstruct the nonlinearities from vibration measurements in the form of general three-dimensional restoring-force surfaces, without prior knowledge of their functional form. The approach is based on nonlinear Kalman filter algorithms, which can be turned into parameter estimators by augmenting the state vector. This work describes a method that extends this well-known parametric solution into a nonparametric procedure. To this end, a general nonlinearity model is introduced that describes the restoring forces through time-varying coefficients of the state variables, which are estimated as additional states. Owing to the probabilistic formulation of the method, noise-free restoring-force characteristics can be identified despite significant measurement noise. With the Kalman filter algorithm, observability of the nonlinearities is already ensured by one measured quantity per degree of freedom of the system. Moreover, this description allows a complete identification to be carried out, in which the remaining constant parameters of the system are estimated as well. The performance of the developed method is demonstrated on virtual and real identification examples of nonlinear mechanical systems with one and three degrees of freedom.
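    Reconstructing a full restoring-force surface is beyond a snippet, but the state-augmentation idea underlying the method can be shown on a toy scalar model: append an unknown parameter to the state vector and let an extended Kalman filter estimate state and parameter jointly. The model, initial guesses, and noise levels below are illustrative assumptions.

```python
def augmented_ekf(z, q_x=1e-6, q_a=1e-4, r=0.01):
    """Joint state/parameter estimation by state augmentation.
    Toy model: x[t+1] = a * x[t] with unknown parameter a; x observed in noise.
    Augmented state s = [x, a]; the EKF linearizes f(s) = (a*x, a)."""
    x, a = z[0], 0.5                   # initial state and parameter guess
    P = [[1.0, 0.0], [0.0, 1.0]]       # covariance of the augmented state
    for zt in z[1:]:
        # predict: P <- F P F' + Q with Jacobian F = [[a, x], [0, 1]]
        FP = [[a * P[0][0] + x * P[1][0], a * P[0][1] + x * P[1][1]],
              [P[1][0], P[1][1]]]
        P = [[FP[0][0] * a + FP[0][1] * x + q_x, FP[0][1]],
             [FP[1][0] * a + FP[1][1] * x, FP[1][1] + q_a]]
        x = a * x
        # update with observation model H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = zt - x                     # innovation
        x += K[0] * y
        a += K[1] * y                  # the parameter is corrected like a state
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, a
```

    The thesis generalizes this by making the augmented entries time-varying coefficients, so the "parameters" trace out the unknown nonlinear restoring forces instead of converging to constants.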

    Robust and automatic data cleansing method for short-term load forecasting of distribution feeders

    Distribution networks are undergoing fundamental changes at the medium-voltage level. To support growing planning and control decision-making, the need for large numbers of short-term load forecasts has emerged. Data-driven modelling of medium-voltage feeders can be affected by (1) data quality issues, namely large gross errors and missing observations, and (2) the presence of structural breaks in the data due to occasional network reconfiguration and load transfers. The present work investigates and reports on the effects of advanced data cleansing techniques on forecast accuracy. A hybrid framework to detect and remove outliers in large datasets is proposed; this automatic procedure combines the Tukey labelling rule and the binary segmentation algorithm to cleanse data more efficiently, and it is fast and easy to implement. Various approaches to missing-value imputation are investigated, including unconditional mean, Hot Deck via k-nearest neighbour and Kalman smoothing. A combination of the automatic detection/removal of outliers and the imputation methods mentioned above is implemented to cleanse the time series of 342 medium-voltage feeders. A nested rolling-origin-validation technique is used to evaluate the feed-forward deep neural network models. The proposed data cleansing framework efficiently removes outliers from the data, and the accuracy of forecasts is improved. It is found that Hot Deck (k-NN) imputation performs best in balancing the bias-variance trade-off for short-term forecasting.
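    The paper's full pipeline pairs the Tukey rule with binary segmentation and compares several imputation schemes; only the Tukey fences and the simplest imputation baseline (unconditional mean) are sketched here, with illustrative function names.

```python
def tukey_outliers(xs, k=1.5):
    """Flag points outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(xs)

    def quantile(p):
        # linear interpolation between order statistics
        i = p * (len(s) - 1)
        lo, hi = int(i), min(int(i) + 1, len(s) - 1)
        return s[lo] + (i - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [not (lo <= x <= hi) for x in xs]

def impute_mean(xs, flags):
    """Replace flagged points with the mean of the clean points
    (the unconditional-mean baseline among the compared imputation methods)."""
    clean = [x for x, f in zip(xs, flags) if not f]
    m = sum(clean) / len(clean)
    return [m if f else x for x, f in zip(xs, flags)]
```

    On a feeder load series, a gross error such as a 10x metering spike falls outside the fences and is replaced before model training.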

    Nonparametric identification of nonlinear dynamic systems

    A nonparametric identification method for highly nonlinear systems is presented that is able to reconstruct the underlying nonlinearities without a priori knowledge of the describing nonlinear functions. The approach is based on nonlinear Kalman filter algorithms using the well-known state-augmentation technique, which turns the filter into a dual state and parameter estimator; the present work proposes an extension of this approach towards nonparametric identification.

    Smart Monitoring and Control in the Future Internet of Things

    The Internet of Things (IoT) and related technologies have the promise of realizing pervasive and smart applications which, in turn, have the potential of improving the quality of life of people living in a connected world. According to the IoT vision, all things can cooperate amongst themselves and be managed from anywhere via the Internet, allowing tight integration between the physical and cyber worlds and thus improving efficiency, promoting usability, and opening up new application opportunities. Nowadays, IoT technologies have successfully been exploited in several domains, providing both social and economic benefits. The realization of the full potential of the next generation of the Internet of Things still needs further research efforts concerning, for instance, the identification of new architectures, methodologies, and infrastructures dealing with distributed and decentralized IoT systems; the integration of IoT with cognitive and social capabilities; the enhancement of the sensing-analysis-control cycle; the integration of consciousness and awareness in IoT environments; and the design of new algorithms and techniques for managing IoT big data. This Special Issue is devoted to advancements in technologies, methodologies, and applications for IoT, together with emerging standards and research topics which would lead to the realization of the future Internet of Things.

    A Study of Adaptation Mechanisms for Simulation Algorithms

    The performance of a program can sometimes improve greatly if the features of the input it is supposed to process, the actual operating parameters it is supposed to work with, or the specific environment it is to run on are known in advance. However, this information is typically not available until too late in the program's operation to take advantage of it. This is especially true for simulation algorithms, which are sensitive to this late-arriving information and whose role in the solution of decision-making, inference and valuation problems is crucial. To overcome this limitation, we need to provide the flexibility for a program to adapt its behaviour to late-arriving information once it becomes available. In this thesis, I study three adaptation mechanisms: run-time code generation, model-specific (quasi) Monte Carlo sampling and dynamic computation offloading, and evaluate their benefits on Monte Carlo algorithms. First, run-time code generation is studied in the context of Monte Carlo algorithms for time-series filtering, in the form of the Input-Adaptive Kalman filter, a dynamically generated state estimator for non-linear, non-Gaussian dynamic systems. The second adaptation mechanism is the application of the functional-ANOVA decomposition to generate model-specific QMC samplers, which can then be used to improve Monte Carlo-based integration. The third adaptation mechanism treated here, dynamic computation offloading, is applied to wireless communication management, where network conditions are assessed via option valuation techniques to determine whether a program should offload computations or carry them out locally in order to achieve higher run-time (and correspondingly battery-usage) efficiency. This ability makes the program well suited for operation in mobile environments. At their core, all these applications carry out or make use of (quasi) Monte Carlo simulations on dynamic Bayesian networks (DBNs).
    The DBN formalism and its associated simulation-based algorithms are of great value in the solution of problems with a large uncertainty component. This characteristic makes adaptation techniques like those studied here likely to gain relevance in a world where computers are endowed with perception capabilities and are expected to deal with an ever-increasing stream of sensor and time-series data.
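    The simulation-based inference on DBNs that the abstract refers to can be illustrated with a bootstrap particle filter on the simplest possible DBN: a hidden random-walk state observed in Gaussian noise. This is a generic sketch of the simulate-weight-resample loop, not the thesis's Input-Adaptive Kalman filter; all names and noise values are assumptions.

```python
import math
import random

def particle_filter(z, n=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter on a minimal dynamic Bayesian network:
    hidden state x[t] = x[t-1] + N(0, q^2), observation z[t] = x[t] + N(0, r^2)."""
    rng = random.Random(seed)
    parts = [z[0] + rng.gauss(0.0, 1.0) for _ in range(n)]  # initial particle cloud
    for zt in z[1:]:
        # simulate the transition model (the Monte Carlo step)
        parts = [p + rng.gauss(0.0, q) for p in parts]
        # weight each particle by the Gaussian observation likelihood
        w = [math.exp(-0.5 * ((zt - p) / r) ** 2) for p in parts]
        # multinomial resampling concentrates particles on likely states
        parts = rng.choices(parts, weights=w, k=n)
    return sum(parts) / n  # posterior-mean estimate of the current state
```

    Because the per-step cost is dominated by repeatedly evaluating the transition and likelihood models, this inner loop is exactly where adaptation mechanisms such as run-time code generation or QMC sampling can pay off.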