    Uncertainty-driven Ensemble Forecasting of QoS in Software Defined Networks

    Software Defined Networking (SDN) is the key technology for combining networking and Cloud solutions to provide novel applications. SDN offers a number of advantages, as the existing resources can be virtualized and orchestrated to provide new services to end users. Such a technology should be accompanied by powerful mechanisms that ensure end-to-end quality of service at high levels, thus enabling support for complex applications that satisfy end users' needs. In this paper, we propose an intelligent mechanism that combines the benefits of SDNs with real-time “Big Data” forecasting analytics. The proposed mechanism, as part of the SDN controller, supports predictive intelligence by monitoring a set of network performance parameters, forecasting their future values, and deriving indications of potential service quality violations. By treating the performance measurements as time series, our mechanism employs a novel ensemble forecasting methodology to estimate their future values. These predictions are fed to a Type-2 Fuzzy Logic system to deliver, in real time, decisions related to service quality violations. Such decisions proactively assist the SDN controller in providing the best possible orchestration of the virtualized resources. We evaluate the proposed mechanism w.r.t. precision and recall metrics over synthetic data.
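
    To make the pipeline concrete, here is a minimal sketch of the idea: average a few simple one-step-ahead forecasters over a QoS time series and flag a potential violation. The averaging rule and the crisp threshold below are stand-ins for the paper's actual ensemble scheme and Type-2 Fuzzy Logic inference; all names and numbers are illustrative assumptions.

        # Sketch: ensemble forecast of a QoS time series, then a violation decision.
        # The averaging rule and the threshold are assumptions standing in for the
        # paper's ensemble methodology and Type-2 Fuzzy Logic system.
        import numpy as np

        def ensemble_forecast(history, window=10):
            """Combine a few simple one-step-ahead forecasters by averaging."""
            recent = np.asarray(history[-window:], dtype=float)
            naive = recent[-1]                              # last observation
            moving_avg = recent.mean()                      # moving average
            drift = recent[-1] + (recent[-1] - recent[0]) / (len(recent) - 1)
            return float(np.mean([naive, moving_avg, drift]))

        def violation_indicator(forecast, threshold):
            """Crude stand-in for the fuzzy decision: flag a potential QoS violation."""
            return forecast > threshold

        latency_ms = [12, 13, 15, 14, 18, 21, 25, 28, 31, 35]
        pred = ensemble_forecast(latency_ms)
        print(pred, violation_indicator(pred, threshold=30.0))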

    Quality of service based data-aware scheduling

    Distributed supercomputers have been widely used for solving complex computational problems and modeling complex phenomena such as black holes, the environment, supply-chain economics, etc. In this work we analyze the use of these distributed supercomputers for time-sensitive, data-driven applications. We present the scheduling challenges involved in running deadline-sensitive applications on shared distributed supercomputers running large parallel jobs, and introduce a “data-aware” scheduling paradigm that overcomes these challenges by making use of Quality of Service classes for running applications on shared resources. We evaluate the new data-aware scheduling paradigm using an event-driven hurricane simulation framework which attempts to run various simulations modeling storm surge, wave height, etc. in a timely fashion to be used by first responders and emergency officials. We further generalize the work and demonstrate with examples how data-aware computing can be used in other applications with similar requirements.
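
    As a rough illustration of this scheduling idea, the sketch below picks the cheapest QoS class whose expected queue wait plus runtime still meets a job's deadline; the class names, wait times and cost factors are hypothetical, not taken from the paper.

        # Sketch: deadline-aware choice among QoS classes on a shared resource.
        # Classes, waits and costs are illustrative assumptions only.
        from dataclasses import dataclass

        @dataclass
        class QoSClass:
            name: str
            expected_wait_s: float   # typical queue wait for this class
            cost_factor: float       # relative charge against the allocation

        def pick_class(runtime_s, deadline_s, classes):
            feasible = [c for c in classes
                        if c.expected_wait_s + runtime_s <= deadline_s]
            if not feasible:
                return None          # deadline cannot be met; caller must degrade or drop
            return min(feasible, key=lambda c: c.cost_factor)

        classes = [QoSClass("standard", 3600, 1.0),
                   QoSClass("priority", 600, 2.0),
                   QoSClass("on-demand", 60, 4.0)]
        print(pick_class(runtime_s=1800, deadline_s=3000, classes=classes))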

    A review of the use of artificial intelligence methods in infrastructure systems

    The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with the fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented in real-world infrastructure systems, research will need to move away from a siloed view and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems.

    The handbook of engineering self-aware and self-expressive systems

    When faced with the task of designing and implementing a new self-aware and self-expressive computing system, researchers and practitioners need a set of guidelines on how to use the concepts and foundations developed in the Engineering Proprioception in Computing Systems (EPiCS) project. This report provides such guidelines on how to design self-aware and self-expressive computing systems in a principled way. We have documented different categories of self-awareness and self-expression levels using architectural patterns. We have also documented common architectural primitives, their possible candidate techniques, and attributes for architecting self-aware and self-expressive systems. Drawing on the knowledge obtained from the previous investigations, we proposed a pattern-driven methodology for engineering self-aware and self-expressive systems to assist in utilising the patterns and primitives during design. The methodology contains detailed guidance for making decisions with respect to the possible design alternatives, providing a systematic way to build self-aware and self-expressive systems. Then, we qualitatively and quantitatively evaluated the methodology using two case studies. The results reveal that our pattern-driven methodology covers the main aspects of engineering self-aware and self-expressive systems, and that the resulting systems perform significantly better than non-self-aware systems.
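
    Such patterns typically structure some form of observe-decide-act loop over a model of the system's own state. The sketch below is a minimal, purely illustrative loop of that kind; the class, thresholds and actions are assumptions, not the EPiCS patterns themselves.

        # Sketch: a tiny self-aware/self-expressive loop. The node observes its own
        # load (self-awareness), decides against a goal, and acts (self-expression).
        class SelfAwareNode:
            def __init__(self, target_load=0.7):
                self.target_load = target_load
                self.history = []                 # memory of the node's own state

            def observe(self, current_load):
                self.history.append(current_load)

            def decide(self):
                if not self.history:
                    return "hold"
                recent = self.history[-5:]
                return "scale_out" if sum(recent) / len(recent) > self.target_load else "hold"

            def act(self, action):
                print(f"action taken: {action}")

        node = SelfAwareNode()
        for load in (0.5, 0.8, 0.9, 0.85):
            node.observe(load)
        node.act(node.decide())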

    From statistical- to machine learning-based network traffic prediction

    Nowadays, due to the exponential and continuous expansion of new paradigms such as the Internet of Things (IoT), the Internet of Vehicles (IoV) and 6G, the world is witnessing a tremendous and sharp increase in network traffic. In such large-scale, heterogeneous, and complex networks, the volume of transferred data, as big data, is considered a challenge causing different networking inefficiencies. To overcome these challenges, various techniques are introduced to monitor the performance of networks, called Network Traffic Monitoring and Analysis (NTMA). Network Traffic Prediction (NTP) is a significant subfield of NTMA which mainly focuses on predicting future network load and its behavior. NTP techniques can generally be realized in two ways, that is, statistical- and Machine Learning (ML)-based. In this paper, we provide a study of existing NTP techniques through reviewing, investigating, and classifying the recent relevant works conducted in this field. Additionally, we discuss the challenges and future directions of NTP, showing how ML and statistical techniques can be used to address them.
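
    To make the statistical-versus-ML distinction concrete, the sketch below contrasts a least-squares autoregressive fit with a random forest trained on the same lagged traffic features. The synthetic data, lag count and choice of scikit-learn are assumptions for illustration; the survey itself does not prescribe these tools.

        # Sketch: one-step-ahead traffic prediction done two ways.
        # 1) statistical: a linear autoregression fitted by least squares (numpy)
        # 2) ML-based: a random forest regressor on the same lag features (scikit-learn)
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        t = np.arange(500)
        traffic = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)  # synthetic load

        def lagged(series, lags=3):
            """Build (X, y) where each row of X holds the previous `lags` samples."""
            X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
            return X, series[lags:]

        X, y = lagged(traffic)
        X_train, X_test, y_train, y_test = X[:-50], X[-50:], y[:-50], y[-50:]

        coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)   # statistical AR fit
        ar_pred = X_test @ coef

        rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
        rf_pred = rf.predict(X_test)                               # ML-based prediction

        print("AR MAE:", np.mean(np.abs(ar_pred - y_test)))
        print("RF MAE:", np.mean(np.abs(rf_pred - y_test)))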

    Self-adaptive and online QoS modeling for cloud-based software services

    In the presence of scale, dynamism, uncertainty and elasticity, cloud software engineers face several challenges when modeling Quality of Service (QoS) for cloud-based software services. These challenges can be best managed through self-adaptivity, because engineers' intervention is difficult, if not impossible, given the dynamic and uncertain QoS sensitivity to the environment and control knobs in the cloud. This is especially true for the shared infrastructure of the cloud, where unexpected interference can be caused by co-located software services running on the same virtual machine, and by co-hosted virtual machines within the same physical machine. In this paper, we describe the related challenges and present a fully dynamic, self-adaptive and online QoS modeling approach, grounded in information theory and machine learning algorithms, to create a QoS model capable of predicting the QoS value as output over time by using information on environmental conditions, control knobs and interference as inputs. In particular, we report an in-depth analysis of how the selected inputs correlate with the accuracy of the QoS model in the cloud. To dynamically select inputs to the model at runtime and tune accuracy, we design self-adaptive hybrid dual-learners that partition the possible input space into two sub-spaces, each of which applies a different symmetric uncertainty-based selection technique; the results from the sub-spaces are then combined. Subsequently, we propose the use of adaptive multi-learners for building the model. These learners allow several learning algorithms to model the QoS function simultaneously, permitting the best model for prediction to be selected dynamically on the fly. We experimentally evaluate our models in the cloud environment using the RUBiS benchmark and a realistic FIFA 98 workload. The results show that our approach is more accurate and effective than state-of-the-art modeling approaches.
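
    A compressed sketch of two of the ideas named above follows: ranking candidate inputs by symmetric uncertainty against the QoS output, and keeping several learners while predicting with whichever currently has the lowest error. The discretisation, the particular learners and the selection rule are simplified assumptions, not the paper's exact design.

        # Sketch: (1) symmetric-uncertainty-based input selection,
        #         (2) a multi-learner that predicts with its currently best model.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.tree import DecisionTreeRegressor

        def _entropy(v):
            p = np.bincount(v) / len(v)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def symmetric_uncertainty(x, y, bins=10):
            """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), estimated on binned data."""
            xd = np.digitize(x, np.histogram_bin_edges(x, bins))
            yd = np.digitize(y, np.histogram_bin_edges(y, bins))
            joint = _entropy(xd * (yd.max() + 1) + yd)          # joint entropy via pairing
            mi = _entropy(xd) + _entropy(yd) - joint
            return 2 * mi / (_entropy(xd) + _entropy(yd))

        def select_inputs(X, y, k=2):
            scores = [symmetric_uncertainty(X[:, j], y) for j in range(X.shape[1])]
            return np.argsort(scores)[::-1][:k]                 # indices of the top-k inputs

        class MultiLearner:
            """Keep several models; predict with the one with the lowest training error."""
            def __init__(self):
                self.models = [LinearRegression(), DecisionTreeRegressor(max_depth=4)]
                self.errors = [np.inf] * len(self.models)

            def update(self, X, y):
                for i, m in enumerate(self.models):
                    m.fit(X, y)
                    self.errors[i] = float(np.mean(np.abs(m.predict(X) - y)))

            def predict(self, X):
                return self.models[int(np.argmin(self.errors))].predict(X)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 5))                           # environment, knobs, interference
        y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.1, 200) # QoS value, e.g. response time
        keep = select_inputs(X, y)
        learner = MultiLearner()
        learner.update(X[:, keep], y)
        print("selected inputs:", keep, "predictions:", learner.predict(X[:5, keep]))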