
    Engineering Self-Adaptive Applications on Software Defined Infrastructure

    Cloud computing is a flexible platform that offers faster innovation, elastic resources, and economies of scale. However, it is challenging to ensure non-functional properties such as the performance, cost and security of applications hosted in the cloud. Applications should, on the one hand, adapt to fluctuating workloads to meet the desired performance goals and, on the other, operate economically to reduce operational cost. Moreover, cloud applications are an attractive target for security threats such as distributed denial of service attacks, which target the availability of applications and increase their cost. Given such circumstances, it is vital to engineer applications that can self-adapt to such volatile conditions. In this thesis, we investigate techniques and mechanisms to engineer model-based autonomic application management systems that strive to meet the performance, cost and security objectives of software systems running in the cloud. In addition to using the elasticity feature of the cloud, our proposed autonomic management systems employ run-time network adaptations based on the emerging software defined networking and network function virtualization technologies. We propose a novel approach to self-protecting applications in which application traffic is dynamically managed between public and private clouds depending on the condition of the traffic. Our management approach is able to adapt the bandwidth rates of application traffic to meet performance and cost objectives. Through run-time performance models as well as optimization, the management system maximizes profit each time the application needs to adapt. Our autonomic management solutions are implemented and evaluated analytically, as well as on multiple public and community clouds, to demonstrate their applicability and effectiveness in real-world environments.
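
    As a minimal illustration of the kind of adaptation decision described above, the following Python sketch picks a bandwidth rate that maximizes profit while keeping a modelled response time within an objective. The queueing-style performance model, prices, SLA and candidate rates are all invented for the example and are not taken from the thesis.

        # Illustrative sketch (not the thesis implementation): one adaptation step
        # that chooses a bandwidth rate for the application traffic so that profit
        # (revenue for served requests minus bandwidth cost) is maximized while a
        # response-time objective holds. All parameters are hypothetical.

        def expected_response_time(arrival_rate, bandwidth_mbps, mb_per_request=0.5):
            """Very simple M/M/1-style model: service rate grows with bandwidth."""
            service_rate = bandwidth_mbps / mb_per_request   # requests per second
            if service_rate <= arrival_rate:
                return float("inf")                          # saturated
            return 1.0 / (service_rate - arrival_rate)

        def choose_bandwidth(arrival_rate, sla_seconds=0.5,
                             revenue_per_request=0.01, cost_per_mbps=0.01,
                             candidates=range(10, 1001, 10)):
            """Grid search over candidate rates; return the most profitable feasible one."""
            best_rate, best_profit = None, float("-inf")
            for bw in candidates:
                if expected_response_time(arrival_rate, bw) > sla_seconds:
                    continue                                 # violates the SLA
                profit = arrival_rate * revenue_per_request - bw * cost_per_mbps
                if profit > best_profit:
                    best_rate, best_profit = bw, profit
            return best_rate, best_profit

        if __name__ == "__main__":
            print(choose_bandwidth(arrival_rate=800.0))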

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advance reservation systems are used. In this context, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the type of interface in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most novel TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
    Funding: European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567; Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks Project S&NSEC under Grant TEC2013-47960-C4-3-
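
    To make the congestion-minimization objective mentioned above concrete, the sketch below splits a traffic demand across multiple pre-computed paths in proportion to their residual capacity. Path names, capacities and current loads are invented; a real deployment would install the resulting split through one of the southbound interfaces the survey categorizes (for example as OpenFlow group-table weights).

        # Minimal sketch of one TE technique named in the survey: splitting a demand
        # over multiple paths to reduce congestion. All values are made-up examples.

        def split_demand(demand_mbps, paths):
            """Split a demand over paths proportionally to residual capacity."""
            residual = {p: cap - load for p, (cap, load) in paths.items()}
            total = sum(max(r, 0.0) for r in residual.values())
            if total <= 0:
                raise ValueError("no residual capacity on any path")
            return {p: demand_mbps * max(residual[p], 0.0) / total for p in paths}

        def max_utilization(demand_split, paths):
            """Worst-case path utilization after applying the split."""
            return max((load + demand_split[p]) / cap for p, (cap, load) in paths.items())

        if __name__ == "__main__":
            # path -> (capacity in Mbps, current load in Mbps); illustrative values
            paths = {"A-B-D": (1000.0, 300.0), "A-C-D": (1000.0, 700.0)}
            split = split_demand(200.0, paths)
            print(split, max_utilization(split, paths))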

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
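
    The paper's actual ILP is not reproduced here, but the toy formulation below (using the PuLP package, with invented links, requests and time slots) shows the general shape of an advance-reservation scheduling model: one binary admission variable per request, constrained by link capacity in every time slot of the request's window.

        # Toy advance-reservation ILP, for illustration only; not the paper's model.
        # Requires the PuLP package (pip install pulp). All data is invented.
        import pulp

        links = {"L1": 10.0, "L2": 10.0}                     # link -> capacity (Gbps)
        # request -> (links on its fixed path, demand in Gbps, active time slots)
        requests = {
            "r1": (["L1"],       6.0, [0, 1]),
            "r2": (["L1", "L2"], 5.0, [1, 2]),
            "r3": (["L2"],       4.0, [2]),
        }
        slots = [0, 1, 2]

        prob = pulp.LpProblem("advance_reservation", pulp.LpMaximize)
        admit = {r: pulp.LpVariable(f"admit_{r}", cat="Binary") for r in requests}

        # objective: maximize the total admitted bandwidth
        prob += pulp.lpSum(admit[r] * requests[r][1] for r in requests)

        # capacity constraint for every link and time slot
        for link, cap in links.items():
            for t in slots:
                prob += pulp.lpSum(
                    admit[r] * dem
                    for r, (path, dem, ts) in requests.items()
                    if link in path and t in ts
                ) <= cap

        prob.solve()
        print({r: int(pulp.value(v)) for r, v in admit.items()})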

    Towards auto-scaling in the cloud: online resource allocation techniques

    Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance will suffer if the peak load occurs. The second case leads to wasted money, since resources will remain underutilized most of the time. Therefore, there is a need for more sophisticated resource provisioning techniques that can automatically scale the application's resources according to workload demand and performance constraints. Large cloud providers such as Amazon, Microsoft, and RightScale provide auto-scaling services. However, without proper configuration and testing such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources and meet user-specified performance objectives.
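
    As a hedged sketch of the kind of baseline policy such work improves upon, the following threshold-based rule scales the instance count on average utilization. The thresholds, instance limits and metric callbacks are placeholders, not recommendations from the thesis.

        # Simple reactive auto-scaling rule, for illustration only.
        import time

        def scaling_decision(cpu_utilization, current_instances,
                             scale_out_at=0.70, scale_in_at=0.30,
                             min_instances=1, max_instances=20):
            """Return the new instance count for one control interval."""
            if cpu_utilization > scale_out_at and current_instances < max_instances:
                return current_instances + 1
            if cpu_utilization < scale_in_at and current_instances > min_instances:
                return current_instances - 1
            return current_instances

        def control_loop(read_utilization, apply_instance_count, interval_s=60):
            """Periodically re-evaluate capacity; callbacks supply metrics and enact decisions."""
            instances = 1
            while True:
                instances = scaling_decision(read_utilization(), instances)
                apply_instance_count(instances)
                time.sleep(interval_s)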

    An adaptive modelling infrastructure for context-aware mobile computing

    Context provides information about the present status of people, places, things, networks and devices in the environment. Context-awareness refers to the use of context information by an application to adapt its functionality to the current context of use. The development of context-aware applications is inherently complex. Previous research on mobile computing has emphasized programmable interfaces for the development of context-aware systems; comparatively little work has addressed the modelling aspects of adaptive applications. This research aims at developing a complete infrastructure for the development of context-aware applications. The infrastructure consists of a middleware for context-aware application development that is supported by a set of context information modelling and reasoning facilities. It aims at extending the capabilities of context-aware middleware infrastructures by incorporating novel approaches to model context and situations under uncertainty. This thesis addresses the key challenges in context-aware computing with a complete infrastructure that aims at achieving the following: (1) support for fuzzy composition of high-level context abstractions from low-level detector context, and fuzzy-based inference mechanisms, (2) support for mobile services that can be dynamically composed and migrated with reference to adaptation requirements for different context situations, and (3) support for modelling of adaptation components and entities.
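
    The snippet below is an illustrative sketch, not the infrastructure itself: it composes one hypothetical high-level context ("in a meeting") from low-level context using fuzzy membership degrees combined with a fuzzy AND, to show what fuzzy composition of context can look like. The membership functions and the rule are invented for the example.

        # Fuzzy composition of a high-level context from low-level context (sketch).

        def low_noise(noise_db):
            """Degree to which the ambient noise level counts as 'low' (1.0 .. 0.0)."""
            return max(0.0, min(1.0, (60.0 - noise_db) / 30.0))

        def working_hours(hour):
            """Degree to which the current hour counts as working time."""
            return 1.0 if 9 <= hour < 17 else 0.0

        def in_meeting(noise_db, hour, calendar_busy):
            """Fuzzy AND (min) over the three low-level contexts."""
            return min(low_noise(noise_db), working_hours(hour),
                       1.0 if calendar_busy else 0.0)

        if __name__ == "__main__":
            print(in_meeting(noise_db=42.0, hour=14, calendar_busy=True))   # 0.6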

    Observing the clouds : a survey and taxonomy of cloud monitoring

    This research was supported by a Royal Society Industry Fellowship and an Amazon Web Services (AWS) grant. Date of acceptance: 10/12/2014.
    Monitoring is an important aspect of designing and maintaining large-scale systems. Cloud computing presents a unique set of challenges to monitoring, including on-demand infrastructure, unprecedented scalability, rapid elasticity and performance uncertainty. There is a wide range of monitoring tools originating from cluster and high-performance computing, grid computing and enterprise computing, as well as a series of newer bespoke tools designed exclusively for cloud monitoring. These tools share a number of common elements and designs, which address the demands of cloud monitoring to various degrees. This paper performs an exhaustive survey of contemporary monitoring tools, from which we derive a taxonomy that examines how effectively existing tools and designs meet the challenges of cloud monitoring. We conclude by examining the socio-technical aspects of monitoring, and investigate the engineering challenges and practices behind implementing monitoring strategies for cloud computing.

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN. http://dx.doi.org/10.1016/j.future.2014.09.00
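
    The sketch below illustrates, with invented capacities and step loads, one of the infrastructural decisions discussed above: placing process steps on leased VMs and elastically leasing a further VM when the existing ones are saturated. It is not one of the surveyed systems.

        # Toy placement/lease decision for an elastic BPMS (illustration only).

        VM_CAPACITY = 4.0   # abstract "load units" a single VM can handle

        def place_step(step_load, vms):
            """Assign a process step to the least-loaded VM, leasing a new one if needed."""
            target = min(range(len(vms)), key=lambda i: vms[i], default=None)
            if target is None or vms[target] + step_load > VM_CAPACITY:
                vms.append(0.0)              # elastically lease a new VM
                target = len(vms) - 1
            vms[target] += step_load
            return target

        if __name__ == "__main__":
            vms = []
            for load in [2.0, 3.0, 1.5, 2.5, 1.0]:
                print("step ->", place_step(load, vms), vms)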

    Software Defined Networking: Applicability and Service Possibilities


    View on 5G Architecture: Version 1.0

    The current white paper focuses on the results produced after one year of research, mainly from 16 projects working on the abovementioned domains. Over several months, representatives from these projects have worked together to identify the key findings of their projects and to capture the commonalities as well as the different approaches and trends. They have also worked to determine the challenges that remain to be overcome in order to meet the 5G requirements. The goal of the 5G Architecture Working Group is to use the results captured in this white paper to assist the participating projects in achieving a common reference framework. The work of this working group will continue during the following year so as to capture the latest results produced by the projects and to further elaborate this reference framework. The 5G networks will be built around people and things and will natively meet the requirements of three groups of use cases:
    • Massive broadband (xMBB), which delivers gigabytes of bandwidth on demand
    • Massive machine-type communication (mMTC), which connects billions of sensors and machines
    • Critical machine-type communication (uMTC), which allows immediate feedback with high reliability and enables, for example, remote control of robots and autonomous driving.
    The demand for mobile broadband will continue to increase in the coming years, largely driven by the need to deliver ultra-high definition video. However, 5G networks will also be the platform enabling growth in many industries, ranging from the IT industry to the automotive, manufacturing and entertainment industries. 5G will enable new applications such as autonomous driving, remote control of robots and tactile applications, but these also bring many challenges to the network. Some of these are related to providing low latency on the order of a few milliseconds and reliability comparable to that of fixed lines. But the biggest challenge for 5G networks will be to cater for a diverse set of services and their requirements. To achieve this, the goal for 5G networks will be to improve the flexibility of the architecture. The white paper is organized as follows. In Section 2 we discuss the key business and technical requirements that drive the evolution of 4G networks into 5G. In Section 3 we provide the key points of the overall 5G architecture, whereas in Section 4 we elaborate on the functional architecture. Different issues related to physical deployment in the access, metro and core networks of the 5G network are discussed in Section 5, while in Section 6 we present software network enablers that are expected to play a significant role in future networks. Section 7 presents potential impacts on standardization and Section 8 concludes the white paper.