
    Generic Methods for Adaptive Management of Service Level Agreements in Cloud Computing

    The adoption of cloud computing to build and deliver application services has been nothing less than phenomenal. Service-oriented systems are being built from disparate sources composed of web services, replicable datastores, messaging, monitoring and analytics functions, and more. Clouds augment these systems with advanced features such as high availability, customer affinity and autoscaling on a fair pay-per-use cost model. The challenge lies in taking the utility paradigm of cloud beyond its current exploitation. Major trends show that multi-domain synergies are creating added-value service propositions. This raises two questions on autonomic behaviors, which are specifically addressed by this thesis. The first question deals with mechanism design that brings the customer and provider(s) together in the procurement process. The purpose is that, given customer requirements for quality of service and other non-functional properties, service dependencies are efficiently resolved and legally stipulated. The second question deals with effective management of cloud infrastructures such that commitments to customers are fulfilled and the infrastructure is optimally operated in accordance with provider policies. This thesis finds motivation in Service Level Agreements (SLAs) to answer these questions. The role of SLAs is explored as instruments to build and maintain trust in an economy where services are increasingly interdependent. The thesis takes a holistic approach and develops generic methods to automate SLA lifecycle management, by identifying and solving relevant research problems. The methods afford adaptiveness in a changing business landscape and can be localized through policy-based controls. A thematic vision that emerges from this work is that business models, services and the delivery technology are independent concepts that can be finely knitted together by SLAs. Experimental evaluations support the message of this thesis: that exploiting SLAs as foundations for market innovation and infrastructure governance indeed holds win-win opportunities for both cloud customers and cloud providers.
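
    As a minimal illustration of the procurement question, the sketch below matches a customer's quality-of-service requirements against candidate provider SLA offers. All names and fields are hypothetical; the thesis develops far richer mechanism-design methods than this filter.

        # Illustrative sketch only (names and fields are hypothetical):
        # filtering provider SLA offers against a customer's quality-of-service
        # requirements during procurement, cheapest eligible offer first.
        from dataclasses import dataclass

        @dataclass
        class SlaOffer:
            provider: str
            availability: float      # e.g. 0.999 = "three nines"
            max_latency_ms: float
            price_per_hour: float

        def eligible_offers(offers, min_availability, max_latency_ms):
            """Keep offers whose terms satisfy the requirements, cheapest first."""
            ok = [o for o in offers
                  if o.availability >= min_availability
                  and o.max_latency_ms <= max_latency_ms]
            return sorted(ok, key=lambda o: o.price_per_hour)

        offers = [SlaOffer("ProviderA", 0.999, 50.0, 1.20),
                  SlaOffer("ProviderB", 0.990, 30.0, 0.80)]
        print([o.provider for o in eligible_offers(offers, 0.999, 100.0)])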

    Blueprint model and language for engineering cloud applications

    The research presented in this thesis is positioned within the domain of engineering Cloud Service-Based Applications (CSBAs). Its contribution is twofold: (1) a uniform specification language, called the Blueprint Specification Language (BSL), for specifying cloud services across several cloud vendors, and (2) a set of associated techniques, called the Blueprint Manipulation Techniques (BMTs), for publishing, querying, and composing cloud service specifications, with the aim of supporting the flexible design and configuration of a CSBA.
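
    To make the blueprint idea concrete, the sketch below models vendor-neutral service blueprints as plain data and composes them by resolving requirements against offerings. The structure is a hypothetical stand-in for illustration; the actual BSL syntax is defined in the thesis.

        # Hypothetical stand-in for a BSL-style blueprint (not the real syntax):
        # each blueprint offers capabilities, requires others, and carries
        # vendor-specific bindings so one specification spans several clouds.
        frontend = {
            "name": "web-frontend",
            "offers": ["http-ui"],
            "requires": ["user-db"],                 # resolved by composition
            "vendor_bindings": {"aws": "ec2", "azure": "vm"},
        }
        database = {
            "name": "managed-postgres",
            "offers": ["user-db"],
            "requires": [],
            "vendor_bindings": {"aws": "rds", "azure": "flexible-server"},
        }

        def compose(blueprints):
            """Check that every requirement is offered by some blueprint."""
            offered = {o for b in blueprints for o in b["offers"]}
            missing = [r for b in blueprints
                       for r in b["requires"] if r not in offered]
            if missing:
                raise ValueError(f"unresolved requirements: {missing}")
            return blueprints

        compose([frontend, database])    # succeeds: 'user-db' is resolved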

    Adaptive monitoring and control framework in Application Service Management environment

    The economics of data centres and cloud computing services have pushed hardware and software requirements to the limits, leaving only a very small performance overhead before systems reach saturation. For Application Service Management (ASM), this carries a growing risk of impacting the execution times of various processes. In order to deliver a stable service at times of great demand for computational power, enterprise data centres and cloud providers must implement fast and robust control mechanisms that are capable of adapting to changing operating conditions while satisfying service-level agreements. In ASM practice, there are normally two methods for dealing with increased load: increasing computational power or releasing load. The first approach typically involves allocating additional machines, which must be available, waiting idle, to deal with high-demand situations. The second approach is implemented by terminating incoming actions that are less important given new activity demand patterns, by throttling, or by rescheduling jobs. Although most modern cloud platforms and operating systems do not allow adaptive or automatic termination of processes, tasks or actions, it is common administrative practice to manually end or stop tasks or actions at any level of the system, such as at the level of a node, function, or process, or to kill a long session executing on a database server. In this context, adaptive control of action termination remains a significantly underutilised subject of Application Service Management and deserves further consideration. For example, this approach may be eminently suitable for systems with strict execution-time Service Level Agreements, such as real-time systems, or for systems running under hard pressure on power supplies, under variable priority, or under constraints set by the green computing paradigm. Along this line of work, the thesis investigates the potential of dimension relevance and metric-signal decomposition as methods that would enable more efficient action termination. These methods are integrated into adaptive control emulators and actuators powered by neural networks, which are used to adjust the operation of the system towards better conditions in environments with established goals, seen from both system performance and economics perspectives. The behaviour of the proposed control framework is evaluated using complex load and service-agreement scenarios of systems compatible with the requirements of on-premises and elastic compute cloud deployments, serverless computing, and microservices architectures.
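
    The sketch below illustrates the load-release side of this idea with a toy controller that, under CPU pressure, selects expendable actions approaching their execution-time SLA bound for termination. A plain threshold rule stands in for the neural-network actuators used in the thesis, and all names are hypothetical.

        # Toy action-termination policy (a stand-in for the thesis's
        # neural-network-driven actuators): under load, release the least
        # important actions that are closest to breaching their SLA bound.
        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str
            priority: int        # lower = more expendable
            elapsed_s: float     # how long the action has been running
            sla_limit_s: float   # execution-time bound from the SLA

        def actions_to_terminate(actions, cpu_utilisation, high_water=0.90):
            if cpu_utilisation < high_water:
                return []        # enough headroom: no termination needed
            at_risk = [a for a in actions if a.elapsed_s > 0.8 * a.sla_limit_s]
            return sorted(at_risk, key=lambda a: a.priority)

        pool = [Action("batch-report", 1, 45.0, 50.0),
                Action("checkout", 9, 0.4, 2.0)]
        for a in actions_to_terminate(pool, cpu_utilisation=0.95):
            print("terminate:", a.name)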

    Workflow models for heterogeneous distributed systems

    The role of data in modern scientific workflows is becoming more and more crucial. The unprecedented amount of data available in the digital era, combined with recent advancements in Machine Learning and High-Performance Computing (HPC), has let computers surpass human performance in a wide range of fields, such as Computer Vision, Natural Language Processing and Bioinformatics. However, a solid data management strategy is crucial for key aspects like performance optimisation, privacy preservation and security. Most modern programming paradigms for Big Data analysis adhere to the principle of data locality: moving computation closer to the data to remove transfer-related overheads and risks. Still, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different steps of a complex workflow. The contribution of this dissertation is twofold. First, it defines a novel methodology for distributed modular applications, allowing topology-aware scheduling and data management while separating business logic, data dependencies, parallel patterns and execution environments. In addition, it introduces computational notebooks as a high-level and user-friendly interface to this new kind of workflow, aiming to flatten the learning curve and improve the adoption of the methodology. Each of these contributions is accompanied by a full-fledged, open-source implementation, which has been used for evaluation purposes and allows the interested reader to experience the related methodology first-hand. The validity of the proposed approaches has been demonstrated on a total of five real scientific applications in the domains of Deep Learning, Bioinformatics and Molecular Dynamics simulation, executing them on large-scale mixed cloud-HPC infrastructures.
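
    As a minimal, hypothetical sketch of what topology-aware, locality-driven placement can look like (the dissertation's actual interfaces differ), a workflow step below declares its data dependencies and a scheduler places it where most of its inputs already live:

        # Minimal locality-aware placement sketch (hypothetical API): run a
        # step where the majority of its input datasets reside, and report
        # which datasets would still need to be transferred.
        from dataclasses import dataclass

        @dataclass
        class Step:
            name: str
            inputs: list     # datasets this step consumes

        DATA_LOCATION = {"genomes": "hpc", "reads": "hpc", "labels": "cloud"}

        def placement(step):
            sites = [DATA_LOCATION[d] for d in step.inputs]
            target = max(set(sites), key=sites.count)   # majority site
            transfers = [d for d in step.inputs if DATA_LOCATION[d] != target]
            return target, transfers

        site, moves = placement(Step("train-model", ["genomes", "reads", "labels"]))
        print(site, moves)   # -> hpc ['labels']: only the minority data moves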

    A cloud business intelligence security evaluation framework for small and medium enterprises

    Cloud business intelligence has practical importance in data management and decision-making, but its adoption and use among South African small and medium enterprises remain relatively low compared to large business enterprises. The low uptake persists irrespective of awareness and acceptance of the benefits of Cloud business intelligence in the business domain. Cloud business intelligence depends on the cloud computing paradigm, which is susceptible to security threats and risks that decision-makers must consider when selecting which applications to use. The major objective of this study was to propose a security evaluation framework for Cloud business intelligence suitable for use by small and medium enterprises in small South African towns. The study utilised an exploratory sequential mixed-method research methodology with decision-makers from five towns in the Limpopo Province. Both qualitative and quantitative methods were used to analyse the data. The findings show that the level of adoption of Cloud business intelligence in the five selected towns was lower than reported in the literature, and that decision-makers were eager to adopt and use safe Cloud business intelligence but were hindered by their inability to evaluate the security of these applications. Factors preventing adoption were decision-makers' limited knowledge of the applications and of security evaluation, the inability to use industry security frameworks and standards due to their complexity, mistrust of cloud service providers' ability to meet their obligations when providing agreed services, and the lack of security specialists to assist in the evaluation process. Small and medium enterprises used unapproved security evaluation methods, such as relying on friends who were not information technology security specialists. A security evaluation framework and checklists were proposed based on the findings of the study and the best practices of existing industry frameworks and standards. The proposed security evaluation framework was validated for relevance by information technology security specialists and for acceptance by small and medium enterprise decision-makers. The study concluded that the adoption and use of Cloud business intelligence were hindered by the lack of a user-friendly security evaluation framework and by limited security evaluation knowledge among decision-makers. Furthermore, the study concluded that the proposed framework and checklists were a relevant solution, as they were accepted as useful in assisting decision-makers to select appropriate Cloud business intelligence for their enterprises. The main contribution of this study is the proposed security evaluation framework and checklists for Cloud business intelligence, for use by decision-makers in small and medium enterprises in small South African towns in the Limpopo Province.

    Energy-Efficient Software

    The energy consumption of ICT is growing at an unprecedented pace. The main drivers of this growth are the widespread diffusion of mobile devices and the proliferation of datacenters, the most power-hungry IT facilities. In addition, the demand for ICT technologies and services is predicted to increase in the coming years. Finding solutions to decrease the ICT energy footprint is, and will remain, a top priority for researchers and professionals in the field. Hardware technology has substantially improved throughout the years: modern ICT devices are decidedly more energy-efficient than their predecessors in terms of performance per watt. However, as recent studies show, these improvements are not effectively reducing the growth rate of ICT energy consumption, which suggests that these devices are not used in an energy-efficient way. Hence, we have to look at software. Modern software applications are not designed and implemented with energy efficiency in mind. As hardware became more and more powerful (and cheaper), software developers were no longer concerned with optimizing resource usage. Rather, they focused on providing additional features, adding layers of abstraction and complexity to their products. This ultimately resulted in bloated, slow software applications that waste hardware resources and, consequently, energy. In this dissertation, the relationship between software behavior and hardware energy consumption is explored in detail. For this purpose, the abstraction levels of software are traversed upwards, from source code to architectural components. Empirical research methods and evidence-based software engineering approaches serve as a basis. First, this dissertation shows the relevance of software to energy consumption. Second, it gives examples of best practices and tactics that can be adopted to improve software energy efficiency, or to design energy-efficient software from scratch. Finally, this knowledge is synthesized in a conceptual framework that gives the reader an overview of possible strategies for software energy efficiency, along with examples and suggestions for future research.
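
    One concrete instance of the kind of tactic surveyed here is replacing busy-wait polling with event-driven blocking: the former keeps a core at full power doing nothing useful, while the latter lets the thread be descheduled so the processor can drop into a low-power state. A minimal sketch:

        # Energy-aware waiting: both functions wait for the same condition,
        # but the polling version burns CPU cycles (and watts) while the
        # blocking version lets the CPU idle until it is signalled.
        import threading

        ready = threading.Event()

        def wasteful_wait():
            while not ready.is_set():   # tight loop: core stays busy
                pass

        def frugal_wait():
            ready.wait()                # thread sleeps: core can power down

        ready.set()       # signal the condition
        frugal_wait()     # returns immediately; same result, far less energy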

    Design and implementation of a telemetry platform for high-performance computing environments

    A new generation of high-performance and distributed computing applications and services rely on adaptive and dynamic architectures and execution strategies to run efficiently, resiliently, and at scale in today's HPC environments. These architectures require insights into their execution behaviour and the state of their execution environment at various levels of detail in order to make context-aware decisions. HPC telemetry provides this information: it describes the continuous stream of time series and event data generated on HPC systems by the hardware, operating systems, services, runtime systems, and applications. Current HPC ecosystems do not provide the conceptual models, infrastructure, and interfaces to collect, store, analyse, and integrate telemetry in a structured and efficient way. Consequently, applications and services largely depend on one-off solutions and custom-built technologies to achieve these goals, introducing significant development overheads that inhibit portability and mobility. To facilitate a broader mix of applications, more efficient application development, and swift adoption of adaptive architectures in production, a comprehensive framework for telemetry management and analysis must be provided as part of future HPC ecosystem designs. This thesis provides the blueprint for such a framework: it proposes a new approach to telemetry management in HPC, the Telemetry Platform concept. Departing from the observation that telemetry data and the corresponding analysis and integration patterns on modern multi-tenant HPC systems have much in common with the patterns observed in large-scale data analytics or "Big Data" platforms, the telemetry platform concept takes the data platform paradigm and architectural approach and applies them to HPC telemetry. The result is the blueprint for a system that provides services for storing, searching, analysing, and integrating telemetry data in HPC applications and other HPC system services. It allows users to create and share telemetry-data-driven insights using everything from simple time-series analysis to complex statistical and machine learning models, while hiding many of the inherent complexities of data management, such as data transport, clean-up, storage, cataloguing, and access management, and providing appropriate and scalable analytics and integration capabilities. The main contributions of this research are (1) the application of the data platform concept to HPC telemetry data management and usage; (2) a graph-based, time-variant telemetry data model that captures the structures and properties of platforms and applications and in which telemetry data can be organised; (3) an architecture blueprint and prototype of a concrete implementation and integration architecture of the telemetry platform; and (4) a proposal for decoupled HPC application architectures that separate telemetry data management and feedback-control-loop logic from the core application code. First experimental results with the prototype implementation suggest that the telemetry platform paradigm can reduce overhead and redundancy in the development of telemetry-based application architectures, and lower the barrier for HPC systems research and for the provisioning of new, innovative HPC system services.
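
    The following sketch conveys the general shape of such a graph-based, time-variant telemetry model (the concrete schema in the thesis differs): system entities are graph nodes, relations between them are edges, and metric time series are attached to entities.

        # Simplified telemetry-graph sketch (not the thesis's schema):
        # entities form a graph, and per-entity metric time series can be
        # queried across an entity and its related entities.
        from collections import defaultdict

        class TelemetryGraph:
            def __init__(self):
                self.edges = defaultdict(set)     # entity -> related entities
                self.series = defaultdict(list)   # (entity, metric) -> [(t, v)]

            def relate(self, a, b):
                self.edges[a].add(b)

            def record(self, entity, metric, t, value):
                self.series[(entity, metric)].append((t, value))

            def metrics_for(self, entity, metric):
                scope = {entity} | self.edges[entity]
                return {e: self.series[(e, metric)]
                        for e in scope if self.series[(e, metric)]}

        g = TelemetryGraph()
        g.relate("job-42", "node-7")                   # job-42 ran on node-7
        g.record("node-7", "power_watts", 0.0, 310.5)
        print(g.metrics_for("job-42", "power_watts"))  # includes node-7's data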

    Telecommunication Economics

    This book constitutes a collaborative and selected documentation of the scientific outcome of the European COST Action IS0605 Econ@Tel, "A Telecommunications Economics COST Network", which ran from October 2007 to October 2011. Involving experts from around 20 European countries, the goal of Econ@Tel was to develop a strategic research and training network among key people and organizations in order to enhance Europe's competence in the field of telecommunications economics. Reflecting the organization of the COST Action IS0605 Econ@Tel into working groups, the following four major research areas are addressed: (1) evolution and regulation of communication ecosystems; (2) social and policy implications of communication technologies; (3) economics and governance of future networks; and (4) future networks management architectures and mechanisms.

    Success Factor-based Business Models for E-Commerce Platform Providers

    E-commerce is booming and has become an integral part of everyday life. The B2B industry in particular is currently demonstrating immense growth potential, not only for the trading parties themselves but especially for providers of the necessary e-commerce platforms. Driven by disruptive forces and the accompanying rapid technological progress, the latter face a highly dynamic, complex, and intense competitive environment, which has a significant impact on their business models and their further development. In this context, entrepreneurial decisions are subject to strong uncertainties and risks. In order to support e-commerce platform providers focusing on customers in the B2B segment in their business model decisions, this thesis identifies key success factors specific to their business models as well as ways of monitoring them. Using success factor research as the research methodology, this applied research project, conducted in the real world, can be described as both interpretive and subjective and follows a social constructivist stance. In the process, 22 semi-structured interviews with e-commerce platform users operating in the B2B sector were conducted to obtain rich and in-depth information, which was then analysed using template analysis. Based on the insights gained, the contribution of this research comprises (i) a blueprint of a success factor-based business model for e-commerce platform providers that also serves as a guide for implementation, (ii) a tool for monitoring this model, and (iii) a suitable business model innovation process model that supports its proactive and sustainable further development. With that, the results of this work provide new insights for both scholars and practitioners and can have a major impact on the sustainable success of e-commerce platform providers' business models and thus on corporate success.

    A Generic method for assembling software product line components

    Software product lines (SPLs) facilitate the industrialization of software development. The main goal is to create a set of reusable software components for the rapid production of a family of software systems. Many authors have proposed different approaches to implement and assemble the reusable components of an SPL; however, the construction and assembly of these components remain a complex and time-consuming process. This thesis analyzes the advantages and disadvantages of the current approaches to implementing and assembling the reusable components of an SPL. Building on these elements, and with the goal of developing a generic method (one that can be applied to software components developed in different languages), we develop Fragment-oriented Programming (FragOP), a framework to design, implement and reuse SPL domain components. FragOP is based on: (i) domain components, (ii) domain files, (iii) fragmentation points, (iv) fragments, (v) customization points, and (vi) customization files. FragOP was implemented in an open-source tool called VariaMos, and we carried out three evaluations: (i) we created a clothing-store SPL, derived five different products, and discussed the results; (ii) we compared FragOP with other approaches; and (iii) we designed and executed a usability test of VariaMos's support for the FragOP approach. The results show preliminary evidence that the use of FragOP reduces manual intervention when assembling SPL domain components, and that it can be used as a generic method for assembling assets and SPL components developed in different software languages.
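
    To give a flavour of the fragment mechanism, the toy sketch below marks a fragmentation point in a domain file and splices a selected fragment into it when a product is derived. The marker syntax and names are invented for illustration; the real tooling is provided by VariaMos.

        # Toy FragOP-style derivation (illustrative only): domain-file lines
        # carry a fragmentation-point marker; deriving a product replaces the
        # marker with the selected fragment, or drops it if deselected.
        DOMAIN_LINES = [
            "class Cart:",
            "    def total(self, items):",
            "        subtotal = sum(i.price for i in items)",
            "        # <<fragment: discount>>",
            "        return subtotal",
        ]
        FRAGMENTS = {"discount": "        subtotal *= 0.9  # loyalty discount"}

        def derive_product(lines, selected):
            out = []
            for line in lines:
                marker = line.strip()
                if marker.startswith("# <<fragment:"):
                    name = marker[len("# <<fragment:"):].strip(" >")
                    if name in selected:
                        out.append(FRAGMENTS[name])
                    continue                 # marker lines never ship
                out.append(line)
            return "\n".join(out)

        print(derive_product(DOMAIN_LINES, selected={"discount"}))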