
    Governance of Cloud-hosted Web Applications

    Cloud computing has revolutionized the way developers implement and deploy applications. By running applications on large-scale compute infrastructures and programming platforms that are remotely accessible as utility services, cloud computing provides scalability, high availability, and increased user productivity. Despite the advantages inherent to the cloud computing model, it has also given rise to several software management and maintenance issues. Specifically, cloud platforms do not enforce developer best practices or other administrative requirements when applications are deployed. Cloud platforms also do not facilitate establishing service level objectives (SLOs) on application performance, which are necessary to ensure reliable and consistent operation of applications. Moreover, cloud platforms do not provide adequate support for monitoring the performance of deployed applications or for conducting root cause analysis when an application exhibits a performance anomaly. We employ governance as a methodology to address these issues. We devise novel governance solutions that achieve administrative conformance, developer best practices, and performance SLOs in the cloud via policy enforcement, SLO prediction, performance anomaly detection, and root cause analysis. The proposed solutions are fully automated and built into the cloud platforms as cloud-native features, thereby freeing application developers from having to implement similar features themselves. We evaluate our methodology on real-world cloud platforms and show that our solutions are highly effective and efficient.
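
    A minimal sketch of the kind of cloud-native governance check described in this abstract: a set of deployment-time policies for administrative conformance and developer best practices, plus a simple runtime SLO comparison. The descriptor fields, policies, and thresholds are illustrative assumptions, not the thesis's actual implementation.

```typescript
// Hypothetical illustration of deployment-time policy enforcement and a
// runtime SLO check; field names and rules are assumptions, not the actual
// governance system described in the abstract.
interface DeploymentDescriptor {
  owner: string;               // administrative conformance: every app declares an owner
  dependenciesPinned: boolean; // developer best practice: dependency versions are pinned
  healthEndpoint?: string;     // developer best practice: a health-check endpoint exists
}

type Policy = (d: DeploymentDescriptor) => string | null; // null = compliant

const policies: Policy[] = [
  d => (d.owner ? null : "missing owner metadata"),
  d => (d.dependenciesPinned ? null : "dependencies are not version-pinned"),
  d => (d.healthEndpoint ? null : "no health-check endpoint declared"),
];

function enforce(d: DeploymentDescriptor): void {
  const violations = policies.map(p => p(d)).filter((v): v is string => v !== null);
  if (violations.length > 0) {
    // The platform rejects the deployment instead of relying on the developer.
    throw new Error(`Deployment rejected: ${violations.join("; ")}`);
  }
}

// Runtime side: flag a performance anomaly when observed p95 latency exceeds
// the objective agreed for the application.
function violatesSlo(observedP95Ms: number, sloP95Ms: number): boolean {
  return observedP95Ms > sloP95Ms;
}
```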

    Management of customizable software-as-a-service in cloud and network environments


    Scalability of a Node.js-based microservice architecture on Heroku

    Microservices are a method for creating distributed services. Instead of monolithic applications, where all of the functionality runs inside the same process, every microservice specializes in a specific task. This allows for more fine-grained scaling and utilization of the individual services, while also making the microservices easier to reason about. Push notifications can cause unexpectedly high loads for services, especially when they are sent to all users. With enough users, this load can become overwhelming. Most services have three options to meet this increased demand: scale the service either horizontally or vertically, improve the performance of the service, or send the notifications in batches. In our service, we chose to implement batched sending of notifications. This caused issues in the amount of time it took to send all notifications: instead of a short peak in traffic, the service had to manage consistently high load for a long period of time. This thesis is in part a literature study, in which we examine microservices in more detail and go through the more common architectural patterns associated with them. We explore a production service that had issues meeting the demand during high load caused by push notifications. To understand the production environment and its restrictions, we also explain the runtime, Node.js, and the cloud provider, Heroku, that were used. We go through the clustering implementation details that allowed our API gateway to scale vertically more effectively. Based on our performance evaluation of an example Node.js application and our production environment, clustering is an easy and effective way to enable vertical scaling for Node.js applications. However, even with better hardware, there still exists a breaking point where the service cannot manage any more traffic and autoscaling is not good enough to meet the demand. A service requires constant monitoring and performance improvements from the development team to be able to meet high demand.
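
    The clustering approach mentioned above maps naturally onto Node.js's built-in cluster module; the sketch below (the port fallback and per-core worker count are assumptions) forks one worker per CPU core so that a single dyno can serve requests on all of its cores.

```typescript
// Minimal Node.js clustering sketch: the primary process forks one worker per
// CPU core, and incoming HTTP connections are distributed across the workers.
import cluster from "node:cluster";
import http from "node:http";
import os from "node:os";

const PORT = Number(process.env.PORT ?? 3000); // Heroku injects PORT; 3000 is a local fallback

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork(); // one worker per core (an assumed sizing policy)
  }
  cluster.on("exit", worker => {
    console.log(`worker ${worker.process.pid} exited, restarting`);
    cluster.fork(); // keep the worker pool at full size
  });
} else {
  http
    .createServer((_req, res) => {
      res.end(`handled by worker ${process.pid}\n`);
    })
    .listen(PORT);
}
```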

    Linear Scalability of Distributed Applications

    The explosion of social applications such as Facebook, LinkedIn and Twitter, of electronic commerce with companies like Amazon.com and Ebay.com, and of Internet search has created the need for new technologies and appropriate systems to effectively manage a considerable amount of data and users. These applications must run continuously every day of the year and must be capable of surviving sudden and abrupt load increases as well as all kinds of software, hardware, human and organizational failures. Increasing (or decreasing) the allocated resources of a distributed application in an elastic and scalable manner, while satisfying requirements on availability and performance in a cost-effective way, is essential for commercial viability, but it poses great challenges in today's infrastructures. Indeed, cloud computing can provide resources on demand: it is now easy to start dozens of servers in parallel (computational resources) or to store a huge amount of data (storage resources), even for a very limited period, paying only for the resources consumed. However, these complex infrastructures consisting of heterogeneous and low-cost resources are failure-prone. Also, although cloud resources are deemed to be virtually unlimited, only adequate resource management and demand multiplexing can meet customer requirements and avoid performance deterioration. In this thesis, we deal with adaptive management of cloud resources under specific application requirements. First, in the intra-cloud environment, we address the problem of cloud storage resource management with availability guarantees and find the optimal resource allocation in a decentralized way by means of a virtual economy. Data replicas migrate, replicate or delete themselves according to their economic fitness. Our approach responds effectively to sudden load increases or failures and makes best use of the geographical distance between nodes to improve application-specific data availability. We then propose a decentralized approach for adaptive management of computational resources for applications requiring high availability and performance guarantees under load spikes, sudden failures or cloud resource updates. Our approach involves a virtual economy among service components (similar to the one among data replicas) and an innovative cascading scheme for setting the performance goals of individual components so as to meet the overall application requirements. Our approach meets application requirements with the minimum resources by allocating new ones or releasing redundant ones. Finally, as cloud storage vendors offer online services at different rates, which can vary widely due to second-degree price discrimination, we present an inter-cloud storage resource allocation method that aggregates resources from different storage vendors and provides the user with a system that guarantees the best rate to host and serve its data, while satisfying the user's requirements on availability, durability, latency, etc. Our system continuously optimizes the placement of data according to its type and usage pattern, and minimizes migration costs from one provider to another, thereby avoiding vendor lock-in.
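
    A hedged sketch of the virtual-economy idea for storage replicas: each replica earns virtual revenue for the requests it serves and pays virtual rent for the node it occupies, and its net balance drives the replicate, migrate, or delete decision. The field names, thresholds, and decision rules below are illustrative assumptions rather than the thesis's actual economy.

```typescript
// Illustrative only: a replica's "economic fitness" is revenue from served
// requests minus the rent of the hosting node; the decision rules and the
// 2x-rent replication threshold are assumptions.
interface ReplicaAccount {
  requestsServed: number;    // demand observed during the last accounting period
  revenuePerRequest: number; // virtual income per served request
  nodeRentPerPeriod: number; // virtual cost of the storage/compute the replica occupies
}

type Decision = "replicate" | "stay" | "migrate" | "delete";

function decide(a: ReplicaAccount, cheaperNodeAvailable: boolean): Decision {
  const balance = a.requestsServed * a.revenuePerRequest - a.nodeRentPerPeriod;
  if (balance > 2 * a.nodeRentPerPeriod) return "replicate"; // high demand: add a copy
  if (balance >= 0) return "stay";                           // profitable where it is
  if (cheaperNodeAvailable) return "migrate";                 // unprofitable here, cheaper elsewhere
  return "delete";                                            // replica cannot pay its rent
}
```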

    REST4Mobile: A framework for enhanced usability of REST services on smartphones

    Considering end-user research and the proliferation of smartphones and REpresentational State Transfer (REST) interfaces, we envisage that smartphone owners can innovate to compose applications on the small screen. This paper presents the design and evaluation of a REST service development framework (viz., REST4Mobile) with the aim of enhancing usability when consuming REST services on smartphones. Our design process uses the usability factors identified in our previous work as primary constraints for modeling the framework and a corresponding composition tool. Sample REST services are developed first with and then without the framework, and the usability of composing the services on smartphones is evaluated. Evaluation was conducted by deploying the component REST services, the composition tool, and the resulting composite apps on a local machine. As the task of service composition is conducted directly on the smartphone's screen, the evaluation process is designed to be repeatable on remote servers and in the cloud. Results showed that constraints can be added to the REST architectural style based on the influence of domain-specific terms and human cognitive capabilities on the naming and size of Uniform Resource Identifiers (URIs). In addition, the principles embodying the framework are found to be influential factors in enhancing the usability of REST services on smartphones.
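
    A small sketch of the kind of usability constraint the framework layers on top of REST, as suggested by the results above: bounding the length and depth of resource URIs and requiring path segments to come from a domain vocabulary. The limits and vocabulary here are assumptions made for illustration, not the framework's actual rules.

```typescript
// Illustrative usability constraints on resource URIs (limits are assumptions):
// short, shallow paths built from domain terms are easier to read and compose
// on a small screen.
const MAX_URI_LENGTH = 60;  // assumed upper bound on total URI length
const MAX_PATH_DEPTH = 3;   // assumed upper bound on path segments
const domainVocabulary = new Set(["orders", "customers", "products"]); // example domain terms

function checkUriUsability(uri: string): string[] {
  const problems: string[] = [];
  const { pathname } = new URL(uri);
  const segments = pathname.split("/").filter(s => s.length > 0);

  if (uri.length > MAX_URI_LENGTH) problems.push("URI too long for small-screen composition");
  if (segments.length > MAX_PATH_DEPTH) problems.push("path too deeply nested");
  for (const s of segments) {
    if (!domainVocabulary.has(s) && !/^\d+$/.test(s)) {
      problems.push(`segment "${s}" is not a recognised domain term`);
    }
  }
  return problems; // empty array means the URI passes the usability checks
}

// Example: checkUriUsability("https://api.example.com/orders/42") returns [].
```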

    End-to-end security in service-oriented architecture

    A service-oriented architecture (SOA)-based application is composed of a number of distributed and loosely-coupled web services, which are orchestrated to accomplish a more complex functionality. Any of these web services is able to invoke other web services to offload part of its functionality. The main security challenge in SOA is that we cannot trust the participating web services in a service composition to behave as expected all the time. In addition, the chain of services involved in an end-to-end service invocation may not be visible to the clients. As a result, any violation of the client’s policies could remain undetected. To address these challenges in SOA, we propose the following contributions. First, we devised two composite trust schemes by using graph abstraction to quantitatively maintain the trust levels of different services. The composite trust values are based on feedback from the actual execution of services and the structure of the SOA application. To maintain the dynamic trust, we designed the trust manager, which is a trusted third-party service. Second, we developed an end-to-end inter-service policy monitoring and enforcement framework (PME framework), which is able to dynamically inspect the interactions between services at runtime and react to potentially malicious activities according to the client’s policies. Third, we designed an intra-service policy monitoring and enforcement framework based on a taint analysis mechanism to monitor the information flow within services and prevent information disclosure incidents. Fourth, we proposed an adaptive and secure service composition engine (ASSC), which takes advantage of an efficient heuristic algorithm to generate optimal service compositions in SOA. The service compositions generated by ASSC maximize the trustworthiness of the selected services while meeting the predefined QoS constraints. Finally, we have extensively studied the correctness and performance of the proposed security measures based on a realistic SOA case study. All experimental studies validated the practicality and effectiveness of the presented solutions.
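
    A hedged sketch of a composite trust computation over a service invocation chain: each service's trust level is derived from client feedback, and the trust of the end-to-end composition is aggregated along the chain. The smoothed feedback ratio and the product aggregation used below are illustrative assumptions, not necessarily the schemes devised in the thesis.

```typescript
// Illustrative composite trust over a sequential invocation chain.
// Per-service trust is the fraction of positive feedback (Laplace-smoothed);
// chain trust is the product of the per-service values. Both rules are assumptions.
interface Feedback {
  positive: number; // successful, policy-compliant invocations reported by clients
  negative: number; // violations or failures reported by clients
}

function serviceTrust(f: Feedback): number {
  return (f.positive + 1) / (f.positive + f.negative + 2); // smoothed ratio in (0, 1)
}

function chainTrust(chain: Feedback[]): number {
  return chain.reduce((acc, f) => acc * serviceTrust(f), 1);
}

// Example: a three-service composition where one service has poor feedback.
const composition: Feedback[] = [
  { positive: 95, negative: 5 },
  { positive: 40, negative: 60 },
  { positive: 99, negative: 1 },
];
console.log(chainTrust(composition).toFixed(3)); // overall trust is dominated by the weakest link
```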

    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

    Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs, and increases scalability, interoperability and openness. SOA endorses the reuse of existing services and their aggregation into new service layers. At the same time, the complexity of large service-oriented systems negatively affects their behavior in terms of the exhibited Quality of Service. To address this problem, this thesis focuses on using request scheduling to meet Service Level Agreements (SLAs). Special focus is given to composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign the response time requirements for component service invocations; the local scheduling policies are responsible for scheduling requests so that these requirements are met. The proposed scheduling approach can be deployed without altering the code of the scheduled services, does not require a central point of control, and is platform independent. Experiments conducted using a simulation were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements. The validity of the simulation was confirmed by comparing its results to those obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
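
    A minimal sketch of the two-level idea: a global policy splits the composite SLA's response-time budget across component invocations, and each component's local scheduler orders pending requests by the deadline implied by its assigned budget (earliest deadline first). The proportional budget split and EDF ordering are illustrative assumptions, not necessarily the policies evaluated in the thesis.

```typescript
// Illustrative two-level scheduling. Global level: divide the composite SLA's
// response-time budget among workflow steps in proportion to their expected
// service times. Local level: serve pending requests earliest-deadline-first.
function splitBudget(totalBudgetMs: number, expectedStepMs: number[]): number[] {
  const totalExpected = expectedStepMs.reduce((a, b) => a + b, 0);
  return expectedStepMs.map(ms => (ms / totalExpected) * totalBudgetMs);
}

interface PendingRequest {
  id: string;
  arrivalMs: number; // time the request reached this component
  budgetMs: number;  // share of the SLA budget assigned by the global policy
}

function nextToServe(queue: PendingRequest[]): PendingRequest | undefined {
  // Earliest deadline first: deadline = arrival time + assigned budget.
  return [...queue].sort(
    (a, b) => (a.arrivalMs + a.budgetMs) - (b.arrivalMs + b.budgetMs)
  )[0];
}

// Example: a 900 ms end-to-end SLA split across three steps expected to take
// 100, 200 and 300 ms respectively.
console.log(splitBudget(900, [100, 200, 300])); // [150, 300, 450]
```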

    Design and performance evaluation of advanced QoS-enabled service-oriented architectures for the Internet of Things

    The Internet of Things (IoT) is rapidly becoming reality; falling prices and advances in consumer electronics are the two main driving factors. As a result, new application scenarios are designed every day, and with them come new challenges that must be addressed. In the future we will be surrounded by many smart devices, which will sense and act on the physical environment. This multitude of smart devices will be the building block for a plethora of new smart applications that will provide end users with new, enhanced services. In this context, Quality of Service (QoS) has been recognized as a key non-functional requirement for the success of the IoT. In the future IoT we will have different applications, each with its own QoS requirements, which will need to interact with a finite set of smart devices, each with its own QoS capabilities. This mapping between requested and offered QoS must be managed in order to satisfy end users. The work of this thesis focuses on how to provide QoS for the IoT in a cross-layer manner. In other words, our main goal is to provide QoS support that, on the one hand, helps the back-end architecture manage a wide set of IoT applications, each with its own QoS requirements, while, on the other hand, enhancing the access network by adding QoS capabilities on top of smart devices. We analyzed existing QoS frameworks and, based on the state of the art, derived a novel model specifically tailored to IoT systems. We then defined the procedures needed to negotiate the desired QoS level and to enforce the negotiated QoS. In particular, we addressed the Thing selection problem, which arises whenever more than one Thing can be exploited to obtain a certain service. Finally, we considered the access network by providing different solutions to handle QoS at different granularities. We proposed a fully transparent solution which exploits virtualization and proxying techniques to differentiate between classes of clients and provide a class-based prioritization scheme. We then went further by designing a QoS framework directly on top of a standard IoT protocol, the Constrained Application Protocol (CoAP). We designed the QoS support to enhance the Observe paradigm, which is of paramount importance especially for industrial applications that might benefit from a certain level of QoS assurance.
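
    A hedged sketch of the class-based prioritization idea applied to Observe-style notifications: observers register with a QoS class, and when the observed resource changes, higher classes are notified first. This is a self-contained illustration rather than the CoAP extension itself; the class names and dispatch order are assumptions.

```typescript
// Illustrative class-based prioritization of observe notifications.
// A real deployment would build this into a CoAP stack; here it is a plain
// in-memory dispatcher, and the QoS classes are assumptions.
type QosClass = "critical" | "standard" | "best-effort";
const classOrder: QosClass[] = ["critical", "standard", "best-effort"];

interface Observer {
  id: string;
  qos: QosClass;
  notify: (state: string) => void;
}

class ObservedResource {
  private observers: Observer[] = [];

  register(o: Observer): void {
    this.observers.push(o);
  }

  // On a state change, notify observers class by class, highest priority first.
  update(state: string): void {
    for (const cls of classOrder) {
      for (const o of this.observers.filter(ob => ob.qos === cls)) {
        o.notify(state);
      }
    }
  }
}

// Example: an industrial alarm observer is served before a best-effort dashboard.
const sensor = new ObservedResource();
sensor.register({ id: "alarm", qos: "critical", notify: s => console.log("alarm:", s) });
sensor.register({ id: "dashboard", qos: "best-effort", notify: s => console.log("dash:", s) });
sensor.update("temperature=87C");
```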

    The integrity of digital technologies in the evolving characteristics of real-time enterprise architecture

    Advancements in interactive and responsive enterprises involve real-time access to the information and capabilities of emerging technologies. Digital technologies (DTs) are emerging technologies that provide end-to-end business processes (BPs), engage a diversified set of real-time enterprise (RTE) participants, and institute interactive DT services. This thesis offers a selection of the author’s work over the last decade that addresses real-time access to changing characteristics of information and the integration of DTs. These are critical for RTEs to run a competitive business and respond to a dynamic marketplace. The primary contributions of this work are listed below.
    • Performed an intensive investigation to illustrate the challenges of the RTE during the advancement of DTs and corresponding business operations.
    • Constituted a practical approach to continuously evolve RTEs and measure the impact of DTs by developing, instrumenting, and inferring the standardized RTE architecture and DTs.
    • Established the RTE operational governance framework and instituted it to provide structure, oversight responsibilities, features, and interdependencies of business operations.
    • Formulated the incremental risk (IR) modeling framework to identify and correlate the evolving risks of RTEs during the deployment of DT services.
    • Derived a DT service classification scheme based on BPs, BP activities, DT paradigms, RTE processes, and RTE policies.
    • Identified and assessed the evaluation paradigms of the RTEs to measure the progress of the RTE architecture based on the DT service classifications.
    The starting point was the author’s experience with evolving aspects of DTs that are disrupting industries and consequently impacting the sustainability of the RTE. The initial publications emphasized the innovative characteristics of DTs and the lack of standardization, indicating that the impact and adaptation of DTs are questionable for RTEs. The publications focus on developing different elements of the RTE architecture. Each published work concerns the creation of an RTE architecture framework fit for the purpose of business operations in association with DT services and associated capabilities. The RTE operational governance framework and incremental risk methodology presented in subsequent publications ensure the continuous evolution of the RTE amid advancements in DTs. Eventually, each publication presents evaluation paradigms based on the identified DT service classification scheme to measure the success of the RTE architecture or corresponding elements of the RTE architecture.