
    Secure and Sustainable Load Balancing of Edge Data Centers in Fog Computing

    Fog computing is a recent research trend that brings cloud computing services to network edges. Edge data centers (EDCs) are deployed to decrease latency and network congestion by processing data streams and user requests in near real time. EDC deployment is distributed in nature, positioned between cloud data centers and data sources. Load balancing is the process of redistributing the workload among EDCs to improve both resource utilization and job response time. It also avoids situations where some EDCs are heavily loaded while others are idle or doing little data processing. In such scenarios, load balancing between EDCs plays a vital role in user response and real-time event detection. As EDCs are deployed in an unattended environment, secure authentication of EDCs is an important issue to address before performing load balancing. This article proposes a novel load balancing technique that authenticates EDCs and finds less loaded EDCs for task allocation. The proposed technique is more efficient than existing approaches in finding less loaded EDCs for task allocation. It not only improves the efficiency of load balancing; it also strengthens security by authenticating the destination EDCs.
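    The core selection step described above can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the `EDC` fields, the boolean `authenticated` flag (standing in for whatever authentication handshake the paper proposes), and the scalar `load` metric are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class EDC:
    name: str
    load: float          # current utilization in [0, 1] (illustrative metric)
    authenticated: bool  # outcome of a prior authentication handshake (assumed)

def select_target(edcs):
    """Return the least-loaded EDC among those that passed authentication."""
    candidates = [e for e in edcs if e.authenticated]
    if not candidates:
        raise RuntimeError("no authenticated EDC available for task allocation")
    return min(candidates, key=lambda e: e.load)

edcs = [EDC("edc-a", 0.82, True), EDC("edc-b", 0.31, True), EDC("edc-c", 0.10, False)]
print(select_target(edcs).name)  # edc-b: edc-c is less loaded but not authenticated
```

    Note that the unauthenticated EDC is excluded even though it is the least loaded, reflecting the paper's point that authentication must precede load balancing.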

    Model checking for cloud autoscaling using WATERS

    This thesis investigates the use of formal methods to verify cloud system designs against Service Level Agreements (SLAs), towards providing guarantees under uncertainty. We used WATERS (the Waikato Analysis Toolkit for Events in Reactive Systems), which is a model-checking tool based on discrete event systems. We created models for one aspect of cloud computing, horizontal autoscaling, and used this to verify cloud system designs against an SLA that specifies the maximum request response time. To evaluate the accuracy of the WATERS models, the cloud system designs are simulated on a private Kubernetes cluster, using JMeter to drive the workload. The results from Kubernetes are compared to the verification results from WATERS. A key research goal was to have these match as closely as possible, and to explain the discrepancies between the two. This process is followed for two applications: a default installation of NGINX, a web server with a fast but variable response time, and a hand-written Node.js program enforcing a fixed response time. The results suggest that WATERS can be used to predict potential SLA violations. Lessons learned include that the state space must be constrained to avoid excessive checking times, and we provide a method for doing so. An advantage of our model checking-based technique is that it verifies against all possible patterns of arriving requests (up to a given maximum), which would be impractical to test with a load testing tool such as JMeter. A key difference from existing work is our use of non-probabilistic finite state machines, as opposed to the probabilistic models which are prevalent in existing research. In addition, we have attempted to model the detail of the autoscaling process (a "white-box" approach), whereas much existing research attempts to find patterns between autoscaling parameters and SLA violation, effectively viewing autoscaling as a black-box process.
Future work includes refining the WATERS models to more closely match Kubernetes, and modelling other SLO types. Other methods may also be used to limit the compilation and verification time for the models. This includes attempting different algorithms and perhaps editing the models to reduce the state space.
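    The key advantage claimed above, checking every arrival pattern up to a bound rather than sampling workloads, can be sketched with a toy non-probabilistic autoscaling model. This is not a WATERS model; all parameters (service rate, scaling threshold, the queue-length bound standing in for a response-time SLA) are illustrative assumptions.

```python
from itertools import product

MAX_STEPS = 6         # bounded horizon, as in bounded model checking
SERVICE_RATE = 1      # requests each replica completes per step (assumed)
SCALE_THRESHOLD = 3   # queue length that triggers adding a replica (assumed)
MAX_REPLICAS = 3
SLA_QUEUE_BOUND = 4   # proxy for the maximum-response-time SLA (assumed)

def violates_sla(arrivals):
    """Deterministically simulate one arrival pattern through the autoscaler."""
    queue, replicas = 0, 1
    for burst in arrivals:
        queue += burst
        queue = max(0, queue - replicas * SERVICE_RATE)  # serve requests
        if queue > SLA_QUEUE_BOUND:                      # SLA breached
            return True
        if queue > SCALE_THRESHOLD and replicas < MAX_REPLICAS:
            replicas += 1                                # white-box scale-out rule
    return False

# Exhaustively enumerate every arrival pattern (0-3 requests per step),
# mirroring a model checker's coverage of all behaviours up to the bound.
violations = [p for p in product(range(4), repeat=MAX_STEPS) if violates_sla(p)]
print(f"{len(violations)} of {4 ** MAX_STEPS} patterns violate the SLA")
```

    Even this toy state space has 4096 arrival patterns, which illustrates why the thesis needed a method to constrain the state space before verification became tractable.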

    Event Stream Processing with Multiple Threads

    Current runtime verification tools seldom make use of multi-threading to speed up the evaluation of a property on a large event trace. In this paper, we present an extension to the BeepBeep 3 event stream engine that allows the use of multiple threads during the evaluation of a query. Various parallelization strategies are presented and described on simple examples. The implementation of these strategies is then evaluated empirically on a sample of problems. Compared to the previous, single-threaded version of the BeepBeep engine, the allocation of just a few threads to specific portions of a query provides a dramatic improvement in running time.
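    One of the simplest parallelization strategies for trace evaluation, splitting the trace into chunks and evaluating each chunk on its own thread, can be sketched as below. This is not BeepBeep's API; it only works for stateless properties (stateful properties need the more careful strategies the paper describes), and all names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def count_matches(chunk, predicate):
    """Evaluate a stateless property on one contiguous slice of the trace."""
    return sum(1 for event in chunk if predicate(event))

def parallel_count(trace, predicate, workers=4):
    """Partition the event trace and evaluate the chunks concurrently,
    then combine the per-chunk results."""
    size = max(1, len(trace) // workers)
    chunks = [trace[i:i + size] for i in range(0, len(trace), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: count_matches(c, predicate), chunks))

trace = list(range(1000))
print(parallel_count(trace, lambda e: e % 2 == 0))  # 500
```

    The split/evaluate/combine shape is the essential idea; a production engine such as BeepBeep additionally has to preserve event ordering across chunk boundaries for properties with state.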

    A Runtime Verification and Validation Framework for Self-Adaptive Software

    The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain context. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute this runtime verification or validation. This research confirms that the introduction of runtime verification and validation for self-adaptive systems requires the expansion of existing conceptual models with quality of service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. In order to meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased available application servers. The runtime verification and validation system operated as a second-level feedback loop to monitor quality of service goals based on internal context, and corrected self-adaptive behavior when goals were violated. Two competing quality of service goals were introduced to maintain customer satisfaction while minimizing cost.
The research demonstrated that the addition of a second-level runtime verification and validation feedback loop did quantitatively improve self-adaptive system performance even with simple, static monitoring rules.
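    The second-level loop arbitrating between two competing goals can be sketched as a single correction rule. This is a hypothetical illustration, not the dissertation's actual rules: the metric names, thresholds, and the "headroom" condition for scaling in are all assumptions.

```python
def second_level_check(metrics, servers, max_latency_ms=200, max_cost=10):
    """Hypothetical second-level V&V correction: compare observed QoS
    against two competing goals and adjust the primary loop's server count."""
    if metrics["latency_ms"] > max_latency_ms:
        return servers + 1          # satisfaction goal violated: scale out
    if metrics["cost"] > max_cost and metrics["latency_ms"] < max_latency_ms / 2:
        return max(1, servers - 1)  # cost goal violated with latency headroom: scale in
    return servers                  # both goals met: no correction needed

print(second_level_check({"latency_ms": 350, "cost": 6}, servers=2))   # 3
print(second_level_check({"latency_ms": 40, "cost": 12}, servers=3))   # 2
```

    Even static rules like these capture the tension the abstract describes: the customer-satisfaction goal pushes the server count up while the cost goal pushes it down, and the second-level loop mediates.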

    Analysis of SLA compliance in the cloud: An automated, model-based approach

    Service Level Agreements (SLAs) are commonly used to specify the quality attributes agreed between cloud service providers and their customers. A violation of SLAs can result in high penalties. To allow the analysis of SLA compliance before the services are deployed, we describe in this paper an approach for SLA-aware deployment of services on the cloud, and illustrate its workflow by means of a case study. The approach is based on formal models combined with static analysis tools and generated runtime monitors. As such, it fits well within a methodology combining software development with information technology operations (DevOps).
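    A generated runtime monitor of the kind mentioned above typically reduces to a small observer that checks a stream of measurements against the SLA bound. The sketch below is an assumption-laden stand-in, not the paper's generated code: the windowed breach-ratio formulation and all thresholds are illustrative.

```python
from collections import deque

class SlaMonitor:
    """Minimal runtime-monitor sketch: flag an SLA breach when the fraction
    of slow responses in a sliding window exceeds the agreed limit."""

    def __init__(self, max_ms=100.0, allowed_ratio=0.05, window=100):
        self.max_ms = max_ms            # agreed per-request response-time bound
        self.allowed_ratio = allowed_ratio  # tolerated fraction of slow responses
        self.samples = deque(maxlen=window)

    def observe(self, response_ms):
        """Feed one observed response time into the monitor."""
        self.samples.append(response_ms)

    def compliant(self):
        """True while the windowed breach ratio stays within the SLA."""
        if not self.samples:
            return True
        slow = sum(1 for s in self.samples if s > self.max_ms)
        return slow / len(self.samples) <= self.allowed_ratio

m = SlaMonitor()
for ms in [20, 30, 150, 40, 25]:
    m.observe(ms)
print(m.compliant())  # False: 1 of 5 samples (20%) exceeded the 100 ms bound
```

    In the paper's methodology such a monitor would be generated from the formal SLA model rather than written by hand, so that the deployed check provably matches what was analysed statically.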

    Introducing Development Features for Virtualized Network Services

    Network virtualization and the softwarization of network functions are trends aiming at higher network efficiency, cost reduction and agility. They are driven by the evolution of Software Defined Networking (SDN) and Network Function Virtualization (NFV). This shows that software will play an increasingly important role within telecommunication services, which were previously dominated by hardware appliances. Service providers can benefit from this, as it enables faster introduction of new telecom services, combined with an agile set of possibilities to optimize and fine-tune their operations. However, the provided telecom services can only evolve if adequate software tools are available. In this article, we explain how the development, deployment and maintenance of such an SDN/NFV-based telecom service puts specific requirements on the platform providing it. A Software Development Kit (SDK) is introduced, allowing service providers to adequately design, test and evaluate services before they are deployed in production, and also to update them during their lifetime. This continuous cycle between development and operations, a concept known as DevOps, is a well-known strategy in software development. To extend its context further to SDN/NFV-based services, the functionalities provided by traditional cloud platforms are not yet sufficient. By giving an overview of the currently available tools and their limitations, the gaps in DevOps for SDN/NFV services are highlighted. The benefit of such an SDK is illustrated by a secure content delivery network service (enhanced with deep packet inspection and elastic routing capabilities). With this use-case, the dynamics between developing and deploying a service are further illustrated.