
    Load Testing of Vaadin Flow applications

    All types of businesses, from small start-ups to big enterprises, have an online presence. Their web pages and applications can be used to acquire products and services and are thus expected to be efficient. Yet, the web environment imposes additional requirements on software, such as the need for reliable security and adequate response times. To ensure these requirements are met and the product is of the expected quality, various types of testing are utilized during development. This master’s thesis evaluates a procedure for verifying a non-functional requirement of a web application – its performance. It focuses on load testing, which is used to analyze and assess an application’s behavior under different user loads. The scope is limited to server-side applications developed with the latest long-term support version of the Vaadin framework. The effects on performance arising from the server-side architecture of the framework and the Java ecosystem are reviewed. Furthermore, an overview of available improvement techniques, such as caches and load balancers, is given. From a load testing perspective, the biggest challenges arising from Vaadin’s architecture are its unique features. These include node values of user interface components, synchronization tokens and Cross-Site Request Forgery protection tokens. Universal regular expressions that capture these attributes are defined and can be reused later. The main contribution of this thesis is formulating a ready-to-use method for load testing a Vaadin Flow application. Once established and analyzed, the method is applied to a real-life situation to verify its applicability and usefulness. Two widely used load testing tools are utilized – JMeter and Gatling. Furthermore, a method to estimate a web application’s session size is presented. Potential bottlenecks and other issues are identified by using a profiler to track the application’s memory consumption during a test run. After the load test is finalized and completed, a session size estimation is conducted. As a result of test execution, a potential bottleneck is identified and fixed in the application. Complete test plans for both JMeter and Gatling are defined and implemented. Alternatives and possible improvements to the proposed solution are reviewed. Based on the literature review, when deploying an application on multiple servers, the best solution is enabling the sticky sessions feature.
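
    The correlation step described above – capturing Vaadin’s synchronization and CSRF tokens with regular expressions and echoing them back in later requests – can be sketched as follows. This is a minimal Python illustration of what a JMeter or Gatling test plan automates; the regex patterns, attribute names and UIDL endpoint are assumptions that depend on the Vaadin version under test.

        # Illustrative correlation of Vaadin Flow tokens for a scripted load test.
        # The patterns and endpoint below are assumptions; the exact attribute
        # names depend on the Vaadin version.
        import re
        import requests

        BASE_URL = "http://localhost:8080"  # hypothetical application under test

        CSRF_RE = re.compile(r'"csrfToken":\s*"([^"]+)"')   # assumed bootstrap attribute
        SYNC_RE = re.compile(r'"syncId":\s*(\d+)')          # assumed UIDL sync counter
        CLIENT_RE = re.compile(r'"clientId":\s*(\d+)')      # assumed UIDL client counter

        session = requests.Session()

        # 1. Load the bootstrap page and extract the CSRF token from the response body.
        bootstrap = session.get(BASE_URL)
        csrf_token = CSRF_RE.search(bootstrap.text).group(1)

        # 2. Every subsequent UIDL request must echo the token and the current
        #    sync/client counters, otherwise the server rejects it as out of sync.
        def send_uidl(rpc_payload, sync_id, client_id):
            body = {"csrfToken": csrf_token, "rpc": rpc_payload,
                    "syncId": sync_id, "clientId": client_id}
            resp = session.post(f"{BASE_URL}/?v-r=uidl", json=body)
            return (int(SYNC_RE.search(resp.text).group(1)),
                    int(CLIENT_RE.search(resp.text).group(1)))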

    Improving Cloud Middlebox Infrastructure for Online Services

    Middleboxes are an indispensable part of datacenter networks, providing high availability, scalability and performance to online services. Using the load balancer as an example, this thesis shows that prevalent scale-out middlebox designs using commodity servers are plagued with three fundamental problems: (1) Server-based layer-4 middleboxes are costly and inflate round-trip time by as much as 2x by processing packets in software. (2) Middlebox instances cause traffic detouring en route from sources to destinations, which inflates network bandwidth usage by as much as 3.2x and can cause transient congestion. (3) Additionally, existing cloud providers do not support layer-7 middleboxes as a service, and third-party proxy-based layer-7 middlebox designs exhibit poor availability, as TCP state stored locally on middlebox instances is lost upon instance failure. This thesis examines the root causes of the above problems and proposes new cloud-scale middlebox design principles that systematically address all three. First, to address the performance problem, we make the key observation that existing commodity switches have resources available to implement key layer-4 middlebox functionalities such as load balancing, and that by processing packets in hardware, switches offer low latency and high capacity at no additional cost, since the switch resources are otherwise idle. Motivated by this observation, we propose the design principle of using idle switch resources to accelerate middlebox functionalities. To demonstrate the principle, we developed a complete L4 load balancer design that uses commodity switches for low cost and high performance, and carefully fuses a few software load balancer instances to provide high availability. Second, to address the high network overhead caused by traffic detouring through middlebox instances, we propose to exploit the principles of locality and flexibility in placing middlebox instances and servers, handling traffic closer to its sources and reducing overall traffic and link utilization in the network. Third, to provide high availability for layer-7 middleboxes, we propose a novel middlebox design principle of decoupling TCP state from middlebox instances and storing it in a persistent key-value store, so that any middlebox instance can seamlessly take over any TCP connection when middlebox instances fail. We demonstrate the effectiveness of these cloud-scale middlebox design principles using load balancers as an example. Specifically, we have prototyped the three design principles in three cloud-scale load balancers: Duet, Rubik, and Yoda, respectively. Our evaluation using a datacenter testbed and large-scale simulations shows that Duet lowers costs by 12x and latency overhead by 1000x, Rubik further lowers datacenter network traffic overhead by 3x, and Yoda, an L7 load-balancer-as-a-service, is practical; decoupling TCP state from load balancer instances has a negligible
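
    The core of the layer-4 load balancing that the thesis offloads to commodity switches is mapping every packet of a flow, by hashing its 5-tuple, onto one backend (DIP) behind a virtual IP (VIP). The Python sketch below illustrates only that hashing step; the VIP-to-DIP table and the choice of hash are assumptions made for illustration, not Duet’s actual switch tables, which rely on the switch’s ECMP and tunneling hardware.

        # Minimal sketch of 5-tuple hashing behind a layer-4 load balancer.
        # The VIP-to-DIP table and the plain software hash are illustrative
        # assumptions; a switch-based design keeps this mapping in hardware.
        import hashlib

        VIP_TABLE = {
            "10.0.0.100": ["192.168.1.10", "192.168.1.11", "192.168.1.12"],  # hypothetical pool
        }

        def pick_backend(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
            """Hash the 5-tuple so every packet of a flow reaches the same DIP."""
            dips = VIP_TABLE[dst_ip]
            key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
            digest = hashlib.sha256(key).digest()
            return dips[int.from_bytes(digest[:4], "big") % len(dips)]

        # All packets of one connection map to the same backend:
        print(pick_backend("172.16.0.5", 51324, "10.0.0.100", 443))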

    An adaptive admission control and load balancing algorithm for a QoS-aware Web system

    The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing for Web traffic. In order to set the context of this work, several reviews are included to introduce the reader to the background concepts of Web load balancing, admission control and the Internet traffic characteristics that may affect the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system provides lower performance than desired due to server congestion. This is achieved through the implementation of forecasting calculations. Naturally, the increased computational cost of the algorithm results in some overhead. This is the reason for designing an adaptive time-slot scheduling that sets the execution times of the algorithm depending on the burstiness of the traffic arriving at the system. The proposed predictive scheduling algorithm therefore includes an adaptive overhead control. Once the scheduling of the algorithm is defined, we design the admission control module based on throughput predictions. The results obtained by several throughput predictors are compared and one of them is selected for inclusion in our algorithm. The utilisation level that the Web servers will have in the near future is also forecasted and reserved for each service depending on the Service Level Agreement (SLA). Our load balancing strategy is based on a classical policy. Hence, a comparison of several classical load balancing policies is also included in order to determine which of them best fits our algorithm. A simulation model has been designed to obtain the results presented in this thesis.
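
    As a concrete illustration of the idea, the sketch below admits new sessions only when a simple throughput forecast leaves room under the share of capacity reserved by the SLA. The exponentially weighted moving average predictor, the thresholds and the interface are assumptions made for this sketch, not the predictor or policy selected in the thesis.

        # Illustrative admission controller: forecast near-term throughput and
        # admit new sessions only if the forecast stays within the capacity
        # share allowed by the SLA. Predictor and thresholds are assumptions.
        class AdmissionController:
            def __init__(self, sla_reserved_share, capacity_rps, alpha=0.3):
                self.sla_reserved_share = sla_reserved_share  # fraction kept free per the SLA
                self.capacity_rps = capacity_rps              # server capacity (requests/s)
                self.alpha = alpha                            # EWMA smoothing factor
                self.predicted_rps = None

            def observe(self, measured_rps):
                """Update the throughput forecast from the last measurement slot."""
                if self.predicted_rps is None:
                    self.predicted_rps = measured_rps
                else:
                    self.predicted_rps = (self.alpha * measured_rps
                                          + (1 - self.alpha) * self.predicted_rps)

            def admit(self, expected_session_rps):
                """Admit a new session only if the forecast leaves headroom."""
                usable = self.capacity_rps * (1 - self.sla_reserved_share)
                return self.predicted_rps + expected_session_rps <= usable

        ctrl = AdmissionController(sla_reserved_share=0.2, capacity_rps=500)
        ctrl.observe(measured_rps=350)
        print(ctrl.admit(expected_session_rps=40))  # True: 390 rps stays under the 400 rps share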

    Reducing Data Copying Overhead in Web Servers

    Web servers that generate dynamic content are widely used in the development of Internet applications. With the Internet deeply integrated into people’s lifestyles, the service requirements of Internet applications have increased significantly. This trend intensifies the need to improve server performance in dynamic content generation. In this thesis, we describe the opportunity to improve server performance by co-locating the web server and the application server on the same machine. We identify related work and discuss their respective advantages and deficiencies. We then introduce and explain our technique, which passes the client socket’s file descriptor from the web server process to the application server. This allows the application server to reply to the client directly, reducing the amount of data copied and improving server performance. Experiments were designed to evaluate the performance of this technique and provide a detailed analysis of processor time and data copying during response delivery. A performance comparison against alternative approaches was also performed. We analyze the results to understand the factors in data copying efficiency and determine that cache misses are an important factor in server performance. There are four major contributions in this thesis. First, we show that in multiprocessor environments, co-locating web servers and application servers can take advantage of faster communication. Second, we introduce a new technique that reduces the amount of data copied by two-thirds. Unlike other existing techniques, it requires no modifications to the application server code, and it is applicable in a variety of systems, allowing easy adoption in production environments. Third, we provide a performance comparison against other approaches and raise questions regarding data copying efficiency. Our technique attains an average peak throughput 1.27 times that of FastCGI with Unix domain sockets in both uniprocessor and multiprocessor environments. Finally, our analysis of the effect of cache misses on server performance provides valuable insights into why these benefits are obtained.
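
    The heart of the technique – handing the connected client socket from the web server process to the application server over a Unix domain socket so the latter can write the response directly – can be sketched with Python’s socket.send_fds/recv_fds (available since Python 3.9). The process split and helper names below are illustrative assumptions, not the thesis’s server code.

        # Sketch of passing a client connection between processes over a Unix
        # domain socket, letting the receiver reply directly to the client and
        # avoiding an extra copy of the response body through the web server.
        import socket

        def hand_off_client(ipc_sock: socket.socket, client_sock: socket.socket) -> None:
            """Web-server side: send the client connection's descriptor away."""
            socket.send_fds(ipc_sock, [b"handoff"], [client_sock.fileno()])
            client_sock.close()  # the application server now owns the connection

        def serve_received_client(ipc_sock: socket.socket) -> None:
            """Application-server side: rebuild the socket and reply directly."""
            msg, fds, flags, addr = socket.recv_fds(ipc_sock, 1024, 1)
            client = socket.socket(fileno=fds[0])
            client.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            client.close()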

    Development of a testing platform for Customer Edge Switching (Asiakasreunakytkennän testausalustan kehitys)

    Customer Edge Switching (CES) and Realm Gateway (RGW) are technologies designed to solve core challenges of the modern Internet. These challenges include the ever-increasing number of devices connected to the Internet and the risks created by malicious parties. CES and RGW leverage existing technologies such as the Domain Name System (DNS). Software testing is critical for ensuring the correctness of software: it aims to verify that products and protocols operate correctly and to find any critical vulnerabilities in them. Fuzz testing is a field of software testing that automatically iterates over unexpected inputs. In this thesis work we evaluate two CES versions with respect to performance, susceptibility to Denial of Service (DoS) and weaknesses related to their use of DNS. Performance is an important metric for switches, Denial of Service is a very common attack vector, and using DNS in new ways requires critical evaluation. The performance of the old version was sufficient, but some clear issues were found: the version was vulnerable to DoS, and oversights in its DNS operation were identified. The new version shows improvement over the old one. We also evaluated the suitability of extending Robot Framework for fuzz testing the Customer Edge Traversal Protocol (CETP) and conclude that using the framework was not the best approach. Finally, we developed a new testing framework based on Robot Framework for the new version of CES.
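
    As an illustration of the DNS-oriented testing applied to CES and RGW, the sketch below crafts randomized DNS A queries with the Python standard library and sends them over UDP. The target address and the random-label strategy are assumptions for illustration only; the actual campaigns in the thesis are driven through Robot Framework.

        # Minimal sketch of sending randomized DNS A queries, the kind of input
        # used when probing a DNS-dependent gateway. Target address and label
        # strategy are assumptions for illustration.
        import random
        import socket
        import string
        import struct

        def build_query(name: str) -> bytes:
            """Build a minimal DNS query (recursion-desired A/IN question)."""
            header = struct.pack("!HHHHHH",
                                 random.getrandbits(16),  # transaction ID
                                 0x0100,                   # flags: recursion desired
                                 1, 0, 0, 0)               # one question, no other records
            question = b"".join(
                bytes([len(label)]) + label.encode() for label in name.split(".")
            ) + b"\x00" + struct.pack("!HH", 1, 1)          # QTYPE=A, QCLASS=IN
            return header + question

        def random_name(zone: str = "example.test") -> str:
            label = "".join(random.choices(string.ascii_lowercase, k=12))
            return f"{label}.{zone}"

        resolver = ("192.0.2.1", 53)  # hypothetical address of the gateway under test
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            for _ in range(100):
                sock.sendto(build_query(random_name()), resolver)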

    Mitigating Botnet-based DDoS Attacks against Web Servers

    Distributed denial-of-service (DDoS) attacks have become widespread on the Internet. They continuously target retail merchants, financial companies and government institutions, disrupting the availability of their online resources and causing millions of dollars of financial losses. Software vulnerabilities and the proliferation of malware have helped create a class of application-level DDoS attacks using networks of compromised hosts (botnets). In a botnet-based DDoS attack, an attacker orders large numbers of bots to send seemingly regular HTTP and HTTPS requests to a web server, so as to deplete the server's CPU, disk, or memory capacity. Researchers have proposed client authentication mechanisms, such as CAPTCHA puzzles, to distinguish bot traffic from legitimate client activity and discard bot-originated packets. However, CAPTCHA authentication is vulnerable to denial-of-service and artificial intelligence attacks. This dissertation proposes that clients instead use hardware tokens to authenticate in a federated authentication environment. The federated authentication solution must resist both man-in-the-middle and denial-of-service attacks. The proposed system architecture uses the Kerberos protocol to satisfy both requirements. This work proposes novel extensions to Kerberos to make it more suitable for generic web authentication. A server could verify client credentials and blacklist repeated offenders. Traffic from blacklisted clients, however, still traverses the server's network stack and consumes server resources. This work therefore proposes Sentinel, a dedicated front-end network device that intercepts server-bound traffic, verifies authentication credentials and filters blacklisted traffic before it reaches the server. Using a front-end device also allows transparently deploying hardware acceleration using network co-processors, which can discard blacklisted traffic at the hardware level before it wastes front-end host resources. We implement the proposed system architecture by integrating existing software applications and libraries, and validate the implementation by evaluating its performance under DDoS attacks consisting of floods of HTTP and HTTPS requests.
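
    The filtering role of the proposed front-end device can be illustrated with a small sketch that counts failed authentications per client and drops traffic from repeat offenders before it reaches the protected server. The thresholds, in-memory tables and function names are assumptions for this sketch, not the dissertation’s Sentinel implementation, which verifies Kerberos credentials and can push the filtering into network co-processors.

        # Illustrative front-end filter: blacklist clients after repeated failed
        # authentications and drop their server-bound traffic. Thresholds and
        # in-memory tables are assumptions for this sketch.
        from collections import defaultdict

        MAX_FAILURES = 3
        failed_attempts = defaultdict(int)
        blacklist = set()

        def record_auth_result(client_ip: str, success: bool) -> None:
            """Track per-client failures and blacklist repeat offenders."""
            if success:
                failed_attempts.pop(client_ip, None)
                return
            failed_attempts[client_ip] += 1
            if failed_attempts[client_ip] >= MAX_FAILURES:
                blacklist.add(client_ip)

        def should_forward(client_ip: str) -> bool:
            """Drop traffic from blacklisted clients before it reaches the server."""
            return client_ip not in blacklist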

    A review of the Siyakhula Living Lab’s network solution for Internet in marginalized communities

    Changes within Information and Communication Technology (ICT) over the past decade required a review of the network layer component deployed in the Siyakhula Living Lab (SLL), a long-term joint venture between the Telkom Centres of Excellence hosted at the University of Fort Hare and Rhodes University in South Africa. The SLL’s overall solution for sustainable Internet in poor communities consists of three main components – the computing infrastructure layer, the network layer, and the e-services layer. At the core of the network layer is the concept of a Broadband Island (BI), a high-speed local area network realized through easy-to-deploy wireless technologies that establish point-to-multipoint connections among schools within a limited geographical area. Schools within the broadband island then become Digital Access Nodes (DANs), with computing infrastructure that provides access to the network. The review reported in this thesis aimed at determining whether the model for the network layer was still able to meet the needs of marginalized communities in South Africa, given the recent changes in ICT. The research work used the living lab methodology – a grassroots, user-driven approach that emphasizes co-creation between the beneficiaries and external entities (researchers, industry partners and the government) – to run viability tests on the network component of the solution. The viability tests included lab and field experiments, producing the qualitative and quantitative data needed to propose an updated blueprint. The review found that the network topology used in the SLL’s network, the BI, is still viable, while WiMAX is now outdated. Also, the in-network web cache, Squid, is no longer effective, given the switch to HTTPS and the pervasive presence of advertising. The solution to the first issue is outdoor Wi-Fi, a proven solution easily deployable in grass-roots fashion. The second issue can be mitigated by leveraging Squid’s ‘bumping’ and splicing features; deploying a browser extension to make picture download optional; and using Pi-hole, a DNS sinkhole. Hopefully, the revised solution could become a component of the South African Government’s broadband plan, “SA Connect”. Thesis (MSc) -- Faculty of Science, Computer Science, 202

    Software Defined Application Delivery Networking

    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use-cases, including: Internet-of-Things/Cyber-Physical Systems: Through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations, as well as variable latency requirements across their distributed sites. Some services, such as device controllers, in an IoT/CPS application workflow may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible. Other services may need more computation to process aggregated data and drive long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified and massively distributed application use-cases. Network Function Virtualization: Through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs).
An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. This substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context. Virtual worlds/multiplayer games: Through support for creating, managing and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink its footprint over different geographical sites, on demand. Mobile Apps: Through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also provides support for automatically managing massively distributed service deployments and controlling application traffic based on application-level policies, allowing mobile applications to provide the best Quality-of-Experience to their users. This thesis is the first to handle and provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use-cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity. This thesis is not the end of the journey for AppFabric but rather just the beginning.
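
    To make policy-based service chaining concrete, the sketch below selects a chain of (hypothetical) middlebox services from application-level context before dispatching a request. The policy table, service names and dispatch loop are assumptions made for illustration, not AppFabric’s OpenADN interfaces.

        # Illustrative policy-based service chaining: application-level context
        # selects which chain of services handles a request. Policy format and
        # service names are assumptions, not AppFabric's actual interfaces.
        from typing import Callable, Dict, List

        def firewall(req: dict) -> dict:
            req["firewall_ok"] = True
            return req

        def compressor(req: dict) -> dict:
            req["compressed"] = True
            return req

        def app_server(req: dict) -> dict:
            req["response"] = f"handled {req['path']}"
            return req

        # Application-level routing policy: the content type selects the chain.
        POLICIES: Dict[str, List[Callable[[dict], dict]]] = {
            "video": [firewall, app_server],               # skip compression for video
            "default": [firewall, compressor, app_server],
        }

        def dispatch(req: dict) -> dict:
            chain = POLICIES.get(req.get("content_type"), POLICIES["default"])
            for stage in chain:
                req = stage(req)
            return req

        print(dispatch({"path": "/index.html", "content_type": "text"}))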