
    WEB Based Applications Testing: Analytical Approach towards Model Based Testing and Fuzz Testing

    Web-based applications are structurally complex and consequently face an immense number of exploitation attacks, so testing should be done proactively in order to identify threats in the application. An intruder can discover these security loopholes and exploit the application, resulting in economic loss, so testing the application becomes a supreme phase of development. The main objective of testing is to secure the contents of applications through either a static or an automated approach. Software houses usually follow fuzz-based testing, in which flaws are exposed by randomly feeding invalid data to the application, while model-based testing (MBT) is an automated approach that tests the application from all perspectives on the basis of an abstract model of the application. The main theme of this research is to study the difference between fuzz-based testing and MBT in terms of test coverage, performance, cost, and time. This work guides web application practitioners in selecting a suitable methodology for different testing scenarios, which saves the effort spent on testing and yields a better, breach-free product.
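
As a rough illustration of the fuzz-based side of this comparison, the sketch below feeds random invalid input to a handler and records crashes. The `form_handler` here is a toy stand-in for a web endpoint, not anything from the study:

```python
import random
import string

def random_payload(max_len=50):
    # Naive fuzz input: random printable characters plus common metacharacters
    chars = string.printable + "'\"<>;%\\"
    return "".join(random.choice(chars) for _ in range(random.randint(1, max_len)))

def fuzz(handler, trials=500):
    # Feed random invalid data to the handler and record any crashes
    failures = []
    for _ in range(trials):
        payload = random_payload()
        try:
            handler(payload)
        except Exception as exc:
            failures.append((payload, repr(exc)))
    return failures

# Toy stand-in for a web endpoint that mishandles unescaped quotes
def form_handler(value):
    if "'" in value:
        raise ValueError("unescaped quote reached the query layer")
    return "ok"
```

A real fuzzer would drive HTTP requests against the deployed application; the structure, however, is the same: generate invalid inputs, observe failures, report the triggering payloads.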

    A Fuzzy Classifier-Based Penetration Testing for Web Applications

    The biggest challenge for Web applications is the inestimable losses arising from security flaws. Scholars have advanced two approaches to securing the Web space. One such approach is vulnerability assessment: a conscious effort to isolate, identify, and recognize potential vulnerabilities exploited by attackers. The second is the estimation and determination of the level of risk/threat posed to Web applications by vulnerabilities known to the developer (or tester); this is generally referred to as penetration testing. Recently, Vulnerability Assessment and Penetration Testing (VAPT) has combined these two schemes to improve safety and effectively combat the menace of attackers on Web applications. This paper proposes a Fuzzy Classifier-based Vulnerability Assessment and Penetration Testing (FCVAPT) model to protect sensitive data/information in Web applications. Cross-Site Scripting (XSS) and Structured Query Language (SQL) injection were selected for evaluation of the proposed FCVAPT model. The FCVAPT model's classification performance in terms of MSE, MAPE, and RMSE was 33.33, 14.81%, and 5.77% respectively. FCVAPT is considerably effective for detecting vulnerabilities and ascertaining the nature of the threats/risks to which Web applications are exposed.
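
The three error metrics reported above are standard; as a generic sketch (not the FCVAPT implementation), they can be computed as:

```python
import math

def mse(actual, predicted):
    # Mean squared error
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # Mean absolute percentage error, in percent
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```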

    Web application testing: Using tree kernels to detect near-duplicate states in automated model inference

    Background: In the context of end-to-end testing of web applications, automated exploration techniques (a.k.a. crawling) are widely used to infer state-based models of the site under test. These models, in which states represent features of the web application and transitions represent reachability relationships, can be used for several model-based testing tasks, such as test case generation. However, current exploration techniques often lead to models containing many near-duplicate states, i.e., states representing slightly different pages that are in fact instances of the same feature. This has a negative impact on the subsequent model-based testing tasks, adversely affecting, for example, the size, running time, and achieved coverage of generated test suites. Aims: As a web page can be naturally represented by its tree-structured DOM representation, we propose a novel near-duplicate detection technique, based on Tree Kernel (TK) functions, to improve the model inference of web applications. TKs are a class of functions that compute similarity between tree-structured objects; they have been extensively investigated and successfully applied in the Natural Language Processing domain. Method: To evaluate the capability of the proposed approach in detecting near-duplicate web pages, we conducted preliminary classification experiments on a freely available massive dataset of about 100k manually annotated web page pairs, comparing the classification performance of the proposed approach with other state-of-the-art near-duplicate detection techniques. Results: Preliminary results show that our approach performs better than state-of-the-art techniques on the near-duplicate detection classification task. Conclusions: These promising results show that TKs can be applied to near-duplicate detection in the context of web application model inference, and they motivate further research in this direction to assess the impact of the technique on the quality of the inferred models and on the subsequent application of model-based testing techniques.
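
A minimal sketch of a tree-kernel similarity over labeled trees (Collins–Duffy-style co-rooted subtree counting, applied here to toy DOM-like trees); the actual kernel functions evaluated in the paper may differ:

```python
class Node:
    def __init__(self, label, *children):
        self.label = label
        self.children = list(children)

def _nodes(tree):
    # Flatten a tree into a list of all its nodes
    out = [tree]
    for child in tree.children:
        out.extend(_nodes(child))
    return out

def _c(n1, n2, lam):
    # Weighted count of common subtrees co-rooted at n1 and n2:
    # zero unless labels and child-label sequences match
    if n1.label != n2.label:
        return 0.0
    if [c.label for c in n1.children] != [c.label for c in n2.children]:
        return 0.0
    prod = lam
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + _c(c1, c2, lam)
    return prod

def tree_kernel(t1, t2, lam=0.5):
    return sum(_c(a, b, lam) for a in _nodes(t1) for b in _nodes(t2))

def similarity(t1, t2, lam=0.5):
    # Normalized kernel: 1.0 for identical trees, approaching 0 for dissimilar ones
    return tree_kernel(t1, t2, lam) / (tree_kernel(t1, t1, lam) * tree_kernel(t2, t2, lam)) ** 0.5
```

Near-duplicate detection then reduces to thresholding this normalized similarity between the DOM trees of two crawled pages.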

    Implementing Test Automation with Selenium WebDriver

    Many software programs, such as applications for designing, modeling, simulating, and analyzing systems, are now commonly available as web-based applications. Testing such sophisticated web applications is highly challenging and can be extremely tedious and error-prone if done manually. Recently, automation tools have become increasingly used for testing web-based applications, as they minimize human involvement and repetitive work. For this problem report project, we have built and implemented an automation testing framework for web applications. The project specifically uses the Selenium WebDriver tool to develop the testing framework. By using this framework, testers may quickly and effectively write their test cases. The benefits of Selenium WebDriver include that it does not require in-depth research and training by testers, and, owing to the framework's ability to take screenshots, it provides a useful way for developers to study their code. The framework relies on the Chrome web browser, along with Java running in Eclipse, to provide a user-friendly interface for constructing and running test suites. To validate the testing framework, we performed a case study involving nanoHUB (nanoHUB.org), a well-known platform that provides valuable resources for those involved in nanotechnology research and education. nanoHUB serves as an open-access repository for a wide range of tools, simulations, and information related to nanoscale science and engineering, and it is designed particularly to model and simulate electronic systems and nanoscale phenomena. Testing a website such as nanoHUB.org typically encompasses a blend of functional, usability, and performance testing. Based on the results of this testing, several observations are made about the testing framework in general and its application to nanoHUB in particular. The comprehensive testing approach documented in this report aims to ensure that the platform functions as intended, provides a user-friendly experience, and delivers optimal performance. This testing is particularly crucial when dealing with tools and simulations related to electronic systems.
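
A sketch of the kind of reusable helper such a framework might provide: a page check that captures a screenshot on failure. The driver is injected, so a real run would pass Selenium's `webdriver.Chrome()`; the `StubDriver` and page titles below are hypothetical stand-ins that keep the example self-contained (the report's framework itself is written in Java):

```python
# StubDriver mimics the small slice of the Selenium WebDriver interface used
# here (get, title, save_screenshot); a real run would instead do:
#   from selenium import webdriver
#   driver = webdriver.Chrome()
class StubDriver:
    def __init__(self):
        self.title = ""
        self.screenshots = []

    def get(self, url):
        # Pretend to load a page and expose its title (hypothetical titles)
        self.title = "nanoHUB" if "nanohub.org" in url else "Some Other Page"

    def save_screenshot(self, path):
        self.screenshots.append(path)
        return True

class WebTestCase:
    """Minimal framework helper: run a title check, screenshot on failure."""
    def __init__(self, driver):
        self.driver = driver
        self.failures = []

    def check_title(self, url, expected_substring, name):
        self.driver.get(url)
        if expected_substring not in self.driver.title:
            # Capture evidence so developers can inspect the failing state
            self.driver.save_screenshot(f"{name}.png")
            self.failures.append(name)
            return False
        return True
```

Injecting the driver keeps test logic independent of the browser, which is what lets the same suite run against Chrome locally and, in principle, other WebDriver-backed browsers.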

    PERFORMANCE EVALUATION ON QUALITY OF ASIAN AIRLINES WEBSITES – AN AHP APPROACH

    In recent years, many people have devoted their efforts to the issue of Web site quality. The concept of quality comprises many criteria: a quality-of-service perspective, a user perspective, a content perspective, or indeed a usability perspective. Because of its potentially instant worldwide audience, a Website's quality and reliability are crucial. The very special nature of web applications and websites poses unique software testing challenges, and Webmasters, Web application developers, and Website quality assurance managers need tools and methods that can match these new needs. This research conducts tests to measure the quality of the websites of Asian flag-carrier airlines via online web diagnostic tools. We propose a methodology for determining and evaluating the best airline websites based on many criteria of website quality. The approach has been implemented using the Analytic Hierarchy Process (AHP): the proposed model uses AHP pairwise comparisons and its measurement scale to generate weights for the criteria, which better guarantees a fair preference among criteria. The results of this study confirm that Asian airline websites are neglecting performance and quality criteria.
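
AHP derives criterion weights from a pairwise-comparison matrix; a common sketch approximates the principal eigenvector by row geometric means. The criteria and judgments below are hypothetical, not the paper's data:

```python
import math

def ahp_weights(pairwise):
    # Geometric-mean approximation of AHP's principal-eigenvector weights
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criteria judgment matrix (load time vs. broken links vs.
# accessibility), where pairwise[i][j] encodes how much more important
# criterion i is than criterion j on the AHP 1-9 scale
pairwise = [
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 3.0],
    [1.0 / 5.0, 1.0 / 3.0, 1.0],
]
weights = ahp_weights(pairwise)
```

Each site's per-criterion scores (from the diagnostic tools) are then aggregated with these weights to rank the websites.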

    HDNA: A graph-based change detection in HTML pages(Deface Attack Detection)

    In this paper, a new approach called HDNA (HTML DNA) is introduced for analyzing and comparing Document Object Model (DOM) trees in order to detect differences between HTML pages. The method assigns an identifier to each HTML page based on its structure, which proves particularly useful for detecting variations caused by server-side updates, user interactions, or potential security risks. The process involves preprocessing the HTML content, generating a DOM tree, and calculating the disparities between two or more trees. By assigning weights to the nodes, valuable insights into their hierarchical importance are obtained. The effectiveness of the HDNA approach has been demonstrated in identifying changes in DOM trees even when dynamically generated content is involved. Not only does this method benefit web developers, testers, and security analysts by offering a deeper understanding of how web pages evolve; it also helps ensure the functionality and performance of web applications and enables detection of, and response to, vulnerabilities that may arise from modifications in DOM structures. As the web ecosystem continues to evolve, HDNA proves to be a valuable tool for individuals engaged in web development, testing, or security analysis.
    Comment: 6 pages, 3 figures
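
A simplified sketch of the general idea: fingerprint a page's DOM structure as weighted (tag, depth) nodes and measure the difference between two pages, weighting nodes near the root more heavily. This illustrates the approach, not the HDNA algorithm itself:

```python
from collections import Counter
from html.parser import HTMLParser

class DomCollector(HTMLParser):
    """Records (tag, depth) pairs as a crude structural identifier of a page."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.nodes = []

    def handle_starttag(self, tag, attrs):
        self.depth += 1
        self.nodes.append((tag, self.depth))

    def handle_endtag(self, tag):
        self.depth -= 1

def fingerprint(html):
    parser = DomCollector()
    parser.feed(html)
    return Counter(parser.nodes)

def structural_distance(html_a, html_b):
    # Symmetric difference of the two node multisets; dividing by depth
    # makes changes near the root count more than changes in deep leaves
    fa, fb = fingerprint(html_a), fingerprint(html_b)
    diff = (fa - fb) + (fb - fa)
    return sum(count / depth for (tag, depth), count in diff.items())
```

A distance of 0 means the two pages are structurally identical; a server-side change such as an injected element (e.g. a defacement iframe) produces a positive distance.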

    Model-based test case prioritization using selective and even-spread count-based methods with scrutinized ordering criterion

    Regression testing is crucial for ensuring that modifications have not introduced adverse effects into the software being modified. However, regression testing suffers from execution cost and time consumption problems. Test case prioritization (TCP) is one technique used to overcome these issues by re-ordering test cases based on their priorities. Model-based TCP (MB-TCP) is an approach to TCP in which software models are manipulated to perform the prioritization. The issue with MB-TCP is that most existing approaches do not provide satisfactory fault detection capability; moreover, the granularity of their test selection criteria is coarse, which can affect prioritization effectiveness. This study proposes an MB-TCP approach that can improve the fault detection performance of regression testing. It combines the implementation of two existing approaches from the literature while incorporating an additional ordering criterion to boost prioritization efficacy. A detailed empirical study was conducted to evaluate and compare the performance of the proposed approach against selected existing approaches from the literature using the average percentage of faults detected (APFD) metric. Three web applications were used as objects of study, providing the test suites containing the tests to be prioritized. The proposed approach yields the highest APFD values over the other existing approaches: 91%, 86%, and 91% respectively for the three web applications. These high APFD values signify that the proposed approach is very effective at revealing faults early during testing and can improve the fault detection performance of regression testing.
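
The APFD metric used in this evaluation can be sketched as follows; the five-test suite and fault matrix below are hypothetical, for illustration only:

```python
def apfd(order, faults):
    """order: test identifiers in prioritized execution order.
    faults: mapping fault -> set of tests that detect it.
    APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n)."""
    n, m = len(order), len(faults)
    position = {test: i + 1 for i, test in enumerate(order)}
    first_detect = [min(position[t] for t in detecting) for detecting in faults.values()]
    return 1.0 - sum(first_detect) / (n * m) + 1.0 / (2 * n)

# Hypothetical prioritized suite of 5 tests covering 4 faults
order = ["t1", "t2", "t3", "t4", "t5"]
faults = {"f1": {"t1"}, "f2": {"t3"}, "f3": {"t2", "t4"}, "f4": {"t5"}}
score = apfd(order, faults)
```

Higher APFD means faults are detected earlier in the prioritized order, which is exactly what the reported 91%, 86%, and 91% values express.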

    Effective Detection of Vulnerable and Malicious Browser Extensions

    Unsafely coded browser extensions can compromise the security of a browser, making them attractive targets for attackers as a primary vehicle for conducting cyber-attacks. Three factors make vulnerable extensions a high-risk security threat for browsers: i) the wide popularity of browser extensions, ii) the similarity of browser extensions to web applications, and iii) the high privilege of browser extension scripts. Furthermore, mechanisms that specifically aim to mitigate browser extension-related attacks have received less attention than solutions deployed for common web security problems (such as SQL injection, XSS, logic flaws, client-side vulnerabilities, drive-by downloads, etc.). To address these challenges, some techniques have recently been proposed to defend against extension-related attacks. These techniques mainly focus on information-flow analysis to capture suspicious data flows, impose privilege restrictions on API calls by malicious extensions, apply digital signatures to monitor process- and memory-level activities, and allow browser users to specify policies that restrict the operations of extensions. This article presents a model-based approach to detecting vulnerable and malicious browser extensions that widens and complements the existing techniques. We observe and utilize various common and distinguishing characteristics of benign, vulnerable, and malicious browser extensions. These characteristics are then used to build our detection models, which are based on Hidden Markov Model constructs. The models are trained using a set of features extracted from a number of browser extensions together with user-supplied specifications. One of the main challenges we encountered in this study was the lack of vulnerable and malicious extension samples. To address this issue, we defined rules to generate training samples, based on our previous knowledge of testing web applications and on heuristics obtained from the available vulnerable and malicious extensions. The approach is implemented in a prototype tool and evaluated using a number of Mozilla Firefox extensions. Our evaluation indicates that the approach not only detects known vulnerable and malicious extensions, but also identifies previously undetected extensions, with negligible performance overhead.
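
A toy sketch of Hidden-Markov-Model-based classification in this spirit: score an extension's observed behavior under a "benign" and a "malicious" model with the forward algorithm and pick the better fit. The states, observation alphabet, and probabilities below are hypothetical, not the paper's trained models:

```python
def forward_likelihood(obs, model):
    # Forward algorithm: P(observation sequence | HMM)
    states, start, trans, emit = (model["states"], model["start"],
                                  model["trans"], model["emit"])
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

def classify(obs, benign_model, malicious_model):
    # Label the extension's behavior by whichever model explains it better
    if forward_likelihood(obs, malicious_model) > forward_likelihood(obs, benign_model):
        return "malicious"
    return "benign"

# Hypothetical two-state models over coarse behavioral observations:
# "dom" = ordinary DOM access, "net" = outbound network/eval-like activity
_common = {
    "states": ["quiet", "active"],
    "start": {"quiet": 0.8, "active": 0.2},
    "trans": {"quiet": {"quiet": 0.9, "active": 0.1},
              "active": {"quiet": 0.5, "active": 0.5}},
}
BENIGN = dict(_common, emit={"quiet": {"dom": 0.9, "net": 0.1},
                             "active": {"dom": 0.6, "net": 0.4}})
MALICIOUS = dict(_common, emit={"quiet": {"dom": 0.3, "net": 0.7},
                                "active": {"dom": 0.1, "net": 0.9}})
```

In practice the emission and transition probabilities would be learned from feature sequences extracted from labeled (including rule-generated) extension samples rather than set by hand.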