
    The approaches to quantify web application security scanners quality: A review

    A web application security scanner is a computer program that assesses web application security using penetration testing techniques. The benefits of automated web application penetration testing are considerable: a security scanner not only reduces the time, cost, and resources required for penetration testing but also reduces the test engineer's reliance on manual expert knowledge. Nevertheless, web application security scanners suffer from low test coverage and generate inaccurate test results. Consequently, experiments are frequently conducted to quantitatively measure a scanner's quality and to investigate its strengths and limitations. However, neither a standard methodology nor standard criteria are available for quantifying web application security scanner quality. Hence, this paper conducts a systematic review and analyses the methodologies and criteria used to quantify scanner quality. The experimental methodologies and criteria that have been used are classified and reviewed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The objectives are to give practitioners an understanding of the methodologies and criteria available for measuring a scanner's test coverage, attack coverage, and vulnerability detection rate, and to provide critical hints for the development of the next testing framework, model, methodology, or set of criteria for measuring web application security scanner quality.
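    As a hypothetical illustration (not taken from the review), the detection-rate and false-positive-rate criteria used in such scanner-quality experiments are typically computed from a scanner's results against a benchmark with known vulnerabilities, along the following lines:
```python
# Hypothetical illustration of common scanner-quality criteria (detection rate
# and false positive rate) computed against a benchmark with known flaws.
# The function name and the example figures are assumptions, not data from the paper.

def scanner_quality(true_positives: int, false_positives: int,
                    false_negatives: int, true_negatives: int) -> dict:
    """Return the usual quality criteria for one vulnerability category."""
    detection_rate = true_positives / (true_positives + false_negatives)       # a.k.a. recall / TPR
    false_positive_rate = false_positives / (false_positives + true_negatives)
    precision = true_positives / (true_positives + false_positives)
    return {
        "detection_rate": detection_rate,
        "false_positive_rate": false_positive_rate,
        "precision": precision,
    }

# Example: a scanner that found 18 of 25 seeded SQL injection flaws and
# raised 4 false alarms over 40 safe test cases.
print(scanner_quality(true_positives=18, false_positives=4,
                      false_negatives=7, true_negatives=36))
```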

    A Tool for Penetration Testing of Web Applications (Nástroj pro penetrační testování webových aplikací)

    As hackers become more skilled and sophisticated and cyber-attacks become the norm, it is more important than ever to undertake regular vulnerability scans and penetration testing to identify vulnerabilities and to verify on a regular basis that cyber controls are working. This thesis discusses the importance and workings of penetration testing and web-application-based penetration testing, followed by a comparison of various testing tools and techniques and their advantages and disadvantages. The next section focuses on the past, current, and future state of penetration testing in computer systems and application security, and on the relevance of the General Data Protection Regulation (GDPR) and Content Management Systems (CMS). The main goal of the thesis is then presented: a review of existing automated tools for vulnerability detection in web applications, their techniques, the positive and negative results of the conducted tests, and their merits and shortcomings. Based on the comparison of existing tools, an approach is selected that emphasises port scanning, which only a few existing web application tools cover; the following section practically demonstrates port scanning, which reveals the state of the ports and the service information running on the server. Finally, the results of the experiment are compared with those of the existing web application tools.
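    A minimal sketch of the kind of TCP connect scan the thesis describes, using only the Python standard library; the host, port range, and timeout below are illustrative assumptions rather than the thesis's actual setup:
```python
# Minimal TCP connect-scan sketch (illustrative only; host and ports are
# assumptions, not taken from the thesis). Scan only systems you are
# authorised to test.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Example run against a local test server.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```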

    Security slicing for auditing common injection vulnerabilities

    Cross-site scripting and injection vulnerabilities are among the most common and serious security issues for Web applications. Although existing static analysis approaches can detect potential vulnerabilities in source code, they generate many false warnings and source-sink traces with irrelevant information, making their adoption impractical for security auditing. One suitable approach to support security auditing is to compute a program slice for each sink, which contains all the information required for security auditing. However, such slices are likely to contain a large amount of information that is irrelevant to security, thus raising scalability issues for security audits. In this paper, we propose an approach to assist security auditors by defining and experimenting with pruning techniques that reduce original program slices to what we refer to as security slices, which contain sound and precise information. To evaluate the proposed approach, we compared our security slices to the slices generated by a state-of-the-art program slicing tool on a number of open-source benchmarks. On average, our security slices are 76% smaller than the original slices. More importantly, with security slicing, one needs to audit approximately 1% of the total code to fix all the vulnerabilities, suggesting a significant reduction in auditing costs.
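    A hypothetical, simplified illustration of the idea (not the authors' tool or benchmarks): for a SQL sink, a backward slice keeps every statement the query depends on, while a pruned security slice keeps only what matters for auditing, namely how user input reaches the sink and whether it is sanitised.
```python
# Hypothetical illustration of slicing for a SQL sink (not the paper's tool).
# Comments mark which statements a backward slice would contain and which a
# pruned security slice could drop as security-irrelevant.
import sqlite3

def find_user(conn: sqlite3.Connection, request: dict) -> list:
    name = request.get("name", "")                        # kept: user-controlled source
    limit = 20                                            # in the original slice, but prunable: constant, no security impact
    name = name.replace("'", "''")                        # kept: (weak) sanitisation on the path to the sink
    query = ("SELECT * FROM users WHERE name = '" + name  # kept: tainted query construction
             + "' LIMIT " + str(limit))
    return conn.execute(query).fetchall()                 # kept: the security-sensitive sink

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    print(find_user(conn, {"name": "alice"}))
```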

    Attack Taxonomy Methodology Applied to Web Services

    With the rapid evolution of attack techniques and attacker targets, companies and researchers question the applicability and effectiveness of security taxonomies. Although attack taxonomies allow us to propose a classification scheme, they are easily rendered obsolete by the emergence of new attacks. Due to their distributed and open nature, web services give rise to new security challenges. The purpose of this study is to apply a methodology for categorizing and updating attacks in the face of the continuous creation and evolution of new attack schemes against web services. In this research we also collected thirty-three (33) types of attacks classified into five (5) categories, namely brute force, spoofing, flooding, denial-of-service, and injection attacks, in order to capture the state of the art of vulnerabilities against web services. Finally, the attack taxonomy is applied to a web service and modeled through attack trees. The use of this methodology helps prevent future attacks on many technologies, not only web services.
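    A small sketch of how an attack tree of the kind applied in the paper could be represented programmatically; the node names and structure below are illustrative assumptions, not the paper's actual tree:
```python
# Illustrative attack-tree sketch (node names are assumptions, not the paper's tree).
# An attack tree is an AND/OR tree: an OR node succeeds if any child does,
# an AND node only if all of its children do.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    goal: str
    gate: str = "OR"                       # "OR" or "AND"
    children: List["AttackNode"] = field(default_factory=list)
    feasible: bool = False                 # leaf attribute set by the analyst

    def achievable(self) -> bool:
        if not self.children:
            return self.feasible
        results = (child.achievable() for child in self.children)
        return all(results) if self.gate == "AND" else any(results)

# Example: compromising a web service endpoint via injection or flooding.
root = AttackNode("Compromise web service", "OR", [
    AttackNode("SQL injection on login operation", feasible=True),
    AttackNode("XML flooding (DoS)", "AND", [
        AttackNode("Craft oversized SOAP payload", feasible=True),
        AttackNode("Bypass message-size validation", feasible=False),
    ]),
])
print(root.achievable())   # True, via the injection branch
```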

    A new unified intrusion anomaly detection in identifying unseen web attacks

    The global usage of increasingly sophisticated web-based application systems is growing rapidly. Major uses include storing and transporting sensitive data over the Internet. This growth has consequently created a serious need for more secure network and application security protection devices. Security experts normally equip their databases with large numbers of signatures to help detect known web-based threats. In reality, it is almost impossible to keep updating the database with newly identified web vulnerabilities, so new attacks go undetected. This research presents a novel Intrusion Detection System (IDS) approach for detecting unknown attacks on web servers, the Unified Intrusion Anomaly Detection (UIAD) approach. The unified approach consists of three components: preprocessing, statistical analysis, and classification. Initially, irrelevant and redundant features are removed using a novel hybrid feature selection method. Thereafter, a statistical approach is applied to identify traffic abnormality: the Relative Percentage Ratio (RPR) is combined with Euclidean Distance Analysis (EDA) and the Chebyshev Inequality Theorem (CIT) to calculate a normality score and derive a suitable threshold. Finally, LogitBoost (LB) is employed with Random Forest (RF) as its weak classifier, with the aim of minimising the final false alarm rate. The experiments demonstrate that the approach successfully identifies unknown attacks with a detection rate above 95% and a false alarm rate below 1% on both the DARPA 1999 and ISCX 2012 datasets.
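    A minimal sketch of how a Chebyshev-inequality-based anomaly threshold can be derived from normality scores, in the spirit of the statistical stage described above; the score values and the choice of k are assumptions, not the paper's parameters:
```python
# Sketch of a Chebyshev-style anomaly threshold (scores and k are assumptions).
# Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k**2 for any distribution,
# so scores beyond mu + k*sigma can be flagged as anomalous without assuming normality.
import statistics

def chebyshev_threshold(normality_scores: list[float], k: float = 3.0) -> float:
    mu = statistics.mean(normality_scores)
    sigma = statistics.pstdev(normality_scores)
    return mu + k * sigma          # at most 1/k**2 of any distribution lies this far out

scores = [0.12, 0.10, 0.15, 0.11, 0.13, 0.95]    # last value mimics an abnormal request
threshold = chebyshev_threshold(scores[:-1])      # threshold learned from normal traffic
print([s for s in scores if s > threshold])       # flags only the abnormal score
```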

    Web Application Weakness Ontology Based on Vulnerability Data

    Web applications are becoming more ubiquitous. All manner of physical devices are now connected and often expose a variety of web applications and web interfaces. This proliferation of web applications has been accompanied by an increase in reported software vulnerabilities. The objective of this analysis of vulnerability data is to understand the current landscape of reported web application flaws. Along those lines, this work reviews ten years (2011 - 2020) of vulnerability data in the National Vulnerability Database. Based on this data, the most common web application weaknesses are identified and their profiles presented. A weakness ontology is developed to capture the attributes of these weaknesses, including their attack methods and attack vectors, as well as their impact on software quality attributes. Additionally, the technologies that are susceptible to each weakness are presented; these include programming languages, frameworks, communication protocols, and data formats.
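    A hypothetical sketch of how one weakness profile from such an ontology might be encoded; the field names and the CWE-79 values shown are illustrative, not the paper's ontology schema:
```python
# Hypothetical encoding of one weakness profile (field names and values are
# illustrative; they are not taken from the paper's ontology).
from dataclasses import dataclass, field
from typing import List

@dataclass
class WeaknessProfile:
    cwe_id: str
    name: str
    attack_method: str
    attack_vectors: List[str] = field(default_factory=list)
    impacted_quality_attributes: List[str] = field(default_factory=list)
    susceptible_technologies: List[str] = field(default_factory=list)

xss = WeaknessProfile(
    cwe_id="CWE-79",
    name="Cross-site Scripting",
    attack_method="Injection of attacker-controlled script into server responses",
    attack_vectors=["URL query parameters", "form fields", "HTTP headers"],
    impacted_quality_attributes=["confidentiality", "integrity"],
    susceptible_technologies=["PHP", "JavaScript frameworks", "HTTP", "HTML/JSON"],
)
print(xss.cwe_id, xss.impacted_quality_attributes)
```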

    Evaluation of Web vulnerability scanners based on OWASP benchmark

    Web applications have become an integral part of everyday life, but many of these applications are deployed with critical vulnerabilities that can be fatally exploited. Web vulnerability scanners have been widely adopted for detecting vulnerabilities in web applications by probing the applications from an attacker's perspective. However, studies have shown that vulnerability scanners perform differently on the detection of vulnerabilities. Furthermore, the effectiveness of some of these scanners has become questionable due to the ever-growing number of cyber-attacks that exploit vulnerabilities left undetected in web applications. To evaluate the effectiveness of these scanners, they are often run against a benchmark web application with known vulnerabilities. This thesis first presents our results on the effectiveness of two popular web vulnerability scanners based on the OWASP Benchmark, developed by OWASP (Open Web Application Security Project), a prestigious non-profit web security organization. The two scanners chosen are OWASP Zed Attack Proxy (OWASP ZAP) and Arachni. Since there are many categories of web vulnerabilities and we cannot evaluate scanner performance on all of them due to time constraints, we pick four major vulnerability categories: Command Injection (CMDI), Cross-Site Scripting (XSS), Lightweight Directory Access Protocol (LDAP) Injection, and SQL Injection (SQLI). Moreover, we compare our results on scanner effectiveness from the OWASP Benchmark with existing results from the Web Application Vulnerability Security Evaluation Project (WAVSEP) benchmark, another popular benchmark used to evaluate scanner effectiveness. We are the first in the literature to make this comparison between the two benchmarks. The results mainly show that:
    - Scanners perform differently in different vulnerability categories; no scanner can serve as an all-rounder for scanning web vulnerabilities.
    - The benchmarks also demonstrate different capabilities in reflecting the effectiveness of scanners in different vulnerability categories, so it is recommended to combine results from different benchmarks when determining a scanner's effectiveness.
    - Regarding scanner effectiveness, OWASP ZAP performs best in CMDI, SQLI, and XSS, while Arachni performs best in LDAP injection.
    - Regarding benchmark capability, the OWASP Benchmark outperforms the WAVSEP benchmark in all the examined categories.
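    As a hedged illustration of how per-category scanner effectiveness is commonly summarised in benchmark scorecards of this kind (true positive rate minus false positive rate, i.e. the Youden index); the counts below are made-up examples, not results from this thesis:
```python
# Sketch of per-category scoring in the style of benchmark scorecards:
# true positive rate minus false positive rate (Youden index).
# The counts below are illustrative assumptions, not results from the thesis.

def category_score(tp: int, fn: int, fp: int, tn: int) -> float:
    tpr = tp / (tp + fn)        # share of truly vulnerable test cases flagged
    fpr = fp / (fp + tn)        # share of safe test cases falsely flagged
    return tpr - fpr            # 1.0 is perfect, 0.0 is no better than guessing

results = {
    "Command Injection": (200, 51, 10, 215),   # (tp, fn, fp, tn) per category, illustrative
    "SQL Injection":     (230, 42, 25, 207),
    "XSS":               (210, 36, 5, 204),
    "LDAP Injection":    (20, 7, 3, 29),
}
for category, counts in results.items():
    print(f"{category}: {category_score(*counts):.2f}")
```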