
    Structural Learning of Attack Vectors for Generating Mutated XSS Attacks

    Web applications suffer from cross-site scripting (XSS) attacks resulting from incomplete or incorrect input sanitization. Learning the structure of attack vectors can enrich the variety of manifestations in generated XSS attacks. In this study, we focus on generating more threatening XSS attacks for state-of-the-art detection approaches that find potential XSS vulnerabilities in Web applications, and we propose a mechanism for structural learning of attack vectors that generates mutated XSS attacks fully automatically. Mutated XSS attack generation depends on the analysis of attack vectors and on the structural learning mechanism. At the kernel of the learning mechanism, a Hidden Markov model (HMM) serves as the attack vector model, capturing the implicit structure of attack vectors; this structure is informed by the syntactic meanings assigned by the proposed tokenizing mechanism. Bayes' theorem is used to determine the number of hidden states so that the structure model generalizes. The paper makes the following contributions: (1) it automatically learns the structure of attack vectors from practical data and builds a structural model of them; (2) it mimics the manners and elements of attack vectors to extend the ability of testing tools to identify XSS vulnerabilities; (3) it helps verify flaws in the blacklist sanitization procedures of Web applications. We evaluated the proposed mechanism with Burp Intruder on a dataset collected from public XSS archives. The results show that mutated XSS attack generation can identify potential vulnerabilities.

    Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
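    The abstract names three moving parts: a tokenizer that labels attack-vector lexemes with syntax classes, an HMM over the resulting token stream, and a Bayesian choice of the number of hidden states. A minimal sketch of that pipeline follows. It is not the paper's implementation: the token classes and toy corpus are invented, hmmlearn's CategoricalHMM stands in for the paper's HMM, and BIC is used here as a rough stand-in for the paper's Bayes-theorem model selection.

```python
# Sketch only: tokenize XSS attack vectors into syntax classes, fit a
# categorical HMM over the token stream, and pick the hidden-state
# count by BIC (an assumed proxy for the paper's Bayesian selection).
import re
import numpy as np
from hmmlearn.hmm import CategoricalHMM

# Hypothetical token classes standing in for the paper's tokenizer.
TOKEN_PATTERNS = [
    ("TAG_OPEN", r"</?\w+"),
    ("ATTR",     r"\w+\s*="),
    ("JS_CALL",  r"\w+\("),
    ("STRING",   r"\"[^\"]*\"|'[^']*'"),
    ("SYMBOL",   r"[<>()/;:=]"),
    ("TEXT",     r"\w+"),
]

def tokenize(vector: str):
    """Label each lexeme of an attack vector with a syntax class."""
    pos, tokens = 0, []
    while pos < len(vector):
        for name, pat in TOKEN_PATTERNS:
            m = re.match(pat, vector[pos:])
            if m:
                tokens.append(name)
                pos += m.end()
                break
        else:
            pos += 1  # skip characters no pattern matches (e.g. spaces)
    return tokens

# Toy corpus; real training data would come from public XSS archives.
corpus = [
    '<script>alert("xss")</script>',
    '<img src=x onerror=alert(1)>',
    '<svg/onload=alert(document.cookie)>',
]
names = sorted({t for v in corpus for t in tokenize(v)})
idx = {n: i for i, n in enumerate(names)}
seqs = [[idx[t] for t in tokenize(v)] for v in corpus]
X = np.concatenate(seqs).reshape(-1, 1)
lengths = [len(s) for s in seqs]

best = None
for n_states in range(2, 5):  # candidate hidden-state counts
    hmm = CategoricalHMM(n_components=n_states, n_iter=50, random_state=0)
    hmm.fit(X, lengths)
    log_l = hmm.score(X, lengths)
    # free parameters: start probs + transitions + emissions
    n_params = (n_states - 1) + n_states * (n_states - 1) \
        + n_states * (len(names) - 1)
    bic = -2 * log_l + n_params * np.log(len(X))
    if best is None or bic < best[0]:
        best = (bic, hmm)

# Sample a mutated token sequence from the selected model.
obs, _ = best[1].sample(8)
print([names[o] for o in obs.ravel()])
```

    Sampling from the selected model yields token sequences that mimic the learned structure; a full generator would additionally map sampled token classes back to concrete lexemes to produce executable mutated attack vectors.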

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from the pages they visit. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Throughout the history of web crawling, many researchers and industrial groups have addressed the issues and challenges that web crawlers face, and different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and capturing the model of a modern web application and extracting data from it automatically is another open question. What follows is a brief history of the techniques and algorithms used from the early days of crawling up to the present. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
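    The opening sentence describes the core crawl loop: visit a page, collect its data, and discover new pages from its links. A minimal, stdlib-only sketch of that loop is below; the seed URL and page limit are illustrative, and a production crawler would add politeness delays, robots.txt handling, and URL normalization well beyond this.

```python
# Minimal frontier-based crawl loop: fetch a page, record it, and
# enqueue the new URLs found in its links. Standard library only.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed: str, max_pages: int = 10):
    frontier = deque([seed])   # URLs waiting to be visited
    visited = set()            # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = urldefrag(frontier.popleft()).url  # drop #fragments
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue           # unreachable or failing page: skip it
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:  # learn about new pages from this one
            frontier.append(urljoin(url, link))
    return visited

if __name__ == "__main__":
    print(crawl("https://example.com"))
```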

    Building in web application security at the requirements stage : a tool for visualizing and evaluating security trade-offs : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Science in Information Systems at Massey University, Albany, New Zealand

    One dimension of Internet security is web application security. The purpose of this design-science study was to design, build, and evaluate a computer-based tool to support security vulnerability and risk assessment in the early stages of web application design. The tool facilitates risk assessment by managers and helps developers model security requirements using an interactive tree diagram. The tool calculates residual risk for each component of a web application and for the application overall, so developers are given better information for deciding which countermeasures to implement given limited resources for doing so. The tool supports a proactive approach of building in web application security at the requirements stage, as opposed to the more common reactive approach of putting countermeasures in place after an attack and loss have been incurred. The primary contribution of the proposed tool is its ability to make known security-related information (e.g. known vulnerabilities, attacks and countermeasures) more accessible to developers who are not security experts, and to translate a lack of security measures into an understandable measure of relative residual risk. The latter is useful for managers who need to prioritize security spending.

    Keywords: web application security, security requirements modelling, attack trees, threat trees, risk assessment
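    The abstract says the tool computes residual risk per component and for the application overall from an attack-tree model, but it does not give the thesis's formulas. The sketch below therefore assumes one common attack-tree convention: a leaf's likelihood is reduced by its countermeasure's effectiveness, AND nodes multiply child likelihoods (every step must succeed), OR nodes take the most likely child path, and residual risk is residual likelihood times impact. All node names and values are invented.

```python
# Hedged sketch of an attack-tree residual-risk calculation under the
# assumed conventions above; not the thesis's actual formulas.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "LEAF"        # "LEAF", "OR", or "AND"
    likelihood: float = 0.0   # chance this attack step succeeds
    impact: float = 0.0       # loss if the attack succeeds
    mitigation: float = 0.0   # countermeasure effectiveness, 0..1
    children: list = field(default_factory=list)

    def residual_likelihood(self) -> float:
        if self.kind == "LEAF":
            return self.likelihood * (1.0 - self.mitigation)
        probs = [c.residual_likelihood() for c in self.children]
        if self.kind == "AND":   # every child step must succeed
            out = 1.0
            for p in probs:
                out *= p
            return out
        return max(probs)        # OR: attacker picks the easiest path

    def residual_risk(self) -> float:
        return self.residual_likelihood() * self.impact

# Illustrative tree for one web-application component (values invented).
root = Node("steal session", kind="OR", impact=100_000, children=[
    Node("XSS cookie theft", likelihood=0.4, mitigation=0.9),
    Node("session fixation", kind="AND", children=[
        Node("force session id", likelihood=0.3, mitigation=0.5),
        Node("victim logs in",   likelihood=0.6, mitigation=0.0),
    ]),
])
print(f"residual risk: {root.residual_risk():,.0f}")
```

    Recomputing the root's residual risk as mitigations are toggled is what lets a tool like the one described show managers how each countermeasure changes the overall risk figure.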