
    Tietoturvatyökalujen tuominen osaksi testausprosessia (Integrating Security Tools into the Testing Process)

    The Bachelor's thesis was assigned by Codemate, a Finnish software company that needed a solid comparison of current security-testing tools. The goal of the comparison was to find and select suitable tools for Codemate's testing process. The comparison focused mainly on tools aimed at web applications, but a few network and server scanners and Content Management System scanners were also included. A few criteria were used to select the tools for comparison: the most important were the tool's vulnerability detection ability, its usability, the quality and clarity of its reports, and continuous development with regular updates. An extensive theory section was written on every selected tool, covering its basic information, detailed technical features, available user interfaces and the vulnerability tests performed by the scanner. In the comparison section, the selected web application testing tools were compared in more detail: each tool was evaluated on price, WAVSEP results, OWASP Top 10 coverage, usability and features, and updates. In the results section, the tools best suited to Codemate were chosen, and a free alternative was identified for every commercial tool. The chosen automatic web application scanners were Arachni, Acunetix, Tinfoil Security and CMSmap; the chosen manual web application testing tools were Burp Suite and OWASP ZAP; SQLmap and W3AF were chosen as web application exploitation tools; and OpenVAS and NMAP were chosen for server and network scanning. During the thesis it became clear that reliable comparison tests of web application scanners are very hard to conduct because of the differences between deliberately vulnerable web applications and real-world web applications. A single tool is also rarely enough for security testing; the best results are achieved by using, for each area, the tool designed for it.
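    As a rough illustration of how such tools can be chained in a testing process (not taken from the thesis), the sketch below runs an Nmap service scan and lists the HTTP ports whose services could then be handed to a web application scanner such as OWASP ZAP or Arachni. It assumes nmap is installed, and the target host name is a placeholder for a system you are authorized to scan.

```python
import subprocess
import xml.etree.ElementTree as ET

TARGET = "scanme.example"  # placeholder: replace with a host you are authorized to scan

def nmap_http_ports(target: str) -> list[int]:
    """Run an Nmap service-detection scan and return ports whose service looks like HTTP."""
    xml_out = subprocess.run(
        ["nmap", "-p", "80,443,8080,8443", "-sV", "-oX", "-", target],
        capture_output=True, text=True, check=True,
    ).stdout
    ports = []
    for port in ET.fromstring(xml_out).iter("port"):
        service = port.find("service")
        if service is not None and "http" in (service.get("name") or ""):
            ports.append(int(port.get("portid")))
    return ports

if __name__ == "__main__":
    for p in nmap_http_ports(TARGET):
        # each discovered HTTP service could now be handed to a web application scanner
        print(f"HTTP service found on port {p}")
```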

    A Brief History of Web Crawlers

    Web crawlers visit web applications, collect data, and learn about new web pages from the pages they visit. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Throughout the history of web crawling, many researchers and industrial groups have addressed the issues and challenges that web crawlers face, and different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenge, and capturing the model of a modern web application and extracting data from it automatically is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to recent days. We introduce criteria to evaluate the relative performance of web crawlers; based on these criteria we plot the evolution of web crawlers and compare their performance.
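    A minimal sketch of the basic crawl loop the survey describes: fetch a page, extract its links, and queue newly discovered URLs on the same host. The start URL is a placeholder and the code is only an illustration, not any of the surveyed crawlers.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href values of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url: str, max_pages: int = 20):
    """Breadth-first crawl restricted to the start URL's host."""
    seen, queue = {start_url}, deque([start_url])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable or non-text pages
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # stay on the same host and avoid revisiting pages
            if urlparse(absolute).netloc == urlparse(start_url).netloc and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
        yield url

if __name__ == "__main__":
    for page in crawl("https://example.com"):  # placeholder start URL
        print("crawled:", page)
```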

    Model-Based Security Testing

    Security testing aims at validating software system requirements related to security properties such as confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow the specification of test cases at a higher level of abstraction, that guide test identification and specification, or that enable automated test generation. Model-based security testing (MBST) is a relatively new field dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a research challenge and of high interest for industrial applications. MBST includes, for example, security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the use of security test patterns. This paper provides a survey of MBST techniques and the related models, as well as samples of new methods and tools under development in the European ITEA2 project DIAMONDS. Comment: In Proceedings MBT 2012, arXiv:1202.582
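    As a hedged illustration of the idea (not an approach from the DIAMONDS project), the sketch below derives abstract security test cases from a toy state model of a login flow: every path through the model becomes a test, and paths that reach a protected state without authenticating are marked as negative tests. The model, states and actions are all hypothetical.

```python
from itertools import product

# Hypothetical transition model of a login flow: (state, action) -> next state.
# The unauthenticated "open_admin" transition is the modelling flaw to be exposed by tests.
MODEL = {
    ("start", "login_ok"): "authenticated",
    ("start", "login_fail"): "start",
    ("start", "open_admin"): "admin",
    ("authenticated", "open_admin"): "admin",
    ("authenticated", "logout"): "start",
}
ACTIONS = ["login_ok", "login_fail", "open_admin", "logout"]

def generate_tests(max_length: int = 3):
    """Enumerate action sequences over the model and attach a security verdict."""
    for actions in product(ACTIONS, repeat=max_length):
        state, path, authenticated, expected = "start", [], False, "allow"
        for action in actions:
            nxt = MODEL.get((state, action))
            if nxt is None:
                break  # action not enabled in this state
            path.append(action)
            if action == "login_ok":
                authenticated = True
            elif action == "logout":
                authenticated = False
            if nxt == "admin" and not authenticated:
                expected = "reject"  # security oracle: unauthenticated access to admin
            state = nxt
        if path:
            yield path, expected

if __name__ == "__main__":
    for path, expected in generate_tests():
        print(expected, "->", " / ".join(path))
```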

    Combining Static and Dynamic Analysis for Vulnerability Detection

    In this paper, we present a hybrid approach for buffer overflow detection in C code. The approach makes use of static and dynamic analysis of the application under investigation. The static part consists of calculating taint dependency sequences (TDS) between user-controlled inputs and vulnerable statements. This process is akin to program slicing: it computes the tainted data- and control-flow paths that exhibit the dependence between tainted program inputs and vulnerable statements in the code. The dynamic part consists of executing the program along the TDSs to trigger the vulnerability by generating suitable inputs. We use a genetic algorithm to generate inputs and propose a fitness function that approximates the program behavior (control flow) based on the frequencies of the statements along the TDSs. This runtime aspect makes the approach faster and more accurate. We provide experimental results on the Verisec benchmark to validate our approach. Comment: There are 15 pages with 1 figure
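    The sketch below illustrates the dynamic part under stated assumptions: a toy "instrumented" program reports which statements it executed, and a small genetic algorithm evolves string inputs whose fitness is the number of statements hit along a hypothetical taint dependency sequence ending at the vulnerable statement. The target, the TDS and the dictionary tokens are invented for illustration; the paper's fitness function is frequency-based rather than this simple hit count.

```python
import random

TDS = ["check_prefix", "check_length", "copy_to_buffer"]  # hypothetical TDS
DICTIONARY = ["ID:", "A"]  # seed tokens, similar to the input dictionaries fuzzers use

def instrumented_target(data: str) -> list[str]:
    """Toy stand-in for the instrumented program: reports which TDS statements it reached."""
    hit = []
    if data.startswith("ID:"):
        hit.append("check_prefix")
        if len(data) > 16:
            hit.append("check_length")
            hit.append("copy_to_buffer")  # stand-in for the vulnerable statement
    return hit

def fitness(data: str) -> int:
    """Number of TDS statements covered by executing the target on this input."""
    return len(set(instrumented_target(data)) & set(TDS))

def mutate(data: str) -> str:
    ops = [
        lambda s: random.choice(DICTIONARY) + s,
        lambda s: s + random.choice(DICTIONARY),
        lambda s: s + random.choice("abcxyz"),
        lambda s: s[1:],
    ]
    return random.choice(ops)(data)

def evolve(generations: int = 200, pop_size: int = 20) -> str:
    population = ["".join(random.choices("IDabc:", k=4)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TDS):
            break  # the vulnerable statement has been reached
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best input:", repr(best), "covers:", instrumented_target(best))
```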

    SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities

    Algorithmic complexity vulnerabilities occur when the worst-case time/space complexity of an application is significantly higher than the respective average case for particular user-controlled inputs. When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior. Such attacks have been known to have serious effects on production systems, to take down entire websites, and to lead to bypasses of Web Application Firewalls. Unfortunately, existing detection mechanisms for algorithmic complexity vulnerabilities are domain-specific and often require significant manual effort. In this paper, we design, implement, and evaluate SlowFuzz, a domain-independent framework for automatically finding algorithmic complexity vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case algorithmic behavior in the tested binary, using resource-usage-guided evolutionary search techniques to find inputs that maximize computational resource utilization for a given application. Comment: ACM CCS '17, October 30-November 3, 2017, Dallas, TX, USA
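    A simplified sketch of resource-usage-guided evolutionary search in the spirit of SlowFuzz (not the tool itself): candidate inputs are mutated at random and kept whenever they increase an execution-step counter of the target, here a toy insertion sort standing in for an instrumented binary.

```python
import random

def insertion_sort_steps(values: list[int]) -> int:
    """Toy target: counts comparisons/swaps, the 'resource usage' being maximized."""
    values, steps = list(values), 0
    for i in range(1, len(values)):
        j = i
        while j > 0 and values[j - 1] > values[j]:
            values[j - 1], values[j] = values[j], values[j - 1]
            steps += 1
            j -= 1
        steps += 1
    return steps

def mutate(values: list[int]) -> list[int]:
    mutated = list(values)
    mutated[random.randrange(len(mutated))] = random.randrange(100)
    return mutated

def slow_input_search(size: int = 32, iterations: int = 5000) -> list[int]:
    best = [random.randrange(100) for _ in range(size)]
    best_cost = insertion_sort_steps(best)
    for _ in range(iterations):
        candidate = mutate(best)
        cost = insertion_sort_steps(candidate)
        if cost > best_cost:  # keep only mutations that slow the target down further
            best, best_cost = candidate, cost
    return best

if __name__ == "__main__":
    evolved = slow_input_search()
    baseline = [random.randrange(100) for _ in range(32)]
    print("steps on evolved input:", insertion_sort_steps(evolved))
    print("steps on random input: ", insertion_sort_steps(baseline))
```

Left to run, the search drifts toward nearly reverse-sorted arrays, which is insertion sort's quadratic worst case.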

    Web Vulnerability Study of Online Pharmacy Sites

    Consumers are increasingly using online pharmacies, but these sites may not provide an adequate level of security for consumers' personal data. There is a gap in the research addressing the problem of security vulnerabilities in this industry. The objective is to identify the level of web application security vulnerabilities in online pharmacies and the common types of flaws, thus expanding on prior studies. Technical, managerial and legal recommendations on how to mitigate security issues are presented. The proposed four-step method first consists of choosing an online testing tool. The next steps involve choosing a list of 60 online pharmacy sites to test and then running the software analysis to compile a list of flaws. Finally, an in-depth analysis is performed on the types of web application vulnerabilities. The majority of sites had serious vulnerabilities, with most flaws being cross-site scripting or old, unpatched software versions. A method is proposed for securing web pharmacy sites, using a multi-phased approach of technical and managerial techniques together with a thorough understanding of national legal requirements for securing systems.
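    Purely as an illustration of one check the study's flaw categories suggest (not the study's actual tooling), the sketch below probes a list of placeholder URLs for version-revealing Server headers, a simple signal of the outdated, unpatched software that was among the most common flaws.

```python
import re
from urllib.request import Request, urlopen

SITES = ["https://pharmacy-one.example", "https://pharmacy-two.example"]  # placeholders

def server_header(url: str) -> str | None:
    """Return the Server response header for a site, or None if unreachable."""
    req = Request(url, method="HEAD", headers={"User-Agent": "survey-check/0.1"})
    try:
        with urlopen(req, timeout=5) as resp:
            return resp.headers.get("Server")
    except Exception:
        return None

if __name__ == "__main__":
    for site in SITES:
        header = server_header(site)
        if header and re.search(r"\d+\.\d+", header):
            # a version number in the Server header invites fingerprinting of old software
            print(f"{site}: version disclosed in Server header -> {header}")
        else:
            print(f"{site}: no version information exposed ({header!r})")
```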