5 research outputs found

    XSS-haavoittuvuudet ja niiden havaitseminen web-sovelluksissa (XSS vulnerabilities and their detection in web applications)

    The Internet and web applications are an integral part of the modern world, and with their continuing popularity their security is increasingly important. Cross-site scripting (XSS) is one of the most common web application vulnerabilities. An attack exploiting an XSS vulnerability can arbitrarily modify a website's content, hijack a user's browser session, or steal user data. This thesis examines methods for detecting XSS vulnerabilities in web applications. If vulnerabilities can be detected in time, they can be fixed before they are exploited in attacks. The thesis reviews and compares XSS vulnerability detection methods. The study is conducted as a literature review. The methods are divided into three categories based on the analysis technique they use: static analysis, dynamic analysis, and hybrid analysis. The reviewed methods can detect XSS vulnerabilities in web applications accurately and comprehensively. The newest methods make use of deep learning, reinforcement learning, and genetic algorithms. The methods' ability to detect vulnerabilities has improved continuously since 2015. The methods are advanced, but they are often limited to specific environments or programming languages. In addition, many of the methods can detect only one type of XSS vulnerability. Finding solutions to these problems would be a worthwhile direction for future work.
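The survey's categories are methodological, but the core dynamic-analysis idea is simple: inject a marker payload into an input and classify how the response reflects it. The payload and function below are a deliberately simplified illustration, not taken from any of the surveyed methods:

```python
import html

# Hypothetical probe payload; real scanners try many encodings and contexts.
PAYLOAD = '<script>alert("xss-probe")</script>'

def classify_reflection(body: str, payload: str = PAYLOAD) -> str:
    """Classify how a probe payload comes back in an HTTP response body."""
    if payload in body:
        return "vulnerable"      # reflected verbatim: likely reflected XSS
    if html.escape(payload, quote=False) in body:
        return "escaped"         # entity-encoded on output: handled safely
    return "not-reflected"
```

A real dynamic scanner would also track where in the DOM the reflection lands, since a payload that is harmless in one context (e.g. inside a text node) can execute in another (e.g. inside an attribute or script block).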

    Clarity: Analysing security in web applications

    The rapid rise in businesses moving online has made e-commerce web applications an increasingly common target for hackers. This paper proposes Clarity, a dynamic black-box vulnerability scanner capable of detecting Cross-Site Scripting, SQL Injection, HTTP Response Splitting, and Session Management vulnerabilities in web applications. The tool employs Mechanize and Selenium for the majority of its web scraping requirements. Clarity was tested against 50 e-commerce web applications, uncovering Session Management flaws as the most prevalent vulnerability, with 36 of the 50 applications affected.
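The paper does not detail Clarity's individual checks; as a minimal sketch of one kind of session-management test such a scanner might run (function name and rules are hypothetical), the code below audits a Set-Cookie header for common hardening attributes:

```python
def audit_session_cookie(set_cookie_header: str) -> list[str]:
    """Report session-hardening attributes missing from a Set-Cookie header."""
    # Attribute names follow the cookie value, separated by semicolons.
    attrs = {part.strip().split("=", 1)[0].lower()
             for part in set_cookie_header.split(";")[1:]}
    findings = []
    if "secure" not in attrs:
        findings.append("missing Secure flag")
    if "httponly" not in attrs:
        findings.append("missing HttpOnly flag")
    if "samesite" not in attrs:
        findings.append("missing SameSite attribute")
    return findings
```

Checks like this are cheap because they operate on response headers alone, without rendering the page.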

    Cyber Threat Intelligence-Based Malicious URL Detection Model Using Ensemble Learning

    Web applications have become ubiquitous across many business sectors due to their platform independence and low operating cost. Billions of users visit these applications to accomplish their daily tasks. However, many of these applications are either vulnerable to web defacement attacks or created and managed by hackers, as with fraudulent and phishing websites. Detecting malicious websites is essential to prevent the spread of malware and protect end-users from becoming victims. Most existing solutions, however, rely on extracting features from a website's content, which can be harmful to the detection machines themselves and is subject to obfuscation. Detecting malicious Uniform Resource Locators (URLs) is safer and more efficient than content analysis, but URL-based detection is still not well addressed due to insufficient features and inaccurate classification. This study aims to improve detection accuracy by designing and developing a cyber threat intelligence (CTI)-based malicious URL detection model using two-stage ensemble learning. Reports from cybersecurity analysts and users around the globe provide important information about malicious websites; therefore, CTI-based features extracted from Google searches and Whois lookups are used to improve detection performance. The study also proposes a two-stage ensemble learning model that combines the random forest (RF) algorithm for pre-classification with a multilayer perceptron (MLP) for final decision making. The trained MLP classifier replaces the majority-voting scheme of the three trained random forest classifiers: the probabilistic outputs of the forests' weak classifiers are aggregated and used as input to the MLP for the final classification. Results show that the extracted CTI-based features with two-stage classification outperform other studies' detection models. The proposed CTI-based detection model achieved a 7.8% accuracy improvement and a 6.7% reduction in false-positive rate compared with the traditional URL-based model.
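The paper's CTI feature set (search and Whois signals) is not reproduced here, but the two-stage idea itself, aggregating the forests' class probabilities and letting an MLP make the final call instead of majority voting, can be sketched with scikit-learn on synthetic data. All data and hyperparameters below are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder stand-in for the CTI feature matrix.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: three random forests; keep class probabilities, not hard votes.
forests = [RandomForestClassifier(n_estimators=50, random_state=s).fit(X_tr, y_tr)
           for s in (1, 2, 3)]
meta_tr = np.hstack([f.predict_proba(X_tr) for f in forests])
meta_te = np.hstack([f.predict_proba(X_te) for f in forests])

# Stage 2: an MLP replaces majority voting for the final decision.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(meta_tr, y_tr)
acc = mlp.score(meta_te, y_te)
```

In practice the meta-learner would usually be fit on out-of-fold probabilities rather than the forests' own training-set outputs, to avoid leaking the first stage's training fit into the second.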

    Multi-modal Features Representation-based Convolutional Neural Network Model for Malicious Website Detection

    Web applications have proliferated across business sectors, serving as essential tools for billions of users in their daily activities. However, many of these applications are malicious, posing a major threat to Internet users: they can steal sensitive information, install malware, and propagate spam. Detecting malicious websites by analyzing web content is difficult due to the complexity of extracting representative features, the huge data volumes, the evolving and stealthy nature of malicious patterns, and the limitations of traditional classifiers. Uniform Resource Locator (URL) features, by contrast, are static and can often provide immediate insight about a website without loading its content, but relying solely on lexical URL features proves insufficient and can lead to inaccurate classification. This study proposes a multimodal representation approach that fuses textual and image-based features to improve malicious website detection. Textual features help the deep learning model understand and represent detailed semantic information related to attack patterns, while image features are effective at recognizing more general malicious patterns; patterns hidden in textual form may thus become recognizable in image form. Two Convolutional Neural Network (CNN) models were constructed to extract hidden features from the textual and image-represented inputs. The output layers of both models were combined and used as input to an artificial neural network classifier for decision making. Results show the effectiveness of the proposed model compared to other models.
The overall performance in terms of Matthews..
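The abstract does not specify how URLs are rendered as images; one common byte-level scheme, shown here purely as an assumed illustration, maps each character to a pixel intensity and pads to a fixed-size grid that a CNN branch could consume:

```python
def url_to_grid(url: str, size: int = 16) -> list[list[int]]:
    """Encode a URL as a size x size grayscale grid of byte values (0-255),
    truncating or zero-padding so every URL yields the same shape."""
    codes = [min(ord(c), 255) for c in url[: size * size]]
    codes += [0] * (size * size - len(codes))
    return [codes[r * size : (r + 1) * size] for r in range(size)]
```

The fixed shape is what makes the representation CNN-friendly: every URL, regardless of length, becomes the same-sized input tensor.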

    Website Vulnerability Analysis of AB and XY Office in East Java

    This study aims to analyze and identify existing security vulnerabilities on the AB and XY Service websites in East Java. Its contribution is a deeper understanding of specific types of security vulnerabilities and their impact on website security. The research method involves data scanning, vulnerability analysis, and brute-force experiments. A total of two samples, the AB and XY Service websites, were analyzed to identify existing vulnerabilities. It should be noted, however, that this method has several limitations. First, the sample is limited to the AB and XY Service websites in East Java, so generalizing the results to other websites must be done with care. Second, the statistical analysis covers only descriptive statistics, so the study does not yet investigate relationships between variables. Nevertheless, the results reveal weaknesses and vulnerabilities on the AB and XY Service websites that need to be corrected. Several findings concern website configuration problems and inadequate vulnerability handling. By highlighting specific vulnerabilities, this research provides a deeper understanding of the security threats faced by the AB and XY Service websites. In the context of website security, this research has important implications: by understanding the existing vulnerabilities, appropriate remediation steps can be taken to protect sensitive data and improve security overall. In summary, this research identifies and analyzes security vulnerabilities on the AB and XY Service websites and provides a more specific understanding of the types of vulnerabilities present. Despite its methodological limitations, the study still offers valuable insight into website security and can serve as a basis for more effective security fixes and better data protection on both websites.