
    Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities

    Vulnerable web applications fundamentally undermine website security, as they often expose the critical infrastructure and sensitive information behind them to potential risks and threats. Web applications with unrestricted file upload vulnerabilities allow attackers to upload a file containing malicious code, which can later be executed on the server to enable attacks such as information exfiltration, spamming, phishing, and malware distribution. This dissertation presents our research in building two novel frameworks to detect server-side applications vulnerable to unrestricted file uploading attacks. We design an innovative model that holistically characterizes both data and control flows using a graph-based data structure. Such a model facilitates critical program analysis mechanisms such as static analysis and constraint modeling. We build an interpreter to model a web program by symbolically interpreting its abstract syntax tree (AST). Our research has led to three complementary systems that can effectively detect unrestricted file uploading vulnerabilities. The first system, namely UChecker, leverages satisfiability modulo theories to perform detection, whereas the second system, namely UFuzzer, detects such vulnerabilities by intelligently synthesizing code snippets and performing fuzzing. We also propose a third system, namely UGraph, to mitigate the path explosion challenge that the previous two systems suffer from and to enable a computationally efficient model generation process for large programs. We have deployed all of our systems to scan many server-side applications. They have identified 49 vulnerable PHP-based web applications that were previously unknown, including 11 CVEs.
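
    To make the SMT-based detection idea concrete, the following is a minimal, hypothetical Python sketch using the z3 solver: it asks whether an uploaded filename can simultaneously pass an assumed weak extension blacklist and still map to an executable PHP handler. The constraints are illustrative assumptions and do not reproduce UChecker's actual vulnerability model.

```python
# Hypothetical sketch of SMT-based reasoning about a weak upload filter.
# The constraint shapes are assumptions, not UChecker's real model.
from z3 import String, Solver, SuffixOf, StringVal, Not, Or, sat

filename = String("uploaded_filename")

solver = Solver()
# Assumed weak validation: the application only rejects names ending in ".php".
solver.add(Not(SuffixOf(StringVal(".php"), filename)))
# Attacker goal: the stored file is still handled by the PHP interpreter,
# e.g. via alternative extensions the web server maps to the PHP handler.
solver.add(Or(SuffixOf(StringVal(".php5"), filename),
              SuffixOf(StringVal(".phtml"), filename)))

if solver.check() == sat:
    # A satisfying assignment is a concrete filename that bypasses the filter.
    print("Potential bypass, witness:", solver.model()[filename])
else:
    print("No bypass found under this model.")
```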

    A Detection of Malware Embedded into Web Pages Using Client Honeypot

    In today’s Internet world, web pages face a severe threat from client-side browser attacks. Attacks that exploit vulnerabilities in client-side applications have become a major threat to web pages: malware spreads by exploiting software vulnerabilities in the client applications that send requests to servers. The detection proposed here is based on a client honeypot, which detects malicious programs linked to web pages. Client honeypots are active security devices that search for malicious servers attacking clients. A client honeypot poses as a client and interacts with the server to examine whether an attack has occurred. Client honeypots often focus on web browsers, but any client that interacts with servers can be part of a client honeypot. In this research paper, we propose a model for detecting malware embedded in web pages using a client honeypot.
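
    As a rough illustration of the workflow described above, the Python sketch below visits a URL the way a low-interaction client honeypot might and flags page content matching simple heuristics for embedded malicious code. The heuristics and the target URL are assumptions made for illustration, not the paper's detection logic.

```python
# Illustrative low-interaction client honeypot: fetch a page as a client and
# report which simple heuristics for embedded malicious code it triggers.
import re
from typing import List

import requests

SUSPICIOUS_PATTERNS = [
    r"eval\s*\(\s*unescape\s*\(",                  # classic obfuscation idiom
    r"document\.write\s*\(\s*unescape\s*\(",
    r"<iframe[^>]+(width|height)\s*=\s*['\"]?0",   # hidden iframe injection
]

def visit(url: str) -> List[str]:
    """Fetch a page like a client would and return the heuristics it matches."""
    html = requests.get(url, timeout=10).text
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, html, re.IGNORECASE)]

if __name__ == "__main__":
    hits = visit("http://example.com/")  # placeholder target
    print("suspicious indicators:", hits or "none")
```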

    An investigation into server-side static and dynamic web content survivability using a web content verification and recovery (WCVR) system

    Malicious web content manipulation software can be used to tamper with any type of web content (e.g., text, images, video, audio, and objects), and as a result organisations are vulnerable to data loss. In addition, several security incident reports from emergency response teams such as CERT and AusCERT clearly demonstrate that the available security mechanisms have not made system break-ins impossible. Therefore, ensuring web content integrity against unauthorised tampering has become a major issue. This thesis investigates the survivability of server-side static and dynamic web content using the Web Content Verification and Recovery (WCVR) system. We have developed a novel security system architecture which provides mechanisms to address known security issues, such as violations of data integrity, that arise in tampering attacks. We propose a real-time web security framework consisting of a number of components that can be used to verify server-side static and dynamic web content, and to recover the original web content if the requested web content has been compromised. A conceptual model to extract the client interaction elements, and a strategy to utilise the hashing performance, have been formulated in this research work. A prototype of the solution has been implemented, and experimental studies have been carried out to address the security and performance objectives. The results indicate that the WCVR system can provide tamper detection and recovery for server-side static and dynamic web content. We have also shown that the overhead of the verification and recovery processes is relatively low, and that the WCVR system can efficiently and correctly determine whether the web content has been tampered with.
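
    The verify-then-recover mechanism can be pictured with a small sketch, assuming a simple scheme in which a trusted hash store and a clean backup copy of each content file exist. The function and path names below are hypothetical and are not taken from the WCVR implementation.

```python
# Minimal sketch of hash-based verification with recovery from a clean backup.
# Names and the storage scheme are illustrative assumptions.
import hashlib
import shutil
from pathlib import Path

HASH_STORE = {}  # path -> trusted SHA-256 digest, populated at deployment time

def register(path: Path) -> None:
    """Record the trusted digest of a web content file."""
    HASH_STORE[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()

def verify_and_recover(path: Path, backup_dir: Path) -> bool:
    """Return True if the content is intact; otherwise restore it from backup."""
    current = hashlib.sha256(path.read_bytes()).hexdigest()
    if current == HASH_STORE.get(str(path)):
        return True
    shutil.copy2(backup_dir / path.name, path)  # recover the original content
    return False
```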

    Adaptive Traffic Fingerprinting for Darknet Threat Intelligence

    Darknet technology such as Tor has been used by various threat actors for organising illegal activities and data exfiltration. As such, there is a case for organisations to block such traffic, or to try to identify when it is used and for what purposes. However, anonymity in cyberspace has always been a domain of conflicting interests. While it gives nefarious actors enough power to disguise their illegal activities, it is also the cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable Cyber Threat Intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm attempts to reduce the candidate data set by eliminating client-side traffic that is most unlikely to be responsible for the server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes received by the Tor client. Using simulation data generated by Shadow, we show that the detection scheme is effective, with a false positive rate of 0.001, while the sensitivity for detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations.
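
    As a loose illustration of the candidate-elimination step, the sketch below assumes a cooperating server pads its responses to a known size pattern, and observed client-side flows that never exhibit that pattern are dropped from the candidate set. The pattern, tolerance, and matching logic are assumptions that greatly simplify the paper's algorithm.

```python
# Illustrative candidate elimination: keep only client flows whose observed
# transfer sizes could correspond to deliberately manipulated server responses.
from typing import Dict, List

EXPECTED_PATTERN = [16384, 4096, 16384]  # assumed padded response sizes (bytes)

def matches_pattern(flow_sizes: List[int], pattern: List[int], tol: int = 512) -> bool:
    """Check whether the pattern appears as a contiguous run within the flow."""
    n, m = len(flow_sizes), len(pattern)
    for i in range(n - m + 1):
        if all(abs(flow_sizes[i + j] - pattern[j]) <= tol for j in range(m)):
            return True
    return False

def reduce_candidates(candidates: Dict[str, List[int]]) -> List[str]:
    """Drop clients whose traffic could not carry the manipulated responses."""
    return [client for client, sizes in candidates.items()
            if matches_pattern(sizes, EXPECTED_PATTERN)]
```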

    The zombies strike back: Towards client-side beef detection

    A web browser is an application that comes bundled with every consumer operating system, on both desktop and mobile platforms. A modern web browser is complex software that has access to system-level features, includes various plugins, and requires the availability of an Internet connection. Like any multifaceted software product, web browsers are prone to numerous vulnerabilities. Exploitation of these vulnerabilities can result in destructive consequences ranging from identity theft to network infrastructure damage. BeEF, the Browser Exploitation Framework, allows taking advantage of these vulnerabilities to launch a diverse range of readily available attacks from within the browser context. Existing defensive approaches aimed at hardening network perimeters and detecting common threats through traffic analysis have not proved successful in the context of BeEF detection. This paper presents a proof-of-concept approach to BeEF detection in its own operating environment – the web browser – based on global context monitoring, abstract syntax tree fingerprinting, and real-time network traffic analysis.
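
    A much-simplified flavour of the script-fingerprinting idea can be sketched as follows: normalise candidate scripts (here with a crude comment-and-whitespace pass rather than a full AST) and compare their digests against fingerprints of previously observed BeEF hook code. The fingerprint set and normalisation below are assumptions, not the authors' method.

```python
# Simplified script fingerprinting: normalise JavaScript source and compare
# its digest against digests of previously observed BeEF hook variants.
import hashlib
import re

KNOWN_HOOK_FINGERPRINTS = {
    "<md5 of a previously observed hook.js variant>",  # placeholder entries
}

def normalise(js_source: str) -> str:
    """Strip comments and collapse whitespace so trivial edits keep the digest stable."""
    no_comments = re.sub(r"//[^\n]*|/\*.*?\*/", "", js_source, flags=re.DOTALL)
    return re.sub(r"\s+", " ", no_comments).strip()

def is_known_hook(js_source: str) -> bool:
    """Return True if the normalised script matches a known hook fingerprint."""
    digest = hashlib.md5(normalise(js_source).encode()).hexdigest()
    return digest in KNOWN_HOOK_FINGERPRINTS
```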

    BlackWatch: increasing attack awareness within web applications

    Web applications are relied upon by many for the services they provide. It is essential that applications implement appropriate security measures to prevent security incidents. Currently, web applications focus resources on the preventative side of security. Whilst prevention is an essential part of the security process, developers must also implement a level of attack awareness into their web applications. Being able to detect when an attack is occurring gives applications the ability to execute responses against malicious users in an attempt to slow down or deter their attacks. This research seeks to improve web application security by identifying malicious behaviour from within the context of web applications using our tool, BlackWatch. The tool is a Python-based application which analyses suspicious events occurring within client web applications, with the objective of identifying malicious patterns of behaviour. This approach avoids issues typically encountered with traditional web application firewalls. Based on the results from a preliminary study, BlackWatch was effective at detecting attacks from both authenticated and unauthenticated users. Furthermore, user tests with developers indicated that BlackWatch was user-friendly and easy to integrate into existing applications. Future work seeks to develop the BlackWatch solution further for public release.
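
    A minimal, hypothetical sketch of the in-application attack-awareness idea follows: the application reports suspicious events per user, and crossing a per-event threshold triggers a defensive response. Event names, thresholds, and responses are illustrative and are not BlackWatch's actual detection points.

```python
# Illustrative in-application attack awareness: count suspicious events per
# user and return a response action once a threshold is reached.
from collections import defaultdict
from typing import Dict, Optional, Tuple

THRESHOLDS = {"failed_login": 5, "forced_browsing": 3, "input_violation": 10}
_event_counts: Dict[Tuple[str, str], int] = defaultdict(int)

def report_event(user_id: str, event: str) -> Optional[str]:
    """Record a suspicious event; return a response action if a threshold is reached."""
    _event_counts[(user_id, event)] += 1
    if _event_counts[(user_id, event)] >= THRESHOLDS.get(event, float("inf")):
        return "lock_account" if event == "failed_login" else "alert_admin"
    return None

# Example: a login handler could call report_event(user, "failed_login") on each
# failed attempt and act on the returned response action.
```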

    Hyp3rArmor: reducing web application exposure to automated attacks

    Web applications (webapps) are constantly subjected to automated, opportunistic attacks from autonomous robots (bots) engaged in reconnaissance to discover victims that may be vulnerable to specific exploits. This is typical behaviour found in botnet recruitment, worm propagation, large-scale fingerprinting, and vulnerability scanning. Most anti-bot techniques are deployed at the application layer, thus leaving the network stack of the webapp’s server exposed. In this paper we present a mechanism called Hyp3rArmor that addresses this vulnerability by minimizing the webapp’s attack surface exposed to automated opportunistic attackers, for JavaScript-enabled web browser clients. Our solution uses port knocking to eliminate the webapp’s visible network footprint. Clients of the webapp are directed to a visible static web server to obtain JavaScript that authenticates the client to the webapp server (using port knocking) before making any requests to the webapp. Our implementation of Hyp3rArmor, which is compatible with all webapp architectures, has been deployed and used to defend single- and multi-page websites on the Internet for 114 days. During this time period the static web server observed 964 attempted attacks that were deflected from the webapp, which was only accessed by authenticated clients. Our evaluation shows that in most cases client-side overheads were negligible and that server-side overheads were minimal. Hyp3rArmor is ideal for critical systems and legacy applications that must be accessible on the Internet. Additionally, Hyp3rArmor is composable with other security tools, adding an additional layer to a defense-in-depth approach. This work has been supported by the National Science Foundation (NSF) awards #1430145, #1414119, and #1012798.
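
    The port-knocking step can be sketched as below, assuming the client "knocks" on a secret sequence of TCP ports so that a firewall can open the webapp port for that source address. The port sequence and timing are assumptions; in Hyp3rArmor the knocking is driven from JavaScript served by the static web server, whereas this sketch is a standalone Python client for illustration.

```python
# Illustrative port-knocking client: send short connection attempts to a
# secret sequence of ports before using the protected service.
import socket
import time

KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret sequence

def knock(host: str, ports=KNOCK_SEQUENCE, delay: float = 0.2) -> None:
    """Send short TCP connection attempts to each knock port in order."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))  # the knock itself; refusal or timeout is fine
        except OSError:
            pass
        finally:
            s.close()
        time.sleep(delay)

# After knocking, the client can issue normal HTTP requests to the webapp port.
```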

    X-Secure: protecting users from big bad wolves

    In 2014, over 70% of people in Great Britain accessed the Internet every day. This resource is an optimal vector for malicious attackers to penetrate home computers and, as such, compromised pages have been increasing in both number and complexity. This paper presents X-Secure, a novel browser plug-in designed to raise the awareness of inexperienced users by analysing web pages before malicious scripts are executed by the host computer. X-Secure was able to detect over 90% of the tested attacks and provides a danger level based on cumulative analysis of the source code, the URL, and the remote server, using a set of heuristics, hence increasing the situational awareness of users browsing the Internet.
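
    The cumulative danger-level idea can be pictured with the following sketch, which scores the page source, the URL, and a server attribute separately and combines the results. The individual heuristics, weights, and thresholds are assumptions for illustration and are not X-Secure's rule set.

```python
# Illustrative cumulative danger scoring over page source, URL, and server.
import re
from urllib.parse import urlparse

def score_source(html: str) -> int:
    score = 0
    if re.search(r"eval\s*\(", html):
        score += 2                      # dynamic code evaluation
    if re.search(r"<iframe[^>]+display\s*:\s*none", html, re.I):
        score += 3                      # hidden iframe
    return score

def score_url(url: str) -> int:
    host = urlparse(url).hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2                      # raw IP address instead of a domain
    if len(url) > 120:
        score += 1                      # unusually long URL
    return score

def danger_level(html: str, url: str, server_https: bool) -> str:
    total = score_source(html) + score_url(url) + (0 if server_https else 1)
    return "high" if total >= 4 else "medium" if total >= 2 else "low"
```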