48 research outputs found

    An Automated Methodology for Validating Web Related Cyber Threat Intelligence by Implementing a Honeyclient

    This work contributes to the field of cyber defense by offering an alternative way to keep a threat intelligence database up to date. Websites are exploited as a vehicle for delivering malicious code to victims. Once a website is classified as malicious, it is added to the threat intelligence database as a malicious indicator. Over time such databases grow large and accumulate outdated entries. The solution is to automate the checking of outdated entries with honeyclient software, and the whole process can be fully automated to save time. Hunting only verified and confirmed indicators helps avoid handling cybersecurity incidents on false grounds.

    This paper contributes to the open-source cybersecurity community by providing an alternative methodology for analyzing web-related cyber threat intelligence. Websites are commonly used as an attack vector to spread malicious content crafted by any malicious party. These websites become threat intelligence that can be collected and stored in corresponding databases. Eventually these cyber threat databases become obsolete and can lead to false-positive investigations in cyber incident response. The solution is to keep the threat indicator entries valid by verifying their content, and this process can be fully automated to make it less time-consuming. The proposed technical solution is a low-interaction honeyclient regularly tasked with verifying the content of web-based threat indicators. Despite the huge number of database entries, most web-based threat indicators can in this way be validated automatically with little time expenditure, kept relevant for monitoring purposes, and ultimately help avoid false positives in incident response processes.
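    The revalidation loop described above can be sketched as follows. The `classify` callback is a hypothetical stand-in for the honeyclient's verdict (e.g. Thug deciding that a URL still serves malicious content); all names and fields here are illustrative, not taken from the thesis.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Indicator:
    url: str
    last_verified: Optional[datetime] = None
    active: bool = True

def revalidate(indicators, classify):
    """Re-check each web-based indicator with a honeyclient verdict
    function; entries no longer serving malicious content are marked stale."""
    still_active, stale = [], []
    for ind in indicators:
        ind.last_verified = datetime.now(timezone.utc)
        if classify(ind.url):          # honeyclient says "still malicious"
            ind.active = True
            still_active.append(ind)
        else:                          # content gone or cleaned up
            ind.active = False
            stale.append(ind)
    return still_active, stale
```

    Stale entries can then be retired or re-queued instead of triggering false-positive incident handling.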

    Design and Implementation of Virtual Client Honeypot

    Abstract: Computer security has become a major issue in many organizations. There are various solutions that respond to this need, but they remain insufficient to truly secure networks. A honeypot is a resource used in computer and Internet security that is intended to be attacked and compromised in order to gain more information about attackers and their attack techniques. Compared to an intrusion detection system, honeypots have the big advantage that they do not generate false alerts, because no production components run on the system and all traffic to it is therefore suspicious. A client honeypot is a honeypot that actively searches for malicious sites on the web. In this paper, we design and implement a virtual client honeypot to collect Internet malware.
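    The collection step such a client honeypot performs can be sketched minimally: store each captured payload under its cryptographic hash, so that samples gathered repeatedly or by several sensors are deduplicated. The function name and on-disk layout are illustrative, not from the paper.

```python
import hashlib
import os

def store_sample(payload: bytes, repo_dir: str) -> str:
    """Save a captured payload under its SHA-256 digest; identical
    samples collected by different sensors collapse to one file."""
    digest = hashlib.sha256(payload).hexdigest()
    path = os.path.join(repo_dir, digest)
    if not os.path.exists(path):       # dedupe across sensors and runs
        with open(path, "wb") as f:
            f.write(payload)
    return digest
```

    Content-addressed storage like this is a common convention in malware repositories, since the hash doubles as a stable sample identifier.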

    Internet Sensor Grid: Experiences with Passive and Active Instruments

    The Internet is constantly evolving, with new emergent behaviours arising, some of them malicious. This paper discusses opportunities and research directions for an Internet sensor grid for malicious-behaviour detection, analysis and countermeasures. We use two example sensors as a basis: first, the honeyclient for malicious server and content identification (i.e. drive-by downloads, the most prevalent attack vector for client systems), and second, the network telescope for detecting Internet Background Radiation (IBR: unsolicited, non-productive traffic that traverses the Internet, often malicious in nature or origin). Large amounts of security data can be collected from such sensors for analysis, and federating honeyclient and telescope data provides a worldwide picture of attacks that could enable the provision of countermeasures. In this paper we outline some experiences with these sensors and with analyzing network telescope data through Grid computing as part of an “intelligence layer” within the Internet.

    Using Context to Improve Network-based Exploit Kit Detection

    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam: the attacker wants to lure an unsuspecting client into a trap to steal private information or resources, generating tens of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without the user's knowledge. Sadly, it takes only a single successful infection to ruin a user's financial life, or to lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis, or honeyclient analysis, of these exploits plays a key role in identifying new attacks for signature generation, but it provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late or not at all, resulting in high false positive and false negative (error) rates. To deal with these drawbacks, three new detection approaches are presented. To address the high number of errors, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers.
This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and the results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. Consequently, a new framework is proposed that applies dynamic honeyclient analysis directly to network traffic at scale. The framework captures and stores a configurable window of reassembled HTTP objects network-wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolve over a one-year period. The empirical evaluation suggests that the approach offers significant operational value and that a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected. The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches utilize the structure of a domain name and require hundreds of DNS messages to detect such malware. As a result, these detection mechanisms cannot detect malware in a timely manner and are susceptible to high error rates.
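    A much-simplified sketch of the tree idea, assuming HTTP transactions have been reassembled into (url, referrer, label) triples: build the request tree from Referer links, then check whether a known exploit-kit trace appears as a path in it. The real system performs indexed subtree *similarity* search over a large trace corpus; the exact path matching below is only a toy stand-in.

```python
from collections import defaultdict

def build_tree(transactions):
    """transactions: (url, referrer, label) triples reassembled from HTTP.
    Returns a children map keyed by URL plus a per-URL label lookup."""
    children, labels = defaultdict(list), {}
    for url, ref, label in transactions:
        labels[url] = label
        if ref is not None:            # Referer header links child to parent
            children[ref].append(url)
    return children, labels

def has_path(children, labels, root, trace):
    """Depth-first check: does some root-to-descendant path spell out
    the label sequence of a known exploit-kit trace?"""
    if labels.get(root) != trace[0]:
        return False
    if len(trace) == 1:
        return True
    return any(has_path(children, labels, c, trace[1:]) for c in children[root])
```

    A landing page that redirects to an exploit page and then drops a binary would match a trace such as `["html", "html", "flash", "exe"]`, while ordinary browsing trees would not.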
The final technique, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as few as four DNS messages, and with orders-of-magnitude reductions in error rates. The results of this work can make a significant operational impact on network security analysis and open several exciting future directions for network security research. Doctor of Philosophy.
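    Wald's sequential probability ratio test, the classical form of sequential hypothesis testing, is easy to sketch over a stream of binary DNS observations. The per-message probabilities and error targets below are invented for illustration and are not the dissertation's values.

```python
import math

def sprt(observations, p0=0.05, p1=0.6, alpha=0.001, beta=0.001):
    """Wald's sequential probability ratio test over binary DNS observations
    (1 = suspicious pattern, e.g. an NXDOMAIN reply). p0/p1 are the assumed
    benign/infected rates; alpha/beta are the target error probabilities."""
    upper = math.log((1 - beta) / alpha)   # cross this: accept "infected"
    lower = math.log(beta / (1 - alpha))   # cross this: accept "benign"
    llr = 0.0
    for n, x in enumerate(observations, 1):
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "infected", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(observations)
```

    Because each suspicious message contributes a large log-likelihood increment when p1 is much larger than p0, the test can reach a confident verdict after only a handful of messages, which is exactly the property the technique exploits.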

    I see EK: A lightweight technique to reveal exploit kit family by overall URL patterns of infection chains

    The prevalence and ever-evolving technical sophistication of exploit kits (EKs) is one of the most challenging shifts in the modern cybercrime landscape. Over the last few years, malware infections via drive-by download attacks have been orchestrated through EK infrastructures. Malicious advertisements and compromised websites redirect victim browsers to web-based EK families that are assembled to exploit client-side vulnerabilities and finally deliver evil payloads. A key observation is that while webpage contents differ drastically between distinct intrusions executed through the same EK, the patterns in URL addresses stay similar, because URLs autogenerated by EK platforms follow specific templates. This practice enables the development of an efficient system capable of classifying the responsible EK instances. This paper proposes novel URL features and a new technique to quickly categorize EK families with high accuracy using machine learning algorithms. Rather than analyzing each URL individually, the proposed overall-URL-patterns approach automatically examines all URLs associated with an EK infection. The method has been evaluated on a popular, publicly available dataset containing 240 different real-world infection cases involving over 2,250 URLs, the incidents being linked to the four major EK flavors observed throughout 2016. The system achieves up to 100% classification accuracy with the tested estimators.
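    The overall-URL-patterns idea can be illustrated with a toy feature extractor: compute simple features for each URL and average them across the whole infection chain, so a downstream classifier sees the chain's overall pattern rather than one address. The specific features below are illustrative guesses, not the paper's feature set.

```python
from urllib.parse import urlparse

def url_features(url):
    """Toy per-URL features in the spirit of the paper: path depth,
    mean path-token length, digit ratio, and query-parameter count."""
    p = urlparse(url)
    tokens = [t for t in p.path.split("/") if t]
    digits = sum(c.isdigit() for c in url)
    return {
        "depth": len(tokens),
        "mean_token_len": sum(map(len, tokens)) / len(tokens) if tokens else 0.0,
        "digit_ratio": digits / len(url),
        "params": len(p.query.split("&")) if p.query else 0,
    }

def chain_features(urls):
    """Average the per-URL features over every URL in one infection chain,
    giving a single vector that captures the chain's overall URL pattern."""
    feats = [url_features(u) for u in urls]
    return {k: sum(f[k] for f in feats) / len(feats) for k in feats[0]}
```

    Feeding such chain-level vectors to any standard estimator (decision tree, random forest, etc.) reproduces the shape of the pipeline the paper describes, though not its exact features or accuracy.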

    ANALYSIS OF CLIENT-SIDE ATTACKS THROUGH DRIVE-BY HONEYPOTS

    Client-side cyberattacks on Web browsers are becoming more common relative to server-side cyberattacks. This work tested the ability of the honeyclient (decoy client) software Thug to detect malicious or compromised servers that secretly download malicious files to clients, and to classify what it downloaded. Prior to using Thug, we did TCP/IP fingerprinting to assess Thug’s ability to impersonate different Web browsers, and we created our own malicious Web server with some drive-by exploits to verify Thug’s functions; Thug correctly identified 85 out of 86 exploits from this server. We then tested Thug’s analysis of delivered exploits from two sets of real Web servers: one set was obtained from random Internet addresses of Web servers, and the other came from a commercial blacklist. The rates of malicious activity on 37,415 random websites and 83,667 blacklisted websites were 5.6% and 1.15%, respectively. Thug’s interaction with the blacklisted Web servers found 163 unique malware files. We demonstrated the usefulness and efficiency of client-side honeypots in analyzing harmful data presented by malicious websites. These honeypots can help government and industry defenders proactively identify suspicious Web servers and protect users. OUSD(R&E). Outstanding Thesis. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
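    One coarse signal a low-interaction client can check is caricatured below: a response that pushes executable content at a client that asked for a web page. A real honeyclient such as Thug emulates the DOM, browser personalities, and plugins, so this single heuristic is only a hedged sketch, not Thug's detection logic.

```python
def looks_like_driveby(headers: dict, body: bytes) -> bool:
    """Very rough stand-in for a honeyclient verdict: flag responses that
    serve executable content where an HTML page was requested."""
    ctype = headers.get("Content-Type", "").lower()
    if ctype in ("application/x-msdownload", "application/octet-stream"):
        return True
    return body[:2] == b"MZ"          # Windows PE magic bytes
```

    Running a check like this over thousands of candidate servers, and tallying hits, mirrors how the thesis derives its malicious-activity rates from bulk scans.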

    Web感染型攻撃における潜在的特徴の解析法 (A Method for Analyzing Latent Features of Web-Based Infection Attacks)

    Waseda University degree number: Shin 7789. Waseda University.

    Analyzing and Defending Against Evolving Web Threats

    The browser has evolved from a simple program that displays static web pages into a continuously changing platform that is shaping the Internet as we know it today. The fierce competition among browser vendors has led to the introduction of a plethora of features in the past few years. At the same time, the browser remains the de facto way to access the Internet for billions of users. Because of such rapid evolution and wide popularity, the browser has attracted attackers, who pose new threats to unsuspecting Internet surfers. In this dissertation, I present my work on securing the browser against current and emerging threats. First, I discuss my work on honeyclients, which are tools that identify malicious pages that compromise the browser, and how one can evade such systems. Then, I describe a new system that I built, called Revolver, that automatically tracks the evolution of JavaScript and is capable of identifying evasive web-based malware by finding similarities in JavaScript samples with different classifications. Finally, I present Hulk, a system that automatically analyzes and classifies browser extensions.
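    Revolver's core intuition, finding similarity between JavaScript samples despite renaming and re-obfuscation, can be sketched by normalizing identifiers, numbers and string literals before comparing token streams. This is a toy approximation only; the actual system compares abstract syntax trees, not flat token lists.

```python
import difflib
import re

def normalize(js: str) -> list:
    """Collapse identifiers, numbers and strings so two samples that differ
    only in renaming or literal changes map to similar token streams."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\"[^\"]*\"|'[^']*'|\S", js)
    keywords = {"var", "function", "return", "if", "else", "for", "while", "new"}
    out = []
    for t in tokens:
        if t in keywords:
            out.append(t)                              # keep structure words
        elif re.fullmatch(r"[A-Za-z_]\w*", t):
            out.append("ID")                           # any identifier
        elif re.fullmatch(r"\d+", t):
            out.append("NUM")                          # any number
        elif t[0] in "\"'":
            out.append("STR")                          # any string literal
        else:
            out.append(t)                              # punctuation as-is
    return out

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; high values suggest one sample evolved from the other."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()
```

    Two shellcode loaders that differ only in variable names and the encoded payload normalize to identical streams, which is the kind of cross-classification similarity Revolver exploits.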

    Malware Distributed Collection And Pre-classification System Using Honeypot Technology

    Malware has become a major threat in recent years due to the ease of spreading through the Internet. Malware detection has become difficult with the use of compression, polymorphic methods, and techniques for detecting and disabling security software. These and other obfuscation techniques pose a problem for detection and classification schemes that analyze malware behavior. In this paper we propose a distributed architecture to improve malware collection, using different honeypot technologies to increase the variety of malware collected. We also present a daemon tool developed to grab malware distributed through spam, and a pre-classification technique that uses antivirus technology to separate malware into generic classes. © 2009 SPIE.
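    The pre-classification step using antivirus labels might look like the following sketch. The dotted label format and the resulting class names are assumptions for illustration, since vendors format detection names differently.

```python
from collections import defaultdict

def generic_class(av_label: str) -> str:
    """Reduce a vendor label such as 'Worm.Win32.AutoRun.gen' to a coarse
    generic class ('worm', 'trojan', ...) usable for pre-classification."""
    return av_label.split(".", 1)[0].strip().lower() or "unknown"

def preclassify(samples):
    """samples: (sha256, av_label) pairs -> mapping of class -> sample hashes."""
    groups = defaultdict(list)
    for digest, label in samples:
        groups[generic_class(label)].append(digest)
    return dict(groups)
```

    Grouping collected samples into such coarse classes before any deeper behavioral analysis is the role the paper assigns to its antivirus-based pre-classifier.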