12 research outputs found

    Large Scale Activity Monitoring for distributed honeynets

    ISBN: 0-7695-2911-9. International audience. This paper proposes a new distributed monitoring approach based on the notion of the centrality of a graph and its evolution in time. We consider an activity profiling method for a distributed monitoring platform and illustrate its usage in two different target deployments. The first concerns the monitoring of a distributed honeynet, whilst the second deployment target is the monitoring of a large network telescope. The central concepts underlying our work are intersection graphs and a centrality-based locality statistic. These graphs have not been widely used in the field of network security. The advantage of this method is that aggregated activity data can be analyzed by considering the curve of the maximum locality statistic, and that important change-point moments are well identified.

    Activity Monitoring for large honeynets and network telescopes

    International audience. This paper proposes a new distributed monitoring approach based on the notion of the centrality of a graph and its evolution in time. We consider an activity profiling method for a distributed monitoring platform and illustrate its usage in two different target deployments. The first concerns the monitoring of a distributed honeynet, while the second deployment target is the monitoring of a large network telescope. The central concepts underlying our work are intersection graphs and a centrality-based locality statistic. These graphs have not been widely used in the field of network security. The advantage of this method is that aggregated activity data can be analyzed by considering the curve of the maximum locality statistic, and that important change-point moments are well identified.
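The maximum-locality curve described in the abstract above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the sensor/address-set window format, the simple 1-hop locality statistic, and all names are assumptions introduced here.

```python
from itertools import combinations

def intersection_graph(address_sets):
    """Build an intersection graph: one node per sensor, with an edge
    whenever two sensors observed at least one common source address."""
    edges = {name: set() for name in address_sets}
    for a, b in combinations(address_sets, 2):
        if address_sets[a] & address_sets[b]:
            edges[a].add(b)
            edges[b].add(a)
    return edges

def locality_statistic(edges, v):
    """Edge count of the subgraph induced by v and its neighbours
    (a simple 1-hop, centrality-style locality statistic)."""
    ball = edges[v] | {v}
    return sum(1 for a, b in combinations(ball, 2) if b in edges[a])

def max_locality_curve(windows):
    """One max-locality value per time window; spikes in this curve
    suggest change-point moments in the aggregated activity."""
    curve = []
    for address_sets in windows:
        edges = intersection_graph(address_sets)
        curve.append(max(locality_statistic(edges, v) for v in edges))
    return curve
```

A window where the sensors' observed address sets are disjoint yields a locality of 0; a window where they all overlap pairwise yields a triangle and a locality of 3, so the curve jumps at the change point.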

    A Methodology For Intelligent Honeypot Deployment And Active Engagement Of Attackers

    Thesis (Ph.D.) University of Alaska Fairbanks, 2012. The Internet has brought about tremendous changes in the way we see the world, allowing us to communicate at the speed of light and dramatically changing the face of business forever. Organizations are able to share their business strategies and sensitive or proprietary information across the globe in order to create a sense of cohesiveness. This ability to share information across the vastness of the Internet also allows attackers to exploit these different avenues to steal intellectual property or gather information vital to the national security of an entire nation. As technology advances to include more devices accessing an organization's network, and as more business is handled via the Internet, attackers' opportunities increase daily. Honeypots were created in response to this cyber warfare. Honeypots provide a technique to gather information about attackers performing reconnaissance on a network or device without the voluminous logs produced by the majority of intrusion detection systems. This research effort provides a methodology to dynamically generate context-appropriate honeynets. Administrators are able to modify the system to conform to the target environment and gather the information passively or through increasing degrees of active scanning. The information obtained while scanning the environment aids the administrator in creating a network topology and understanding the flux of devices in the network. This research continues the effort to defend an organization's networks against the onslaught of attackers.
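The core idea of generating a context-appropriate honeynet from scan results can be sketched as follows. This is a hypothetical illustration, not the thesis's system: the host-record format, the proportional-allocation rule, and all function names are assumptions.

```python
from collections import Counter

def profile_environment(observed_hosts):
    """Summarize scan results (gathered passively or via active probes)
    into a distribution of (OS, open-ports) host profiles."""
    return Counter((h['os'], tuple(sorted(h['ports']))) for h in observed_hosts)

def plan_honeynet(profile, budget):
    """Allocate a honeypot budget proportionally to the observed host
    profiles so the generated honeynet mirrors the target environment."""
    total = sum(profile.values())
    plan = []
    for (os_name, ports), count in profile.most_common():
        instances = max(1, round(budget * count / total))
        plan.append({'os': os_name, 'ports': list(ports), 'instances': instances})
    return plan
```

For example, a network scanned as three Windows hosts exposing SMB/RDP and one Linux host exposing SSH, with a budget of four honeypots, yields three Windows-profile instances and one Linux-profile instance.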

    Models and Measures for Correlation in Cyber-Insurance


    Darknet as a Source of Cyber Threat Intelligence: Investigating Distributed and Reflection Denial of Service Attacks

    Cyberspace has become a massive battlefield between computer criminals and computer security experts. In addition, large-scale cyber attacks have matured enormously and become capable of generating, in a prompt manner, significant interruptions and damage to Internet resources and infrastructure. Denial of Service (DoS) attacks are perhaps the most prominent and severe types of such large-scale cyber attacks. Furthermore, the existence of widely available encryption and anonymity techniques greatly increases the difficulty of the surveillance and investigation of cyber attacks. In this context, the availability of relevant cyber monitoring is of paramount importance. An effective approach to gather DoS cyber intelligence is to collect and analyze traffic destined to allocated, routable, yet unused Internet address space known as darknet. In this thesis, we leverage big darknet data to generate insights on various DoS events, namely, Distributed DoS (DDoS) and Distributed Reflection DoS (DRDoS) activities. First, we present a comprehensive survey of darknet. We primarily define and characterize darknet and indicate its alternative names. We further list other trap-based monitoring systems and compare them to darknet. In addition, we provide a taxonomy of darknet technologies and identify research gaps related to three main darknet categories: deployment, traffic analysis, and visualization. Second, we characterize darknet data. Such information could generate indicators of cyber threat activity as well as provide an in-depth understanding of the nature of its traffic. In particular, we analyze the distribution of darknet packets and the transport-, network-, and application-layer protocols they use, and pinpoint their resolved domain names. Furthermore, we identify their IP classes and destination ports as well as geolocate their source countries. We further investigate darknet-triggered threats. 
The aim is to explore darknet inferred threats and categorize their severities. Finally, we contribute by exploring the inter-correlation of such threats, by applying association rule mining techniques, to build threat association rules. Specifically, we generate clusters of threats that co-occur targeting a specific victim. Third, we propose a DDoS inference and forecasting model that aims at providing insights to organizations, security operators and emergency response teams during and after a DDoS attack. Specifically, this work strives to predict, within minutes, the attacks’ features, namely, intensity/rate (packets/sec) and size (estimated number of compromised machines/bots). The goal is to understand the future short-term trend of the ongoing DDoS attacks in terms of those features and thus provide the capability to recognize the current as well as future similar situations and hence appropriately respond to the threat. Further, our work aims at investigating DDoS campaigns by proposing a clustering approach to infer various victims targeted by the same campaign and predicting related features. To achieve our goal, our proposed approach leverages a number of time series and fluctuation analysis techniques, statistical methods and forecasting approaches. Fourth, we propose a novel approach to infer and characterize Internet-scale DRDoS attacks by leveraging the darknet space. Complementary to the pioneer work on inferring DDoS activities using darknet, this work shows that we can extract DoS activities without relying on backscattered analysis. The aim of this work is to extract cyber security intelligence related to DRDoS activities such as intensity, rate and geographic location in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks and the expectation maximization and k-means clustering techniques in an attempt to identify campaigns of DRDoS attacks. 
Finally, we conclude this work by providing some discussion and pinpointing directions for future work.
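The campaign-identification step described above clusters attack-feature vectors. As a rough sketch only, here is a plain k-means over assumed (rate, size) features; the thesis additionally uses expectation maximization, which is omitted, and the feature choice and all names here are illustrative assumptions.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over (rate, size) attack-feature vectors -- a
    stand-in for the clustering step that groups DoS flows into
    candidate campaigns targeting the same victims."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        centers = [
            tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```

Two well-separated groups of (rate, size) points converge to two clusters whose centers sit at the group means, each cluster then being read as one campaign.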

    Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots

    With the increasing connectivity of and reliance on computers and networks, important aspects of computer systems are under a constant threat. In particular, drive-by-download attacks have emerged as a new threat to the integrity of computer systems. Drive-by-download attacks are client-side attacks that originate from web servers that are visited by web browsers. As a vulnerable web browser retrieves a malicious web page, the malicious web server can push malware to a user's machine that can be executed without their notice or consent. The detection of malicious web pages that exist on the Internet is prohibitively expensive. It is estimated that approximately 150 million malicious web pages that launch drive-by-download attacks exist today. So-called high-interaction client honeypots are devices that are able to detect these malicious web pages, but they are slow and known to miss attacks. Detection of malicious web pages in these quantities with client honeypots would cost millions of US dollars. Therefore, we have designed a more scalable system called a hybrid client honeypot. It consists of lightweight client honeypots, the so-called low-interaction client honeypots, and traditional high-interaction client honeypots. The lightweight low-interaction client honeypots inspect web pages at high speed and forward only likely malicious web pages to the high-interaction client honeypot for a final classification. For the comparison of client honeypots and evaluation of the hybrid client honeypot system, we have chosen a cost-based evaluation method: the true positive cost curve (TPCC). It allows us to evaluate client honeypots against their primary purpose of identification of malicious web pages. We show that costs of identifying malicious web pages with the developed hybrid client honeypot systems are reduced by a factor of nine compared to traditional high-interaction client honeypots. 
The five main contributions of our work are: High-Interaction Client Honeypot - The first main contribution of our work is the design and implementation of a high-interaction client honeypot, Capture-HPC. It is an open-source, publicly available client honeypot research platform, which allows researchers and security professionals to conduct research on malicious web pages and client honeypots. Based on our client honeypot implementation and analysis of existing client honeypots, we developed a component model of client honeypots. This model allows researchers to agree on the object of study, allows for focus on specific areas within the object of study, and provides a framework for communication of research around client honeypots. True Positive Cost Curve - As mentioned above, we have chosen a cost-based evaluation method to compare and evaluate client honeypots against their primary purpose of identification of malicious web pages: the true positive cost curve. It takes into account the unique characteristics of client honeypots (speed, detection accuracy, and resource cost) and provides a simple, cost-based mechanism to evaluate and compare client honeypots in an operating environment. As such, the TPCC provides a foundation for improving client honeypot technology. The TPCC is the second main contribution of our work. Mitigation of Risks to the Experimental Design with HAZOP - Mitigation of risks to internal and external validity in the experimental design using a hazard and operability (HAZOP) study is the third main contribution. This methodology addresses risks to intent (internal validity) as well as generalizability of results beyond the experimental setting (external validity) in a systematic and thorough manner. Low-Interaction Client Honeypots - Malicious web pages are usually part of a malware distribution network that consists of several servers that are involved as part of the drive-by-download attack. The development and evaluation of classification methods that assess whether a web page is part of a malware distribution network is the fourth main contribution. Hybrid Client Honeypot System - The fifth main contribution is the hybrid client honeypot system. It incorporates the mentioned classification methods, in the form of a low-interaction client honeypot, together with a high-interaction client honeypot into a hybrid client honeypot system that is capable of identifying malicious web pages in a cost-effective way on a large scale. The hybrid client honeypot system outperforms a high-interaction client honeypot with identical resources and an identical false positive rate.
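The economics of the hybrid design can be sketched with a toy cost model. This is not the TPCC as defined in the thesis: the cost formula, the throughput and price figures, and all names below are illustrative assumptions, shown only to make the filter-then-confirm intuition concrete.

```python
def cost_per_true_positive(urls_per_day, capacity_per_node,
                           node_cost_per_day, tpr, malicious_rate):
    """Assumed daily-cost-per-detection figure for a pure
    high-interaction deployment inspecting every URL."""
    nodes = -(-urls_per_day // capacity_per_node)  # ceiling division
    true_positives = urls_per_day * malicious_rate * tpr
    return nodes * node_cost_per_day / true_positives

def hybrid_cost_per_true_positive(urls_per_day, filter_capacity, filter_cost,
                                  forward_rate, hi_capacity, hi_cost,
                                  tpr, malicious_rate):
    """Hybrid variant: a fast low-interaction filter inspects everything
    and forwards only likely malicious pages to the slow
    high-interaction honeypot for final classification."""
    filter_nodes = -(-urls_per_day // filter_capacity)
    forwarded = int(urls_per_day * forward_rate)
    hi_nodes = -(-forwarded // hi_capacity)
    true_positives = urls_per_day * malicious_rate * tpr
    return (filter_nodes * filter_cost + hi_nodes * hi_cost) / true_positives
```

With 100,000 URLs/day, an assumed 2% forward rate, and the illustrative capacities and prices below, the hybrid needs two filter nodes plus two high-interaction nodes instead of a hundred high-interaction nodes, so its cost per detection drops sharply.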

    Achieving Highly Reliable Embedded Software: An Empirical Evaluation of Different Approaches
