5 research outputs found

    HoneyDOC: An Efficient Honeypot Architecture Enabling All-Round Design

    Honeypots are designed to trap attackers so that their malicious behavior can be investigated. Owing to the increasing variety and sophistication of cyber attacks, capturing high-quality attack data has become a challenge in the honeypot area. All-round honeypots, which offer significant improvements in sensibility, countermeasure, and stealth, are needed to tackle this problem. In this paper, we propose a novel honeypot architecture, termed HoneyDOC, to support all-round honeypot design and implementation. The HoneyDOC architecture clearly identifies three essential, independent, and collaborative modules: Decoy, Captor, and Orchestrator. On top of this architecture, we design a Software-Defined Networking (SDN) enabled honeypot system whose high programmability technically sustains the features needed for capturing high-quality data. A proof-of-concept system is implemented to validate its feasibility and effectiveness. The experimental results show the benefits of the proposed architecture compared to previous honeypot solutions.
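
    The module split described above lends itself to a compact illustration. Below is a minimal Python sketch of how the Decoy, Captor, and Orchestrator roles might be separated; the class names, banner table, and log format are illustrative assumptions rather than the paper's implementation, and a real SDN deployment would push the routing decision into switch flow rules instead of a local lookup.

        import json
        import time

        class Captor:
            """Records captured attack data, one JSON line per event."""
            def record(self, src, service, payload):
                event = {"ts": time.time(), "src": src, "service": service,
                         "payload": payload.decode(errors="replace")}
                with open("captures.jsonl", "a") as f:
                    f.write(json.dumps(event) + "\n")

        class Decoy:
            """Answers probes with service banners to keep attackers engaged."""
            BANNERS = {22: b"SSH-2.0-OpenSSH_7.4\r\n",
                       80: b"HTTP/1.1 200 OK\r\n\r\n"}
            def respond(self, port):
                return self.BANNERS.get(port, b"")

        class Orchestrator:
            """Coordinates the other two modules for each incoming connection."""
            def __init__(self, decoy, captor):
                self.decoy, self.captor = decoy, captor
            def handle(self, src, port, payload):
                self.captor.record(src, port, payload)  # capture first
                return self.decoy.respond(port)         # then keep engaging

        reply = Orchestrator(Decoy(), Captor()).handle("198.51.100.7", 22, b"\x00")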

    A Brave New World: Studies on the Deployment and Security of the Emerging IPv6 Internet.

    Recent IPv4 address exhaustion events are ushering in a new era of rapid transition to IPv6, the next-generation Internet protocol. Via Internet-scale experiments and data analysis, this dissertation characterizes the adoption and security of the emerging IPv6 network. The work includes three studies, each the largest of its kind, examining various facets of the new network protocol's deployment, routing maturity, and security. The first study provides an analysis of ten years of IPv6 deployment data, quantifying twelve metrics across ten global-scale datasets and affording a holistic understanding of the state and recent progress of the IPv6 transition. Based on cross-dataset analysis of relative global adoption rates and of the protocol's features, we find evidence of a marked shift in the pace and nature of adoption in recent years, and observe that higher-level metrics of adoption lag lower-level metrics. Next, a network telescope study covering the IPv6 address space of the majority of allocated networks provides insight into the early state of IPv6 routing. Our analyses suggest that routing of average IPv6 prefixes is less stable than that of IPv4, and that this instability is responsible for the majority of the captured misdirected IPv6 traffic. Observed dark (unallocated destination) IPv6 traffic shows substantial differences from the unwanted traffic seen in IPv4, in both character and scale. Finally, a third study examines the state of IPv6 network security policy. We tested a sample of 25 thousand routers and 520 thousand servers against sets of TCP and UDP ports commonly targeted by attackers, and found systemic discrepancies between intended security policy, as codified in IPv4, and deployed IPv6 policy. Such lapses in ensuring that the IPv6 network is properly managed and secured are leaving thousands of important devices more vulnerable to attack than before IPv6 was enabled. Taken together, findings from our three studies suggest that IPv6 has reached a level and pace of adoption, and shows patterns of use, that indicate serious production employment of the protocol on a broad scale. However, weaker IPv6 routing and security are evident, and these are leaving early dual-stack networks less robust than the IPv4 networks they augment.
    PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120689/1/jczyz_1.pd
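
    The policy-discrepancy measurement in the third study lends itself to a short sketch: probe the same TCP ports over both address families and flag ports reachable only via IPv6. The port list, timeout, and example addresses below are illustrative assumptions, not the study's actual methodology or targets.

        import socket

        PORTS = [22, 23, 80, 443, 445, 3389]  # commonly targeted TCP services

        def open_ports(host, family):
            """Return the subset of PORTS accepting TCP connections on host."""
            reachable = set()
            for port in PORTS:
                try:
                    with socket.socket(family, socket.SOCK_STREAM) as s:
                        s.settimeout(2.0)
                        s.connect((host, port))
                        reachable.add(port)
                except OSError:
                    pass  # filtered, closed, or timed out
            return reachable

        # Compare the two stacks of one dual-stack device (example addresses).
        v4 = open_ports("192.0.2.10", socket.AF_INET)
        v6 = open_ports("2001:db8::10", socket.AF_INET6)
        print("open via IPv6 but not IPv4:", sorted(v6 - v4))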

    Data Reduction for the Scalable Automated Analysis of Distributed Darknet Traffic

    Threats to the privacy of users and to the availability of Internet infrastructure are evolving at a tremendous rate. To characterize these emerging threats, researchers must balance monitoring the large number of hosts needed to quickly build confidence in new attacks against preserving the detail required to differentiate those attacks. One class of techniques that attempts to achieve this balance involves hybrid systems that combine the scalable monitoring of unused address blocks (or darknets) with forensic honeypots (or honeyfarms). In this paper we examine the properties of individual and distributed darknets to determine the effectiveness of building scalable hybrid systems. We show that individual darknets are dominated by a small number of sources repeating the same actions, which enables source-based techniques to reduce the number of connections to be evaluated by over 90%. We demonstrate that the dominance of locally targeted attack behavior and the limited life of randomly scanning hosts result in few of these sources being repeated across darknets. To achieve reductions beyond source-based approaches, we look to source-distribution-based methods and expand them to include notions of local and global behavior. By deploying this approach in 30 production networks during early 2005, we show that it is effective at reducing the number of events. Each of the events identified during this period represented a major globally scoped attack, including WINS vulnerability scanning, Veritas Backup Agent vulnerability scanning, and the MySQL worm.
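
    The source-based reduction described above can be sketched in a few lines: because a darknet's traffic is dominated by a few sources repeating one action, keeping only the first event per source/action pair discards the bulk of the connections. The event fields and the "same action" key below are assumptions for illustration, not the paper's exact filter.

        def reduce_events(events):
            """Keep the first event per (source, action) pair, drop repeats."""
            seen, kept = set(), []
            for ev in events:
                key = (ev["src"], ev["dst_port"], ev["proto"])  # action proxy
                if key not in seen:
                    seen.add(key)
                    kept.append(ev)
            return kept

        # Two sources each repeating one probe dominate this synthetic trace.
        events = ([{"src": "198.51.100.7", "dst_port": 445, "proto": "tcp"}] * 60
                  + [{"src": "203.0.113.9", "dst_port": 1434, "proto": "udp"}] * 40)
        kept = reduce_events(events)
        print(f"connections to evaluate reduced by {1 - len(kept)/len(events):.0%}")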

    Context-Aware Network Security.

    The rapid growth in malicious Internet activity, driven by the rise of threats like automated worms, viruses, and botnets, has spurred the development of tools designed to protect host and network resources. One approach that has gained significant popularity is the use of network-based security systems, which are deployed on the network to detect, characterize, and mitigate both new and existing threats. Unfortunately, these systems are developed and deployed in production networks as generic systems, and little attention has been paid to customization. Even when it is possible to customize these devices, the approaches for customization are largely manual or ad hoc. Our observations of production networks suggest that these networks have significant diversity in end-host characteristics, threat landscape, and traffic behavior, a collection of features that we call the security context of a network. The scale and diversity of the security contexts of production networks make manual or ad hoc customization of security systems difficult. Our thesis is that automated adaptation to the security context can be used to significantly improve the performance and accuracy of network-based security systems. To evaluate our thesis, we explore one system from each of three broad categories of network-based security systems: known threat detection, new threat detection, and reputation-based mitigation. For known threat detection, we examine a signature-based intrusion detection system and show that its performance improves significantly if it is aware of the signature set and the traffic characteristics of the network. Second, we explore a large collection of honeypots (or honeynet) used to detect new threats. We show that operating system and application configurations in the network impact honeynet accuracy, and that adapting to the surrounding network provides a significantly better view of the network's threats. Last, we apply our context-aware approach to a reputation-based system for spam blacklist generation and show how traffic characteristics on the network can be used to significantly improve its accuracy. We conclude with the lessons learned from adapting to network security context and with future directions for adapting network-based security systems to the security context.
    Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64745/1/sushant_1.pd
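
    One of the adaptations described above, tailoring a signature-based IDS to its network, reduces to pruning rules for services the network does not run. The sketch below illustrates that idea; the signature records and host inventory are invented for illustration and are not the dissertation's actual rule set or mechanism.

        SIGNATURES = [
            {"id": 1, "service": "iis",   "pattern": b"GET /scripts/..%255c"},
            {"id": 2, "service": "mysql", "pattern": b"\x0a\x00\x00\x01"},
            {"id": 3, "service": "smb",   "pattern": b"\xffSMB"},
        ]

        def tailor(signatures, inventory):
            """Keep only signatures for services actually seen on the network."""
            return [s for s in signatures if s["service"] in inventory]

        # The inventory would come from passively profiling local hosts; here
        # it says the network runs IIS and SMB but no MySQL.
        active = tailor(SIGNATURES, inventory={"iis", "smb"})
        print([s["id"] for s in active])  # [1, 3]: the MySQL rule is skipped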

    A framework for malicious host fingerprinting using distributed network sensors

    Numerous software agents are responsible for the increasing volume of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for interrogation of the source of that traffic. This interrogation, or reconnaissance, constitutes active analysis, as opposed to the existing, mostly passive, analysis. Unlike passive analysis, active techniques are time-sensitive: their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform data collection from various sources across distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. To utilise the data exposed by these modules, an Automated Reconnaissance Framework (AR-Framework) was created; this framework was tasked with active and passive information collection and analysis of data in near-realtime, and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently different characteristics of a host could be identified, their combination could act as a unique fingerprint for that host, potentially allowing that host to be re-identified even if its IP address had changed. To this end, the concept of Latency Based Multilateration was introduced as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools that could illustrate the data in near-realtime and provide various degrees of interaction to accommodate human interpretation. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and the AR-Framework provided a unique perspective on the malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports, operating systems, location, and the inferred intent of these malicious hosts. The result expands our current understanding of malicious hosts on the Internet and enables further research in the area.
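
    The Latency Based Multilateration idea can be sketched compactly: each distributed sensor measures round-trip time to the target, and the resulting latency vector serves as a coarse fingerprint that can survive an IP address change. The sensor names, distance metric, tolerance, and measurements below are illustrative assumptions, not the thesis's actual parameters.

        import math

        def fingerprint(rtts):
            """Build a latency vector from {sensor: median RTT in ms}."""
            return tuple(rtts[name] for name in sorted(rtts))

        def same_host(fp_a, fp_b, tolerance_ms=15.0):
            # A small Euclidean distance between latency vectors suggests both
            # observations came from the same (or a nearby) host.
            return math.dist(fp_a, fp_b) <= tolerance_ms

        # The same host observed twice, under different IP addresses.
        old = fingerprint({"za": 12.1, "us": 180.4, "eu": 155.0})
        new = fingerprint({"za": 13.0, "us": 176.9, "eu": 158.2})
        print(same_host(old, new))  # True: likely the same host re-identified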