19 research outputs found

    Forensic analysis of autonomous system reachability


    Preventing DDoS using Bloom Filter: A Survey

    Distributed Denial-of-Service (DDoS) is a menace to service providers and a prominent issue in network security. Defending against, and ultimately defeating, DDoS is a prime challenge: a DDoS attack makes a service unavailable for a period of time, which harms the service provider and causes loss of business revenue. There are numerous mechanisms for defending against DDoS; this paper surveys the deployment of the Bloom Filter in defending against DDoS attacks. The Bloom Filter is a probabilistic data structure for membership queries that returns either true or false, and it uses a tiny amount of memory to store information about large data sets. Packet information is therefore stored in a Bloom Filter to defend against and defeat DDoS. This paper presents a survey of DDoS defense techniques that use the Bloom Filter.
    Comment: 9 pages, 1 figure. This article is accepted for publication in EAI Endorsed Transactions on Scalable Information Systems.
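    A minimal sketch of the membership test this survey centres on, written here in Python: a Bloom Filter with k hash functions over an m-bit array. Storing packet source IPs and querying them is purely illustrative; the surveyed schemes differ in what they insert and how they act on a hit.

```python
# Minimal Bloom filter sketch: k hash functions over an m-bit array.
# Inserting packet source addresses is an illustrative assumption, not
# a specific scheme from the survey.
import hashlib

class BloomFilter:
    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k bit positions from independent hashes of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Example: remember previously seen source IPs in tiny memory.
seen = BloomFilter()
seen.add("203.0.113.7")
print("203.0.113.7" in seen)   # True
print("198.51.100.9" in seen)  # False (with high probability)
```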

    Predicting Global Internet Instability Caused by Worms using Neural Networks

    Student Number: 9607275H - MSc dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment
    Internet worms are capable of propagating quickly by exploiting vulnerabilities of hosts that have access to the Internet. Once a computer has been infected, the worms have access to sensitive information on the computer and are able to corrupt or retransmit this information. This dissertation describes a method of predicting Internet instability due to the presence of a worm on the Internet, using data currently available from global Internet routers. The work is based on previous research indicating a link between an increase in the number of Border Gateway Protocol (BGP) routing messages and global Internet instability. The system used to provide the prediction is an autoencoder: a specialised type of neural network that produces a novelty score for its inputs. The autoencoder is trained to recognise “normal” data, and therefore produces a high novelty output for inputs dissimilar to the normal data. The BGP Update routing messages sent between routers were used as the only inputs to the autoencoder. These inter-router messages provide route availability information and inform neighbouring routers of any route changes. The outputs from the network were shown to help provide an early-warning mechanism for the presence of a worm. An alternative method for detecting instability is a rule-based system, which generates alarms if the number of certain BGP routing messages exceeds a prespecified threshold. This project compared the autoencoder to a simple rule-based system; the results showed that the autoencoder provided a better prediction and was less complex for a network administrator to configure. Although the correlation between the number of BGP Updates and global Internet instability has been shown previously, this work presents the first known application of a neural network to predicting the instability using this correlation. A system based on this strategy has the potential to reduce the damage done by a worm's propagation and payload, by providing an automated means of detection that is faster than a human.
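    The following Python sketch illustrates the two detectors being compared, under assumptions that are not taken from the dissertation: a linear reconstruction model (PCA via SVD) stands in for the autoencoder's novelty score, the BGP Update counts are synthetic, and the window size and thresholds are arbitrary.

```python
# Sketch of the two detectors compared above, on synthetic data.
# A linear reconstruction model (PCA via SVD) stands in for the autoencoder:
# it is fit on "normal" windows of BGP Update counts and flags inputs whose
# reconstruction error (novelty) is unusually high. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.poisson(lam=50, size=(200, 10)).astype(float)   # 200 normal windows, 10 features
worm = rng.poisson(lam=400, size=(1, 10)).astype(float)      # one anomalous window

# "Train" on normal data: keep the top principal components as a bottleneck.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]

def novelty(x):
    """Reconstruction error of a window; high values indicate novel input."""
    z = (x - mean) @ components.T
    recon = z @ components + mean
    return float(np.linalg.norm(x - recon))

def rule_based_alarm(x, threshold=150):
    """Simple rule: alarm if any per-feature Update count exceeds a threshold."""
    return bool((x > threshold).any())

print(novelty(normal[0]), novelty(worm[0]))                  # worm window scores far higher
print(rule_based_alarm(normal[0]), rule_based_alarm(worm[0]))  # False, True
```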

    Intrusion Detection and Security Assessment in a University Network

    This thesis first explores how intrusion detection (ID) techniques can be used to provide an extra security layer for today's typically security-unaware Internet user. A review of the ever-growing network security threat is presented, along with an analysis of the suitability of existing ID systems (IDS) for protecting users of varying security expertise. In light of the impracticality of many IDS for today's users, a web-enabled, agent-based, hybrid IDS is proposed. The motivations for the system are presented along with details of its design and implementation. As a test case, the system is deployed on the DCU network and the results analysed. One of the aims of an IDS is to uncover security-related issues in its host network. The issues revealed by our IDS demonstrate that a full DCU network security assessment is warranted. This thesis describes how such an assessment should be carried out and presents the corresponding results. A set of security-enhancing recommendations for the DCU network is presented.

    A Brave New World: Studies on the Deployment and Security of the Emerging IPv6 Internet.

    Recent IPv4 address exhaustion events are ushering in a new era of rapid transition to the next-generation Internet protocol---IPv6. Via Internet-scale experiments and data analysis, this dissertation characterizes the adoption and security of the emerging IPv6 network. The work includes three studies, each the largest of its kind, examining various facets of the new network protocol's deployment, routing maturity, and security. The first study provides an analysis of ten years of IPv6 deployment data, quantifying twelve metrics across ten global-scale datasets and affording a holistic understanding of the state and recent progress of the IPv6 transition. Based on cross-dataset analysis of relative global adoption rates and across features of the protocol, we find evidence of a marked shift in the pace and nature of adoption in recent years and observe that higher-level metrics of adoption lag lower-level metrics. Next, a network telescope study covering the IPv6 address space of the majority of allocated networks provides insight into the early state of IPv6 routing. Our analyses suggest that routing of average IPv6 prefixes is less stable than that of IPv4, and that this instability is responsible for the majority of the captured misdirected IPv6 traffic. Observed dark (unallocated destination) IPv6 traffic shows substantial differences from the unwanted traffic seen in IPv4---in both character and scale. Finally, a third study examines the state of IPv6 network security policy. We tested a sample of 25 thousand routers and 520 thousand servers against sets of TCP and UDP ports commonly targeted by attackers, and found systemic discrepancies between intended security policy---as codified in IPv4---and deployed IPv6 policy. Such lapses in ensuring that the IPv6 network is properly managed and secured are leaving thousands of important devices more vulnerable to attack than before IPv6 was enabled. Taken together, the findings from our three studies suggest that IPv6 has reached a level and pace of adoption, and shows patterns of use, that indicate serious production employment of the protocol on a broad scale. However, weaker IPv6 routing and security are evident, and these are leaving early dual-stack networks less robust than the IPv4 networks they augment.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/120689/1/jczyz_1.pd
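    A rough Python sketch of the dual-stack policy comparison described in the third study: probe the same TCP ports over a host's IPv4 and IPv6 addresses and report ports reachable only over IPv6. The port list and example addresses are placeholders, and a simple connect scan is only a stand-in for the study's actual measurement methodology.

```python
# Sketch of a dual-stack policy check: ports open over IPv6 but not IPv4
# suggest a discrepancy between intended (IPv4) and deployed (IPv6) policy.
# Port list and addresses are illustrative placeholders.
import socket

PORTS = [22, 23, 80, 443, 3306]  # ports commonly targeted by attackers (illustrative)

def open_ports(addr, family, ports, timeout=2.0):
    found = set()
    for port in ports:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((addr, port))
                found.add(port)
            except OSError:
                pass
    return found

def policy_gap(ipv4_addr, ipv6_addr, ports=PORTS):
    """Ports filtered over IPv4 but reachable over IPv6."""
    return (open_ports(ipv6_addr, socket.AF_INET6, ports)
            - open_ports(ipv4_addr, socket.AF_INET, ports))

# Example (placeholder addresses for a dual-stack host you control):
# print(policy_gap("192.0.2.10", "2001:db8::10"))
```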

    An exploration of the overlap between open source threat intelligence and active internet background radiation

    Organisations and individuals are facing increasing persistent threats on the Internet from worms, port scanners, and malicious software (malware). These threats are constantly evolving as new attack techniques are discovered. To aid in the detection and prevention of such threats, and to stay ahead of the adversaries conducting the attacks, security specialists are utilising Threat Intelligence (TI) data in their defense strategies. TI data can be obtained from a variety of sources such as private routers, firewall logs, public archives, and public or private network telescopes. However, at the rate and ease at which TI is produced and published, particularly Open Source Threat Intelligence (OSINT), its quality is dropping, resulting in fragmented, context-less, and variable data. This research utilised two sets of TI data: a collection of OSINT and active Internet Background Radiation (IBR). The data was collected over a period of 12 months, from 37 publicly available OSINT datasets and five IBR datasets. Through the identification and analysis of data common to the OSINT and IBR datasets, this research was able to gain insight into how effective OSINT is at detecting and potentially reducing ongoing malicious Internet traffic. As part of this research, a minimal framework for the collection, processing/analysis, and distribution of OSINT was developed and tested. The research focused on exploring the areas common to the two datasets, with the intention of creating an enriched, contextualised, and reduced set of malicious source IP addresses that consumers could use in their own environments. The findings of this research pointed towards a persistent group of IP addresses observed in both datasets over the period under research. Using these persistent IP addresses, the research was able to identify the specific services being targeted. Among the traffic from these persistent IP addresses were significant numbers of packets from Mirai-like IoT malware on ports 23/TCP and 2323/TCP, as well as general scanning activity on port 445/TCP.
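    A minimal Python sketch of the overlap analysis, under assumed inputs (one IP address per line, one file per month per source): intersect the OSINT and IBR source addresses month by month and keep the addresses that persist across most of the year. File names and formats are hypothetical, not the thesis's actual pipeline.

```python
# Minimal sketch of the OSINT/IBR overlap analysis: monthly intersections of
# source IPs, then the subset persistent across many months. Input layout
# (one IP per line, one file per month) is an assumption.
from pathlib import Path

def load_ips(path):
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

def monthly_overlap(osint_file, ibr_file):
    return load_ips(osint_file) & load_ips(ibr_file)

def persistent_ips(monthly_sets, min_months=9):
    """IPs appearing in the OSINT/IBR overlap for at least min_months of the year."""
    counts = {}
    for ips in monthly_sets:
        for ip in ips:
            counts[ip] = counts.get(ip, 0) + 1
    return {ip for ip, n in counts.items() if n >= min_months}

# Example over a year of per-month dumps (hypothetical file names):
# overlaps = [monthly_overlap(f"osint-{m:02}.txt", f"ibr-{m:02}.txt") for m in range(1, 13)]
# print(sorted(persistent_ips(overlaps)))
```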

    Exploiting Host Availability in Distributed Systems.

    As distributed systems become more decentralized, fluctuating host availability is an increasingly disruptive phenomenon. Older systems such as AFS used a small number of well-maintained, highly available machines to coordinate access to shared client state; server uptime (and thus service availability) was expected to be high. Newer services scale to larger numbers of clients by increasing the number of servers. In these systems, the responsibility for maintaining the service abstraction is spread amongst thousands of machines. In the extreme, each client is also a server that must respond to requests from its peers, and each host can opt in or out of the system at any time. In these operating environments, a non-trivial fraction of servers will be unavailable at any given time. This diffusion of responsibility from a few dedicated hosts to many unreliable ones has a dramatic impact on distributed system design, since it is difficult to build robust applications atop a partially available, potentially untrusted substrate. This dissertation explores one aspect of this challenge: how can a distributed system measure the fluctuating availability of its constituent hosts, and how can it use an understanding of this churn to improve performance and security? This dissertation extends the previous literature in three ways. First, it introduces new analytical techniques for characterizing availability data, applying these techniques to several real networks and explaining the distinct uptime patterns found within. Second, it introduces new methods for predicting future availability, both at the granularity of individual hosts and of clusters of hosts. Third, it describes how to use these new techniques to improve the performance and security of distributed systems.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/58445/1/jmickens_1.pd
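    A small Python sketch of the kind of availability measurement and prediction the dissertation studies, using hypothetical probe data: a per-host uptime fraction plus a naive hour-of-day predictor. This is purely illustrative; the dissertation's characterisation and prediction techniques (per-host and per-cluster) are considerably richer.

```python
# Sketch of churn measurement and a naive per-host availability predictor:
# given timestamped up/down probes for a host, compute overall uptime and
# predict availability at a given time as the fraction of past probes in the
# same hour of day that were up. Data and method are illustrative assumptions.
from datetime import datetime

def uptime_fraction(probes):
    """probes: list of (datetime, bool) observations for one host."""
    return sum(up for _, up in probes) / len(probes)

def predict_up(probes, when):
    """Estimated probability the host is up at `when`, from past probes in that hour."""
    same_hour = [up for t, up in probes if t.hour == when.hour]
    return sum(same_hour) / len(same_hour) if same_hour else uptime_fraction(probes)

# Example with a synthetic week of hourly probes: up from 00:00-17:59, down otherwise.
probes = [(datetime(2024, 1, d, h), h < 18) for d in range(1, 8) for h in range(24)]
print(round(uptime_fraction(probes), 2))              # 0.75: up 18 of 24 hours
print(predict_up(probes, datetime(2024, 1, 8, 9)))    # 1.0 (mornings always up)
print(predict_up(probes, datetime(2024, 1, 8, 22)))   # 0.0 (nights always down)
```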