10 research outputs found

    An Innovative Signature Detection System for Polymorphic and Monomorphic Internet Worms Detection and Containment

    Most current anti-worm and intrusion-detection systems use signature-based rather than anomaly-based technology. Signature-based technology can only detect known attacks with identified signatures; existing anti-worm systems cannot detect unknown Internet scanning worms automatically because they depend on a worm's signature rather than on its behaviour. Most detection algorithms used in current detection systems target only monomorphic worm payloads and offer no defence against polymorphic worms, which change their payloads dynamically. Anomaly detection systems can detect unknown worms but usually suffer from a high false-alarm rate. Detecting unknown worms is challenging, and worm defence must be automated because worms spread quickly and can flood the Internet in a short time. This research proposes an accurate, robust and fast technique to detect and contain Internet worms, both monomorphic and polymorphic. The detection technique uses specific failed-connection statuses on specific protocols, such as UDP, TCP, ICMP, TCP slow scanning and stealth scanning, as characteristics of the worms, while the containment uses the flags and labels of the segment header, together with the source and destination ports, to generate the worm's traffic signature. Experiments using eight different worms (monomorphic and polymorphic) in a testbed environment were conducted to verify the performance of the proposed technique. The results showed that the proposed technique could detect stealth scanning up to 30 times faster than a previously proposed technique and raised no false-positive alarms in any scanning-detection case. The experiments also showed that the technique was capable of containing a worm because of the uniqueness of its traffic signature.
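
    The failure-status idea lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes a pre-parsed stream of (source, protocol, status) events and flags any source that accumulates too many failed-connection statuses, the behavioural trait the technique attributes to scanning worms. The status labels and the threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical event stream: (source_ip, protocol, status) tuples, where a
# failure status might be a TCP RST (or no SYN-ACK), an ICMP port/host
# unreachable reply, or a UDP probe that drew no response.
FAILURE_STATUSES = {"tcp_rst", "icmp_unreachable", "udp_no_reply"}
THRESHOLD = 20  # failures per source before flagging; tuning is deployment-specific

def detect_scanners(events):
    """Flag sources whose failed-connection count reaches THRESHOLD."""
    failures = defaultdict(int)
    scanners = set()
    for src, proto, status in events:
        if status in FAILURE_STATUSES:
            failures[src] += 1
            if failures[src] >= THRESHOLD:
                scanners.add(src)
    return scanners

if __name__ == "__main__":
    sample = [("10.0.0.9", "tcp", "tcp_rst")] * 25 + [("10.0.0.5", "tcp", "ok")]
    print(detect_scanners(sample))  # {'10.0.0.9'}
```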

    Activity Monitoring for large honeynets and network telescopes

    This paper proposes a new distributed monitoring approach based on the notion of graph centrality and its evolution in time. We consider an activity-profiling method for a distributed monitoring platform and illustrate its usage in two different target deployments: the first concerns the monitoring of a distributed honeynet, while the second is the monitoring of a large network telescope. The central concepts underlying our work are intersection graphs and a centrality-based locality statistic; such graphs have not been widely used in the field of network security. The advantage of this method is that aggregated activity data can be analysed by considering the curve of the maximum locality statistic, on which important change-point moments are well identified.
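
    As an illustration of the locality-statistic idea, the sketch below computes a scan-style centrality statistic over a sequence of graphs and watches its maximum for jumps. The exact statistic and graph construction used by the authors are not reproduced here; the neighbourhood edge count is one common choice, and the intersection-graph interpretation in the comments is an assumption.

```python
import networkx as nx

def max_locality_statistic(G):
    """Scan-style locality statistic: for each node, count the edges of the
    subgraph induced by the node and its neighbours; return the maximum.
    A sharp jump in this curve across time windows marks a change point."""
    best = 0
    for v in G:
        neighbourhood = set(G[v]) | {v}
        best = max(best, G.subgraph(neighbourhood).number_of_edges())
    return best

# Toy series of per-window graphs. In the honeynet deployment, nodes might be
# sensors, with edges joining sensors that saw overlapping attacker sources
# (an intersection graph); that construction is assumed, not quoted.
windows = [nx.erdos_renyi_graph(30, 0.05, seed=t) for t in range(5)]
windows.append(nx.complete_graph(30))  # simulated anomaly in the final window

print([max_locality_statistic(G) for G in windows])  # last value spikes
```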

    An exploratory study of techniques in passive network telescope data analysis

    Careful examination of the composition and concentration of malicious traffic in transit on the channels of the Internet provides network administrators with a means of understanding and predicting damaging attacks directed towards their networks. This allows action to be taken to mitigate the effect that these attacks have on the performance of their networks, and on the Internet as a whole, by readying network defences and providing early warning to Internet users. One approach to malicious traffic monitoring that has garnered some success in recent times, as exhibited by the study of fast-spreading Internet worms, involves analysing data obtained from network telescopes. While some research has considered using measures derived from network telescope datasets to study large-scale network incidents such as Code-Red, SQL Slammer and Conficker, there is very little documented discussion of the merits and weaknesses of approaches to analysing network telescope data. This thesis is an introductory study in network telescope analysis and aims to consider the variables associated with the data received by network telescopes and how these variables may be analysed. The core research of this thesis considers both novel and previously explored analysis techniques from the fields of security metrics, baseline analysis, statistical analysis and technical analysis as applied to network telescope datasets. These techniques were evaluated as approaches to recognising unusual behaviour by observing their ability to identify notable incidents in network telescope datasets.
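
    One of the simpler baseline-analysis techniques alluded to here can be sketched directly: compare each interval's packet count against a trailing baseline and flag large deviations. This is an illustrative z-score detector, not the thesis's method; the window size and threshold are arbitrary.

```python
import statistics

def flag_anomalies(counts, window=7, z_threshold=3.0):
    """Flag intervals whose packet count deviates sharply from a trailing
    baseline. counts: per-interval packet totals from a telescope (assumed)."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1.0  # avoid division by zero
        if abs(counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Quiet background traffic with a worm-outbreak-style spike on day 12.
series = [100, 96, 103, 99, 101, 98, 102, 97, 104, 100, 99, 101, 950]
print(flag_anomalies(series))  # [12]
```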

    Defining and evaluating greynets (sparse darknets)

    Darknets are increasingly being proposed as a means by which network administrators can monitor for anomalous, externally sourced traffic. Current darknet designs require large, contiguous blocks of unused IP addresses, which are not always feasible for enterprise network operators. In this paper we introduce, define and evaluate the concept of a greynet: a region of IP address space that is sparsely populated with 'darknet' addresses interspersed with active (or 'lit') IP addresses. We use raw traffic traces collected within a university network to evaluate how sparseness affects a greynet's effectiveness, and show that enterprise operators can achieve useful levels of network scan detection with only small numbers of 'dark' IP addresses making up their greynets.
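
    The sparseness trade-off can be made concrete with a small probability model; this is my own illustration, not the paper's evaluation, which used real traffic traces. If a scanner probes k distinct random addresses in a block, the chance that it touches at least one dark address follows from sampling without replacement.

```python
from math import comb

def detection_probability(dark, total=256, probes=32):
    """Chance that a scanner probing `probes` distinct random addresses in a
    block of `total` addresses hits at least one of `dark` sensor addresses
    (hypergeometric: sampling without replacement). comb() returns 0 when
    probes exceed the non-dark addresses, giving probability 1."""
    return 1 - comb(total - dark, probes) / comb(total, probes)

# How sparseness trades off against scan-detection likelihood in a /24:
for d in (1, 4, 8, 16):
    print(f"{d:2d} dark addresses -> P(detect) = {detection_probability(d):.2f}")
```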

    Knowledge discovery in intrusion-attempt logs: a case study in higher-education institutions

    Internet use is becoming more and more common, almost mandatory: through the web it is possible to check bank statements, buy products from distant countries and pay service bills without leaving home, to name only a few possibilities. As the Internet has become so useful and close to people, its users have also gained more computing knowledge. Guides for illicit intrusion into systems, as well as manuals for other criminal practices, are also published online. This type of information, combined with the growing technical ability of users, has changed today's computer-security paradigms. Nowadays, computer security is less concerned with hardware; the main objectives are safeguarding data and ensuring the continuity of services, fundamentally because organisations depend on their digital data and, increasingly, on the services they make available online. Given this change in the threats and in what must be protected, security mechanisms must also change: it becomes necessary to know the attacker, anticipating what motivates them and what they intend to attack. In this context, the implementation of systems for logging illicit access attempts in five higher-education institutions was proposed, followed by analysis of the collected information using data mining techniques. This combination is rarely used for this purpose in research, so it was necessary to look for analogies with other application areas in order to gather documentation relevant to its implementation. The resulting solution proved effective and led to the development of an application that fuses the logs of Honeyd and Snort, which is also responsible for cleaning and preparing the data and making it available in a standard Comma Separated Values (CSV) file for use by other applications. The analysis added to what could previously be obtained through statistical observation, revealing useful and previously unknown characteristics of the attackers. This knowledge can be used by a system administrator to improve the performance of security mechanisms such as firewalls and Intrusion Detection Systems (IDS).
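
    The log-fusion step can be sketched as follows. The real tool is not public, so the record layouts, field names and join key below are assumptions; the point is the shape of the operation: join Honeyd connection records with Snort alerts and emit one annotated CSV row per connection.

```python
import csv

# Hypothetical simplified records parsed from Honeyd and Snort logs.
honeyd = [{"ts": "2010-03-01 10:02:11", "src": "203.0.113.7",
           "dst_port": "445", "proto": "tcp"}]
snort = [{"ts": "2010-03-01 10:02:11", "src": "203.0.113.7",
          "alert": "NETBIOS SMB probe"}]

def fuse(honeyd_rows, snort_rows, out_path="fused.csv"):
    """Join the two sources on (timestamp, source IP) and emit one CSV row
    per honeypot connection, annotated with any matching IDS alert."""
    alerts = {(r["ts"], r["src"]): r["alert"] for r in snort_rows}
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["ts", "src", "dst_port",
                                               "proto", "alert"])
        writer.writeheader()
        for row in honeyd_rows:
            writer.writerow(dict(row, alert=alerts.get((row["ts"], row["src"]), "")))

fuse(honeyd, snort)
```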

    A Brave New World: Studies on the Deployment and Security of the Emerging IPv6 Internet.

    Recent IPv4 address exhaustion events are ushering in a new era of rapid transition to the next-generation Internet protocol, IPv6. Via Internet-scale experiments and data analysis, this dissertation characterizes the adoption and security of the emerging IPv6 network. The work comprises three studies, each the largest of its kind, examining various facets of the new network protocol's deployment, routing maturity and security. The first study provides an analysis of ten years of IPv6 deployment data, quantifying twelve metrics across ten global-scale datasets and affording a holistic understanding of the state and recent progress of the IPv6 transition. Based on cross-dataset analysis of relative global adoption rates and of features of the protocol, we find evidence of a marked shift in the pace and nature of adoption in recent years and observe that higher-level metrics of adoption lag lower-level metrics. Next, a network telescope study covering the IPv6 address space of the majority of allocated networks provides insight into the early state of IPv6 routing. Our analyses suggest that routing of average IPv6 prefixes is less stable than that of IPv4, and this instability is responsible for the majority of the captured misdirected IPv6 traffic. Observed dark (unallocated-destination) IPv6 traffic shows substantial differences from the unwanted traffic seen in IPv4, in both character and scale. Finally, a third study examines the state of IPv6 network security policy. We tested a sample of 25 thousand routers and 520 thousand servers against sets of TCP and UDP ports commonly targeted by attackers, and found systemic discrepancies between intended security policy, as codified for IPv4, and deployed IPv6 policy. Such lapses in ensuring that the IPv6 network is properly managed and secured are leaving thousands of important devices more vulnerable to attack than before IPv6 was enabled. Taken together, the findings from our three studies suggest that IPv6 has reached a level and pace of adoption, and shows patterns of use, that indicate serious production employment of the protocol on a broad scale. However, weaker IPv6 routing and security are evident, and these are leaving early dual-stack networks less robust than the IPv4 networks they augment.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120689/1/jczyz_1.pd
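
    The third study's policy comparison reduces to a simple measurement primitive: attempt the same TCP connection to a dual-stack host over IPv4 and over IPv6 and compare the outcomes. A minimal sketch follows; the hostname and ports are placeholders, and this is not the study's measurement harness.

```python
import socket

def port_open(host, port, family):
    """Attempt a TCP connection over one address family; None if the host has
    no address of that family, else True/False for reachable vs. refused/filtered."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return None
    sock = socket.socket(family, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect(infos[0][4])
        return True
    except OSError:
        return False
    finally:
        sock.close()

# Hypothetical dual-stack host; a mismatch means one stack exposes a service
# that the other filters, the kind of discrepancy the study looked for.
for port in (22, 23, 443):
    v4 = port_open("dualstack.example.net", port, socket.AF_INET)
    v6 = port_open("dualstack.example.net", port, socket.AF_INET6)
    if None not in (v4, v6) and v4 != v6:
        print(f"port {port}: IPv4 open={v4}, IPv6 open={v6} (policy mismatch)")
```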

    A framework for the application of network telescope sensors in a global IP network

    The use of network telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of the data they collect. The research is based on a primary dataset of 40 million events spanning 50 months, collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network-topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted, and a discussion of the bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-real-time analysis, and the second is a two-dimensional fractal-based plotting scheme which allows plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented, including an evaluation of the traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described, the use of which it is hoped will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation's integrated network security system.
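
    Two-dimensional fractal plotting schemes for IP space are commonly realised with a Hilbert curve, which maps the 32-bit IPv4 address space onto a square while keeping numerically adjacent prefixes spatially close; whether the thesis uses exactly this curve is an assumption. A sketch of the standard distance-to-coordinates conversion:

```python
def hilbert_d2xy(order, d):
    """Map a distance d along a Hilbert curve of the given order to (x, y).
    With order=16, every IPv4 address (a 32-bit integer) gets a unique cell
    in a 65536 x 65536 grid; the curve's locality is what makes such plots
    of the whole address space readable."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so the curve stays continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

import ipaddress
addr = int(ipaddress.IPv4Address("196.21.0.1"))
print(hilbert_d2xy(16, addr))  # grid cell for one address in African IP space
```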

    Bolvedere: a scalable network flow threat analysis system

    Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorised that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, discern, understand and (where needed) respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible, as the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics, archiving packets and processing them later; however, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly used solutions are designed around a single host and a single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network traffic analysis system named Bolvedere. The analysis performed by Bolvedere simply asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) towards the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these. It is implemented using a modular approach that allows for either complete execution of the system on a single host or the horizontal scaling out of subsystems onto multiple hosts. The use of multiple hosts is achieved through ZeroMQ (ZMQ), whose ease of interprocess communication allows Bolvedere to scale out horizontally, increasing its processing resources and thus its analysis throughput. Many underlying mechanisms in Bolvedere have been automated, which is intended to make the system more user-friendly: the user need only tell Bolvedere what information they wish to analyse, and the system will then rebuild itself in order to achieve the required task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
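
    The scale-out mechanism described above can be sketched with a ZeroMQ PUSH/PULL pipeline: a collector distributes decoded flow records to however many workers are attached, so adding worker hosts adds throughput. This illustrates the pattern only and is not Bolvedere's code; the record fields and the example rule are invented.

```python
import zmq

def collector(records, endpoint="tcp://*:5557"):
    """PUSH decoded flow records (dicts) to any connected workers; ZMQ
    load-balances across them, which is what enables horizontal scaling."""
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.bind(endpoint)
    for rec in records:  # rec stands in for a decoded NetFlow v9/IPFIX record
        push.send_json(rec)

def worker(endpoint="tcp://localhost:5557"):
    """PULL records and apply a flow-level rule; packet payloads never needed."""
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.connect(endpoint)
    while True:
        rec = pull.recv_json()
        # Example rule (invented): many packets to 445/tcp suggests SMB scanning.
        if rec.get("dst_port") == 445 and rec.get("packets", 0) > 100:
            print("possible SMB scan from", rec.get("src_addr"))
```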