
    Border Gateway Protocol Anomaly Detection Using Machine Learning Techniques

    As the primary protocol used to exchange routing information between network domains, the Border Gateway Protocol (BGP) plays a central role in the functioning of the Internet. BGP is a standardized routing protocol used to initiate and maintain communication between domains, or autonomous systems, on the Internet. The protocol can exhibit anomalous behavior caused by improper provisioning, malicious attacks, traffic or equipment failure, and network operator error. At large Internet service providers, many BGP issues are not immediately seen or explicitly monitored by network operations centers. This blind spot is due to the enormous number of BGP handshakes that occur throughout the network, along with the fact that many sub-interfaces are associated with a single physical connection. We present machine learning methods for anomaly detection using unsupervised learning techniques and create a data pipeline to quickly collect these data and trigger on anomalies as they occur. Clustering techniques, including k-means and DBSCAN, were successfully implemented and were able to detect known anomalies in historical events. This approach could yield soft savings by triggering early warnings of anomalous BGP events, although human intervention may still be required to address possible false positives.
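
    As a rough illustration of the unsupervised approach the abstract describes, the sketch below clusters per-time-window BGP feature vectors with DBSCAN and treats noise points as anomalies. The feature choice (announcement, withdrawal, and total update counts per window) and the DBSCAN parameters are assumptions for illustration, not taken from the paper.

    ```python
    # Minimal sketch: flag anomalous BGP time windows with DBSCAN.
    # Feature layout and eps/min_samples are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    def detect_anomalous_windows(features: np.ndarray) -> np.ndarray:
        """features: one row per time window, e.g. [announcements, withdrawals, updates]."""
        scaled = StandardScaler().fit_transform(features)
        labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(scaled)
        return np.where(labels == -1)[0]  # DBSCAN marks sparse points as noise (-1)

    # Synthetic demo: 200 normal windows plus one withdrawal burst.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[100.0, 10.0, 110.0], scale=5.0, size=(200, 3))
    burst = np.array([[100.0, 500.0, 600.0]])  # hypothetical route-failure event
    print(detect_anomalous_windows(np.vstack([normal, burst])))  # -> [200]
    ```

    DBSCAN is convenient here because, unlike k-means, it does not force every point into a cluster: sparse points receive the noise label -1, which maps naturally onto "anomalous window".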

    Novel graph analytics for enhancing data insight

    Graph analytics is a fast-growing and significant field in the visualization and data mining community, applied in numerous high-impact domains such as network security, finance, and health care, providing users with adequate knowledge of the patterns within a given system. Although a series of methods have been developed over the past years for the analysis of unstructured collections of multi-dimensional points, graph analytics has only recently been explored. Despite the significant progress achieved recently, many open issues remain in the area, concerning not only the performance of graph mining algorithms but also the production of effective graph visualizations that enhance human perception. This thesis investigates novel methods for graph analytics in order to enhance data insight, proposing two methods for graph mining and visualization.

    Building on previous work in graph mining, the thesis introduces a set of novel graph features that are particularly effective at identifying the behavioral patterns of nodes on a graph. The proposed features capture the interaction of node neighborhoods with other nodes on the graph. Moreover, unlike previous approaches, the features introduced here include information from multiple neighborhood sizes, thus capturing long-range correlations between nodes and depicting the behavioral aspects of each node with high accuracy. Experimental evaluation on multiple datasets shows that using the proposed graph features for graph mining yields better results than other state-of-the-art graph features.

    The focus then turns to improving graph visualization methods for enhanced human insight. To this end, the thesis uses non-linear deformations to reduce visual clutter. Non-linear deformations have previously been used to magnify significant or cluttered regions in data or images, reducing clutter and enhancing the perception of patterns. Extending previous approaches, this work introduces a hierarchical approach to non-linear deformation that magnifies significant regions, leading to enhanced visualizations of one-, two-, and three-dimensional datasets. An energy function determines the optimal deformation for every local region in the data, taking information from multiple single-layer significance maps into consideration; the problem is then cast as the minimization of this energy function under specific spatial constraints. Extensive experimental evaluation provides evidence that the proposed hierarchical approach for generating the significance map surpasses current methods, effectively identifying significant regions and delivering better results. The thesis concludes with a discussion of the major achievements of the work, as well as possible drawbacks and open issues that could be addressed in future work.
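
    The abstract does not spell out the exact feature definitions; as a purely hypothetical illustration of node features drawn from multiple neighborhood sizes, the sketch below computes, for each node, the size and internal edge density of its k-hop ego networks using networkx. These are stand-in features, not the thesis's actual definitions.

    ```python
    # Hypothetical multi-scale node features: k-hop neighborhood size and
    # edge density for several radii. Not the thesis's actual feature set.
    import networkx as nx

    def khop_features(G: nx.Graph, radii=(1, 2, 3)) -> dict:
        feats = {}
        for v in G.nodes:
            row = []
            for k in radii:
                ball = nx.ego_graph(G, v, radius=k)  # nodes within k hops of v
                n, m = ball.number_of_nodes(), ball.number_of_edges()
                density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
                row += [n, density]
            feats[v] = row
        return feats

    G = nx.karate_club_graph()
    print(khop_features(G)[0])  # node 0: [size_1, dens_1, size_2, dens_2, ...]
    ```

    Combining several radii in one feature vector is what lets such features reflect both local structure and longer-range correlations, in the spirit the abstract describes.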

    ROVER: a DNS-based method to detect and prevent IP hijacks

    The Border Gateway Protocol (BGP) is critical to the global Internet infrastructure. Unfortunately, BGP routing was designed with limited regard for security. As a result, IP route hijacking has been observed for more than 16 years. Well-known incidents include a 2008 hijack of YouTube, loss of connectivity for Australia in February 2012, and an event that partially crippled Google in November 2012. Concern has been escalating as critical national infrastructure relies on a secure foundation for the Internet; disruptions to military, banking, utilities, industry, and commerce can be catastrophic. In this dissertation we propose ROVER (Route Origin VERification System), a novel and practical solution for detecting and preventing origin and sub-prefix hijacks. ROVER exploits the reverse DNS to store route origin data and provides a fail-safe, best-effort approach to authentication. This approach can be used with a variety of operational models, including fully dynamic in-line BGP filtering, periodically updated authenticated route filters, and real-time notifications for network operators. Our thesis is that ROVER systems can be deployed by a small number of institutions in an incremental fashion and still effectively thwart origin and sub-prefix IP hijacking despite non-participation by the majority of Autonomous System owners. We then present research results supporting this statement. We evaluate the effectiveness of ROVER using simulations on an Internet-scale topology as well as tests on real operational systems. Analyses include a study of IP hijack propagation patterns, the effectiveness of various deployment models, critical mass requirements, and an examination of ROVER's resilience and scalability.
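
    To make the idea concrete, here is a deliberately simplified sketch of checking a BGP announcement's origin AS against data published in the reverse DNS, using dnspython. The record type (TXT), the zone-naming convention, and the AS-string format are illustrative assumptions; ROVER's actual record format and naming rules differ.

    ```python
    # Illustrative only: assumes a hypothetical TXT record under the reverse
    # zone listing the authorized origin AS; not ROVER's real specification.
    import dns.resolver  # pip install dnspython

    def reverse_zone_name(prefix: str) -> str:
        # e.g. "192.0.2.0/24" -> "2.0.192.in-addr.arpa"
        # (hypothetical convention; handles only /8, /16, /24 for brevity)
        addr, masklen = prefix.split("/")
        octets = addr.split(".")[: int(masklen) // 8]
        return ".".join(reversed(octets)) + ".in-addr.arpa"

    def origin_is_authorized(prefix: str, origin_as: int) -> bool:
        try:
            answer = dns.resolver.resolve(reverse_zone_name(prefix), "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return True  # fail-safe, best effort: no record means no opinion
        published = {s.decode() for rr in answer for s in rr.strings}
        return f"AS{origin_as}" in published

    # A monitor could call this for each announcement it observes:
    # origin_is_authorized("192.0.2.0/24", 64511)
    ```

    Treating a missing record as "no opinion" rather than rejecting the route mirrors the fail-safe, best-effort stance described in the abstract, which is what allows incremental deployment despite non-participating AS owners.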

    A signal analysis of network traffic anomalies


    A Logically Centralized Approach for Control and Management of Large Computer Networks

    Management of large enterprise and Internet Service Provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these limitations, the networking research community has been pursuing the vision of simplifying the functional role of a router to its primary task of packet forwarding. This enables centralizing network control at a decision plane where network-wide state can be maintained and network control can be centrally and consistently enforced. However, scalability and fault-tolerance concerns with physical centralization motivate the need for a more flexible and customizable approach. This dissertation is an attempt at bridging the gap between the extremes of distribution and centralization of network control. We present a logically centralized approach to the design of the network decision plane that can be realized using a set of physically distributed controllers in a network. This approach aims to give network designers the ability to customize the level of control and management centralization according to the scalability, fault-tolerance, and responsiveness requirements of their networks. Our thesis is that logical centralization provides a robust, reliable, and efficient paradigm for the management of large networks, and we present several contributions to prove this thesis. For network planning, we describe techniques for optimizing the placement of network controllers and provide guidance on the physical design of logically centralized networks. For network operation, we present algorithms for maintaining dynamic associations between the decision plane and network devices, along with a protocol that allows a set of network controllers to coordinate their decisions and present a unified interface to the managed network devices. Furthermore, we study the trade-offs in decision-plane application design and provide guidance on application state and logic distribution. Finally, we present results of extensive numerical and simulative analysis of the feasibility and performance of our approach. The results show that logical centralization can provide better scalability and fault tolerance while maintaining performance similar to that of the traditional distributed approach.
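
    As a hypothetical illustration of the controller-placement problem mentioned above (not the dissertation's actual algorithm), the sketch below greedily places k controllers on a network graph so as to minimize the total hop distance from every node to its nearest controller, a k-median-style heuristic.

    ```python
    # Hypothetical greedy k-median-style controller placement; the
    # dissertation's actual optimization techniques are not reproduced here.
    import networkx as nx

    def place_controllers(G: nx.Graph, k: int) -> list:
        dist = dict(nx.all_pairs_shortest_path_length(G))
        placed = []
        for _ in range(k):
            def total_distance(candidate):
                # Sum over all nodes of hop count to the nearest controller.
                return sum(min(dist[v][c] for c in placed + [candidate])
                           for v in G.nodes)
            placed.append(min((n for n in G.nodes if n not in placed),
                              key=total_distance))
        return placed

    # Demo on a small connected topology standing in for a managed network.
    G = nx.connected_watts_strogatz_graph(50, 4, 0.3, seed=1)
    print(place_controllers(G, k=3))
    ```

    The choice of k embodies the trade-off the abstract highlights: more controllers reduce controller-to-device latency and improve fault tolerance, at the cost of more inter-controller coordination.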

    Internet of Things From Hype to Reality

    The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, communicate inexpensively, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.

    Implementing Soak Testing for an Access Network Solution

    Quality requirements for telecommunications software are extremely demanding. Operators usually have service-level agreements (SLAs) with their customers, and violations of those contracts may lead to serious compensation payments. Furthermore, every moment that equipment or some service is not operating correctly means lost income for the operator. For these reasons, it is extremely important for telecommunications equipment to continue functioning properly without service-affecting breaks. The purpose of this thesis was to design and implement automated soak testing for the IP/MPLS-based Tellabs 8600 router series. The system under test is composed of several network elements and a graphical Tellabs 8000 Network Management System. The purpose of this testing environment is to reveal defects that do not show up immediately in functional or regression testing but may manifest when the system is used for longer periods or operations are executed many times. A framework for automatically operating the test network and detecting problems programmatically was implemented in this thesis. The testing environment was successfully implemented and satisfies the objectives initially set for it. It has been taken into use in system testing at Tellabs and has since proven to be a useful and effective system. A second, fully identical environment was also implemented for the system testing group.
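
    As a generic illustration of what an automated soak-testing loop can look like (not the Tellabs framework itself), the sketch below repeatedly executes an operation against a device, checks its health, and records failures over a long run. The device interfaces are placeholders.

    ```python
    # Generic soak-test loop; device operations and health checks are
    # placeholders, not the actual Tellabs 8600/8000 interfaces.
    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    class FakeDevice:
        """Stand-in for a managed network element; a real test would drive
        the element and the network management system instead."""
        def run_operation(self):    # e.g. create and then delete an MPLS tunnel
            pass
        def healthy(self) -> bool:  # e.g. no active alarms, memory not leaking
            return True

    def soak(device, duration_s=72 * 3600, interval_s=60):
        deadline = time.monotonic() + duration_s
        iteration = failures = 0
        while time.monotonic() < deadline:
            iteration += 1
            try:
                device.run_operation()
                if not device.healthy():
                    failures += 1
                    logging.error("iteration %d: health check failed", iteration)
            except Exception:
                failures += 1
                logging.exception("iteration %d: operation raised", iteration)
            time.sleep(interval_s)  # soak defects need wall-clock time and repetition
        logging.info("soak finished: %d iterations, %d failures", iteration, failures)

    # soak(FakeDevice(), duration_s=600, interval_s=1)  # short smoke run
    ```

    The essential properties are the long deadline, the repetition count, and programmatic health checks after every iteration, so that slow leaks and rare failures surface without a human watching the run.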

    Effective Wide-Area Network Performance Monitoring and Diagnosis from End Systems.

    The quality of all network application services running on today's Internet depends heavily on the performance assurance offered by Internet Service Providers (ISPs). Because of their wide-area coverage, large network providers in the core of the Internet are instrumental in determining the network properties of their transit services, especially in the presence of increasingly deployed real-time, performance-sensitive network applications. The end-to-end performance of distributed applications and network services is susceptible to network disruptions in ISP networks. Given the scale and complexity of the Internet, failures and performance problems can occur in different ISP networks, so it is important to efficiently identify and proactively respond to potential problems to prevent large damage. Existing work on monitoring and diagnosing network disruptions is ISP-centric, relying on each ISP to set up monitors and perform diagnosis within its own network. This approach is limited, as ISPs are unwilling to reveal such data to the public. My dissertation research developed a lightweight active monitoring system to monitor, diagnose, and react to network disruptions purely from end hosts, which can help customers assess the compliance of their service-level agreements (SLAs). This thesis studies research problems from three indispensable aspects: efficient monitoring, accurate diagnosis, and effective mitigation. This is an essential step towards accountability and fairness on the Internet. To fully understand the limitations of relying on ISP data, the thesis first studies and demonstrates the great impact of monitor selection on monitoring quality and the interpretation of results. Motivated by the limitations of the ISP-centric approach, it then demonstrates two techniques that diagnose two types of fine-grained causes accurately and scalably by exploring information across the routing and data planes, and by sharing information among multiple locations collaboratively. Finally, we demonstrate the usefulness of the monitoring and diagnosis results with two mitigation applications. The first is short-term prevention: avoiding problematic routes by exploiting their predictability from history. The second scalably compares multiple ISPs across four important performance metrics, namely reachability, loss rate, latency, and path diversity, entirely from end systems without any ISP cooperation.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64770/1/wingying_1.pd
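
    As a minimal sketch of end-host-based measurement in the spirit described above (not the dissertation's system), the following estimates reachability and latency to a set of targets from TCP connection setup time, which requires no ISP cooperation. The targets, attempt count, and timeout are arbitrary examples.

    ```python
    # Minimal end-host probe: estimate loss rate and latency to targets via
    # TCP connect time. Illustrative only, not the dissertation's system.
    import socket
    import statistics
    import time

    TARGETS = [("www.example.com", 443), ("www.wikipedia.org", 443)]  # arbitrary

    def probe(host: str, port: int, attempts: int = 5, timeout: float = 2.0):
        rtts, failures = [], 0
        for _ in range(attempts):
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    rtts.append((time.monotonic() - start) * 1000.0)  # ms
            except OSError:
                failures += 1
        loss = failures / attempts
        return loss, (statistics.median(rtts) if rtts else None)

    for host, port in TARGETS:
        loss, rtt = probe(host, port)
        rtt_text = "n/a" if rtt is None else "%.1f ms" % rtt
        print(f"{host}: loss={loss:.0%} median_rtt={rtt_text}")
    ```

    Running such probes from many end hosts toward paths through different providers is one way to compare ISPs on reachability, loss rate, and latency without any access to the providers' internal monitors.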