
    SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis

    In this paper, we propose a novel approach, called SENATUS, for joint traffic anomaly detection and root-cause analysis. Inspired by the concept of a senate, the proposed approach proceeds in three stages: election, voting and decision. At the election stage, a small number of traffic flow sets, termed senator flows, are chosen to approximately represent the total (usually huge) set of traffic flows. In the voting stage, anomaly detection is applied to the senator flows and the detected anomalies are correlated to identify the most likely anomalous time bins. Finally, in the decision stage, a machine learning technique is applied to the senator flows of each anomalous time bin to find the root cause of the anomalies. We evaluate SENATUS using traffic traces collected from the pan-European network GEANT, and compare it against another approach which detects anomalies using lossless compression of traffic histograms. We show the effectiveness of SENATUS in diagnosing anomaly types: network scans and DoS/DDoS attacks.
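    A minimal sketch of the three-stage idea in Python, using a heavy-hitter heuristic for the election stage, a z-score test for voting, and a pluggable classifier for the decision stage; the parameter names and detection heuristics are assumptions for illustration, not the authors' actual algorithm.

        import numpy as np

        def senatus_sketch(flow_matrix, n_senators=50, vote_threshold=3, classifier=None):
            # Election: choose the heaviest flows as "senator" flows that
            # approximate the (usually huge) full set of traffic flows.
            totals = flow_matrix.sum(axis=1)
            senators = flow_matrix[np.argsort(totals)[-n_senators:]]

            # Voting: run a simple per-senator anomaly test (z-score here) and
            # correlate detections to find the most likely anomalous time bins.
            mu = senators.mean(axis=1, keepdims=True)
            sd = senators.std(axis=1, keepdims=True) + 1e-9
            votes = (np.abs((senators - mu) / sd) > 3.0).sum(axis=0)
            anomalous_bins = np.flatnonzero(votes >= vote_threshold)

            # Decision: hand the senator features of each anomalous bin to a
            # trained classifier to label the root cause (e.g. scan vs. DoS/DDoS).
            causes = {}
            if classifier is not None:
                for b in anomalous_bins:
                    causes[int(b)] = classifier.predict(senators[:, b][None, :])[0]
            return anomalous_bins, causes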

    Traffic Verification for Network Anomaly Detection in Sensor Networks

    The traffic injected into the network is increasing every day. It can be either normal or anomalous. Anomalous traffic is a variation in the communication pattern from the normal one, and hence anomaly detection is an important procedure in ensuring network resiliency. Probabilistic models can be used to model traffic for anomaly detection. In this paper, we use a Gaussian Mixture Model for traffic verification. The traffic is captured and given to the model for verification. Traffic which obeys the model is normal, and traffic which disobeys it is anomalous. Analysis shows that the proposed system has better performance in terms of delay, throughput and packet delivery ratio.
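    A minimal sketch of GMM-based traffic verification, assuming scikit-learn and two invented per-window features (packet rate and mean packet size); the 1st-percentile likelihood threshold is likewise an assumption, not the paper's procedure.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Fit the mixture on feature vectors from known-normal traffic windows.
        rng = np.random.default_rng(0)
        normal_features = rng.normal(loc=[100.0, 512.0], scale=[10.0, 40.0], size=(1000, 2))

        gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
        gmm.fit(normal_features)

        # Traffic that "obeys" the model scores a high log-likelihood; windows
        # below a threshold learned from the normal data are flagged as anomalies.
        threshold = np.percentile(gmm.score_samples(normal_features), 1)

        def verify(window_features):
            return np.where(gmm.score_samples(window_features) < threshold,
                            "anomalous", "normal")

        print(verify(np.array([[102.0, 500.0], [400.0, 64.0]])))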

    Anomaly Detection in IoT: Methods, Techniques and Tools

    [Abstract] Nowadays, the Internet of Things (IoT) network, as a system of interrelated computing devices with the ability to transfer data over a network, is present in many scenarios of everyday life. Understanding how traffic behaves is easier if the real environment is replicated in a virtualized environment. In this paper, we propose a methodology to develop a systematic approach to dataset analysis for detecting traffic anomalies in an IoT network. The reader will become familiar with the specific techniques and tools that are used. The methodology has five stages: definition of the scenario, injection of anomalous packets, dataset analysis, implementation of classification algorithms for anomaly detection, and conclusions.
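    A hedged sketch of the fourth stage (classification algorithms for anomaly detection), assuming scikit-learn, a placeholder CSV of labelled flows, and invented column names:

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        # Labelled dataset mixing normal traffic with the injected anomalous
        # packets; the file and column names are placeholders, not the paper's.
        df = pd.read_csv("iot_flows_labelled.csv")
        X = df.drop(columns=["label"])   # e.g. flow duration, byte and packet counts
        y = df["label"]                  # "normal" vs. "anomalous"

        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))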

    Anomaly Extraction Using Histogram-Based Detector

    Nowadays, network traffic monitoring and network performance are important aspects of computer science. Anomaly extraction is the method of detecting, in a large set of flows observed during an anomalous time interval, the flows associated with one or more anomalous events. It is an important problem, essential for applications ranging from root cause analysis to attack mitigation, and also for testing anomaly detectors. In this paper, we use the meta-data provided by a histogram-based detector to detect and identify suspicious flows, and then apply association rule mining to find the anomalous flows. By using the rich traffic data from the meta-data of the histogram-based detector, we can reduce the classification cost. The proposed anomaly extraction method reduces the working time required for analyzing alarms, making the system more practical. DOI: 10.17762/ijritcc2321-8169.15011
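    A minimal sketch of the extraction step: given suspicious flows pre-filtered with the histogram detector's meta-data, count frequent attribute itemsets (an Apriori-style pass) to summarise the anomalous flows; the flow attributes and support threshold are invented for illustration.

        from collections import Counter
        from itertools import combinations

        # Each suspicious flow is a set of (attribute, value) items.
        suspicious_flows = [
            {("dst_port", 80), ("proto", "tcp"), ("src_ip", "10.0.0.5")},
            {("dst_port", 80), ("proto", "tcp"), ("src_ip", "10.0.0.9")},
            {("dst_port", 80), ("proto", "tcp"), ("src_ip", "10.0.0.7")},
            {("dst_port", 53), ("proto", "udp"), ("src_ip", "10.0.0.5")},
        ]

        def frequent_itemsets(flows, min_support=0.5, max_len=2):
            # Itemsets appearing in at least min_support of the suspicious
            # flows describe the dominant anomalous traffic pattern.
            counts = Counter()
            for flow in flows:
                for k in range(1, max_len + 1):
                    for itemset in combinations(sorted(flow), k):
                        counts[itemset] += 1
            n = len(flows)
            return {s: c / n for s, c in counts.items() if c / n >= min_support}

        for itemset, support in sorted(frequent_itemsets(suspicious_flows).items()):
            print(itemset, support)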

    Network Forensic Log Analysis

    Network forensic log analysis is the capturing, recording, and analysis of network events in order to discover the source of security attacks. An investigator needs to back up these recorded data to free up recording media and to preserve the data for future analysis, and must perform the network forensics process to determine which type of attack occurred over a network and to trace the culprit. In the cyber-crime world, huge volumes of log and transactional data are produced, which amounts to plenty of data to store and analyze, and it is difficult for forensic investigators to keep racing against time to find clues and analyze the collected data. Network forensic analysis involves network traces and detection of attacks; the traces include Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures. Network forensics is a branch of digital forensics that focuses on the monitoring and analysis of network traffic. Unlike other areas of digital forensics that focus on stored or static data, network forensics deals with volatile and dynamic data. It generally has two uses. The first, relating to security, involves detecting anomalous traffic and identifying intrusions. The second, relating to law enforcement under the chain-of-custody rule, involves capturing and analyzing network traffic and can include tasks such as reassembling transferred files. In "stop, look and listen" systems, each packet is analysed in a rudimentary way in memory and only certain information is saved for later analysis. Building on this analysis, we propose to archive data using various tools and provide a "unified structure" based on a standard forensic process. This unified, structured IDS data is stored and preserved in one place, so that it can be presented as evidence in court by the forensic analyst. DOI: 10.17762/ijritcc2321-8169.15053
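    A minimal sketch of one possible "unified structure" in Python, normalising heterogeneous records (IDS alerts, firewall logs, packet captures) into a common form and hashing each record to support the chain-of-custody rule; the field set is an assumption, not the proposed standard.

        import hashlib
        import json
        from dataclasses import asdict, dataclass

        @dataclass
        class UnifiedEvent:
            timestamp: str
            source: str   # e.g. "ids", "firewall", "pcap"; assumed labels
            src_ip: str
            dst_ip: str
            event: str
            raw: str      # the original record, preserved verbatim

        def archive(event: UnifiedEvent) -> dict:
            # Hash the serialised record so later analysis can show the
            # archived evidence was not altered.
            record = asdict(event)
            record["sha256"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            return record

        alert = UnifiedEvent("2024-01-01T12:00:00Z", "ids",
                             "10.0.0.5", "192.168.1.2",
                             "port-scan detected", "raw IDS line ...")
        print(json.dumps(archive(alert), indent=2))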

    Tensor Based Monitoring of Large-Scale Network Traffic

    Network monitoring systems are important for network operators to easily analyze behavioral trends in flow data. As networks become larger and more complex, the data becomes more complex as well, with increased size and more variables. This increase in dimensionality lends itself to tensor-based analysis of network data, as tensors are arbitrarily sized multi-dimensional objects. Tensor-based network monitoring methods have been explored in recent years through work at Carnegie Mellon University on their algorithm DenseAlert, which identifies anomalous events in tensors through quick detection of dense sub-tensors in positive-valued tensors. However, in our experiments, DenseAlert fails on larger datasets. Drawing from DenseAlert, we developed an algorithm called RED Alert that uses recursive filtering and expansion to handle anomaly detection in large tensors of positive- and negative-valued data. This is done through the use of network parameters that are structured in a hierarchical fashion. That is, network traffic is first modeled at low-granularity data (e.g. host country), and events detected as anomalous in lower spaces are tracked down to higher-granularity data (e.g. host IP). The tensors are built on-the-fly from streaming data, filtering the data to consider only the parameters deemed anomalous at previous granularity levels. RED Alert is showcased on two network monitoring examples, packet loss detection and botnet detection, comparing results to DenseAlert. In both cases, RED Alert was able to detect suspicious events and identify the root cause of the behavior down to a sole IP. RED Alert was developed as part of a greater project, InSight2, which provides several network monitoring dashboards to aid network operators. This required the development of a tensor library that works in the context of InSight2, as well as a dashboard that can run the algorithm and display the results in meaningful ways.
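    A minimal sketch of the recursive filter-and-expand idea, with a simple z-score row test standing in for dense sub-tensor detection; the record fields, hierarchy levels, and detector are illustrative assumptions, not the actual RED Alert algorithm.

        import numpy as np

        def anomalous_rows(tensor, z_thresh=3.0):
            # Flag rows whose total mass is a z-score outlier; a stand-in
            # for DenseAlert-style dense sub-tensor detection.
            mass = tensor.sum(axis=1)
            z = (mass - mass.mean()) / (mass.std() + 1e-9)
            return np.flatnonzero(np.abs(z) > z_thresh)

        def red_alert_sketch(records, hierarchy=("country", "ip"), bin_key="hour"):
            # Detect at coarse granularity (e.g. host country), then rebuild a
            # finer tensor (e.g. host IP) restricted to the flagged entities.
            suspects, prev_level = None, None
            for level in hierarchy:
                subset = [r for r in records
                          if suspects is None or r[prev_level] in suspects]
                keys = sorted({r[level] for r in subset})
                bins = sorted({r[bin_key] for r in subset})
                tensor = np.zeros((len(keys), len(bins)))
                for r in subset:
                    tensor[keys.index(r[level]), bins.index(r[bin_key])] += r["bytes"]
                suspects = {keys[i] for i in anomalous_rows(tensor)}
                prev_level = level
            return suspects  # e.g. the sole IP driving the anomalous behaviour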