
    An Analysis of Botnet Attack for SMTP Server using Software Define Network (SDN)

    SDN architecture improves on traditional network architectures by using software abstraction to provide centralized control of an entire network. It offers manageable network infrastructures consisting of millions of computing devices and software components. In this work, we present a multi-domain SDN architecture integrated with a Spamhaus server. The proposed method allows SDN controllers to update the Spamhaus server with the latest detected spam signatures, which helps prevent spam email from entering other SDN domains. We also discuss a method for analyzing SMTP spam frames using a decision tree algorithm. We use the Mininet tool to simulate the multi-domain SDNs together with the Spamhaus server. The simulation results show that the packet Retransmission Timeout (RTO) between server and client can help to detect SMTP spam frames.
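    The abstract names only the RTO as a concrete signal, so the following is a minimal sketch of the decision-tree step under assumed inputs: a classifier is trained on hypothetical per-frame features (RTO, frame size, recipient count) with synthetic labels. It illustrates the kind of model the paper describes, not its actual feature set or data.

        # Minimal sketch of decision-tree classification of SMTP frames.
        # The features and the labeling rule below are hypothetical; only
        # the use of RTO as a signal comes from the abstract.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)

        # Hypothetical per-frame features: [rto_seconds, frame_size_bytes,
        # recipients_per_message]. Labels: 1 = spam, 0 = legitimate.
        X = np.column_stack([
            rng.exponential(1.0, 1000),   # RTO between server and client
            rng.normal(600, 150, 1000),   # frame size
            rng.poisson(2, 1000),         # recipient count
        ])
        y = (X[:, 0] > 2.0).astype(int)   # toy rule: long RTOs tend to be spam

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))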

    Inferring malicious network events in commercial ISP networks using traffic summarisation

    With the recent increases in bandwidth available to home users, traffic rates for commercial national networks have also been increasing rapidly. This presents a problem for any network monitoring tool, as the traffic rates they are expected to monitor are rising on a monthly basis. Security within these networks is paramount, as they are now an accepted home of trade and commerce. Core networks have been demonstrably and repeatedly open to attack; these events have had significant material costs to high-profile targets. Network monitoring is an important part of network security, providing information about potential security breaches and aiding understanding of their impact. Monitoring at high data rates is a significant problem, both in terms of processing the information at line rates and in terms of presenting the relevant information to the appropriate persons or systems. This thesis suggests that the use of summary statistics, gathered over a number of packets, is a sensible and effective way of coping with high data rates. A methodology for discovering which metrics are appropriate for classifying significant network events using statistical summaries is presented. It is shown that the statistical measures found with this methodology can be used effectively as a metric for defining periods of significant anomaly, and further for classifying these anomalies as legitimate or otherwise. In a laboratory environment, these metrics were used to detect DoS traffic representing as little as 0.1% of the overall network traffic. The metrics discovered were then analysed to demonstrate that they are appropriate and rational metrics for the detection of network-level anomalies. These metrics were shown to have distinctive characteristics during DoS through the analysis of live network observations taken during DoS events. This work was implemented and operated within a live system, at multiple sites within the core of a commercial ISP network. The statistical summaries are generated at city-based points of presence and gathered centrally to allow for spatial and topological correlation of security events. The architecture chosen was shown to be flexible in its application. The system was used to detect the level of VoIP traffic present on the network through the implementation of packet size distribution analysis in a multi-gigabit environment. It was also used to detect unsolicited SMTP generators injecting messages into the core. Monitoring in a commercial network environment is subject to data protection legislation. Accordingly, the system presented processed only network and transport layer headers, all other data being discarded at the capture interface. The system described in this thesis was operational for a period of 6 months, during which a set of over 140 network anomalies, both malicious and benign, were observed over a range of localities. The system design, example anomalies and metric analysis form the majority of this thesis.
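    As a rough illustration of the summarisation idea, the sketch below assumes synthetic packet-size records in place of live header captures: packets are grouped into fixed windows, per-window statistics (count, mean size, size-distribution entropy) are computed, and windows deviating strongly from a baseline are flagged. The metrics and threshold are illustrative choices, not those derived in the thesis.

        # Minimal sketch of per-window traffic summarisation with synthetic
        # data; the specific statistics and the 0.5x-baseline threshold are
        # assumptions for illustration.
        import math
        from collections import Counter

        def window_stats(packet_sizes):
            """Summary statistics for one window of packet sizes (bytes)."""
            n = len(packet_sizes)
            mean = sum(packet_sizes) / n
            counts = Counter(s // 64 for s in packet_sizes)  # 64-byte size bins
            entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
            return {"count": n, "mean_size": mean, "size_entropy": entropy}

        # Synthetic traffic: 59 mixed-size windows plus one flood of tiny
        # packets, loosely resembling a DoS burst.
        normal = [[1200, 64, 576, 1500, 40] * 100 for _ in range(59)]
        flood = [[40] * 5000]
        windows = [window_stats(w) for w in normal + flood]

        # Flag windows whose size entropy falls far below the early baseline.
        baseline = sum(w["size_entropy"] for w in windows[:30]) / 30
        for i, w in enumerate(windows):
            if w["size_entropy"] < 0.5 * baseline:
                print(f"window {i}: possible anomaly, stats={w}")

    In a deployment like the one the thesis describes, only these compact summaries, rather than the packets themselves, would need to be shipped from each point of presence to a central correlator.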

    Analysis of e-mail attachment signatures for potential use by intrusion detection systems

    Today, an Intrusion Detection System (IDS) is almost a necessity. The effectiveness of an IDS depends on the number of parameters it can monitor to report malicious activity. Current Intrusion Detection Systems monitor packet headers only. This thesis investigates the possibility of monitoring network packet data as one of the parameters for an IDS. This is done by finding a pattern in each type of payload; this pattern might then be related to the application to which it belongs. Based on this pattern, an attempt is made to determine whether there is a difference between packets generated by different applications. This investigation limits the classification to packets generated by e-mail attachments. The frequency of characters in packet data is used to generate a pattern; this frequency is limited to the Base64 alphabet. Based on these patterns, certain e-mail attachments can be related to the source type of the attached file.
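    A minimal sketch of the character-frequency idea follows, assuming base64-encoded payloads stand in for captured packet data: each payload is reduced to a normalised frequency profile over the Base64 alphabet, and profiles are compared to see whether different source types are distinguishable. The profile and similarity functions are illustrative choices, not the thesis's exact method.

        # Minimal sketch: Base64 symbol-frequency profiles of two toy
        # "attachments" and a cosine-similarity comparison. The inputs are
        # fabricated stand-ins for real attachment payloads.
        import base64
        import math
        import string

        B64_ALPHABET = (string.ascii_uppercase + string.ascii_lowercase
                        + string.digits + "+/")

        def b64_profile(payload: bytes):
            """Normalised frequency of each Base64 symbol in a payload."""
            encoded = base64.b64encode(payload).decode().rstrip("=")
            total = len(encoded)
            return [encoded.count(ch) / total for ch in B64_ALPHABET]

        def cosine(p, q):
            dot = sum(a * b for a, b in zip(p, q))
            norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
            return dot / norm

        # Toy attachments with very different byte statistics.
        text_a = b"The quick brown fox jumps over the lazy dog. " * 200
        text_b = b"Pack my box with five dozen liquor jugs. " * 200
        binary_like = bytes(range(256)) * 40

        print("text vs text:  ", cosine(b64_profile(text_a), b64_profile(text_b)))
        print("text vs binary:", cosine(b64_profile(text_a), b64_profile(binary_like)))

    In practice, profiles for known attachment types would be collected offline and matched against live payloads by the IDS.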

    Who Watches the Watchers: A Multi-Task Benchmark for Anomaly Detection

    A driver in the rise of IoT systems has been the relative ease with which it is possible to create specialized-but-adaptable deployments from cost-effective components. Such components tend to be relatively unreliable and resource-poor, but are increasingly widely connected. As a result, IoT systems are subject both to component failures and to the attacks that are an inevitable consequence of wide-area connectivity. Anomaly detection systems are therefore a cornerstone of effective operation; however, in the literature there is no established common basis for the evaluation of anomaly detection systems for these environments. No common set of benchmarks or metrics exists, and authors typically provide results for just one scenario. This is profoundly unhelpful to designers of IoT systems, who need to make a choice about anomaly detection that takes into account both ease of deployment and likely detection performance in their context. To address this problem, we introduce Aftershock, a multi-task benchmark. We adapt and standardize an array of datasets from the public literature into anomaly-detection-specific benchmarks. We then apply a diverse set of existing anomaly detection algorithms to our datasets, producing a set of performance baselines for future comparisons. Results are reported via a dedicated online platform located at https://aftershock.dev, allowing system designers to evaluate the general applicability and practical utility of various anomaly detection models. This approach of public evaluation against common criteria is inspired by the immensely useful community resources found in areas such as natural language processing, recommender systems, and reinforcement learning. We collect, adapt, and make available 10 anomaly detection tasks, which we use to evaluate 6 state-of-the-art solutions as well as common baselines. We offer researchers a submission system to evaluate future solutions in a transparent manner, and we are actively engaging with academic and industry partners to expand the set of available tasks. Moreover, we are exploring options to add hardware-in-the-loop evaluation. As a community contribution, we invite researchers to train their own models (or those reported by others) on the public development datasets available on the online platform, submitting them for independent evaluation and reporting of results against others.
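    Since the abstract does not show the benchmark's data format or loading API, the sketch below uses a synthetic dataset and an off-the-shelf detector to illustrate the kind of baseline evaluation described: fit on normal data, score a mixed test set, and report ROC-AUC, a common anomaly-detection metric.

        # Minimal sketch of a baseline anomaly-detection evaluation. The
        # dataset is synthetic; real Aftershock tasks would replace it.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)

        # Normal sensor readings plus a small fraction of anomalous ones.
        normal = rng.normal(0.0, 1.0, size=(2000, 8))
        anomalies = rng.normal(4.0, 1.0, size=(40, 8))
        X_test = np.vstack([normal[-500:], anomalies])
        y_test = np.concatenate([np.zeros(500), np.ones(40)])  # 1 = anomaly

        detector = IsolationForest(random_state=0).fit(normal[:1500])
        scores = -detector.score_samples(X_test)  # higher = more anomalous
        print("ROC-AUC:", roc_auc_score(y_test, scores))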