234 research outputs found

    Detection of encrypted traffic generated by peer-to-peer live streaming applications using deep packet inspection

    The number of applications using the peer-to-peer (P2P) networking paradigm and their popularity have grown substantially over the last decade. They evolved from file-sharing applications into media streaming ones. Nowadays these applications commonly encrypt the communication contents or employ protocol obfuscation techniques. In this dissertation, an investigation was conducted to identify encrypted traffic flows generated by three of the most popular P2P live streaming applications: TVUPlayer, Livestation and GoalBit. For this work, a test-bed that could simulate a near-real scenario was created, and traffic was captured from a great variety of applications. The proposed method resorts to Deep Packet Inspection (DPI), so the payload of the packets was analysed in order to find repeated patterns, which were later used to create a set of SNORT rules that can detect key network packets generated by these applications, allowing the traffic flows to be correctly classified. After it was discovered that Livestation no longer works over P2P, only the two rules created up to that point were used for it. For TVUPlayer, several rules were created from the traffic it generated, and they achieved a good precision rate. Several rules were also created for GoalBit, covering four scenarios: with and without encryption using the tracker-based transmission option, and with and without encryption using the trackerless transmission option (where the Kademlia protocol is used). The method was evaluated experimentally on the test-bed created for that purpose, showing that the accuracy of the rule set for GoalBit is 97%. Funded by Fundação para a CiĂȘncia e a Tecnologia (FCT).
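As a toy illustration of the pattern-mining step described above, the sketch below (plain Python, with made-up payloads and thresholds, not the dissertation's actual signatures or rules) counts repeated payload prefixes across captured packets and then uses the frequent ones as DPI signatures:

```python
from collections import Counter

def common_prefixes(payloads, length=4, min_count=3):
    """Count fixed-length prefixes across captured payloads; prefixes that
    repeat often are candidate signatures for DPI rules."""
    counts = Counter(p[:length] for p in payloads if len(p) >= length)
    return [prefix for prefix, n in counts.items() if n >= min_count]

def matches_signature(payload, signatures):
    """Flag a packet whose payload starts with any known signature,
    mimicking a Snort 'content' match anchored at offset 0."""
    return any(payload.startswith(sig) for sig in signatures)

# Hypothetical captures: three flows sharing a handshake prefix.
captures = [b"\x13P2P\x01aaa", b"\x13P2P\x01bbb", b"\x13P2P\x01ccc", b"HTTP/1.1"]
sigs = common_prefixes(captures, length=4, min_count=3)
print(matches_signature(b"\x13P2P\x01zzz", sigs))   # True
print(matches_signature(b"GET / HTTP/1.1", sigs))   # False
```

A real Snort rule derived this way would carry the discovered bytes in a `content` option; the thresholds here are illustrative only.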

    Enhancing Snort IDS performance using data mining

    Intrusion detection systems (IDSs) such as Snort apply deep packet inspection to detect intrusions. Usually, these are rule-based systems, where each incoming packet is matched against a set of rules. Each rule consists of two parts: the rule header and the rule options. The rule header is compared with the packet header. The rule options usually contain a signature string that is matched against the packet content using an efficient string matching algorithm. The traditional approach to IDS packet inspection checks a packet against the detection rules by scanning from the first rule in the set and continuing through all the rules until a match is found. This approach becomes inefficient if the number of rules is large and if the majority of the packets match rules located at the end of the rule set. In this thesis, we propose an intelligent predictive technique for packet inspection based on data mining. We consider each rule in a rule set as a ‘class’. A classifier is first trained with labeled training data. Each such labeled data point contains packet header information, packet content summary information, and the corresponding class label (i.e. the rule number with which the packet matches). Then the classifier is used to classify new incoming packets. The predicted class, i.e. rule, is checked against the packet to see if this packet really matches the predicted rule. If it does, the corresponding action (i.e. alert) of the rule is taken. Otherwise, if the prediction of the classifier is wrong, we fall back to the traditional way of matching rules. The advantage of this intelligent predictive packet matching is that it offers much faster rule matching. We have shown, both analytically and empirically, that even with millions of real network traffic packets and hundreds of rules, the classifier can achieve very high accuracy, thereby making the IDS several times faster at matching decisions
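The predict-then-verify idea can be sketched in a few lines. Everything here is hypothetical (toy rules keyed on a destination port, a dictionary standing in for the trained classifier), not the thesis's actual rule set or model:

```python
def linear_match(packet, rules):
    """Traditional scan: return the index of the first rule that matches."""
    for i, rule in enumerate(rules):
        if rule(packet):
            return i
    return None

def predictive_match(packet, rules, predict):
    """Try the classifier's predicted rule first; fall back to the full
    linear scan only when the prediction is wrong."""
    guess = predict(packet)
    if guess is not None and rules[guess](packet):
        return guess                     # fast path: a single rule check
    return linear_match(packet, rules)   # slow path: traditional scan

# Hypothetical rules combining header (dport) and payload conditions.
rules = [
    lambda p: p["dport"] == 80 and b"cmd.exe" in p["payload"],
    lambda p: p["dport"] == 53 and len(p["payload"]) > 512,
    lambda p: p["dport"] == 22,
]
# A stand-in "classifier" that predicts a rule from the header alone.
predict = lambda p: {80: 0, 53: 1, 22: 2}.get(p["dport"])

pkt = {"dport": 22, "payload": b""}
print(predictive_match(pkt, rules, predict))  # 2, without scanning rules 0-1
```

When the prediction verifies, only one rule is evaluated instead of scanning the whole set, which is where the speed-up comes from.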

    Duo: Software Defined Intrusion Tolerant System Using Dual Cluster

    An intrusion tolerant system (ITS) is a network security system that is composed of redundant virtual servers that are online only in a short time window, called the exposure time. The servers are periodically recovered to their clean state, and any infected servers are refreshed again, so attackers have insufficient time to succeed in breaking into the servers. However, there are conflicting interests in determining the exposure time: short for security and long for performance. In other words, a short exposure time can increase security but requires more servers to run in order to process requests in a timely manner. In this paper, we propose Duo, an ITS incorporated in SDN, which can reduce exposure time without consuming additional computing resources. In Duo, there are two types of servers: some servers with long exposure time (White servers) and others with short exposure time (Gray servers). Duo classifies traffic into benign and suspicious with the help of SDN/NFV technology, which also allows dynamically forwarding the classified traffic to White and Gray servers, respectively, based on the classification result. By reducing the exposure time of one set of servers, Duo decreases the average exposure time. We have implemented a prototype of Duo and evaluated its performance in a realistic environment
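The averaging argument can be made concrete with a one-line computation. The server counts and window lengths below are invented for illustration, not taken from the paper:

```python
def average_exposure(white_count, white_exposure, gray_count, gray_exposure):
    """Average exposure time across a dual cluster: a few White servers
    with a long window plus many Gray servers recovered frequently."""
    total = white_count + gray_count
    return (white_count * white_exposure + gray_count * gray_exposure) / total

# Hypothetical settings: 2 White servers exposed 60 s, 8 Gray servers 10 s.
print(average_exposure(2, 60, 8, 10))  # 20.0, far below a uniform 60 s window
```

Shortening the window only for the Gray set pulls the average down without forcing every server onto the aggressive recovery schedule.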

    Classification hardness for supervised learners on 20 years of intrusion detection data

    This article consolidates the analysis of established (NSL-KDD) and new intrusion detection datasets (ISCXIDS2012, CICIDS2017, CICIDS2018) through the use of supervised machine learning (ML) algorithms. The uniformity of the analysis procedure opens up the option to compare the obtained results. It also provides a stronger foundation for conclusions about the efficacy of supervised learners on the main classification task in network security. This research is motivated in part by the lack of adoption of these modern datasets. Starting with a broad scope, classification by algorithms from different families on both established and new datasets, expands the existing foundation and reveals the most opportune avenues for further inquiry. After obtaining baseline results, the classification task was made more difficult by reducing the available data to learn from, both horizontally and vertically. The data reduction is included as a stress test to verify whether the very high baseline results hold up under increasingly harsh constraints. Ultimately, this work contains the most comprehensive set of results on the topic of intrusion detection through supervised machine learning. Researchers working on algorithmic improvements can compare their results to this collection, knowing that all results reported here were gathered through a uniform framework. This work's main contributions are the outstanding classification results on the current state-of-the-art datasets for intrusion detection and the conclusion that these methods show remarkable resilience in classification performance even when aggressively reducing the amount of data to learn from
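"Horizontal" and "vertical" reduction can be illustrated on a toy dataset. The fraction, the column indices, and the synthetic rows below are all invented, not the article's actual experimental settings:

```python
import random

def reduce_horizontally(rows, fraction, seed=0):
    """Keep only a random fraction of the training rows (fewer examples)."""
    rng = random.Random(seed)
    k = max(1, int(len(rows) * fraction))
    return rng.sample(rows, k)

def reduce_vertically(rows, keep_columns):
    """Keep only a subset of the feature columns (fewer features per row)."""
    return [[row[c] for c in keep_columns] for row in rows]

# Hypothetical 4-feature dataset with 100 rows.
data = [[i, i % 2, i % 3, i % 5] for i in range(100)]
small = reduce_horizontally(data, 0.25)    # 25 rows left to learn from
narrow = reduce_vertically(small, [0, 2])  # 2 features left per row
print(len(small), len(narrow[0]))          # 25 2
```

A stress test of the kind the article describes would retrain the classifier on `narrow` and track how accuracy degrades as the fraction and column set shrink.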

    Energy Efficient Hardware Accelerators for Packet Classification and String Matching

    This thesis focuses on the design of new algorithms and energy-efficient, high-throughput hardware accelerators that implement packet classification and fixed string matching. These computationally heavy and memory-intensive tasks are used by networking equipment to inspect all packets at wire speed. The constant growth in Internet usage has made them increasingly difficult to implement at core network line speeds. Packet classification is used to sort packets into different flows by comparing their headers to a list of rules. A flow is used to decide a packet’s priority and the manner in which it is processed. Fixed string matching is used to inspect a packet’s payload to check if it contains any strings associated with known viruses, attacks or other harmful activities. The contributions of this thesis towards the area of packet classification are hardware accelerators that allow packet classification to be implemented at core network line speeds when classifying packets using rulesets containing tens of thousands of rules. The hardware accelerators use modified versions of the HyperCuts packet classification algorithm. An adaptive clocking unit is also presented that dynamically adjusts the clock speed of a packet classification hardware accelerator so that its processing capacity matches the processing needs of the network traffic. This keeps dynamic power consumption to a minimum. Contributions made towards the area of fixed string matching include a new algorithm that builds a state machine that is used to search for strings with the aid of default transition pointers. The use of default transition pointers keeps memory consumption low, allowing state machines capable of searching for thousands of strings to be small enough to fit in the on-chip memory of devices such as FPGAs. A hardware accelerator is also presented that uses these state machines to search through the payloads of packets for strings at core network line speeds
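Default transition pointers play the same role as the failure links in an Aho-Corasick-style machine: a state stores only its explicit transitions, and everything else falls through a single default pointer, which is what keeps per-state memory small. The sketch below is a minimal software version of that idea, not the thesis's hardware algorithm:

```python
from collections import deque

def build_machine(patterns):
    """Multi-pattern matcher where sparse goto tables are backed by a
    single default-transition (failure) pointer per state."""
    goto = [{}]          # goto[state][char] -> next state (explicit edges)
    fail = [0]           # default transition taken on any other char
    out = [set()]        # patterns recognised when a state is reached
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    # Breadth-first pass fills in the default pointers.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, machine):
    """Scan text once, following default pointers on mismatches;
    returns (start_index, pattern) for every hit."""
    goto, fail, out = machine
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

machine = build_machine(["he", "she", "his"])
print(sorted(search("ushers", machine)))  # [(1, 'she'), (2, 'he')]
```

Because each state holds only its explicit edges plus one pointer, a machine for thousands of strings stays compact, which is the property the thesis exploits to fit the tables in on-chip FPGA memory.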

    A Review on Features’ Robustness in High Diversity Mobile Traffic Classifications

    Mobile traffic is becoming more dominant due to the growing usage of mobile devices and the proliferation of IoT. The influx of mobile traffic introduces new challenges in traffic classification, namely diversity complexity and behavioral dynamism complexity. Existing traffic classification methods are designed for classifying standard protocols and user applications with more deterministic behaviors and small diversity. Currently, flow statistics, payload signatures and heuristic traffic attributes are some of the most effective features used to discriminate traffic classes. In this paper, we investigate the correlations of these features to the less deterministic user application traffic classes based on the corresponding classification accuracy. Then, we evaluate the impact of large-scale classification on features' robustness based on signs of diminishing accuracy. Our experimental results consolidate the need for unsupervised feature learning to address the dynamism of mobile application behavioral traits for accurate classification of rapidly growing mobile traffic

    MAGMA network behavior classifier for malware traffic

    Malware is a major threat to the security and privacy of network users. A large variety of malware is typically spread over the Internet, hiding in benign traffic. New types of malware appear every day, challenging both the research community and security companies to improve malware identification techniques. In this paper we present MAGMA, MultilAyer Graphs for MAlware detection, a novel malware behavioral classifier. Our system is based on a Big Data methodology, driven by real-world data obtained from traffic traces collected in an operational network. The methodology we propose automatically extracts patterns related to a specific input event, i.e., a seed, from the enormous amount of events the network carries. By correlating such activities over (i) time, (ii) space, and (iii) network protocols, we build a Network Connectivity Graph that captures the overall “network behavior” of the seed. We next extract features from the Connectivity Graph and design a supervised classifier. We run MAGMA on a large dataset collected from a commercial Internet Provider where 20,000 Internet users generated more than 330 million events. Only 42,000 are flagged as malicious by a commercial IDS, which we consider as an oracle. Using this dataset, we experimentally evaluate MAGMA's accuracy and its robustness to parameter settings. Results indicate that MAGMA reaches 95% accuracy, with limited false positives. Furthermore, MAGMA proves able to identify suspicious network events that the IDS ignored
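A drastically simplified version of the seed-expansion step might look as follows. The time window, the event schema, and the addresses are all invented for illustration; MAGMA's real correlation spans multiple layers and protocols:

```python
def connectivity_graph(events, seed, window=5.0):
    """Toy seed expansion: link the seed to every event close to it in
    time, labelling each edge with the event's protocol."""
    nodes = {seed["host"]}
    edges = set()
    for e in events:
        if e is seed or abs(e["t"] - seed["t"]) > window:
            continue
        nodes.add(e["host"])
        edges.add((seed["host"], e["host"], e["proto"]))
    return nodes, edges

# Hypothetical event log: a DNS seed followed by a nearby HTTP contact.
events = [
    {"t": 0.0, "host": "10.0.0.5", "proto": "DNS"},
    {"t": 1.2, "host": "203.0.113.9", "proto": "HTTP"},
    {"t": 99.0, "host": "198.51.100.3", "proto": "HTTP"},  # outside window
]
nodes, edges = connectivity_graph(events, events[0])
features = (len(nodes), len(edges))  # graph features fed to a classifier
print(sorted(nodes))  # ['10.0.0.5', '203.0.113.9']
```

Scalar features like node and edge counts (or per-protocol edge counts) are the kind of Connectivity Graph summaries a supervised classifier could then consume.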

    Discovery of Malicious Attacks to Improve Mobile Collaborative Learning (MCL)

    Mobile collaborative learning (MCL) is a widely acknowledged and growing paradigm in educational institutions and several organizations across the world. It exhibits the intellectual synergy of various combined minds handling a problem and stimulates the social activity of mutual understanding. To improve and foster the baseline of MCL, several supporting architectures and frameworks, including a number of mobile applications, have been introduced. Limited research has been reported that particularly focuses on enhancing the security of these paradigms and providing secure MCL to users. The paper handles the issue of a rogue DHCP server that affects and disrupts network resources during MCL. A rogue DHCP server is an unauthorized server that releases incorrect IP addresses to users and sniffs their traffic illegally. The contribution specifically provides privacy to users and enhances the security aspects of the mobile supported collaborative framework (MSCF). The paper introduces multi-frame signature-cum anomaly-based intrusion detection systems (MSAIDS), supported with novel algorithms through the addition of new rules in the IDS and a mathematical model. The major target of the contribution is to detect malicious attacks and block the illegal activities of the rogue DHCP server. This innovative security mechanism reinforces the confidence of users, protects the network from illicit intervention and restores the privacy of users. Finally, the paper validates the idea through simulation and compares the findings with other existing techniques. Comment: 20 pages and 11 figures; International Journal of Computer Networks and Communications (IJCNC), July 2012, Volume 4. Number
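A basic signature-style check for a rogue DHCP server compares the server identifier in observed OFFER messages against an allowlist. The field names and addresses below are invented, and MSAIDS itself combines this kind of rule with anomaly detection rather than a bare allowlist:

```python
def detect_rogue_offers(offers, authorized):
    """Flag every DHCP server identifier seen in OFFER messages that is
    not in the administrator's authorized set."""
    return sorted({o["server_id"] for o in offers} - set(authorized))

# Hypothetical OFFERs captured on the network segment.
offers = [
    {"server_id": "192.168.1.1", "yiaddr": "192.168.1.50"},   # legitimate
    {"server_id": "192.168.1.66", "yiaddr": "10.9.9.9"},      # rogue
]
print(detect_rogue_offers(offers, ["192.168.1.1"]))  # ['192.168.1.66']
```

A deployment would raise an IDS alert and block traffic from the flagged identifier, which is the blocking behaviour the abstract attributes to MSAIDS.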

    Anomaly-based Correlation of IDS Alarms

    An Intrusion Detection System (IDS) is one of the major techniques for securing information systems and keeping pace with current and potential threats and vulnerabilities in computing systems. It is an indisputable fact that the art of detecting intrusions is still far from perfect, and IDSs tend to generate a large number of false alarms. Hence a human inevitably has to validate these alarms before any action can be taken. As IT infrastructure becomes larger and more complicated, the number of alarms that need to be reviewed can escalate rapidly, making this task very difficult to manage. The need for an automated correlation and reduction system is therefore very much evident. In addition, alarm correlation is valuable in providing the operators with a more condensed view of potential security issues within the network infrastructure. The thesis embraces a comprehensive evaluation of the problem of false alarms and a proposal for an automated alarm correlation system. A critical analysis of existing alarm correlation systems is presented along with a description of the need for an enhanced correlation system. The study concludes that whilst a large number of works have been carried out to improve correlation techniques, none of them was perfect. They either required an extensive level of domain knowledge from the human experts to effectively run the system or were unable to provide high-level information about the false alerts for future tuning. The overall objective of the research has therefore been to establish an alarm correlation framework and system which enables the administrator to effectively group alerts from the same attack instance and subsequently reduce the volume of false alarms without the need for domain knowledge. The achievement of this aim has comprised the proposal of an attribute-based approach, which is used as a foundation to systematically develop an unsupervised two-stage correlation technique. 
From this foundation, a novel SOM K-Means Alarm Reduction Tool (SMART) architecture has been modelled as the framework from which the time- and attribute-based aggregation technique is offered. The thesis describes the design and features of the proposed architecture, focusing upon the key components forming the underlying architecture, the alert attributes and the way they are processed and applied to correlate alerts. The architecture is strengthened by the development of a statistical tool, which offers a means to perform result or alert analysis and comparison. The main concepts of the novel architecture are validated through the implementation of a prototype system. A series of experiments were conducted to assess the effectiveness of SMART in reducing false alarms, to prove the viability of implementing the system in a practical environment and to show that the study has made an appropriate contribution to knowledge in this field
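The second, K-Means stage of such a pipeline can be sketched with plain k-means over alarm feature vectors. The two-feature encoding and the sample alarms are invented for illustration; in SMART a SOM first produces a coarse grouping before this step:

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain k-means: assign each alarm vector to its nearest centre,
    then move each centre to the mean of its cluster, repeatedly."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [
            tuple(sum(col) / len(c) for col in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical alarms encoded as (source entropy, alert-type id) pairs;
# three stem from one attack instance, two from another.
alarms = [(0.1, 1), (0.2, 1), (0.15, 1), (0.9, 7), (0.95, 7)]
centers, clusters = kmeans(alarms, 2)
print(sorted(len(c) for c in clusters))  # [2, 3]: two attack instances grouped
```

Grouping alerts from the same attack instance into one cluster is what lets an operator review a handful of clusters instead of every raw alarm.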