
    Sequential Protocols’ Behaviour Analysis

    The growing adoption of the Session Initiation Protocol (SIP) has motivated the development of tools capable of detecting valid SIP dialogues, in order to identify behavioural traits of the protocol. This thesis serves as a starting point for characterising SIP dialogues in terms of distinct signalling sequences and for providing a reliable classification of SIP sequences. We start by analysing sequential pattern mining algorithms in an off-line manner, which provides valuable statistical information about the SIP sequences. In this analysis, several classical sequential pattern mining algorithms are evaluated to gather insights into resource consumption and computation time. The results of the analysis lead to the fast identification of every possible combination of a given SIP sequence. In the second stage of this work, we study different stochastic tools for classifying SIP dialogues according to the observed SIP messages; deviations from previously observed SIP dialogues are also identified. Experimental results are presented in which a Hidden Markov Model is used jointly with the Viterbi algorithm to classify multiple SIP messages that are observed sequentially. The experimental tests include a stochastic dynamic evaluation and an assessment of stochastic similarity. The goal of these tests is to show the reliability and robustness of the algorithms adopted to classify incoming SIP sequences, and thus to characterise the SIP dialogues.
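As an illustration of the HMM-plus-Viterbi classification step, a minimal sketch is given below: it decodes the most likely sequence of hidden dialogue phases from a sequence of observed SIP messages. The states, probabilities and message set are invented for the example and are not taken from the thesis.

```python
# Toy HMM + Viterbi sketch for labelling a SIP message sequence with
# hidden dialogue phases. All states, probabilities and messages below
# are illustrative assumptions, not the thesis's actual model.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for the observations."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        layer = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            layer[s] = (prob, prev)
        V.append(layer)
    # Backtrack from the most probable final state.
    best = max(V[-1], key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        best = V[t][best][1]
        path.append(best)
    return path[::-1]

states = ["setup", "established", "teardown"]
start_p = {"setup": 0.9, "established": 0.05, "teardown": 0.05}
trans_p = {
    "setup":       {"setup": 0.3, "established": 0.6, "teardown": 0.1},
    "established": {"setup": 0.1, "established": 0.6, "teardown": 0.3},
    "teardown":    {"setup": 0.1, "established": 0.1, "teardown": 0.8},
}
emit_p = {
    "setup":       {"INVITE": 0.7, "200 OK": 0.2, "ACK": 0.05, "BYE": 0.05},
    "established": {"INVITE": 0.05, "200 OK": 0.4, "ACK": 0.45, "BYE": 0.1},
    "teardown":    {"INVITE": 0.05, "200 OK": 0.1, "ACK": 0.05, "BYE": 0.8},
}

print(viterbi(["INVITE", "200 OK", "ACK", "BYE"], states, start_p, trans_p, emit_p))
# → ['setup', 'established', 'established', 'teardown']
```

A sequence that deviates from previously observed dialogues would simply receive a very low path probability, which is the basis for flagging it.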

    IoT-MQTT based denial of service attack modelling and detection

    Internet of Things (IoT) is poised to transform the quality of life and provide new business opportunities with its wide range of applications. However, the benefits of this emerging paradigm are coupled with serious cyber security issues. The lack of strong cyber security measures in protecting IoT systems can result in cyber attacks targeting all the layers of the IoT architecture, which includes the IoT devices, the IoT communication protocols and the services accessing the IoT data. Various IoT malware such as Mirai, BASHLITE and BrickBot show already rising IoT device based attacks as well as the usage of infected IoT devices to launch other cyber attacks. However, as sustained IoT deployment and functionality are heavily reliant on the use of effective data communication protocols, attacks on the other layers of the IoT architecture are anticipated to increase. In the IoT landscape, the publish/subscribe based Message Queuing Telemetry Transport (MQTT) protocol is widely popular. Hence, cyber security threats against the MQTT protocol are projected to rise at par with its increasing use by IoT manufacturers. In particular, Internet-exposed MQTT brokers are vulnerable to protocol-based Application Layer Denial of Service (DoS) attacks, which have been known to cause widespread service disruptions in legacy systems. In this thesis, we propose Application Layer DoS attacks that target the authentication and authorisation mechanisms of the MQTT protocol. In addition, we also propose an MQTT protocol attack detection framework based on machine learning. Through extensive experiments, we demonstrate the impact of authentication and authorisation DoS attacks on three open-source MQTT brokers. Based on the proposed DoS attack scenarios, an IoT-MQTT attack dataset was generated to evaluate the effectiveness of the proposed framework in detecting these malicious attacks.
The DoS attack evaluation results indicate that such attacks can overwhelm the MQTT brokers' resources even when legitimate access to them was denied and their resources were restricted. The evaluations also indicate that the proposed DoS attack scenarios can significantly increase the MQTT message delay, especially in QoS 2 messages, causing heavy tail latencies. In addition, the proposed MQTT features showed high attack detection accuracy compared to simply using TCP-based features to detect MQTT-based attacks. It was also observed that the protocol field size and length based features drastically reduced the false positive rates and are hence suitable for detecting IoT-based attacks.
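The field size and length based features mentioned above rely on parsing the MQTT fixed header. As a minimal illustration (not the thesis's detection framework), the sketch below decodes the variable-length Remaining Length field defined in the MQTT 3.1.1 specification, one of the field-length quantities such features can be built from:

```python
# Decode the MQTT "Remaining Length" field that follows the fixed-header
# control byte (MQTT 3.1.1, section 2.2.3): 7 data bits per byte, with the
# high bit acting as a continuation flag, up to 4 bytes.

def decode_remaining_length(buf, offset=1):
    """Return (remaining_length, bytes_consumed) starting at buf[offset]."""
    multiplier, value, consumed = 1, 0, 0
    while True:
        byte = buf[offset + consumed]
        value += (byte & 0x7F) * multiplier
        consumed += 1
        if not byte & 0x80:          # continuation bit clear: field ends here
            return value, consumed
        multiplier *= 128
        if multiplier > 128 ** 3:    # more than 4 length bytes is malformed
            raise ValueError("malformed Remaining Length")

# 0x10 = CONNECT control byte; 0xC1 0x02 encodes 65 + 2*128 = 321 bytes.
print(decode_remaining_length(b"\x10\xc1\x02"))  # → (321, 2)
```

A per-packet feature vector could then include this decoded length alongside the sizes of subsequent protocol fields.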

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    Versatile networking services have a profound influence on daily life, while the amount and diversity of these services make network systems highly complex. Network scale and complexity grow with increasing infrastructure equipment, networking functions, network slices, and the evolution of the underlying architecture. The conventional approach to maintaining such a large and complex platform is manual administration, which makes effective and insightful management troublesome. A feasible and promising alternative is to extract insightful information from the large volumes of network data produced. The goal of this thesis is to use learning-based algorithms inspired by the machine learning community to discover valuable knowledge in substantial network data, directly promoting intelligent management and maintenance. The thesis focuses on two management and maintenance schemes: network anomaly detection with root cause localization, and critical traffic resource control and optimization. Firstly, abundant network data carry informative messages, but their heterogeneity and complexity make diagnosis challenging. For unstructured logs, abstract, formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed to detect the occurrence of faults and anomalies. It employs representation learning methods to map unstructured data into numerical features and fuses the extracted features for network anomaly and fault detection; the representation learning uses word2vec-based embedding technologies for semantic expression. However, fault and anomaly detection only reveals that an event occurred, without identifying its root cause, so fault localization is introduced to narrow down the source of systematic anomalies.
The extracted features are combined into an anomaly degree, coupled with an importance-ranking method to highlight the locations of anomalies in network systems. Two ranking modes, instantiated by PageRank and by operation errors, jointly highlight the locations of latent issues. Beyond fault and anomaly detection, network traffic engineering manages communication and computation resources to optimize the efficiency of data transfer. Especially when network traffic is constrained by communication conditions, a pro-active path planning scheme enables efficient traffic control actions. A learning-based traffic planning algorithm is therefore proposed, based on a sequence-to-sequence model, to discover hidden reasonable paths from abundant traffic history data over a Software Defined Network architecture. Finally, traffic engineering based merely on empirical data is likely to produce stale and sub-optimal solutions, and may even worsen the situation; a resilient mechanism is required to adapt network flows to a dynamic environment based on context. Thus, a reinforcement learning-based scheme is put forward for dynamic data forwarding that takes network resource status into account and shows a promising performance improvement. In the end, the proposed anomaly processing framework strengthens analysis and diagnosis for network system administrators through combined fault detection and root cause localization, while the learning-based traffic engineering improves flow management from experience data and shows a promising direction for flexible traffic adjustment in ever-changing environments.
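The PageRank-based ranking mode can be sketched with a plain power iteration over a component graph. The graph below is an illustrative toy dependency graph, not the thesis's actual network data:

```python
# Toy power-iteration PageRank sketch for ranking components of a network
# system; the graph is illustrative, not the thesis's real topology.

def pagerank(graph, damping=0.85, iters=50):
    """graph: {node: [outgoing neighbours]}; returns {node: score}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: spread its rank uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Both front-end components depend on the database, so it ranks highest.
scores = pagerank({"web": ["db"], "cache": ["db"], "db": []})
print(max(scores, key=scores.get))  # → db
```

In an anomaly-localization setting, high-ranking nodes whose anomaly degree is also high become the prime root-cause candidates.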

    International overview on the legal framework for highly automated vehicles

    The evolution of autonomous and automated technologies over recent decades has been constant and sustained. All of us can remember an old film showing a driverless car, which we thought was just an unreal object born of filmmakers' imagination. Today, however, Highly Automated Vehicles are a reality, even if not yet part of our daily lives. Hardly a day goes by without news of Tesla launching a new model or Google showing the new features of its autonomous car. But we do not have to travel far from our borders: here in Europe we can also find different companies trying, with greater or lesser success, not to be left behind in this race. Today, however, their biggest problem is not only the liability for their innovative technology, but also the legal framework for Highly Automated Vehicles. As a quick summary, only a few countries offer testing licences, which do not allow these vehicles to drive freely, and, to the contrary, most countries nearly ban their use. The next milestone in autonomous driving is to build a homogeneous, safe and global legal framework. With this in mind, this paper presents an international overview of the legal framework for Highly Automated Vehicles. We also present the different issues that such technologies have to face and overcome in the coming years to become a real, everyday technology.

    On the Generation of Cyber Threat Intelligence: Malware and Network Traffic Analyses

    In recent years, malware authors have drastically changed course on the subject of threat design and implementation. Malware authors, namely hackers or cyber-terrorists, perpetrate new forms of cyber-crime involving ever more innovative hacking techniques. Motivated by financial or political reasons, attackers target computer systems ranging from personal computers to organizations' networks to collect and steal sensitive data, as well as to blackmail, scam people, or scupper IT infrastructures. Accordingly, IT security experts face new challenges, as they need to counter cyber-threats proactively. The challenge takes on the character of a continuous fight, in which cyber-criminals are obsessed with the idea of outsmarting security defenses. As such, security experts have to elaborate an effective strategy to counter cyber-criminals. The generation of cyber-threat intelligence is of paramount importance, as stated in the following quote: “the field is owned by who owns the intelligence”. In this thesis, we address the problem of generating timely and relevant cyber-threat intelligence for the purpose of detection, prevention and mitigation of cyber-attacks. To do so, we initiate a research effort that falls into four parts. First, we analyze prominent cyber-crime toolkits to grasp the inner secrets and workings of advanced threats; we dissect prominent malware such as the Zeus and Mariposa botnets to uncover the underlying techniques they use to build a networked army of infected machines. Second, we investigate cyber-crime infrastructures, where we elaborate on the generation of cyber-threat intelligence for situational awareness. We adapt a graph-theoretic approach to study the infrastructures used by malware to perpetrate malicious activities, and build a scoring mechanism based on a page-ranking algorithm to measure the badness of infrastructure elements, i.e., domains, IPs, domain owners, etc.
In addition, we use the min-hashing technique to evaluate the level of sharing among cyber-threat infrastructures over a period of one year. Third, we use machine learning techniques to fingerprint malicious IP traffic; by fingerprinting, we mean detecting malicious network flows and attributing them to malware families. This research effort relies on a ground truth collected from the dynamic analysis of malware samples. Finally, we investigate the generation of cyber-threat intelligence from passive DNS streams. To this end, we design and implement a system that generates anomalies from passive DNS traffic. Due to the tremendous volume of DNS data, we build the system on top of a cluster computing framework, namely Apache Spark [70]. The integrated analytic system has the ability to detect anomalies observed in DNS records, which are potentially generated by widespread cyber-threats.
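The min-hashing step can be illustrated with a small sketch that estimates the Jaccard similarity between two sets of infrastructure elements. The hash family (Python's built-in `hash` seeded by tuple pairing) and the integer-encoded elements are illustrative assumptions, not the thesis's implementation:

```python
# Toy MinHash sketch: the fraction of matching per-seed minima between two
# signatures estimates the Jaccard similarity of the underlying sets
# (e.g. the sets of domains used by two cyber-threat infrastructures).

def minhash_signature(items, seeds):
    """One minimum per seeded hash function; items can be any hashables."""
    return [min(hash((seed, x)) for x in items) for seed in seeds]

def estimate_jaccard(sig_a, sig_b):
    """Fraction of positions where the two signatures agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

seeds = range(256)
# Two infrastructures encoded as element IDs, sharing 50 of 150 elements:
sig_a = minhash_signature(set(range(100)), seeds)
sig_b = minhash_signature(set(range(50, 150)), seeds)
print(estimate_jaccard(sig_a, sig_b))  # close to the true Jaccard of 1/3
```

Comparing compact signatures instead of the full element sets is what makes year-long, all-pairs sharing analysis tractable.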

    Low-rate attack detection with intelligent fine-grained network analysis

    Low-rate attacks are a type of attack that silently infiltrates the victim network, controls computers, and steals sensitive data. As the effect of this attack type is devastating, it is essential to be able to detect such attacks; a detection system allows system administrators to react accordingly. More importantly, when the detection system analyses the network traffic, it may identify the malicious activity before the attack reaches the system. And by incorporating machine learning into the detection approach, a Network-based Intrusion Detection System (NIDS) can adapt to evolving attacks and minimise human intervention, unlike a signature-based NIDS. Several works have tried to address the problem of low-rate attack detection, but they have several issues. Some are dated, so their performance drops on contemporary low-rate attacks; some focus on detecting attacks in only one protocol, while low-rate attacks exist in various protocols. To tackle this problem, we proposed two Deep Learning (DL) models which analyse network payload and were trained with an unsupervised approach. Our best performing model surpasses the state of the art and improves the detection rate by at least 12.04%. The experiments also show that payload-based NIDSs are superior to header-based ones for identifying low-rate attacks. A common approach in payload-based NIDSs is to read full-length application layer messages, yet in some protocols, such as HTTP or SMTP, lengthy messages are usual. Processing the full length of such messages would be time-consuming, and the damage from the attack may have been done by the time the decision for a particular message comes out. Therefore, we proposed an approach that can predict the occurrence of low-rate attacks early, from as little information as possible.
Based on our experiments, the proposed method can detect 97.57% of attacks by reading, on average, merely 35.21% of the application layer messages, improving the detection speed threefold.
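The early-prediction idea can be illustrated with a toy sketch that reads a message chunk by chunk and returns a verdict as soon as a running anomaly score crosses a threshold. The character-based score, the threshold and the chunk size are illustrative stand-ins for the thesis's deep-learning model:

```python
# Toy early-exit payload scoring: stop reading a message once a running
# anomaly score crosses a threshold, instead of processing its full length.
# The suspicious-character heuristic is an illustrative stand-in for a
# learned payload model.

SUSPICIOUS = set("<>'\";|&%")

def early_detect(message, threshold=0.15, chunk=16):
    """Return (is_attack, bytes_read), reading as little as possible."""
    seen = suspicious = 0
    for start in range(0, len(message), chunk):
        part = message[start:start + chunk]
        seen += len(part)
        suspicious += sum(c in SUSPICIOUS for c in part)
        if suspicious / seen >= threshold:
            return True, seen          # early exit: verdict before EOF
    return False, seen                 # benign: had to read the whole message

print(early_detect("GET /?q='><script>alert(1)</script> HTTP/1.1"))
# → (True, 16): flagged after the first 16-byte chunk
```

The real model would replace the character heuristic with a per-prefix classification score, but the control flow, deciding on a prefix rather than the full message, is the same.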