1,591 research outputs found

    Resilience Strategies for Network Challenge Detection, Identification and Remediation

    The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated, it becomes more vulnerable to attack. There is a pressing need for the future Internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU Resumenet Project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions for survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge by identifying its occurrence and impact in real time and then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible, to mitigate the damage, and to allow an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. In addition, our proposed multi-stage policy-based challenge detection system identifies both known and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. Our multi-stage approach reduces computational complexity compared with the traditional single-stage approach, in which one particular managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
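    As a rough illustration of the multi-stage idea described above, the sketch below runs a cheap first-stage screen on every flow and passes only flagged flows to a costlier second-stage policy check. It is not the system from the thesis; all names, fields, and thresholds are illustrative assumptions.

        # Hypothetical two-stage challenge detection sketch; field names and
        # thresholds are illustrative, not taken from the thesis.
        from dataclasses import dataclass

        @dataclass
        class FlowRecord:
            src: str
            packets_per_sec: float   # observed sending rate
            distinct_dsts: int       # fan-out: number of distinct destinations contacted

        def stage1_screen(flow, rate_threshold=1000.0):
            """Cheap test applied to every flow: flag unusually high packet rates."""
            return flow.packets_per_sec > rate_threshold

        def stage2_classify(flow, fanout_threshold=50):
            """Costlier policy check, run only on flows flagged by stage 1."""
            if flow.distinct_dsts > fanout_threshold:
                return "possible-worm-scan"   # high rate with high fan-out suggests scanning
            return "volume-anomaly"           # high rate but low fan-out

        def detect(flows):
            for flow in flows:
                if stage1_screen(flow):
                    yield flow.src, stage2_classify(flow)

        if __name__ == "__main__":
            flows = [FlowRecord("10.0.0.5", 4200.0, 180),
                     FlowRecord("10.0.0.7", 1500.0, 1)]
            for src, verdict in detect(flows):
                print(src, verdict)

    Splitting the work this way keeps the per-flow cost of the first stage low, which reflects the complexity argument made for the multi-stage design over a single managed object doing everything.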

    POISED: Spotting Twitter Spam Off the Beaten Paths

    Cybercriminals have found in online social networks a propitious medium to spread spam and malicious content. Existing techniques for detecting spam include predicting the trustworthiness of accounts and analyzing the content of these messages. However, advanced attackers can still successfully evade these defenses. Online social networks bring together people who have personal connections or share common interests to form communities. In this paper, we first show that users within a networked community share some topics of interest. Moreover, content shared on these social networks tends to propagate according to the interests of people. Dissemination paths may emerge where some communities post similar messages, based on the interests of those communities. Spam and other malicious content, on the other hand, follow different spreading patterns. In this paper, we follow this insight and present POISED, a system that leverages the differences in propagation between benign and malicious messages on social networks to identify spam and other unwanted content. We test our system on a dataset of 1.3M tweets collected from 64K users, and we show that our approach is effective in detecting malicious messages, reaching 91% precision and 93% recall. We also show that POISED's detection is more comprehensive than previous systems by comparing it to three state-of-the-art spam detection systems proposed by the research community; POISED significantly outperforms each of these systems. Moreover, through simulations, we show how POISED is effective in the early detection of spam messages and how it is resilient against two well-known adversarial machine learning attacks.
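    The reported 91% precision and 93% recall follow the standard definitions over ground-truth and predicted labels; a minimal sketch of that arithmetic is below. The toy labels are illustrative and have nothing to do with the 1.3M-tweet dataset from the paper.

        def precision_recall(true_labels, predicted_labels, positive="spam"):
            """Standard precision/recall over paired ground-truth and predicted labels."""
            pairs = list(zip(true_labels, predicted_labels))
            tp = sum(1 for t, p in pairs if t == positive and p == positive)  # correctly flagged spam
            fp = sum(1 for t, p in pairs if t != positive and p == positive)  # benign flagged as spam
            fn = sum(1 for t, p in pairs if t == positive and p != positive)  # spam that slipped through
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            return precision, recall

        # Illustrative toy labels only.
        truth     = ["spam", "spam", "benign", "spam", "benign"]
        predicted = ["spam", "benign", "benign", "spam", "spam"]
        print(precision_recall(truth, predicted))   # (0.666..., 0.666...)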

    Multi-level network resilience: traffic analysis, anomaly detection and simulation

    Traffic analysis and anomaly detection have been extensively used to characterize network utilization as well as to identify abnormal network traffic such as malicious attacks. However, so far, techniques for traffic analysis and anomaly detection have been carried out independently, relying on mechanisms and algorithms operating either in edge or in core networks alone. In this paper we propose the notion of multi-level network resilience, in order to provide a more robust traffic analysis and anomaly detection architecture, combining mechanisms and algorithms operating in a coordinated fashion both in the edge and in the core networks. This work is motivated by the potential complementarities between the research being developed at IIT Madras and Lancaster University. In this paper we describe the current work on traffic analysis and anomaly detection being developed at IIT Madras and Lancaster, and outline the principles of a multi-level resilience architecture.
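    A minimal sketch of what coordinated edge/core operation could look like, assuming a simple quorum rule: each edge network scores its own traffic and raises an alert, and the core escalates only when several independent edges agree. The voting rule and thresholds are assumptions for illustration, not the architecture proposed in the paper.

        EDGE_ALERT_THRESHOLD = 0.8   # per-edge anomaly score needed to raise an alert (assumed)
        CORE_QUORUM = 2              # number of edge alerts the core needs before escalating (assumed)

        def edge_detector(site, anomaly_score):
            """Each edge network scores its own traffic and reports alerts upstream."""
            if anomaly_score >= EDGE_ALERT_THRESHOLD:
                return {"site": site, "score": anomaly_score}
            return None

        def core_correlator(edge_reports):
            """The core escalates only when enough independent edges report anomalies."""
            alerts = [r for r in edge_reports if r is not None]
            return "network-wide anomaly" if len(alerts) >= CORE_QUORUM else "local only"

        if __name__ == "__main__":
            reports = [edge_detector("edge-A", 0.92),
                       edge_detector("edge-B", 0.85),
                       edge_detector("edge-C", 0.30)]
            print(core_correlator(reports))   # -> network-wide anomaly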

    Towards Searching for Entangled Photons in the CMB Sky

    We explore the possibility of detecting entangled photon pairs from the cosmic microwave background or other cosmological sources coming from two patches of the sky. The measurements use two detectors with different photon polarizer directions. When the two photon sources are separated by a large angle relative to the Earth, such that each detector has only one photon source in its field of view, a null test of unentangled photons can be performed. The deviation from this unentangled background is, in principle, the signature of photon entanglement. To confirm whether the deviation is consistent with entangled photons, we derive a photon polarization correlation to compare against, similar to that in a Bell inequality measurement. However, since photon coincidence measurement cannot be used to discriminate unentangled cosmic photons, it is unlikely that the correlation expectation value alone can violate the Bell inequality to provide the signature of entanglement.
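    For reference, the standard quantum-mechanical polarization correlation that such a Bell-type comparison would be measured against (a textbook form, not taken from the paper itself) is, for a maximally entangled photon pair observed with linear polarizers at angles \theta_1 and \theta_2:

        % Correlation for a maximally entangled photon pair (textbook form)
        E(\theta_1, \theta_2) = \cos\!\big(2(\theta_1 - \theta_2)\big)

        % Any local (unentangled) model obeys the CHSH bound
        S = \left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2

        % whereas the quantum correlation above can reach S = 2\sqrt{2}.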

    A data mining approach for detection of self-propagating worms

    In this paper we demonstrate our signature-based detector for self-propagating worms. We use a set of worm and benign traffic traces from several endpoints to build benign and worm profiles. These profiles were arranged into separate n-ary trees. We also demonstrate our anomaly detector, which is used to resolve tied matches between the worm and benign trees. We analyzed the performance of each detector individually and of the two in combination. Results show that our signature-based detector achieves a very high true positive rate, whereas the anomaly detector alone does not. Both detectors, when used independently, suffer from high false positive rates. However, when the two detectors were integrated, they maintained a high true positive rate while minimizing false positives.
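    A minimal sketch of the integration idea, assuming flat feature-set profiles rather than the n-ary trees used in the paper: a trace is scored against a worm profile and a benign profile, and the anomaly detector is consulted only when the two scores tie. All feature names and thresholds are hypothetical.

        def profile_score(trace_features, profile):
            """Fraction of the trace's features that also appear in the profile."""
            if not trace_features:
                return 0.0
            return len(trace_features & profile) / len(trace_features)

        def classify(trace_features, worm_profile, benign_profile,
                     anomaly_score, anomaly_threshold=0.7):
            """Signature match first; fall back to the anomaly score on a tie (assumed rule)."""
            worm = profile_score(trace_features, worm_profile)
            benign = profile_score(trace_features, benign_profile)
            if worm > benign:
                return "worm"
            if benign > worm:
                return "benign"
            # Tied match: defer to the anomaly detector, as in the integrated scheme.
            return "worm" if anomaly_score >= anomaly_threshold else "benign"

        # Illustrative feature sets (e.g., destination-port n-grams), not real traces.
        worm_profile   = {"445->445", "135->445", "scan-burst"}
        benign_profile = {"80->80", "443->443", "dns-lookup"}
        trace = {"445->445", "scan-burst", "dns-lookup"}
        print(classify(trace, worm_profile, benign_profile, anomaly_score=0.9))  # -> worm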
