
    Resilience Strategies for Network Challenge Detection, Identification and Remediation

    The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated, it becomes more vulnerable to attack. There is a pressing need for the future Internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU Resumenet project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions for survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge by identifying its occurrence and impact in real time and then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible, to mitigate the damage, and to allow an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. Our proposed multi-stage, policy-based challenge detection system identifies both existing and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. The multi-stage approach reduces computational complexity compared to the traditional single-stage approach, in which one managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
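    As a rough illustration of the multi-stage idea, the Python sketch below screens traffic with a cheap first-stage policy check and runs a heavier classification stage only on traffic that the first stage flags. The features, thresholds and challenge labels are hypothetical, not taken from the thesis.

        # Hypothetical two-stage challenge detection pipeline (illustrative only:
        # the features, thresholds and class names are not from the thesis).
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class FlowStats:
            packets_per_sec: float     # observed traffic rate
            failed_conn_ratio: float   # share of connection attempts that failed
            distinct_dests: int        # distinct destinations contacted

        def stage1_screen(s: FlowStats, rate_policy: float = 10_000.0) -> bool:
            """Cheap first stage: flag anything violating a coarse rate policy."""
            return s.packets_per_sec > rate_policy

        def stage2_classify(s: FlowStats) -> str:
            """Heavier second stage, run only on traffic flagged by stage 1."""
            if s.failed_conn_ratio > 0.5 and s.distinct_dests > 100:
                return "scanning-worm"        # many failed probes to many hosts
            if s.packets_per_sec > 50_000:
                return "volume-flood"
            return "operational-overload"     # heavy but not obviously malicious

        def detect(s: FlowStats) -> Optional[str]:
            return stage2_classify(s) if stage1_screen(s) else None

    Splitting the work this way means the expensive analysis runs only on the small fraction of traffic the cheap screen singles out, which is the source of the computational savings the abstract claims over a single-stage design.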

    Advanced Design Architecture for Network Intrusion Detection using Data Mining and Network Performance Exploration

    The primary goal of an Intrusion Detection System (IDS) is to identify intruders and differentiate anomalous network activity from normal activity. Intrusion detection has become a significant component of network security administration due to the enormous number of attacks that persistently threaten our computer networks and systems. Traditional network IDSs are limited and do not provide a comprehensive solution to these serious problems, which cause many types of security breaches and IT service impacts. They search for potentially malicious, abnormal activity in network traffic, and they sometimes succeed in finding true network attacks and anomalies (true positives). However, in many cases, these systems fail to detect malicious network behavior (false negatives) or fire alarms when nothing is wrong in the network (false positives). In addition, they require extensive and meticulous manual processing and intervention. Applying Data Mining (DM) techniques to network traffic data is therefore a promising way to design and develop more efficient intrusion detection systems. Data mining methods have been used to build automatic intrusion detection systems. The central idea is to utilize auditing programs to extract a set of features that describe each network connection or session, and to apply data mining programs to learn models that capture intrusive and non-intrusive behavior. In addition, Network Performance Analysis (NPA) is an effective methodology for intrusion detection. In this paper, we discuss DM and NPA techniques for network intrusion detection and propose that an integration of both approaches has the potential to detect intrusions in networks more effectively and with higher accuracy.
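    To make the feature-extraction-plus-learning idea concrete, here is a minimal supervised sketch in Python using scikit-learn. It assumes a labeled table of per-connection features; the feature contents and labels are placeholders, not the paper's dataset or method.

        # Minimal sketch: learn a model over per-connection features.
        # X: one row per connection/session; y: "normal" or an attack label.
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        def train_ids_model(X, y):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.3, stratify=y)
            model = RandomForestClassifier(n_estimators=100)
            model.fit(X_tr, y_tr)
            # Per-class precision/recall expose the false-positive and
            # false-negative rates the abstract is concerned with.
            print(classification_report(y_te, model.predict(X_te)))
            return model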

    A Review on Features’ Robustness in High Diversity Mobile Traffic Classifications

    Mobile traffic is becoming more dominant due to the growing usage of mobile devices and the proliferation of IoT. This influx of mobile traffic introduces new challenges for traffic classification, namely diversity complexity and behavioral dynamism complexity. Existing traffic classification methods are designed to classify standard protocols and user applications with more deterministic behavior and low diversity. Currently, flow statistics, payload signatures and heuristic traffic attributes are among the most effective features used to discriminate traffic classes. In this paper, we investigate the correlation of these features with less-deterministic user application traffic classes based on the corresponding classification accuracy. We then evaluate the impact of large-scale classification on feature robustness based on signs of diminishing accuracy. Our experimental results consolidate the need for unsupervised feature learning to address the dynamism of mobile application behavioral traits and achieve accurate classification of rapidly growing mobile traffic.
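    The robustness test described here can be pictured as a simple experiment: train the same classifier on each feature family while widening the set of application classes, and watch where accuracy falls off. The Python sketch below assumes a pandas DataFrame of labeled flows; the column names, feature groupings and class counts are all illustrative.

        # Hedged sketch: how accuracy per feature family degrades as the
        # number of application classes grows (diminishing accuracy is
        # taken as a sign of weak feature robustness).
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        FEATURE_SETS = {                       # illustrative column names
            "flow_statistics": ["duration", "pkt_count", "mean_pkt_size"],
            "payload_signature": ["sig_match_score"],
        }

        def robustness_curve(df, label_col, class_counts=(10, 50, 100)):
            results = {}
            for name, cols in FEATURE_SETS.items():
                accs = []
                for k in class_counts:
                    # Restrict to the k most frequent application classes.
                    top_k = df[label_col].value_counts().index[:k]
                    sub = df[df[label_col].isin(top_k)]
                    clf = RandomForestClassifier(n_estimators=50)
                    accs.append(cross_val_score(
                        clf, sub[cols], sub[label_col], cv=3).mean())
                results[name] = accs
            return results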

    Process Flow Features as a Host-based Event Knowledge Representation

    The detection of malware is of great importance, but even non-malicious software can be used for malicious purposes. Monitoring processes and their associated information can characterize normal behavior and help identify malicious processes, or malicious use of normal processes, by measuring deviations from a learned baseline. This exploratory research describes a novel host feature generation process that calculates statistics of an executing process during a window of time called a process flow. Process flows are calculated from key process data structures extracted from computer memory using virtual machine introspection. Each cluster generated by applying k-means to the flow features represents a behavior, with all members of the cluster exhibiting similar behavior. Testing explores associations between behavior and process flows that may in the future be useful for detecting unauthorized behavior or behavioral trends on a host. Analysis of two data collections demonstrates that this novel way of thinking of process behavior as process flows can produce baseline models, in the form of clusters, that represent specific behaviors.
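    A rough Python sketch of the process-flow pipeline follows: summarize per-process readings over a time window into a statistics vector, then cluster the vectors with k-means to form behavior baselines. The specific readings and summary statistics are assumptions, not the paper's exact feature set.

        # Sketch of process flows: window statistics -> k-means baselines.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        def flow_features(samples):
            """samples: per-tick readings for one process during one window
            (e.g. thread count, handle count, taken from memory snapshots).
            Returns one fixed-length statistics vector for the window."""
            arr = np.asarray(samples, dtype=float)
            return np.concatenate([arr.mean(axis=0), arr.std(axis=0),
                                   arr.min(axis=0), arr.max(axis=0)])

        def baseline_clusters(flows, k=8):
            X = StandardScaler().fit_transform(np.vstack(flows))
            km = KMeans(n_clusters=k, n_init=10).fit(X)
            # Each cluster approximates one observed behavior; a new flow
            # far from every centroid is a candidate anomaly.
            return km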

    A Survey on Enterprise Network Security: Asset Behavioral Monitoring and Distributed Attack Detection

    Enterprise networks that host valuable assets and services are popular and frequent targets of distributed network attacks. In order to cope with the ever-increasing threats, industrial and research communities develop systems and methods to monitor the behaviors of their assets and protect them from critical attacks. In this paper, we systematically survey related research articles and industrial systems to highlight the current status of this arms race in enterprise network security. First, we discuss the taxonomy of distributed network attacks on enterprise assets, including distributed denial-of-service (DDoS) and reconnaissance attacks. Second, we review existing methods in monitoring and classifying network behavior of enterprise hosts to verify their benign activities and isolate potential anomalies. Third, state-of-the-art detection methods for distributed network attacks sourced from external attackers are elaborated, highlighting their merits and bottlenecks. Fourth, as programmable networks and machine learning (ML) techniques are increasingly being adopted by the community, their current applications in network security are discussed. Finally, we highlight several research gaps on enterprise network security to inspire future research.

    An Innovative Signature Detection System for Polymorphic and Monomorphic Internet Worms Detection and Containment

    Most current anti-worm systems and intrusion detection systems use signature-based technology rather than anomaly-based technology. Signature-based technology can only detect known attacks with identified signatures. Existing anti-worm systems cannot detect unknown Internet scanning worms automatically because they depend on the worm's signature rather than on its behaviour. Most detection algorithms used in current detection systems target only monomorphic worm payloads and offer no defence against polymorphic worms, which change their payloads dynamically. Anomaly detection systems can detect unknown worms but usually suffer from a high false alarm rate. Detecting unknown worms is challenging, and the defence must be automated because worms spread quickly and can flood the Internet in a short time. This research proposes an accurate, robust and fast technique to detect and contain Internet worms, both monomorphic and polymorphic. The detection technique uses specific failed connection statuses on protocols such as UDP, TCP and ICMP, as well as TCP slow scanning and stealth scanning, as characteristics of the worms, whereas the containment utilizes the flags and labels of the segment header, together with the source and destination ports, to generate the traffic signature of the worms. Experiments using eight different worms (monomorphic and polymorphic) in a testbed environment were conducted to verify the performance of the proposed technique. The results showed that the proposed technique detected stealth scanning up to 30 times faster than a previously proposed technique and raised no false-positive alarms in any of the scanning detection cases. The experiments also showed that the proposed technique was capable of containing a worm because of the uniqueness of its traffic signature.
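    To illustrate the failure-based detection signal described above, here is a small Python sketch that counts failed connection attempts per source over a sliding window and flags worm-like scanners. The window length, threshold and event format are assumptions, not the paper's tuned values.

        # Sketch: flag sources with too many failed connections per window.
        from collections import defaultdict, deque
        import time

        WINDOW_SEC = 60
        FAIL_LIMIT = 20   # illustrative threshold, not the paper's value

        class ScanDetector:
            def __init__(self):
                self.failures = defaultdict(deque)   # src_ip -> timestamps

            def on_failure(self, src_ip, now=None):
                """Call on each failure event (e.g. TCP RST, ICMP
                unreachable, UDP timeout). Returns True if src_ip now
                looks like a scanning worm."""
                now = now if now is not None else time.time()
                q = self.failures[src_ip]
                q.append(now)
                while q and now - q[0] > WINDOW_SEC:
                    q.popleft()          # drop events outside the window
                return len(q) > FAIL_LIMIT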

    Hyp3rArmor: reducing web application exposure to automated attacks

    Web applications (webapps) are constantly subjected to automated, opportunistic attacks from autonomous robots (bots) engaged in reconnaissance to discover victims that may be vulnerable to specific exploits. This is typical behavior in botnet recruitment, worm propagation, large-scale fingerprinting and vulnerability scanning. Most anti-bot techniques are deployed at the application layer, leaving the network stack of the webapp's server exposed. In this paper we present a mechanism called Hyp3rArmor that addresses this vulnerability by minimizing the webapp's attack surface exposed to automated opportunistic attackers, for JavaScript-enabled web browser clients. Our solution uses port knocking to eliminate the webapp's visible network footprint. Clients of the webapp are directed to a visible static web server to obtain JavaScript that authenticates the client to the webapp server (using port knocking) before making any requests to the webapp. Our implementation of Hyp3rArmor, which is compatible with all webapp architectures, has been deployed and used to defend single- and multi-page websites on the Internet for 114 days. During this time period the static web server observed 964 attempted attacks that were deflected from the webapp, which was only accessed by authenticated clients. Our evaluation shows that in most cases client-side overheads were negligible and that server-side overheads were minimal. Hyp3rArmor is ideal for critical systems and legacy applications that must be accessible on the Internet. Additionally, Hyp3rArmor is composable with other security tools, adding another layer to a defense-in-depth approach. This work has been supported by the National Science Foundation (NSF) awards #1430145, #1414119, and #1012798.
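    For readers unfamiliar with port knocking, the toy Python client below shows the basic mechanism Hyp3rArmor builds on: a secret sequence of connection attempts that a firewall watches for before opening the real service port. The sequence, timing and host are placeholders, and the actual system performs the knock from JavaScript served to the browser rather than from a native client.

        # Toy port-knocking client (illustrative; not Hyp3rArmor's code).
        import socket
        import time

        KNOCK_SEQUENCE = [7000, 8000, 9000]   # placeholder secret sequence

        def knock(host, sequence=KNOCK_SEQUENCE, delay=0.1):
            for port in sequence:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(0.5)
                try:
                    s.connect((host, port))
                except OSError:
                    pass    # failures are expected: the ports are closed,
                            # the firewall only watches for the SYN pattern
                finally:
                    s.close()
                time.sleep(delay)
            # After a correct sequence, the firewall opens the webapp
            # port for this client's address.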