
    Centralized prevention of denial of service attacks

    The world has come to depend on the Internet at an increasing rate for communication, e-commerce, and many other essential services; the Internet has become an integral part of the workings of society at large. This has led to an increased vulnerability to remotely controlled disruption of vital commercial and government operations, with obvious implications. This disruption can be caused either by an attack on one or more specific networks, which denies service to legitimate users, or by an attack on the Internet itself through large amounts of spurious traffic, which denies service to many or all networks. Individual organizations can take steps to protect themselves, but this does not solve the problem of an Internet-wide attack. This thesis analyzes the different types of Denial of Service attacks and suggests an approach to prevent both categories through centralized detection and limitation of excessive packet flows.
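    The detection-and-limitation idea can be illustrated with a minimal sliding-window rate monitor. This is a hypothetical Python sketch, not the thesis' implementation; the class name, threshold, and window size are invented placeholders:

```python
from collections import defaultdict, deque
import time

class FlowRateMonitor:
    """Flags source addresses whose packet rate exceeds a threshold
    within a sliding time window (an illustrative sketch)."""

    def __init__(self, max_packets=1000, window_seconds=1.0):
        self.max_packets = max_packets
        self.window = window_seconds
        self.arrivals = defaultdict(deque)  # src -> packet arrival times

    def observe(self, src, now=None):
        """Record one packet from `src`; return True if the flow is excessive."""
        now = time.monotonic() if now is None else now
        q = self.arrivals[src]
        q.append(now)
        # Drop arrivals that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_packets
```

    A centralized deployment would feed sampled flow records from many vantage points into such a monitor and trigger rate limiting on flagged sources.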

    Designing an interactive visualization for intrusion detection systems with video game theory and technology

    With an ever-increasing number of attacks on networks that carry ever more information, the old means of examining network data for intruders and malicious acts through text no longer works. Even with the help of filters and data aggregation, there is too much for a person to read through while still forming a clear understanding of what is happening across a network, often causing security officers to miss intrusions. With an overwhelming number of false alerts from incorrectly configured Intrusion Detection Systems and not enough time to sift through them all, a new means of displaying and interacting with the network data presented by intrusion detection systems is needed. This is why research has increased into creating network visualizations that allow analysts to better understand what is happening across a network. Using previous research, together with a study of the theory and architecture the video game industry applies to interactive environments, it is possible to create an intuitive, interactive visual environment for network data that helps network administrators understand their networks more effectively and see where potential threats may lurk. The proposed design therefore attempts to help solve the problem of network communication comprehension.

    Rethinking Security Incident Response: The Integration of Agile Principles

    In today's globally networked environment, information security incidents can inflict staggering financial losses on organizations. Industry reports indicate that fundamental problems exist with the linear, plan-driven security incident response approaches applied in many organizations. Researchers argue that traditional approaches value containment and eradication over incident learning. While previous security incident response research has focused on best-practice development, linear plan-driven approaches, and the technical aspects of security incident response, very little research investigates the integration of agile principles and practices into the security incident response process. This paper proposes that the integration of disciplined agile principles and practices into the security incident response process is a practical solution for strengthening an organization's security incident response posture.

    Comment: Paper presented at the 20th Americas Conference on Information Systems (AMCIS 2014), Savannah, Georgia.

    Visualization for network forensic analyses: extending the Forensic Log Investigator (FLI)

    In a network attack investigation, the mountain of information collected from varying sources can be daunting. Investigators face significant challenges in correlating findings from these sources, given difficulties with time synchronization. In addition, it is difficult to obtain summary or overview information for one set of data, much less for the entire case. This, in turn, makes it nearly impossible to accurately identify missing information.

    Identifying these information gaps is one problem; filling them in is another. Investigators must rely on legal processes and requests to obtain the information they need, so it is extremely important that they are aware of cases or events that cross jurisdictional boundaries. Where tools exist to assist in evidence overview, they do not contain the geographic information investigators need to quickly ascertain the location of those involved.

    In addition to these difficulties, investigators need to perform several types of analysis on the collected evidence. Several of these analyses cannot typically be performed on data from multiple log files, since they are based on timing data. Furthermore, it is difficult to understand the results of these analyses without visual representation, and no tools exist to bring them together in a single frame.

    This thesis details the design and implementation of an analysis and visualization extension for the Forensic Log Investigator, or FLI. FLI is a web-based analysis and visualization architecture built on advanced technologies and enterprise infrastructure. This extension assists investigators by providing the ability to correlate evidence and analysis across traditional log-file and analysis-method boundaries, identify information gaps, and perform analysis in accordance with published evidence handling guidelines.

    The zombies strike back: Towards client-side BeEF detection

    A web browser is an application that comes bundled with every consumer operating system, on both desktop and mobile platforms. A modern web browser is complex software that has access to system-level features, includes various plugins and requires the availability of an Internet connection. Like any multifaceted software product, web browsers are prone to numerous vulnerabilities. Exploitation of these vulnerabilities can have destructive consequences ranging from identity theft to network infrastructure damage. BeEF, the Browser Exploitation Framework, takes advantage of these vulnerabilities to launch a diverse range of readily available attacks from within the browser context. Existing defensive approaches aimed at hardening network perimeters and detecting common threats based on traffic analysis have not proven successful at BeEF detection. This paper presents a proof-of-concept approach to BeEF detection in its own operating environment – the web browser – based on global context monitoring, abstract syntax tree fingerprinting and real-time network traffic analysis.
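    The fingerprinting idea can be sketched at token level: strip comments and literals so that superficial changes to an injected script do not alter its structural hash. This simplified Python stand-in hashes a normalized token stream; the paper's approach fingerprints a proper JavaScript abstract syntax tree, which this only approximates:

```python
import hashlib
import re

def script_fingerprint(js_source: str) -> str:
    """Structural fingerprint of a script: comments, string literals, and
    numeric literals are normalized away so that changing embedded values
    (e.g. a hook URL) leaves the hash unchanged."""
    src = re.sub(r"//[^\n]*|/\*.*?\*/", " ", js_source, flags=re.S)        # comments
    src = re.sub(r'"(?:\\.|[^"\\])*"|\'(?:\\.|[^\'\\])*\'', "STR", src)    # strings
    src = re.sub(r"\b\d+(?:\.\d+)?\b", "NUM", src)                         # numbers
    tokens = re.findall(r"[A-Za-z_$][\w$]*|[{}()\[\];,.=+\-*/<>!&|?:]+", src)
    return hashlib.sha256(" ".join(tokens).encode()).hexdigest()
```

    A detector could compare fingerprints of scripts loaded into the page against known hook-script fingerprints; note that identifier renaming would still change this token-level hash, which a true AST fingerprint abstracts away.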

    Are Intrusion Detection Studies Evaluated Consistently? A Systematic Literature Review

    Cyberinfrastructure is increasingly becoming a target of a wide spectrum of attacks, from Denial of Service to large-scale defacement of the digital presence of an organization. Intrusion Detection Systems (IDSs) give administrators a defensive edge over intruders lodging such malicious attacks. However, with the sheer number of different IDSs available, one has to objectively assess the capabilities of different IDSs to select one that meets specific organizational requirements. A prerequisite for such an objective assessment is the implicit comparability of IDS literature. In this study, we review IDS literature to understand its implicit comparability from the perspective of the metrics used in empirical evaluations of IDSs. We identified 22 metrics commonly used in the empirical evaluation of IDSs and constructed search terms to retrieve papers that mention each metric. We manually reviewed a sample of 495 papers and found 159 of them to be relevant. We then estimated the number of relevant papers in the entire set of papers retrieved from IEEE. We found that, in the evaluation of IDSs, many different metrics are used and the trade-offs between metrics are rarely considered. In a retrospective analysis of the IDS literature, we found that the evaluation criteria have been improving over time, albeit marginally. The inconsistencies in the use of evaluation metrics may not enable direct comparison of one IDS to another.
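    The trade-off the review highlights is easy to see when the standard metrics are computed side by side from one confusion matrix. A small illustrative helper (not from the paper; the function name and example counts are invented):

```python
def ids_metrics(tp, fp, tn, fn):
    """Common IDS evaluation metrics from a confusion matrix.
    Reporting detection rate alone hides the false-alarm trade-off."""
    detection_rate = tp / (tp + fn)      # recall / true positive rate
    false_alarm_rate = fp / (fp + tn)    # false positive rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * detection_rate / (precision + detection_rate)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {
        "detection_rate": detection_rate,
        "false_alarm_rate": false_alarm_rate,
        "precision": precision,
        "f1": f1,
        "accuracy": accuracy,
    }
```

    For example, an IDS with 90% detection rate can still drown analysts in false alarms when attacks are rare, which is why comparing papers that each report only one of these numbers is unreliable.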

    Detection and Prediction of Distributed Denial of Service Attacks using Deep Learning

    Distributed denial of service attacks threaten the security and health of the Internet. These attacks continue to grow in scale and potency. Remediation relies on up-to-date and accurate attack signatures. Signature-based detection is relatively inexpensive computationally. Yet, signatures are inflexible when small variations exist in the attack vector. Attackers exploit this rigidity by altering their attacks to bypass the signatures. The constant need to stay one step ahead of attackers using signatures demonstrates a clear need for better methods of detecting DDoS attacks. In this research, we examine the application of machine learning models to real network data for the purpose of classifying attacks. During training, the models build a representation of their input data. This eliminates any reliance on attack signatures and allows for accurate classification of attacks even when they are slightly modified to evade detection. In the course of our research, we found a significant problem when applying conventional machine learning models. Network traffic, whether benign or malicious, is temporal in nature, so its characteristics differ across any significant time span. These differences cause conventional models to fail at classifying the traffic. We then turned to deep learning models and obtained a significant improvement in performance, regardless of time span. In this research, we also introduce a new method of transforming traffic data into spectrogram images. This technique provides a way to better distinguish different types of traffic. Finally, we introduce a framework for embedding attack detection in real-world applications.
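    One way to read the spectrogram transformation: treat per-interval packet counts as a signal and take windowed FFTs over sliding frames, yielding a time-by-frequency image a vision-style model can classify. A minimal NumPy sketch under that reading; the frame and hop sizes are illustrative, not the paper's parameters:

```python
import numpy as np

def traffic_spectrogram(counts, frame=64, hop=32):
    """Turn a per-interval packet-count series into a spectrogram:
    log-magnitude of windowed FFTs over sliding frames (time x frequency)."""
    counts = np.asarray(counts, dtype=float)
    window = np.hanning(frame)
    frames = [counts[i:i + frame] * window
              for i in range(0, len(counts) - frame + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    # Log scale compresses dynamic range, as in audio spectrograms.
    return np.log1p(spec)
```

    A periodic flood (e.g. a fixed-rate attack tool) shows up as a bright horizontal band at its repetition frequency, a feature that survives small timing perturbations.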

    Data mining approaches for detecting intrusion using UNIX process execution traces

    Intrusion detection systems help computer systems prepare for and deal with malicious attacks. They collect information from a variety of system and network sources, then analyze the information for signs of intrusion and misuse. A variety of techniques have been employed to analyze the information, from traditional statistical methods to newly emerged data mining approaches. In this thesis, we describe several algorithms designed for this task, including neural networks, rule induction with C4.5, and Rough sets methods. We compare the classification accuracy of the various methods on a set of UNIX process execution traces. We used two kinds of evaluation methods. The first evaluation criterion characterizes performance over a set of individual classifications in terms of average testing accuracy rate. The second measures the true and false positive rates of the classification output over a certain threshold. Experiments were run on data sets of system calls created by synthetic sendmail programs. Two types of representation methods were used, and different combinations of parameters were tested during the experiments. Results indicate that, for a wide range of conditions, Rough sets achieve higher classification accuracy than neural networks and C4.5. In terms of true and false positive evaluations, Rough sets and neural networks turned out to be better than C4.5.

    Generating background network traffic for network security testbeds

    With the advancement of science and technology, there has been rapid growth in computer network attacks. Most are sophisticated and stealthy, and hard to trace. Although researchers have been working on attack detection, prevention and mitigation, existing network security evaluation techniques lack effective experimental infrastructure and rigorous scientific methodologies for developing and testing cyber security technologies. To make progress in this area, we need to address one of the major shortcomings in evaluating network security mechanisms: the lack of relevant, representative network data. The research community needs tools that can generate scalable, tunable, and representative network traffic. Such tools are vital in a testbed environment, where they can be used to evaluate the behavior and performance of security-related tools. In this context, we present the Markov Traffic Generator (MTG), which generates representative network traffic. MTG follows a unique approach of generating background traffic at the session level, unlike previous approaches, which operated at the packet level. The tool is application-dependent and is able to generate various types of TCP traffic. The resulting tool is useful for researchers and developers in building, testing and evaluating cyber security tools. In this work, we develop a classification of background traffic generation models based on past work and present the MTG toolkit. As opposed to past work, MTG uses a first-order hierarchical Markov agent to generate background user behavior in a network testbed. The Markov agents can be used to generate behavior that mimics observed traffic in real networks. The thesis concludes by showing that MTG can realistically replicate observed network behavior.
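    The first-order Markov approach to session-level behavior can be sketched as a random walk over user-action states, where each emitted action would expand into concrete TCP traffic. The states and transition probabilities below are invented placeholders; in MTG they would be estimated from observed traffic:

```python
import random

# Hypothetical session-level states with transition probabilities.
TRANSITIONS = {
    "idle":     [("browse", 0.6), ("mail", 0.2), ("idle", 0.2)],
    "browse":   [("browse", 0.5), ("download", 0.2), ("idle", 0.3)],
    "mail":     [("mail", 0.4), ("idle", 0.6)],
    "download": [("idle", 1.0)],
}

def generate_session(length, start="idle", rng=None):
    """Walk a first-order Markov chain to produce a sequence of
    session-level user actions mimicking background behavior."""
    rng = rng or random.Random()
    state, actions = start, []
    for _ in range(length):
        states, weights = zip(*TRANSITIONS[state])
        state = rng.choices(states, weights=weights)[0]
        actions.append(state)
    return actions
```

    Because the next action depends only on the current state, the chain is cheap to fit from traces (transition counts normalized per state) yet reproduces realistic action mixes and dwell patterns.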

    Feature selection and visualization techniques for network anomaly detector

    Intrusion detection systems have been widely used as burglar alarms in the computer security field. There are two major types of detection techniques: misuse detection and anomaly detection. Although misuse detection can detect known attacks with a lower false positive rate, anomaly detection is capable of detecting any new or varied attempted intrusion, as long as the attempt disturbs the normal state of the system. A network anomaly detector monitors a segment of a network for suspicious activity based on sniffed network traffic. The high speed of networks and the wide use of encryption make it almost impractical for a network anomaly detector to read payload information. This work tries to answer the question: what are the best features for a network anomaly detector? The main experimental data sets are from the 1999 DARPA/Lincoln Laboratory off-line intrusion detection evaluation, since it remains the most comprehensive public benchmark data to date. First, 43 features of different levels and protocols are defined. Using the first three weeks as training data and the last two weeks as testing data, the performance of the features is verified using 5 different classifiers. Second, the feasibility of feature selection is investigated by employing filter and wrapper techniques such as Correlation Feature Selection. Third, the effect of changing the overlap and time window of the network anomaly detector is investigated. Finally, GGobi and MineSet are utilized to visualize intrusion detections to save time and effort for system administrators. The results show that the capability of our features is not limited to probing and denial of service attacks; they can also detect remote-to-local attacks and backdoors. The feature selection techniques successfully reduce the dimensionality of the features from 43 to 10 without degrading performance. The three-dimensional visualizations provide a straightforward view of normal network traffic and malicious attacks. Time plots of key features can aid system administrators in quickly locating possible intrusions.
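    The filter side of the feature-selection step can be sketched as a greedy correlation-based ranking: prefer features strongly correlated with the label, and skip features highly correlated with ones already chosen. This is a simplified stand-in for the CFS idea, not the thesis' exact procedure; the function name and threshold are illustrative:

```python
import numpy as np

def correlation_feature_selection(X, y, max_inter_corr=0.9):
    """Greedy correlation-based filter: rank features by |corr| with the
    label, then skip any feature highly correlated with a selected one."""
    n_features = X.shape[1]
    label_corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                           for j in range(n_features)])
    selected = []
    for j in np.argsort(-label_corr):  # strongest label correlation first
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_inter_corr
               for k in selected):
            selected.append(int(j))
    return selected
```

    Dropping redundant (inter-correlated) features is how a 43-feature set can shrink to around 10 without hurting classifier performance.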