
    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and a survey of its applications to networking. (30 pages; submitted to IEEE Communications Surveys and Tutorials.)
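    As a hedged illustration of the kind of property formal methods can check for a network policy, the sketch below uses the Z3 SMT solver (one of many tools in this space; its use here, and the rule encoding, are this example's own assumptions) to prove that a later firewall rule is shadowed by an earlier one, i.e., can never match.

```python
# Minimal sketch: SMT-based shadowing check for two firewall rules.
# Z3 and this encoding are illustrative assumptions, not the survey's method.
from z3 import BitVec, And, Not, Solver, UGE, ULE, unsat

src = BitVec('src', 32)       # source IPv4 address as a 32-bit vector
dport = BitVec('dport', 16)   # destination port

# Rule 1 (earlier): deny all traffic from 10.0.0.0/8.
rule1 = And(UGE(src, 0x0A000000), ULE(src, 0x0AFFFFFF))
# Rule 2 (later): allow traffic from 10.1.0.0/16 to port 443.
rule2 = And(UGE(src, 0x0A010000), ULE(src, 0x0A01FFFF), dport == 443)

# Rule 2 is shadowed iff no packet can match it without matching rule 1.
s = Solver()
s.add(rule2, Not(rule1))
print("rule 2 shadowed:", s.check() == unsat)  # prints: rule 2 shadowed: True
```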

    Towards real-time intrusion detection for NetFlow and IPFIX

    DDoS attacks bring serious economic and technical damage to networks and enterprises, so timely detection and mitigation are of great importance. However, when flow monitoring systems are used for intrusion detection, as is often the case in campus, enterprise, and backbone networks, timely data analysis is constrained by the architecture of NetFlow and IPFIX: analysis is performed only after certain timeouts expire, which generally delays intrusion detection by several minutes. This paper presents a functional extension for both NetFlow and IPFIX flow exporters that allows timely intrusion detection and mitigation of large flooding attacks. The contribution of this paper is threefold. First, we integrate a lightweight intrusion detection module into a flow exporter, which moves detection closer to the traffic observation point. Second, our approach mitigates attacks in near real time by instructing firewalls to filter malicious traffic. Third, we filter out flow data of malicious traffic to prevent flow collectors from being overloaded. We validate our approach by means of a prototype deployed on a backbone link of CESNET, the Czech national research and education network.
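    A minimal sketch of the exporter-side idea follows: a detection hook that counts new flows per source inside the metering process, triggers mitigation, and suppresses export of malicious records. The threshold, window, and block_source() hook are illustrative assumptions, not the paper's actual mechanism.

```python
# Hedged sketch of exporter-side flood detection: react before the usual
# NetFlow/IPFIX timeouts expire. Threshold, window, and the block_source()
# callback are this example's own assumptions.
import time
from collections import defaultdict

FLOW_THRESHOLD = 1000   # new flows per source per window (assumed)
WINDOW_SECONDS = 5.0

class ExporterDetector:
    def __init__(self, block_source):
        self.block_source = block_source   # e.g., pushes a firewall rule
        self.window_start = time.monotonic()
        self.flows_per_src = defaultdict(int)
        self.blocked = set()

    def on_new_flow(self, src_ip, flow_record):
        """Called by the metering process for every new flow cache entry.
        Returns the record for export, or None to drop it (keeping the
        collector from being flooded with attack flows)."""
        now = time.monotonic()
        if now - self.window_start > WINDOW_SECONDS:
            self.flows_per_src.clear()
            self.window_start = now
        if src_ip in self.blocked:
            return None
        self.flows_per_src[src_ip] += 1
        if self.flows_per_src[src_ip] > FLOW_THRESHOLD:
            self.blocked.add(src_ip)
            self.block_source(src_ip)      # near-real-time mitigation
            return None
        return flow_record
```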

    Firewall Policy Diagram: Novel Data Structures and Algorithms for Modeling, Analysis, and Comprehension of Network Firewalls

    Firewalls, network devices, and the access control lists that manage traffic are very important components of modern networking from a security and regulatory perspective. When computers were first connected, they communicated with trusted peers, and nefarious intentions were neither recognized nor important. As the reach of networks expanded, however, systems could no longer be certain that a peer could be trusted or that its intentions were good. Therefore, a couple of decades ago, near the widespread adoption of the Internet, a new network device became a very important part of the landscape: the firewall, together with the access control list (ACL) router. These devices became the sentries to an organization's internal network, still allowing some communication, but in a controlled and audited manner. The widespread expansion of the firewall spawned significant research into the science of deterministically controlling access, as fast as possible. However, the success of the firewall in securing the enterprise led to ever-increasing complexity as networks became more inter-connected. Over time, that complexity has continued to grow, making it difficult to understand the access actually allowed by a particular device. As a result of this success, firewalls are among the most important devices used in network security. They provide the protection between networks that wish to communicate only over an explicit set of channels, expressed through the protocols traveling over the network. These explicit channels are described and implemented in a firewall using a set of rules, through which the firewall implements the will of the organization; the rules are collectively called a firewall policy. In small test environments and networks, firewall policies may be easy to comprehend and understand; in real-world organizations, however, these devices and policies must be capable of handling large amounts of traffic traversing hundreds or thousands of rules in a particular policy. Added to that complexity is the tendency of a policy to grow substantially more complex over time, and the result is often unintended mistakes in comprehending the policy, possibly leading to security breaches. Therefore, it is imperative that an organization be able to unerringly and deterministically understand what traffic is allowed through a firewall, even when presented with hundreds or thousands of rules and routes. In addition to representing the local security policy, the modern firewall and filtering router do more than simply decide whether a packet should pass: routing decisions through multiple network interfaces, involving vendor-specific constructs such as zones, domains, virtual routing tables, and multiple security policies, have become the more common type of device found in the industry today. In the past, network devices were separated by functional area (ACL, router, switch, etc.); the more recent trend is for these capabilities to converge and blend, creating a device that goes far beyond a straightforward access control list. This dissertation investigates the comprehension of traffic flow through these complex devices by focusing on the following research topics:
    - Expands on how a security policy may be processed by decoupling the original rules from the policy, allowing instead a holistic understanding of the solution space being represented. This means taking a set of constraints on access (i.e., firewall rules) and synthesizing them into a model that represents an accept and deny space that can be quickly and accurately analyzed.
    - Introduces a new set of data structures and algorithms collectively referred to as a Firewall Policy Diagram (FPD): a structure capable of modeling the Internet Protocol version 4 (IPv4) packet solution space in memory-efficient, mathematically set-based entities (a minimal sketch of this set-based view follows this list). Using the FPD we can answer difficult questions such as: what access is allowed by one policy over another, what is the difference between the two spaces, and how can the data structure representing the large search space be parsed efficiently? The search space can be as large as 2^88, representing the total values available to the source IP address (2^32), destination IP address (2^32), destination port (2^16), and protocol (2^8). The fields represent the bits of an IPv4 packet header as defined by the Open Systems Interconnection (OSI) model. Notably, only the header fields necessary for this research are taken into account, not every available IPv4 header value.
    - Presents a concise, precise, and descriptive language called Firewall Policy Query Language (FPQL) as a mechanism to explore the space. FPQL is a Backus-Naur Form (BNF) compatible notation for a query language that translates concise representations of what the end user needs to know about the solution space into extractions from the underlying data structures.
    - Finally, presents a behavioral model of the capabilities found in firewall-type devices and a process for mapping vendor-specific nuances to a common implementation. This includes understanding interfaces, routes, rules, translation, and policies, and modeling them in a consistent manner so that the many different vendor implementations may be compared to each other.
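    The sketch below illustrates the set-based view the FPD formalizes: each rule modeled as a hyperrectangle over the 88-bit packet tuple, with first-match semantics over the policy. It is a simplification under assumed semantics (including an implicit default-deny), not the dissertation's actual data structure.

```python
# Hedged sketch of the set-based idea behind the FPD: rules as
# hyperrectangles over the 88-bit tuple (src 2^32, dst 2^32, dport 2^16,
# proto 2^8), evaluated with first-match semantics. The real FPD uses
# specialized structures; this interval form is this example's own
# simplification.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: range      # e.g., range(0x0A000000, 0x0B000000) for 10.0.0.0/8
    dst: range
    dport: range
    proto: range
    action: str     # "accept" or "deny"

def first_match(rules, src, dst, dport, proto):
    """Return the action of the first rule matching the packet tuple."""
    for r in rules:
        if src in r.src and dst in r.dst and dport in r.dport and proto in r.proto:
            return r.action
    return "deny"   # implicit default-deny (assumed)

policy = [
    Rule(range(0x0A000000, 0x0B000000), range(0, 2**32), range(0, 2**16),
         range(0, 2**8), "deny"),
    Rule(range(0, 2**32), range(0, 2**32), range(443, 444),
         range(6, 7), "accept"),
]
print(first_match(policy, 0x0A010203, 0xC0A80001, 443, 6))  # deny (rule 1)
print(first_match(policy, 0xC0A80002, 0xC0A80001, 443, 6))  # accept
```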

    SQL Injection Vulnerability Detection Using Deep Learning: A Feature-based Approach

    SQL injection (SQLi), a well-known exploitation technique, is a serious risk factor for database-driven web applications that manage the core business functions of organizations. SQLi enables an unauthorized user to gain access to sensitive information in the database and, subsequently, to the application’s administrative privileges. The detection of SQLi is therefore crucial for businesses to prevent financial losses. There are different rule-based and learning-based solutions to help with detection, and pattern recognition through support vector machines (SVMs) and random forests (RF) has recently become popular for detecting SQLi. However, these classifiers reach only 97.33% accuracy on our dataset. In this paper, we propose a deep learning-based solution for detecting SQLi in web applications. The solution employs both correlation and chi-squared methods to rank the features of the dataset. A feed-forward network approach is applied not only to feature selection but also to the detection process itself. Our solution achieves 98.04% accuracy over 1,850+ records, demonstrating superior efficiency compared with other existing machine learning solutions.
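    A hedged sketch of the described pipeline follows: rank features with the chi-squared test, keep the top k, then train a feed-forward network. The feature count, k, network shape, and stand-in random data are all assumptions; the paper's dataset and exact architecture are not reproduced here.

```python
# Hedged sketch: chi-squared feature ranking feeding a feed-forward
# classifier. Synthetic data stands in for the paper's SQLi dataset.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1850, 20))      # 20 candidate features (stand-in data)
y = rng.integers(0, 2, 1850)    # 1 = SQLi payload, 0 = benign (stand-in)

# Chi-squared requires non-negative features; rank and keep the top 10.
selector = SelectKBest(chi2, k=10)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0)   # small feed-forward network (assumed)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```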

    Digital twins configurator for HIL simulations


    MULTI-DIMENSIONAL PROFILING OF CYBER THREATS FOR LARGE-SCALE NETWORKS

    Current multi-domain command and control computer networks require significant oversight to ensure acceptable levels of security. Firewalls are the proactive security management tool at the network’s edge for separating malicious and benign traffic classes. This work aims to develop machine learning algorithms, through deep learning and semi-supervised clustering, to enable the profiling of potential threats through network traffic analysis within large-scale networks. This research accomplishes these objectives by analyzing enterprise network data at the packet level, using deep learning to classify traffic patterns. In addition, this work examines the efficacy of several machine learning model types and multiple imbalanced-data handling techniques, and incorporates packet streams for identifying and classifying user behaviors. Tests of the packet classification models demonstrated that deep learning is sensitive to malicious traffic but underperforms traditional algorithms in identifying allowed traffic. However, imbalanced-data handling techniques provide performance benefits to some deep learning models. Conversely, semi-supervised clustering accurately identified and classified multiple user behaviors. These models provide an automated tool to learn and predict future traffic patterns. Applying these techniques within large-scale networks detects abnormalities faster and gives network operators greater awareness of user traffic. (Outstanding Thesis. Captain, United States Marine Corps. Approved for public release; distribution is unlimited.)
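    As a hedged illustration of one imbalanced-data handling technique of the kind the thesis evaluates, the sketch below re-weights classes so that rare malicious traffic is not swamped by benign packets. The model choice and synthetic data are assumptions, not the thesis's actual experiments.

```python
# Hedged sketch: class re-weighting for imbalanced traffic classification.
# Model choice and synthetic data are this example's own assumptions.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((10_000, 12))                 # packet-level features
y = (rng.random(10_000) < 0.02).astype(int)  # ~2% malicious class

weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print("class weights:", dict(zip([0, 1], weights)))  # minority upweighted

# Equivalent shortcut: many sklearn models accept class_weight="balanced".
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
```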

    Expert Performance in System Administration

    System administration is a traditional and demanding profession in information technology that has gained little attention from human-computer interaction (HCI) research. System administrators operate in a highly complex environment to keep business applications running and data available and safe. To understand the essence of system administrators' skill, this thesis reports individual differences in 20 professional system administrators’ task performance, task solutions, verbal reports, and learning histories. A set of representative tasks was designed to measure individual differences, and structured interviews were used to collect retrospective information about system administrators’ skill acquisition and level of deliberate practice. Based on the measured performance, the participants were divided into three performance groups. A group of five system administrators stood out from the 20 participants: they completed more tasks successfully, they were faster, they predicted their success more accurately, and they expressed more confidence during performance and anticipation. Although they had extensive professional experience, the study found no relationship between duration of experience and level of expertise. The results align with expert-performance research from other domains: the highest levels of performance in system administration are attained as a result of systematic practice, which involves an investment of effort and makes the activity less enjoyable than competing activities. When studying the learning histories, the quantity and quality of programming experience and other high-effort, computer-related problem-solving activities were found to be the main factors differentiating the 'expert' participants from the less-accomplished ones.

    Cyber Security Network Anomaly Detection and Visualization

    This MQP presents a novel anomaly detection system for computer network traffic, as well as a visualization system to help users explore the results of the anomaly detection. The detection algorithm uses a novel approach to Robust Principal Component Analysis (RPCA) to produce a lower-dimensional subspace of the original data, to which a random forest can be applied to predict anomalies. The visualization system has been designed to help cyber security analysts sort anomalies by attribute and view them in the context of normal network activity. The system consists of an overview of firewall logs, a detail view of each log, and a feature view where an analyst can see which features of the firewall log were implicated by the anomaly detection algorithm.
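    A hedged sketch of the two-stage pipeline follows: project firewall-log features into a lower-dimensional subspace, then classify anomalies with a random forest. Plain PCA stands in for the MQP's Robust PCA variant (sklearn ships no RPCA), and the data here is synthetic; both are this example's own substitutions.

```python
# Hedged sketch: dimensionality reduction followed by random-forest
# anomaly classification. Plain PCA substitutes for Robust PCA, and the
# data is synthetic stand-in material.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 30))              # firewall-log features
y = (rng.random(5000) < 0.05).astype(int)    # 1 = anomalous record

X_low = PCA(n_components=8).fit_transform(X)  # lower-dimensional subspace
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_low, y)
print("train accuracy:", clf.score(X_low, y))
```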
