    Packet analysis for network forensics: A comprehensive survey

    Packet analysis is a primary traceback technique in network forensics which, provided that the captured packets are sufficiently detailed, can play back even the entire network traffic for a particular point in time. It can be used to find traces of nefarious online behavior, data breaches, unauthorized website access, malware infection, and intrusion attempts, and to reconstruct image files, documents, email attachments, and other content sent over the network. This paper is a comprehensive survey of the use of packet analysis, including deep packet inspection, in network forensics, and provides a review of AI-powered packet analysis methods with advanced network traffic classification and pattern identification capabilities. Because not all network information can be used in court, the types of digital evidence that might be admissible are detailed. The properties of both hardware appliances and packet analyzer software are reviewed from the perspective of their potential use in network forensics.
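The playback capability described above depends on how capture files store traffic. As an illustration (not tied to any particular tool from the survey), the classic pcap format is simple enough to parse by hand; the sketch below assumes the standard little-endian layout with magic number 0xA1B2C3D4:

```python
import struct

# Minimal sketch of the classic pcap file layout, assuming little-endian
# byte order and the standard magic number 0xA1B2C3D4.
PCAP_GLOBAL_HDR = struct.Struct("<IHHiIII")  # magic, ver_maj, ver_min, tz, sigfigs, snaplen, linktype
PCAP_RECORD_HDR = struct.Struct("<IIII")     # ts_sec, ts_usec, captured_len, original_len

def parse_pcap(data: bytes):
    """Yield (timestamp, packet_bytes) for every record in a pcap buffer."""
    magic, *_ = PCAP_GLOBAL_HDR.unpack_from(data, 0)
    assert magic == 0xA1B2C3D4, "unsupported byte order / format"
    offset = PCAP_GLOBAL_HDR.size
    while offset < len(data):
        ts_sec, ts_usec, cap_len, _orig_len = PCAP_RECORD_HDR.unpack_from(data, offset)
        offset += PCAP_RECORD_HDR.size
        yield ts_sec + ts_usec / 1e6, data[offset:offset + cap_len]
        offset += cap_len

# Build a tiny two-packet capture in memory to demonstrate the round trip.
packets = [b"\x00" * 14 + b"hello", b"\x00" * 14 + b"world"]
buf = PCAP_GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
for i, pkt in enumerate(packets):
    buf += PCAP_RECORD_HDR.pack(1700000000 + i, 0, len(pkt), len(pkt)) + pkt

recovered = [pkt for _ts, pkt in parse_pcap(buf)]
print(recovered == packets)  # True: the captured traffic plays back byte for byte
```

Real forensic workloads would use libpcap or a full analyzer, but the round trip shows why a sufficiently detailed capture can reproduce the original traffic exactly.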

    Network Traffic Capturing With Application Tags

    Network traffic capture and subsequent analysis are useful when we are looking for problems in a network or want to learn more about applications and their network communication. This paper focuses on the process of identifying the network applications running on the local host and associating them with captured packets. The goal of the project is to design a multi-platform tool that captures network traffic to a file and extends the capture with application tags, i.e., the recognized applications and the identification of their packets. Operations that can be performed independently are parallelized to speed up packet processing and thereby reduce packet loss. The source application is determined for every packet, both incoming and outgoing. All identified applications are stored in an application cache, together with information about their sockets, to save time by not searching again for already known applications. It is important to update this cache periodically, because a communicating application may close a socket at any time. Finally, the gathered information is appended to the end of the pcap-ng file as a separate pcap-ng block.
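The application cache described above can be sketched as a small map keyed by connection 5-tuple with expiring entries. Everything here is illustrative: `resolve_owner` stands in for the real OS socket-table query, and the names and TTL are invented:

```python
import time

# Sketch of a per-socket application cache: identified applications are
# cached by connection 5-tuple so repeated packets skip the (expensive) OS
# lookup, and entries expire so that a closed socket is re-resolved.
class AppCache:
    def __init__(self, ttl_seconds=2.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}   # 5-tuple -> (app_name, inserted_at)
        self.lookups = 0     # how many times we had to query the OS

    def resolve_owner(self, flow):
        # Placeholder for the real OS query (e.g. walking the socket tables).
        self.lookups += 1
        return f"app-for-{flow[0]}"

    def tag(self, flow):
        now = self.clock()
        hit = self._entries.get(flow)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]                   # fresh cache hit
        app = self.resolve_owner(flow)      # miss or expired: re-resolve
        self._entries[flow] = (app, now)
        return app

flow = ("proto-tcp", "10.0.0.1", 51000, "10.0.0.2", 443)
cache = AppCache(ttl_seconds=2.0)
tags = [cache.tag(flow) for _ in range(5)]
print(cache.lookups)  # 1 -- four of the five packets were cache hits
```

The periodic refresh falls out of the TTL check: once an entry ages past the TTL, the next packet for that flow triggers a fresh OS lookup, so a socket closed and reused by another application is eventually re-identified.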

    Exploring Fog of War Concepts in Wargame Scenarios

    This thesis explores fog of war concepts through three submitted journal articles. The Department of Defense and U.S. Air Force are attempting to analyze war scenarios to aid the decision-making process; fog modeling improves realism in these wargame scenarios. The first article, Navigating an Enemy Contested Area with a Parallel Search Algorithm [1], investigates a parallel algorithm's speedup over the sequential implementation across varying map configurations in a tile-based wargame. The parallel speedup tends to exceed 50, but in certain situations the sequential algorithm outperforms it, depending on the location and number of enemies on the map. The second article, Modeling Fog of War Effects in AFSIM [2], introduces the Fog Analysis Tool (FAT) for the Advanced Framework for Simulation, Integration, and Modeling (AFSIM) to introduce and manipulate fog in wargame scenarios. FAT integrates into AFSIM version 2.7.0, and scenario results verify that the tool's fog effects for positioning error, hits, and probability affect the success rate. The third article, Applying Fog Analysis Tool to AFSIM Multi-Domain CLASS Scenarios [3], furthers the verification of FAT by introducing fog across all war-fighting domains using a set of CLASS scenarios. The success rate trends with fog impact in each domain scenario support FAT's effectiveness in disrupting the decision-making process for multi-domain operations. Together, the three articles demonstrate that fog can affect search, tasking, and decision-making processes in various types of wargame scenarios. The capabilities introduced in this thesis help wargame analysts improve decision-making in AFSIM military scenarios.
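The parallel search idea from the first article can be illustrated, in a much simplified form that is not the thesis's algorithm, by splitting a tile map into strips and scanning them concurrently; as the abstract notes, whether this beats a sequential scan depends on how enemies are distributed:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative tile-based map: enemy positions and map size are invented.
WIDTH, HEIGHT = 8, 8
enemies = {(1, 2), (6, 6), (3, 7)}
grid = [["E" if (x, y) in enemies else "." for x in range(WIDTH)]
        for y in range(HEIGHT)]

def search_rows(rows):
    """Sequentially scan a strip of rows, returning enemy coordinates."""
    return [(x, y) for y in rows for x in range(WIDTH) if grid[y][x] == "E"]

def parallel_search(workers=4):
    # Interleaved strips so each worker gets a comparable share of the map.
    strips = [range(i, HEIGHT, workers) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(search_rows, strips))
    return {pos for chunk in results for pos in chunk}

# The parallel partition finds exactly the same enemies as a full scan.
print(parallel_search() == set(search_rows(range(HEIGHT))))  # True
```

On a map this small the thread overhead dominates; the speedup the article measures only appears when the per-strip work is large relative to coordination cost, which is why enemy placement and count change which algorithm wins.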

    Firewall monitoring using intrusion detection systems

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2005. Includes bibliographical references (leaves: 79-81). Text in English; abstract in Turkish and English. viii, 79 leaves. Most organizations have an intranet and know the benefits of connecting their private LAN to the Internet. However, the Internet is inherently an insecure network, which makes the security of computer systems an important problem. The first line of network security is the firewall. Firewalls are used to protect internal networks from external attacks by restricting network access according to rules. The firewall must apply previously defined rules to each packet reaching its network interface. If the application of rules is prevented by malfunction or hacking, the internal network may be open to attacks, and this situation should be recovered from as quickly as possible. To ensure that the firewall is working properly, we propose using Intrusion Detection Systems (IDSs) to monitor firewall operation. Our experimental environment consists of a firewall and two IDSs: one between the external network and the firewall, the other between the firewall and the private network. The two IDSs are invisible to both networks and send their information to a monitoring server, which decides, based on the two observations, whether the firewall is working properly.
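The monitoring server's decision can be sketched as a set comparison between the two IDS observations. The simplifications are ours, not the thesis's: each IDS reports a set of packet identifiers, and `should_block` encodes the firewall policy:

```python
# If traffic the policy forbids appears on the inner (private) side, or the
# inner IDS sees packets the outer IDS never saw, the firewall is presumed
# to be malfunctioning or compromised.
def firewall_ok(outer_seen, inner_seen, should_block):
    """Compare the two IDS observations against the rule set."""
    leaked = {p for p in inner_seen if should_block(p)}  # forbidden, yet inside
    unexpected = inner_seen - outer_seen                 # inside, never seen outside
    suspicious = leaked | unexpected
    return not suspicious, suspicious

# Invented policy and packet identifiers for illustration.
should_block = lambda pkt: pkt.startswith("telnet")
outer = {"http-1", "telnet-1", "http-2"}

ok, bad = firewall_ok(outer, {"http-1", "http-2"}, should_block)
print(ok)       # True: the telnet packet never reached the private network

ok, bad = firewall_ok(outer, {"http-1", "telnet-1"}, should_block)
print(ok, bad)  # False {'telnet-1'}: forbidden traffic passed the firewall
```

A real deployment would correlate packets by header fields and timestamps rather than opaque identifiers, but the decision structure is the same: the inner observation must be consistent with the outer observation filtered through the rule set.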

    Protocol Layering and Internet Policy

    An architectural principle known as protocol layering is widely recognized as one of the foundations of the Internet’s success. In addition, some scholars and industry participants have urged using the layers model as a central organizing principle for regulatory policy. Despite its importance as a concept, a comprehensive analysis of protocol layering and its implications for Internet policy has yet to appear in the literature. This Article attempts to correct this omission. It begins with a detailed description of the way the five-layer model developed, introducing protocol layering’s central features, such as the division of functions across layers, information hiding, peer communication, and encapsulation. It then discusses the model’s implications for whether particular functions are performed at the edge or in the core of the network, contrasts the model with the way that layering has been depicted in the legal commentary, and analyzes attempts to use layering as a basis for competition policy. Next, the Article identifies certain emerging features of the Internet that are placing pressure on the layered model, including WiFi routers, network-based security, modern routing protocols, and wireless broadband. These developments illustrate how every architecture inevitably limits functionality as well as the architecture’s ability to evolve over time in response to changes in the technological and economic environment. Together these considerations support adopting a more dynamic perspective on layering and caution against using layers as a basis for a regulatory mandate, for fear of cementing the existing technology into place in a way that prevents the network from innovating and evolving in response to shifts in the underlying technology and consumer demand.
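The encapsulation and information hiding the Article describes can be made concrete with a toy five-layer stack; the header strings below are placeholders, not real protocol headers:

```python
# Each layer wraps the payload it receives in its own header and never
# inspects the headers of the layers above it; the receiving peer strips
# headers in reverse order, one layer at a time.
LAYERS = ["application", "transport", "network", "link", "physical"]

def encapsulate(payload: str) -> str:
    # Going down the stack, each lower layer wraps the upper layer's data.
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    # Coming back up, each layer reads and removes only its own header --
    # the "information hiding" at the heart of the layered model.
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"{layer} header missing"
        frame = frame[len(header):]
    return frame

wire = encapsulate("GET /index.html")
print(wire)               # [physical][link][network][transport][application]GET /index.html
print(decapsulate(wire))  # GET /index.html
```

Peer communication falls out of the same structure: the transport layer on one host effectively talks to the transport layer on the other, because every layer in between hands it exactly the bytes its peer wrapped.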

    Unmanned Aircraft Systems in the Cyber Domain

    Unmanned Aircraft Systems are an integral part of the US national critical infrastructure. The authors have endeavored to bring a breadth and quality of information to the reader that is unparalleled in the unclassified sphere. This textbook fully immerses and engages the reader / student in the cyber-security considerations of this rapidly emerging technology that we know as unmanned aircraft systems (UAS). The first edition covered National Airspace (NAS) policy issues; information security (INFOSEC); UAS vulnerabilities in key systems (Sense and Avoid / SCADA); navigation and collision avoidance systems; stealth design; intelligence, surveillance and reconnaissance (ISR) platforms; weapons systems security; electronic warfare considerations; data-links; jamming; operational vulnerabilities; and still-emerging political scenarios that affect US military / commercial decisions. This second edition discusses state-of-the-art technology issues facing US UAS designers. It focuses on counter unmanned aircraft systems (C-UAS), especially research designed to mitigate and terminate threats posed by SWARMS. Topics include high-altitude platforms (HAPS) for wireless communications; C-UAS and large-scale threats; acoustic countermeasures against SWARMS and building an Identify Friend or Foe (IFF) acoustic library; updates to the legal / regulatory landscape; UAS proliferation along the Chinese New Silk Road sea / land routes; and ethics in this new age of autonomous systems and artificial intelligence (AI).

    Tightly-Coupled and Fault-Tolerant Communication in Parallel Systems

    Full text link
    The demand for processing power is increasing steadily. In the past, single processor architectures clearly dominated the markets. As instruction level parallelism is limited in most applications, significant performance can only be achieved in the future by exploiting parallelism at the higher levels of thread or process parallelism. As a consequence, modern “processors” incorporate multiple processor cores that form a single shared memory multiprocessor. In such systems, high performance devices like network interface controllers are connected to processors and memory like every other input/output device over a hierarchy of peripheral interconnects. Thus, one target must be to couple coprocessors physically closer to main memory and to the processors of a computing node. This removes the overhead of today’s peripheral interconnect structures. Such a step is the direct connection of HyperTransport (HT) devices to Opteron processors, which is presented in this thesis. Also, this work analyzes how communication from a device to processors can be optimized on the protocol level. As today’s computing nodes are shared memory systems, the cache coherence protocol is the central protocol for data exchange between processors and devices. Consequently, the analysis extends to classes of devices that are cache coherence protocol aware. Also, the concept of a transfer cache is proposed in this thesis, which reduces latency significantly even for non-coherent devices. The trend to the exploitation of process and thread level parallelism leads to a steady increase of system sizes. Networks that are used in such large systems are very susceptible to both hard and transient faults. Most transient fault rates are constant per bit that is stored or transmitted. With increasing system sizes and higher clock frequencies, the number of faults in time increases drastically. 
In the end, the error rate may rise to a level where high-level error recovery becomes too costly if the lower layers do not perform error correction that is transparent to the layers above. The second part of this thesis describes a direct interconnection network that provides a reliable transport service even without the use of end-to-end protocols. In addition, a novel hardware-based solution for intermediate routing is developed, which allows efficient, deadlock-free routing around faulty links.
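The case for recovery below the end-to-end layer can be illustrated with a toy per-hop retransmission loop. The fault model (every third frame corrupted) and the CRC framing are invented for the sketch; this is not the interconnect described in the thesis:

```python
import zlib

class LossyLink:
    """Corrupts every `period`-th frame, modeling a constant per-bit fault rate."""
    def __init__(self, period=3):
        self.period, self.count = period, 0

    def transmit(self, frame: bytes) -> bytes:
        self.count += 1
        if self.count % self.period == 0:
            return frame[:-1] + bytes([frame[-1] ^ 0xFF])  # flip the last byte
        return frame

def send_reliable(link, payload: bytes, max_retries=10) -> bytes:
    """Link-level recovery: CRC-check each frame and retransmit until clean."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")  # append CRC32
    for _ in range(max_retries):
        received = link.transmit(frame)
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return data  # delivered; the layers above never see the fault
    raise RuntimeError("link unusable")

link = LossyLink(period=3)
messages = [send_reliable(link, f"flit-{i}".encode()) for i in range(6)]
print(messages == [f"flit-{i}".encode() for i in range(6)])  # True
```

Because each hop repairs its own faults, the end-to-end layer observes a clean channel; without this, the per-message retry cost would grow with the fault rate of every link on the path, which is exactly the scaling problem the abstract describes.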

    NETWORK TRAFFIC CHARACTERIZATION AND INTRUSION DETECTION IN BUILDING AUTOMATION SYSTEMS

    The goal of this research was threefold: (1) to learn the operational trends and behaviors of a real-world building automation system (BAS) network in order to create building device models for detecting anomalous behaviors and attacks, (2) to design a framework for evaluating BAS device security from both the device and network perspectives, and (3) to leverage new sources of building automation device documentation for developing robust network security rules for BAS intrusion detection systems (IDSs). These goals were achieved in three phases. First, a detailed longitudinal study and characterization of a real university campus building automation network (BAN) was conducted, and machine learning techniques were applied to field-level traffic for anomaly detection. Next, the literature in the BAS security domain was systematized to analyze cross-protocol device vulnerabilities, attacks, and defenses, uncovering research gaps that form the foundation of our proposed BAS device security evaluation framework. To evaluate the proposed framework, the largest multi-protocol BAS testbed discussed in the literature was built, and several side-channel vulnerabilities and software/firmware shortcomings were exposed. Finally, a semi-automated framework for specification gathering, device documentation extraction, and IDS rule generation was developed, leveraging PICS files and BIM models. Ph.D.
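The field-level anomaly detection from the first phase can be sketched, in a much reduced form, as a threshold on a single traffic feature. The feature (packets per polling interval), the numbers, and the 3-sigma threshold are all illustrative, not the dissertation's models:

```python
import statistics

# Learn the normal range of a traffic feature from a training window, then
# flag intervals that deviate by more than `threshold` standard deviations.
def train(counts):
    return statistics.mean(counts), statistics.stdev(counts)

def is_anomalous(count, mean, stdev, threshold=3.0):
    return abs(count - mean) > threshold * stdev

# BAS field traffic is typically periodic polling, so counts per interval
# are very stable -- which is what makes simple baselines workable here.
baseline = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]
mean, stdev = train(baseline)

print(is_anomalous(101, mean, stdev))  # False: ordinary polling interval
print(is_anomalous(500, mean, stdev))  # True: e.g. a scan or flood
```

The regularity of building automation traffic is the point: because field devices poll on fixed schedules, even a simple statistical baseline separates normal operation from injected traffic far more cleanly than it would on general-purpose IT networks.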