
    Developing an Advanced IPv6 Evasion Attack Detection Framework

    Internet Protocol Version 6 (IPv6) is the most recent generation of the Internet Protocol. The transition from the current IPv4 to IPv6 raises new issues, the most crucial of which is security vulnerabilities. Many vulnerabilities are common to IPv4 and IPv6, e.g. evasion attacks, Distributed Denial of Service (DDoS) and fragmentation attacks. According to the IPv6 RFC (Request for Comments) recommendations, there are potential attacks against various operating systems. Discrepancies between the behaviour of different operating systems can lead to Intrusion Detection System (IDS) evasion, firewall evasion, operating system fingerprinting, network mapping, DoS/DDoS attacks and remote code execution attacks. We investigated some of the security issues in IPv6 by reviewing existing solutions and methods, and tested two open source Network Intrusion Detection Systems (NIDSs), Snort and Suricata, against a selection of IPv6 evasion and attack methods. The results show that both NIDSs are unable to detect most of the methods used to evade detection. This thesis presents a detection framework developed specifically for IPv6 networks to detect evasion, insertion and DoS attacks that use IPv6 Extension Headers and fragmentation. We implemented the proposed theoretical solution as a framework for evaluation tests. To develop the framework, the “dpkt” module is employed to capture and decode packets. During development, a bug was found in the module's packet parsing/decoding code and a patch was provided so that IPv6 packets are decoded correctly. The standard unpack function in the “ip6” part of the “dpkt” package follows the extension header chain as it parses, so afterwards one no longer has access to all the extension headers in their original order. Defining a new field called all_extension_headers, and appending each header to it before moving on, gives access to all the extension headers while leaving the parse speed of the framework virtually untouched. The extra memory footprint is also negligible, since it is a small, linear fraction of the size of each packet. By decoding each packet, extracting its data and evaluating that data against user-defined values, the proposed framework is able to detect IPv6 evasion, insertion and DoS attacks. The framework consists of four layers. The first layer captures the network traffic and passes it to the second layer for packet decoding, the most important part of the detection process: if a NIDS cannot decode and extract the packet content, it cannot pass correct information to the Detection Engine. Once a packet has been decoded, it is sent to the third layer, the Detection Engine, which is the brain of the proposed solution; it evaluates the decoded information against the defined values to decide whether the packet is a threat. Once the packet(s) have been examined by the detection processes, the result is sent to the output layer. If a packet matches a type or signature chosen by the system administrator, an alarm is raised and all details of the packet are automatically logged and saved for further investigation. We evaluated the proposed framework and its constituent processes via numerous experiments. The results show that the proposed framework, called the NOPO framework, offers better detection accuracy, a more accurate packet decoding process, and lower resource usage than both existing NIDSs.
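
    The all_extension_headers idea can be illustrated with a short, self-contained sketch that does not depend on dpkt's internals (the function and constant names below are illustrative, not the thesis's actual patch): walk an IPv6 packet's extension header chain and record every header, in its original order, alongside the final upper-layer protocol number.

```python
# Hop-by-Hop (0), Routing (43), Fragment (44), Destination Options (60) and
# Mobility (135) all use the generic (next header, length) layout; the
# Fragment header's length field is reserved (0), so the same formula holds.
GENERIC_EXT_HEADERS = {0, 43, 44, 60, 135}
AH = 51  # the Authentication Header measures its length in 4-octet units

def collect_extension_headers(packet: bytes):
    """Return (upper_layer_protocol, [(header_type, raw_bytes), ...]) for a raw
    IPv6 packet, preserving the original order of the extension headers."""
    if len(packet) < 40:
        raise ValueError("truncated IPv6 fixed header")
    next_hdr = packet[6]      # Next Header field of the 40-byte fixed header
    offset = 40
    headers = []
    while next_hdr in GENERIC_EXT_HEADERS or next_hdr == AH:
        if offset + 2 > len(packet):
            raise ValueError("truncated extension header chain")
        nxt, ext_len = packet[offset], packet[offset + 1]
        if next_hdr == AH:
            hdr_len = (ext_len + 2) * 4   # AH: length in 4-octet units, minus 2
        else:
            hdr_len = (ext_len + 1) * 8   # generic: 8-octet units, first 8 not counted
        if offset + hdr_len > len(packet):
            raise ValueError("extension header overruns packet")
        headers.append((next_hdr, packet[offset:offset + hdr_len]))
        offset += hdr_len
        next_hdr = nxt
    return next_hdr, headers
```

    With the full, ordered chain available, a detection engine can flag packets whose headers violate the order recommended by RFC 8200 or that carry unexpectedly repeated extension headers, which are among the evasion tricks described above.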

    End-to-end security in active networks

    Active network solutions have been proposed for many of the problems caused by the increasing heterogeneity of the Internet. These systems allow nodes within the network to process data passing through them in several ways. Allowing code from various sources to run on routers introduces numerous security concerns that have been addressed by research into safe languages, restricted execution environments, and other related areas. But little attention has been paid to an even more critical question: the effect of active flow manipulation on end-to-end security. This thesis first examines the threat model implicit in active networks. It develops a framework of security protocols in use at various layers of the networking stack, and their utility to multimedia transport and flow processing, and asks whether it is reasonable to give active routers access to the plaintext of these flows. After considering the various security problems introduced, such as vulnerability to attacks on intermediaries or to coercion, it concludes that it is not. We then ask whether active network systems can be built that maintain end-to-end security without seriously degrading the functionality they provide. We describe the design and analysis of three such protocols: a distributed packet filtering system that can be used to adjust multimedia bandwidth requirements and defend against denial-of-service attacks; an efficient composition of link- and transport-layer reliability mechanisms that increases the performance of TCP over lossy wireless links; and a distributed watermarking service that can efficiently deliver media flows marked with the identity of their recipients. In all three cases, functionality comparable to that of designs that do not maintain end-to-end security is provided. Finally, we reconsider traditional end-to-end arguments in both networking and security, and show that they have continuing importance for Internet design. Our watermarking work adds the concept of splitting trust throughout a network to that model; we suggest further applications of this idea.

    A Framework for Cyber Vulnerability Assessments of InfiniBand Networks

    InfiniBand is a popular Input/Output interconnect technology used in High Performance Computing clusters. It is employed in over a quarter of the world’s 500 fastest computer systems. Although it was created to provide extremely low network latency with a high Quality of Service, the cybersecurity aspects of InfiniBand have yet to be thoroughly investigated. The InfiniBand Architecture was designed as a data center technology, logically separated from the Internet, so defensive mechanisms such as packet encryption were not implemented. The cybersecurity community does not appear to have taken an interest in InfiniBand, but that is likely to change as attackers branch out from traditional computing devices. This thesis considers the security implications of InfiniBand features and constructs a framework for conducting Cyber Vulnerability Assessments. Several attack primitives are tested and analyzed. Finally, new cyber tools and security devices for InfiniBand are proposed, and changes to existing products are recommended.

    Attacking and securing Network Time Protocol

    Network Time Protocol (NTP) is used to synchronize time between computer systems communicating over unreliable, variable-latency, and untrusted network paths. Time is critical for many applications; in particular it is heavily utilized by cryptographic protocols. Despite its importance, the community still lacks visibility into the robustness of the NTP ecosystem itself, the integrity of the timing information transmitted by NTP, and the impact that any error in NTP might have upon the security of other protocols that rely on timing information. In this thesis, we seek to accomplish the following broad goals: 1. Demonstrate that the current design presents a security risk, by showing that network attackers can exploit NTP and then use it to attack other core Internet protocols that rely on time. 2. Improve NTP to make it more robust, and rigorously analyze the security of the improved protocol. 3. Establish formal and precise security requirements that should be satisfied by a network time-synchronization protocol, and prove that these are sufficient for the security of other protocols that rely on time. We take the following approach to achieve our goals incrementally. 1. We begin by (a) scrutinizing NTP's core protocol (RFC 5905) and (b) statically analyzing the code of its reference implementation to identify vulnerabilities in protocol design, ambiguities in specifications, and flaws in the reference implementation. We then leverage these observations to show several off- and on-path denial-of-service and time-shifting attacks on NTP clients. We then show cache-flushing and cache-sticking attacks on DNS(SEC) that leverage NTP. We quantify the attack surface using Internet measurements, and suggest simple countermeasures that can improve the security of NTP and DNS(SEC). 2. Next, we move beyond identifying attacks and leverage ideas from the Universal Composability (UC) security framework to develop a cryptographic model for attacks on NTP's datagram protocol. We use this model to prove the security of a new backwards-compatible protocol that correctly synchronizes time in the face of both off- and on-path network attackers. 3. Next, we propose general security notions for network time-synchronization protocols within the UC framework and formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols (including but not limited to NTP), and how they can be used to assert the security of time-reliant applications, specifically cryptographic certificates with revocation and expiration times. Our security framework allows for a clear and modular treatment of the use of time in security-sensitive systems. Our work makes the core NTP protocol and its implementations more robust and secure, thus improving the security of applications and protocols that rely on time.
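
    To make the timing exchange that these attacks target concrete, the sketch below sends a minimal SNTP-style query and computes the client's offset and round-trip delay estimates from the four timestamps defined in RFC 5905 (pool.ntp.org and the version-4, mode-3 request are illustrative choices for the example, not the thesis's measurement setup):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800          # seconds between 1900-01-01 and 1970-01-01
NTP_SERVER = ("pool.ntp.org", 123)     # illustrative public server

def ntp_to_unix(seconds, fraction):
    """Convert a 64-bit NTP timestamp (seconds, fraction) to a Unix timestamp."""
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

def query_offset_and_delay():
    request = b"\x23" + 47 * b"\x00"   # LI=0, version 4, mode 3 (client); rest zero
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    t1 = time.time()                   # T1: client transmit time
    sock.sendto(request, NTP_SERVER)
    response, _ = sock.recvfrom(512)
    t4 = time.time()                   # T4: client receive time
    words = struct.unpack_from("!12I", response)
    t2 = ntp_to_unix(words[8], words[9])     # T2: server receive timestamp
    t3 = ntp_to_unix(words[10], words[11])   # T3: server transmit timestamp
    offset = ((t2 - t1) + (t3 - t4)) / 2     # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)            # estimated round-trip delay
    return offset, delay

print(query_offset_and_delay())
```

    Because nothing in this exchange is authenticated, an attacker who can spoof, drop or delay these UDP datagrams can bias the offset estimate or deny service; that unauthenticated datagram protocol is exactly the attack surface the thesis analyzes and then hardens.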

    Clinical foundations and information architecture for the implementation of a federated health record service

    Clinical care increasingly requires healthcare professionals to access patient record information that may be distributed across multiple sites, held in a variety of paper and electronic formats, and represented as mixtures of narrative, structured, coded and multi-media entries. A longitudinal person-centred electronic health record (EHR) is a much-anticipated solution to this problem, but its realisation is proving to be a long and complex journey. This thesis explores the history and evolution of clinical information systems, and establishes a set of clinical and ethico-legal requirements for a generic EHR server. A federation approach (FHR) to harmonising distributed heterogeneous electronic clinical databases is advocated as the basis for meeting these requirements. A set of information models and middleware services, needed to implement a Federated Health Record server, are then described, thereby supporting access by clinical applications to a distributed set of feeder systems holding patient record information. The overall information architecture thus defined provides a generic means of combining such feeder system data to create a virtual electronic health record. Active collaboration in a wide range of clinical contexts, across the whole of Europe, has been central to the evolution of the approach taken. A federated health record server based on this architecture has been implemented by the author and colleagues and deployed in a live clinical environment in the Department of Cardiovascular Medicine at the Whittington Hospital in North London. This implementation experience has fed back into the conceptual development of the approach and has provided "proof-of-concept" verification of its completeness and practical utility. This research has benefited from collaboration with a wide range of healthcare sites, informatics organisations and industry across Europe through several EU Health Telematics projects: GEHR, Synapses, EHCR-SupA, SynEx, Medicate and 6WINIT. The information models published here have been placed in the public domain and have substantially contributed to two generations of CEN health informatics standards, including CEN TC/251 ENV 13606.

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health) two categories, or levels, of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information and knowledge interoperability among heterogeneous systems can be achieved. This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium’s (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized. The flexibility of the approach was also investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geo-spatial domain and then further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected existing geospatial data models and standards. However, it was found that re-using existing standards requires careful ontological analysis per domain concept, so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercial projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes have been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings of this work can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, the outcome of this work is a recommendation to further develop and evaluate the approach, building on the positive results thus far and on the base software artefacts developed to support it.
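
    As a rough illustration of the two-level idea (this is a generic sketch, not the thesis's geo-archetype framework; the class names, the archetype and its constraints are invented for the example), a small, stable reference model can be kept as ordinary code while the volatile domain knowledge lives in archetypes, i.e. constraint definitions expressed as data and validated against reference-model instances:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

# Level 1: a stable reference model. These classes are generic and change rarely.
@dataclass
class Element:
    name: str
    value: Any
    unit: Optional[str] = None

@dataclass
class Observation:
    feature_of_interest: str        # e.g. a monitoring station identifier
    observed_property: str          # e.g. "sea_water_temperature"
    result: List[Element]

# Level 2: an archetype. Domain knowledge expressed as data, so domain experts
# can evolve it without touching the reference model or the software above it.
SEA_TEMPERATURE_ARCHETYPE = {
    "observed_property": "sea_water_temperature",
    "elements": {
        "temperature": {"unit": "degC", "min": -5.0, "max": 45.0},
        "depth": {"unit": "m", "min": 0.0, "max": 11000.0},
    },
}

def validate(obs: Observation, archetype: dict) -> List[str]:
    """Check an Observation against an archetype and return any violations."""
    problems = []
    if obs.observed_property != archetype["observed_property"]:
        problems.append("observed_property does not match the archetype")
    for element in obs.result:
        rule = archetype["elements"].get(element.name)
        if rule is None:
            problems.append(f"unexpected element: {element.name}")
            continue
        if element.unit != rule["unit"]:
            problems.append(f"{element.name}: unit {element.unit!r} != {rule['unit']!r}")
        elif not rule["min"] <= element.value <= rule["max"]:
            problems.append(f"{element.name}: value {element.value} out of range")
    return problems
```

    Systems that share both the reference model and the archetype can then exchange and validate observations consistently; editing or adding an archetype changes what counts as a valid observation without any change to the reference model, which is the interoperability property the two-level approach is designed to provide.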

    Radio Communications

    In recent decades the restless evolution of information and communication technologies (ICT) has brought about a deep transformation of our habits. The growth of the Internet and advances in hardware and software implementations have modified the way we communicate and share information. In this book, an overview of the major issues faced today by researchers in the field of radio communications is given through 35 high-quality chapters written by specialists working in universities and research centers all over the world. Various aspects are discussed in depth: channel modeling, beamforming, multiple antennas, cooperative networks, opportunistic scheduling, advanced admission control, handover management, systems performance assessment, routing issues in mobility conditions, localization, and web security. Advanced techniques for radio resource management are discussed for both single and multiple radio technologies, whether in infrastructure, mesh or ad hoc networks.

    Verifying the Behavior of Security-Impacting Programs

    Verifying the behavior of systems can be a critical tool for early identification of security vulnerabilities and attacks. Whether the system is a single program, a set of identical devices running the same program, or a population of heterogeneous devices running different ensembles of programs, analyzing behavior at scale enables early detection of security flaws and potential adversary behavior. In the first case of single-program systems, such as a cryptographic protocol implementation like OpenSSL, behavioral verification can detect attacks such as Heartbleed without pre-existing knowledge of the attack. The second case of multiple identical devices is typical of manufacturing settings, where thousands of identical devices can be running a critical program such as cryptographic key generation. Verifying statistical properties of the output (keys) across the full set of devices can detect potentially catastrophic entropy flaws before deployment. Finally, in enterprise incident response settings, there can be a large, heterogeneous population of employee laptops running different ensembles of programs. In this case, correct behavior is rarely well-defined and requires evaluation by human analysts, a severely limited resource. Monitoring both the internal and output behavior of laptops at scale enables machine learning to approximate the intuition of a human analyst across a large dataset, and thereby facilitates early detection of malware and host compromise.
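
    One concrete example of the statistical check described for the manufacturing case is to scan the RSA moduli produced by a device population for shared prime factors, the classic symptom of entropy-starved key generation (this is a standard, well-known check offered here as an illustration; the thesis may use different tests):

```python
from itertools import combinations
from math import gcd

def find_shared_factors(moduli):
    """Flag pairs of RSA moduli that have a non-trivial common divisor.

    A shared factor between two moduli lets anyone factor both keys, so a
    single hit indicates a catastrophic entropy flaw in key generation.
    Pairwise GCD is O(n^2); batch-GCD variants scale the idea to large fleets.
    """
    findings = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = gcd(n1, n2)
        if g != 1:
            findings.append((i, j, g))
    return findings

# Toy usage with tiny numbers (real moduli are 2048+ bits):
# devices 0 and 2 reused the prime 101, so both of their keys are broken.
print(find_shared_factors([101 * 103, 107 * 109, 101 * 113]))   # [(0, 2, 101)]
```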

    Global connectivity architecture of mobile personal devices

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 193-207). The Internet's architecture, designed in the days of large, stationary computers tended by technically savvy and accountable administrators, fails to meet the demands of the emerging ubiquitous computing era. Nontechnical users now routinely own multiple personal devices, many of them mobile, and need to share information securely among them using interactive, delay-sensitive applications. Unmanaged Internet Architecture (UIA) is a novel, incrementally deployable network architecture for modern personal devices, which reconsiders three architectural cornerstones: naming, routing, and transport. UIA augments the Internet's global name system with a personal name system, enabling users to build personal administrative groups easily and intuitively, to establish secure bindings between their own devices and with other users' devices, and to name their devices and their friends much as they would in a cell phone's address book. To connect personal devices reliably, even while mobile, behind NATs or firewalls, or connected via isolated ad hoc networks, UIA gives each device a persistent, location-independent identity, and builds an overlay routing service atop IP to resolve and route among these identities. Finally, to support today's interactive applications built using concurrent transactions and delay-sensitive media streams, UIA introduces a new structured stream transport abstraction, which solves the efficiency and responsiveness problems of TCP streams and the functionality limitations of UDP datagrams. Preliminary protocol designs and implementations demonstrate UIA's features and benefits. A personal naming prototype supports easy and portable group management, allowing use of personal names alongside global names in unmodified Internet applications. A prototype overlay router leverages the naming layer's social network to provide efficient ad hoc connectivity in restricted but important common-case scenarios. Simulations of more general routing protocols, one inspired by distributed hash tables and one based on recent compact routing theory, explore promising generalizations of UIA's overlay routing. A library-based prototype of UIA's structured stream transport enables incremental deployment in either OS infrastructure or applications, and demonstrates the responsiveness benefits of the new transport abstraction via dynamic prioritization of interactive web downloads. Finally, an exposition and experimental evaluation of NAT traversal techniques provides insight into routing optimizations useful in UIA and elsewhere. By Bryan Alexander Ford, Ph.D.
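
    UIA's persistent, location-independent device identities need no central allocation; a standard way to build such identities is to derive them from a cryptographic hash of the device's own public key, so that any peer can verify a claimed identity by checking a signature. The sketch below shows that general technique (the key type, hash and encoding are illustrative choices for this example, not UIA's actual identifier format):

```python
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def make_device_identity():
    """Generate a keypair and a self-certifying, location-independent device ID.

    The ID is a hash of the public key, so it stays the same regardless of the
    IP address or network the device is currently reachable on, and ownership
    can be proven by signing a challenge with the matching private key.
    """
    private_key = Ed25519PrivateKey.generate()
    public_raw = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    device_id = hashlib.sha256(public_raw).hexdigest()
    return private_key, device_id

private_key, device_id = make_device_identity()
signature = private_key.sign(b"prove ownership of " + device_id.encode())
print(device_id)   # stable identifier, resolved via an overlay rather than by IP
```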