233 research outputs found

    Data mining based cyber-attack detection


    Attack graph approach to dynamic network vulnerability analysis and countermeasures

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    It is widely accepted that modern computer networks (often presented as a heterogeneous collection of functioning organisations, applications, software, and hardware) contain vulnerabilities. This research proposes a new methodology to compute a dynamic severity cost for each state. Here a state refers to the behaviour of a system during an attack; an example of a state is one in which an attacker could alter the credentials held by an application. This is performed by utilising a modified variant of the Common Vulnerability Scoring System (CVSS), referred to as the Dynamic Vulnerability Scoring System (DVSS), which calculates scores for intrinsic, time-based, and ecological metrics by combining related sub-scores and modelling the problem's parameters in a mathematical framework to produce a unique severity cost. Because the static nature of CVSS limits its scoring value, the author has developed a novel model that produces a DVSS metric that is more precise and efficient. In this approach, the final scores are computed from a number of parameters, including the network architecture, device settings, and the impact of vulnerability interactions.

    An attack graph (AG) is a security model representing the chains of vulnerability exploits in a network. A number of researchers have acknowledged the visual complexity of attack graphs and the lack of in-depth understanding they afford. Current attack graph tools are constrained to limited attributes or even rely on hand-generated input: the automatic collection of vulnerability information has proven troublesome, and vulnerability descriptions are frequently created by hand or based on limited data. The method of computing the cost using the DVSS and a dynamic cost-centric framework considers the network architectures and configurations along with the interactions between individual vulnerabilities. A new methodology was developed to present an attack graph with a dynamic cost metric based on the DVSS, together with a novel methodology to estimate and represent the cost-centric approach for each host's states. The framework is evaluated on a test network, using the Nessus scanner to detect known vulnerabilities, and these results are used to build and represent the dynamic cost-centric attack graph using ranking algorithms (in a fashion comparable to Mehta et al., 2006 and Kijsanayothin, 2010). However, instead of using the vulnerabilities for each host, a CostRank Markov model is developed utilising the novel cost-centric approach, thereby reducing the complexity of the attack graph and alleviating its visibility problem.

    A corresponding parallel algorithm is developed to implement CostRank. The motivation for a parallel CostRank algorithm is to expedite the state-ranking calculations as the number of hosts and/or vulnerabilities grows; likewise, the author intends to secure large-scale networks, which require fast and reliable computing to rank enormous graphs with thousands of vertices (states) and millions of arcs (each representing an action that moves the system from one state to another). The proposed approach focuses on a parallel CostRank computational architecture to appraise the improvement in CostRank calculation time and the scalability of the algorithm. In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can enhance the performance and scalability of parallel CostRank computations. A practical model of parallel CostRank computation is implemented, resulting in a substantial decrease in communication overhead and iteration time. The results are analysed in terms of scalability, efficiency, memory usage, speed-up, and input/output rates.

    Finally, a countermeasures model is developed to protect against network attacks using a Dynamic Countermeasures Attack Tree (DCAT). The DCAT is built according to the following scheme: (i) use the scalable parallel CostRank algorithm to determine the critical assets that system administrators need to protect; (ii) run the Nessus scanner to determine the vulnerabilities associated with each asset, using the dynamic cost-centric framework and the DVSS; (iii) identify all published mitigations for those vulnerabilities; (iv) assess how well each security solution mitigates the risks; (v) assess the DCAT algorithm in terms of effective security cost, probability, and cost/benefit analysis to reduce the total impact of a specific vulnerability.
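    The ranking step described above is closely related to PageRank over the attack graph's state-transition structure. The following Python sketch is a hedged illustration of a cost-weighted, PageRank-style iteration; the function name, the cost-biased teleportation vector, and the toy graph are assumptions made for illustration, not the thesis's actual CostRank formulation.

```python
import numpy as np

def cost_rank(adjacency, costs, damping=0.85, tol=1e-9, max_iter=100):
    """Rank attack-graph states with a cost-weighted PageRank-style iteration.

    adjacency[i][j] = 1 if an exploit action leads from state i to state j.
    costs[i] is a DVSS-style severity cost for state i, used here to bias
    the teleportation vector so high-cost states attract more rank mass.
    """
    n = len(costs)
    A = np.asarray(adjacency, dtype=float)
    # Row-normalise transitions; dangling states jump uniformly.
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
    c = np.asarray(costs, dtype=float)
    c = c / c.sum()                       # cost-biased teleportation vector
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = damping * r @ P + (1 - damping) * c
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

# Toy 4-state attack graph in which state 3 is the critical asset.
adj = [[0, 1, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(cost_rank(adj, costs=[1.0, 2.5, 4.0, 9.8]))
```

    A parallel version would partition the rows of P and the ranking vector r across workers, which is the load-balancing idea the abstract describes.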

    OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective

    Computer-based automation in industrial appliances has led to a growing number of logically dependent but physically separated embedded control units per appliance. Many of those components are safety-critical systems and require adherence to safety standards, which is at odds with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance and to an increasing complexity of the overall software stack, which is unfavourable for safety certification. Modern CPUs provide means to revisit the traditional separation-of-concerns design primitive through the consolidation of systems, which yields new engineering challenges that concern the entire software and system stack. Multi-core CPUs favour the economic consolidation of formerly separated systems onto one efficient hardware unit. Nonetheless, the system architecture must provide means to guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments. In parallel, there is an ongoing trend to substitute proprietary base-platform software components with mature OSS variants for economic and engineering reasons. However, the development processes of OSS and proprietary software differ fundamentally in their processual properties, so using OSS in safety-critical systems requires development-process assessment techniques that build an evidence-based foundation for certification efforts upon empirical software engineering methods.

    In this thesis, I approach the problem from both sides: the software engineering and the system engineering perspective. In the first part of the thesis, I focus on the assessment of OSS components: I develop software engineering techniques that quantify characteristics of distributed OSS development processes, and I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems. In the second part, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To that end, I exploit the virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture eliminates any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
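    The first part of the thesis rests on quantifying development-process characteristics from repository history. As a hedged illustration of that kind of ex-post analysis (not the author's actual technique), the following Python sketch mines a local git repository for two simple process metrics: per-author commit counts and the median time between commits.

```python
import subprocess
from collections import Counter

def commit_stats(repo_path):
    """Quantify two simple development-process characteristics of a git
    repository: per-author commit counts and the median gap between commits."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae|%at"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    authors = Counter()
    timestamps = []
    for line in log:
        email, ts = line.split("|")
        authors[email] += 1
        timestamps.append(int(ts))
    timestamps.sort()
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    median_gap = sorted(gaps)[len(gaps) // 2] if gaps else 0
    return authors.most_common(5), median_gap / 3600.0  # hours

top_authors, gap_hours = commit_stats(".")   # assumes cwd is a git repo
print("top contributors:", top_authors)
print("median hours between commits:", round(gap_hours, 1))
```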

    Data Mining and Machine Learning for Software Engineering

    Software engineering is one of the most fruitful application areas for data mining. Developers have long attempted to improve software quality by mining and analyzing software data. Every phase of the software development life cycle (SDLC) produces a huge amount of data, and design, security, or quality problems may arise in any of them. Analyzing software data in the early phases of development helps to handle these problems and leads to more accurate and timely delivery of software projects. Various data mining and machine learning studies have been conducted to deal with software engineering tasks such as defect prediction and effort estimation. This study identifies the open issues and presents related solutions and recommendations for applying data mining and machine learning techniques in software engineering.
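    Defect prediction, mentioned above, is a typical example of such a task. The following Python sketch is a hedged illustration using scikit-learn; the per-module metrics and the synthetic labels are assumptions for demonstration, not data from the study.

```python
# A minimal defect-prediction sketch (assumes scikit-learn is installed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-module metrics: lines of code, cyclomatic complexity,
# and churn (number of recent commits touching the module).
X = np.column_stack([
    rng.integers(10, 2000, n),     # LOC
    rng.integers(1, 50, n),        # cyclomatic complexity
    rng.integers(0, 30, n),        # churn
])
# Synthetic ground truth: complex, frequently changed modules are defect-prone.
y = ((X[:, 1] > 25) & (X[:, 2] > 10)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("cross-validated F1:", scores.mean().round(3))
```

    In practice the features would come from mined repository and issue-tracker data rather than a random generator, but the training and evaluation loop has the same shape.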

    An approach towards standardising vulnerability categories

    Computer vulnerabilities are design flaws and implementation or configuration errors that provide a means of exploiting a system or network that would not otherwise be available. The recent growth in the number of vulnerability scanning (VS) tools and independent vulnerability databases points to an apparent need for further means of protecting computer systems from compromise. To be used effectively, these tools and databases must interpret, correlate, and exchange a large amount of information about computer vulnerabilities. However, this goal is hard to achieve because current VS products differ extensively both in the way they detect vulnerabilities and in the number of vulnerabilities they can detect. Each tool or database represents, identifies, and classifies vulnerabilities in its own way, making them difficult to study and compare. Although the list of Common Vulnerabilities and Exposures (CVE) provides a means of resolving the disparity in vulnerability names used by different VS products, it does not standardise vulnerability categories. This dissertation highlights the importance of having a standard vulnerability category set and outlines an approach towards achieving this goal by categorising the CVE repository using a data-clustering algorithm. Prototypes are presented to verify the concept of standardising vulnerability categories and to show how it can serve as the basis for comparing VS products and improving scan reports.

    Dissertation (MSc (Computer Science)), University of Pretoria, 2008.
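    To make the clustering idea concrete, here is a hedged Python sketch of categorising vulnerability descriptions by clustering their text; the toy descriptions, the vectoriser settings, and the cluster count are illustrative assumptions, not the dissertation's data or configuration.

```python
# Cluster free-text vulnerability descriptions into candidate categories
# (assumes scikit-learn is installed).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

cve_descriptions = [
    "buffer overflow in HTTP server allows remote code execution",
    "stack-based buffer overflow allows arbitrary code execution",
    "SQL injection in login form allows authentication bypass",
    "SQL injection allows remote attackers to read database contents",
    "cross-site scripting in search field allows script injection",
    "stored cross-site scripting allows session hijacking",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(cve_descriptions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for desc, label in zip(cve_descriptions, labels):
    print(label, desc)
```

    Run over the full CVE repository, each resulting cluster would be a candidate standard category against which the output of different VS tools could be compared.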

    Securing the software-defined networking control plane by using control and data dependency techniques

    Software-defined networking (SDN) fundamentally changes how network and security practitioners design, implement, and manage their networks. SDN decouples the decision-making about traffic forwarding (i.e., the control plane) from the traffic being forwarded (i.e., the data plane). SDN also allows network applications, or apps, to programmatically control network forwarding behavior and policy through a logically centralized control plane orchestrated by a set of SDN controllers. As a result of logical centralization, SDN controllers act as network operating systems in the coordination of shared data plane resources and comprehensive security policy implementation. SDN can support network security through the provision of security services and assurances of policy enforcement. However, SDN's programmability means that a network's security considerations differ from those of traditional networks. For instance, an adversary who manipulates the programmable control plane can exert significant control over the data plane's behavior.

    In this dissertation, we demonstrate that the security posture of SDN can be enhanced using control and data dependency techniques that track information flow and enable understanding of application composability, control and data plane decoupling, and control plane insight. We support that claim by investigating the various ways in which an attacker can use control flow and data flow dependencies to influence the SDN control plane under different threat models, and we systematically explore and evaluate the SDN security posture through a combination of runtime, pre-runtime, and post-runtime contributions in both attack development and defense designs.

    We begin with the development of a conceptual accountability framework for SDN. We analyze the extent to which various entities within SDN are accountable to each other, what they are accountable for, the mechanisms for assurance about accountability, the standards by which accountability is judged, and the consequences of breaching accountability. We discover significant research gaps in SDN's accountability that impact SDN's security posture. In particular, applying the accountability framework showed that more control plane attribution is necessary at different layers of abstraction, and that insight motivated the remaining work in this dissertation.

    Next, we explore the influence of apps on the SDN control plane's secure operation. We find that existing access control protections that limit what apps can do, such as role-based access controls, are insufficient for preventing malicious apps from damaging control plane operations. The reason is SDN's reliance on shared network state. Analyzing SDN's shared state model, we discover that benign apps can be tricked into acting as "confused deputies": malicious apps can poison the state used by benign apps, leading the benign apps to make decisions that negatively affect the network. That violates an implicit (but unenforced) integrity policy governing the network's security. Because of the strong interdependencies among apps that result from SDN's shared state model, we show that apps can easily be co-opted as "gadgets," which allows an attacker who minimally controls one app to make changes to the network state beyond his or her originally granted permissions.

    We use a data provenance approach to track the lineage of network state objects by attributing each control plane object to the set of processes and agents responsible for it. We design the ProvSDN tool to track API requests from apps as they access the shared network state's objects and to check those requests against a predefined integrity policy, ensuring that low-integrity apps cannot poison high-integrity apps. ProvSDN acts as both a reference monitor and an information flow control enforcement mechanism.

    Motivated by the strong inter-app dependencies, we investigate whether implicit data plane dependencies also affect the control plane's secure operation. We find that data plane hosts typically have an outsized effect on the generation of network state in reactive control plane designs. We also find that SDN's event-based design, and the apps that subscribe to events, can induce dependencies that originate in the data plane and eventually change forwarding behaviors. That combination gives attackers residing on data plane hosts significant opportunities to influence control plane decisions without having to compromise the SDN controller or apps. We design the EventScope tool to automatically identify where such vulnerabilities occur. EventScope clusters apps' event usage to decide in which cases unhandled events should be handled, statically analyzes controller and app code to understand how events affect control plane execution, and identifies valid control flow paths in which a data plane attacker can reach vulnerable code to cause unintended data plane changes. We use EventScope to discover 14 new vulnerabilities, and we develop exploits that show how such vulnerabilities could allow an attacker to bypass an intended network (i.e., data plane) access control policy. This research direction is critical for SDN security evaluation because such vulnerabilities could be induced by host-based malware campaigns.

    Finally, although some classes of vulnerabilities can be removed prior to deployment, it is inevitable that other classes of attacks will occur that cannot be accounted for ahead of time. In those cases, a network or security practitioner needs the right amount of after-the-fact insight to diagnose the root causes of such attacks without being inundated with too much information. Challenges remain in (1) the modeling of apps and objects, which can lead to overestimation or underestimation of causal dependencies, and (2) the omission of a data plane model that causally links control and data plane activities. We design the PicoSDN tool to mitigate causal dependency modeling challenges, to account for a data plane model by using the data plane topology to link activities in the provenance graph, and to account for network semantics so as to appropriately query and summarize the control plane's history. We show how prior work can hinder investigations and analysis of SDN-based attacks and demonstrate how PicoSDN can track SDN control plane attacks.
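    The integrity policy that ProvSDN enforces can be illustrated with a small sketch. The following Python code is a hedged, hypothetical illustration of the idea (tag each shared-state object with the integrity level of the app that last wrote it, and stop high-integrity apps from consuming low-integrity state); the classes and method names are assumptions, not ProvSDN's actual interface.

```python
# Minimal reference-monitor sketch for the "confused deputy" poisoning path:
# a low-integrity app writes shared state that a high-integrity app would
# otherwise consume.
HIGH, LOW = 1, 0

class SharedNetworkState:
    def __init__(self):
        self._objects = {}   # key -> (value, integrity level of last writer)

    def write(self, app, key, value):
        # Record provenance: the object inherits its writer's integrity level.
        self._objects[key] = (value, app.integrity)

    def read(self, app, key):
        value, level = self._objects[key]
        if app.integrity > level:
            # A high-integrity app is about to consume low-integrity state;
            # deny the request instead of letting the poisoning propagate.
            raise PermissionError(
                f"{app.name} (high) cannot read low-integrity object {key!r}")
        return value

class App:
    def __init__(self, name, integrity):
        self.name, self.integrity = name, integrity

state = SharedNetworkState()
malicious = App("rogue-lb", LOW)
firewall = App("firewall", HIGH)

state.write(malicious, "host:10.0.0.5", {"role": "trusted-server"})
try:
    state.read(firewall, "host:10.0.0.5")
except PermissionError as e:
    print("blocked:", e)
```

    A real enforcement mechanism would additionally record the full lineage of each object (which processes and agents produced it), which is what makes the provenance graph useful for the forensic analysis described later in the abstract.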