Towards a standardised attack graph visual syntax
More research needs to focus on developing effective methods of aiding the understanding and perception of cyber-attacks. Attack modelling techniques (AMTs), such as attack graphs, attack trees and fault trees, are popular methods of mathematically and visually representing the sequence of events that lead to a successful cyber-attack. Although useful in aiding cyber-attack perception, there is little empirical or comparative research which evaluates the effectiveness of these methods. Furthermore, there is no standardised attack graph visual syntax configuration: more than seventy-five self-nominated attack graph and twenty attack tree configurations have been described in the literature, each of which presents attributes such as preconditions and exploits in a different way.
This research analyses methods of presenting cyber-attacks and reveals that attack graphs and attack trees are the dominant methods. The research proposes an attack graph visual syntax which is designed using evidence-based principles.
The proposed attack graph is compared with the fault tree, which is a standard method of representing events such as cyber-attacks. This comparison shows that the proposed attack graph visual syntax is more effective than the fault tree method at aiding cyber-attack perception, and that the attack graph can be an effective tool for this purpose, particularly in educational contexts.
Although the proposed attack graph visual syntax is shown to be cognitively effective, cognitive effectiveness alone is no indication of practitioner acceptance. The research therefore proceeds to identify a preferred attack graph visual syntax from a range of visual syntaxes, one of which is the proposed attack graph visual syntax. The method used to perform the comparison is conjoint analysis, which is novel for this field.
The results of the second study reveal that the proposed attack graph visual syntax is one of the preferred configurations. This attack graph has the following attributes. The flow of events is represented top-down, preconditions are represented as rectangles, and exploits are represented as ellipses.
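The preferred syntax above (top-down flow, rectangles for preconditions, ellipses for exploits) maps directly onto Graphviz DOT conventions. The sketch below generates such a description; the attack scenario and node names are invented purely for illustration.

```python
# Minimal sketch: emit Graphviz DOT text for an attack graph in the
# preferred visual syntax described above. The scenario is hypothetical.

def attack_graph_dot(preconditions, exploits, edges):
    """Build a DOT graph: top-down flow of events, preconditions drawn
    as rectangles (boxes), exploits drawn as ellipses."""
    lines = ["digraph attack {",
             "  rankdir=TB;  // top-down flow of events"]
    for p in preconditions:
        lines.append(f'  "{p}" [shape=box];      // precondition')
    for e in exploits:
        lines.append(f'  "{e}" [shape=ellipse];  // exploit')
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

dot = attack_graph_dot(
    preconditions=["attacker on LAN", "vulnerable SSH daemon"],
    exploits=["exploit sshd vulnerability"],
    edges=[("attacker on LAN", "exploit sshd vulnerability"),
           ("vulnerable SSH daemon", "exploit sshd vulnerability")],
)
print(dot)
```

Feeding the printed text to the `dot` tool would render the graph; the point here is only how the three visual-syntax attributes translate into graph attributes.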
The key contribution of this research is the development of an attack graph visual syntax which is effective in aiding the understanding of cyber-attacks, particularly in educational contexts. The proposed method is a significant step towards standardising the attack graph visual syntax.
The DS-Pnet modeling formalism for cyber-physical system development
This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems, combining the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models.
Component-based model composition enables reusing existing components, creating libraries of previously tested components, and hierarchically decomposing complex systems into smaller sub-systems.
A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes, providing an abstraction to define the interface between reactive controllers and input and output signals, including analog sensors and actuators.
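The reactive part of such a semantics, transitions gated by both the marking and external input signals, can be illustrated with a toy step function. This is not the actual DS-Pnet/IOPT semantics (which also covers events, dataflow nodes and priorities); the net and signal names below are invented.

```python
# Illustrative sketch only: a toy Petri-net step with external input
# signals, loosely in the spirit of a reactive controller interface.

def enabled(net, marking, inputs):
    """A transition is enabled when every input place holds a token and
    its optional input-signal guard evaluates to true."""
    result = []
    for name, spec in net.items():
        tokens_ok = all(marking.get(p, 0) >= 1 for p in spec["pre"])
        guard_ok = spec.get("guard", lambda s: True)(inputs)
        if tokens_ok and guard_ok:
            result.append(name)
    return result

def fire(net, marking, transition):
    """Consume one token per input place, produce one per output place."""
    m = dict(marking)
    for p in net[transition]["pre"]:
        m[p] -= 1
    for p in net[transition]["post"]:
        m[p] = m.get(p, 0) + 1
    return m

net = {
    "start_motor": {"pre": ["idle"], "post": ["running"],
                    "guard": lambda s: s["button"]},   # gated by input signal
    "stop_motor":  {"pre": ["running"], "post": ["idle"]},
}
m = {"idle": 1}
assert enabled(net, m, {"button": False}) == []        # signal low: blocked
m = fire(net, m, enabled(net, m, {"button": True})[0]) # signal high: fires
print(m)  # {'idle': 0, 'running': 1}
```

The guard function stands in for the interface between the controller and its input signals; in the real formalism that interface also covers analog sensors and actuators.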
The new formalism is supported by the IOPT-Flow Web-based tool framework, offering tools to design and edit models, simulate model execution in the Web browser, plus model-checking and automatic software/hardware code generation tools to implement controllers running on embedded devices (C, VHDL and JavaScript).
A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and can import remote components into new models, helping to simplify the creation of distributed cyber-physical applications, where the communication between distributed components is specified simply by drawing arcs.
Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.
Advances in Robotics, Automation and Control
The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems.
Validating digital forensic evidence
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
This dissertation focuses on the forensic validation of computer evidence. It is a burgeoning field, by necessity, and there have been significant advances in the detection and gathering of evidence related to electronic crimes. What makes the computer forensics field similar to other forensic fields is that considerable emphasis is placed on the validity of the digital evidence. It is not just the methods used to collect the evidence that are a concern. What is also a problem is that perpetrators of digital crimes may be engaged in what is called anti-forensics: digital forensic techniques are deliberately thwarted and corrupted by those under investigation. In traditional forensics the link between evidence and a perpetrator's actions is often straightforward: a fingerprint on an object indicates that someone has touched the object. Anti-forensic activity would be the equivalent of having the ability to change the nature of the fingerprint before or during the investigation, thus making the forensic evidence collected invalid or less reliable. This thesis reviews the existing security models and digital forensics, paying particular attention to anti-forensic activity that affects the validity of data collected in the form of digital evidence. This thesis builds on the current models in this field and suggests a tentative first-step model to manage and detect the possibility of anti-forensic activity. The model is concerned with stopping anti-forensic activity, and thus is not a forensic model in the normal sense; it is what will be called a "meta-forensic" model. A meta-forensic approach is an approach intended to stop attempts to invalidate digital forensic evidence. This thesis proposes a formal procedure and guides forensic examiners to look at evidence in a meta-forensic way.
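One common building block for detecting the kind of tampering described above (not the thesis's model, just an illustration of the underlying idea) is recording cryptographic digests of evidence files at seizure time and re-verifying them later: any file whose digest no longer matches is a candidate for anti-forensic alteration.

```python
# Sketch: integrity manifest for seized evidence files. A mismatch on
# re-verification flags possible tampering between seizure and analysis.

import hashlib

def digest(path):
    """SHA-256 of a file, read in chunks so large images are handled."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest):
    """manifest: {path: expected hexdigest}. Returns the paths whose
    current digest no longer matches the recorded one."""
    return [p for p, expected in manifest.items() if digest(p) != expected]
```

In practice such digests are themselves protected (signed, or stored off-system), since an anti-forensic actor who can rewrite the manifest defeats the check.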
A language and toolkit for the specification, execution and monitoring of dependable distributed applications
PhD Thesis.
This thesis addresses the problem of specifying the composition of distributed applications out of existing applications, possibly legacy ones. With the automation of business processes on the increase, more and more applications of this kind are being constructed. The resulting applications can be quite complex, are usually long-lived, and are executed in a heterogeneous environment. In a distributed environment, long-lived activities need support for fault tolerance and dynamic reconfiguration. Indeed, it is likely that the environment where they run will change (nodes may fail, services may be moved elsewhere or withdrawn) during their execution, and the specification will have to be modified. There is also a need for modularity, scalability and openness. However, most of the existing systems only consider part of these requirements. A new area of research, called workflow management, has been trying to address these issues.
This work first looks at what needs to be addressed to support the specification and execution of these new applications in a heterogeneous, distributed environment. A coordination language (scripting language) is developed that fulfils the requirements of specifying the composition and inter-dependencies of distributed applications with the properties of dynamic reconfiguration, fault tolerance, modularity, scalability and openness. The architecture of the overall workflow system and its implementation are then presented. The system has been implemented as a set of CORBA services, and the execution environment is built using a transactional workflow management system. Next, the thesis describes the design of a toolkit to specify, execute and monitor distributed applications. The design of the coordination language and the toolkit represents the main contribution of the thesis.
Funded by: UK Engineering and Physical Sciences Research Council, CaberNet, Northern Telecom (Nortel).
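The core coordination requirements named above, composition, inter-dependencies and fault tolerance, can be illustrated with a toy dependency-driven scheduler. This is not the thesis's scripting language or its CORBA-based system, just a sketch of the idea that tasks run once their prerequisites complete and transient failures are retried.

```python
# Sketch: run a set of inter-dependent tasks, retrying failed tasks a
# bounded number of times before giving up. Names are invented.

def run_workflow(tasks, deps, retries=2):
    """tasks: {name: callable}; deps: {name: [prerequisite names]}.
    Returns the order in which tasks completed."""
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and all(d in done for d in deps.get(t, []))]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable dependencies")
        for t in ready:
            for attempt in range(retries + 1):
                try:
                    tasks[t]()       # fault tolerance: retry on failure
                    break
                except Exception:
                    if attempt == retries:
                        raise        # give up after the retry budget
            done.add(t)
            order.append(t)
    return order

calls = {"n": 0}
def flaky():                         # fails once, then succeeds
    calls["n"] += 1
    if calls["n"] == 1:
        raise IOError("transient failure")

order = run_workflow({"extract": lambda: None, "load": flaky},
                     {"load": ["extract"]})
print(order)  # ['extract', 'load']
```

A real workflow system would also persist state so an interrupted activity can resume, which is where the transactional execution environment mentioned above comes in.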
Modeling and Prediction of I/O Performance in Virtualized Environments
We present a novel performance modeling approach tailored to I/O performance prediction in virtualized environments. The main idea is to identify important performance-influencing factors and to develop storage-level I/O performance models. To increase the practical applicability of these models, we combine the low-level I/O performance models with high-level software architecture models. Our approach is validated in a variety of case studies in state-of-the-art, real-world environments.
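The general idea of a storage-level model, fit a predictive function from measured samples of performance-influencing factors, can be sketched with plain least squares. The factor set and numbers below are illustrative, not the paper's actual models or measurements.

```python
# Sketch: fit a linear I/O performance model from (invented) samples of
# request size, read fraction and queue depth, then predict throughput.

import numpy as np

# columns: request size (KB), read fraction, queue depth -> throughput (MB/s)
samples = np.array([
    [4,   1.0,  1,  30.0],
    [64,  1.0,  8, 210.0],
    [4,   0.0,  1,  22.0],
    [64,  0.5, 16, 180.0],
    [128, 1.0, 32, 320.0],
])
X = np.c_[np.ones(len(samples)), samples[:, :3]]  # prepend intercept column
y = samples[:, 3]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares fit

def predict(req_kb, read_frac, qdepth):
    """Predicted throughput (MB/s) for an unseen configuration."""
    return float(coef @ np.array([1.0, req_kb, read_frac, qdepth]))

print(predict(32, 1.0, 4))
```

Real storage-level models are typically nonlinear in these factors; the sketch only shows the fit-then-predict structure that the architecture-level models would then build on.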
Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12
This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.
An Insider Misuse Threat Detection and Prediction Language
Numerous studies indicate that, amongst the various types of security threats, the problem of insider misuse of IT systems can have serious consequences for the health of computing infrastructures. Although incidents of external origin are also dangerous, the insider IT misuse problem is difficult to address for a number of reasons. A fundamental reason that makes mitigation difficult relates to the level of trust legitimate users possess inside the organization. The trust factor makes it difficult to detect threats originating from the actions and credentials of individual users. An equally important difficulty in mitigating insider IT threats is the variability of the problem: the nature of insider IT misuse varies amongst organizations. Hence, expressing what constitutes a threat, as well as detecting and predicting it, are non-trivial tasks that add to the multi-factorial nature of insider IT misuse.
This thesis is concerned with systematizing the specification of insider threats, focusing on their system-level detection and prediction. The design of suitable user audit mechanisms and semantics forms a Domain Specific Language to detect and predict insider misuse incidents. As a result, the thesis proposes in detail ways to construct standardized descriptions (signatures) of insider threat incidents, as a means of aiding researchers and IT system experts in mitigating the problem of insider IT misuse. The produced audit engine (LUARM: Logging User Actions in Relational Mode) and the Insider Threat Prediction and Specification Language (ITPSL) are two utilities that can be added to the IT insider misuse mitigation arsenal. LUARM is a novel audit engine designed specifically to address the needs of monitoring insider actions, needs that cannot be met by traditional open source audit utilities. ITPSL is an XML-based markup language that can standardize the description of incidents and threats and thus make use of the LUARM audit data. Its novelty lies in the fact that it can be used to detect as well as predict instances of threats, a task that has not to date been achieved by a domain-specific language addressing threats.
The research project evaluated the produced language using a cyber-misuse experiment approach derived from real-world misuse incident data. The results of the experiment showed that ITPSL and its associated audit engine LUARM provide a good foundation for insider threat specification and prediction. Some language deficiencies relate to the fact that the insider threat specification process requires a good knowledge of the software applications used in a computer system. As the language is easily expandable, future developments to improve the language in this direction are suggested.
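The underlying idea of a standardized threat signature evaluated against relational audit records can be sketched as a predicate over log rows. This is deliberately not ITPSL (which is XML-based) nor LUARM's actual schema; the field names and the example scenario are invented for illustration only.

```python
# Sketch: a "signature" as a named set of predicates that must all hold
# for an audit record to count as a match. Field names are hypothetical.

signature = {
    "description": "bulk copy to removable media outside office hours",
    "all": [
        lambda r: r["action"] == "copy",
        lambda r: r["dest_device"] == "usb",
        lambda r: r["hour"] < 7 or r["hour"] > 19,
    ],
}

def matches(sig, record):
    """True when every predicate in the signature holds for the record."""
    return all(pred(record) for pred in sig["all"])

audit_records = [
    {"action": "copy", "dest_device": "usb", "hour": 23, "user": "alice"},
    {"action": "copy", "dest_device": "nfs", "hour": 23, "user": "bob"},
]
hits = [r["user"] for r in audit_records if matches(signature, r)]
print(hits)  # ['alice']
```

Expressing the same signature declaratively in markup, rather than as code, is what allows incident descriptions to be exchanged and standardized across organizations, which is the role ITPSL plays over LUARM's audit data.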