Survivability modeling for cyber-physical systems subject to data corruption
Cyber-physical critical infrastructures are created when traditional physical infrastructure is supplemented with advanced monitoring, control, computing, and communication capability. More intelligent decision support and improved efficacy, dependability, and security are expected. Quantitative models and evaluation methods are required for determining the extent to which a cyber-physical infrastructure improves on its physical predecessors. It is essential that these models reflect both cyber and physical aspects of operation and failure. In this dissertation, we propose quantitative models for dependability attributes, in particular survivability, of cyber-physical systems. Any malfunction or security breach, whether cyber or physical, that causes the system operation to depart from specifications will affect these dependability attributes. Our focus is on data corruption, which compromises decision support -- the fundamental role played by cyber infrastructure. The first research contribution of this work is a Petri net model for information exchange in cyber-physical systems, which i) facilitates evaluation of the extent of data corruption at a given time, and ii) illuminates the service degradation caused by propagation of corrupt data through the cyber infrastructure. In the second research contribution, we propose metrics and an evaluation method for survivability, which captures the extent of functionality retained by a system after a disruptive event. We illustrate the application of our methods through case studies on smart grids, intelligent water distribution networks, and intelligent transportation systems. Data, cyber infrastructure, and intelligent control are part and parcel of nearly every critical infrastructure that underpins daily life in developed countries. Our work provides means for quantifying and predicting the service degradation caused when cyber infrastructure fails to serve its intended purpose.
It can also serve as the foundation for efforts to fortify critical systems and mitigate inevitable failures --Abstract, page iii
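The abstract describes survivability as the extent of functionality a system retains after a disruptive event. The dissertation's actual metrics are not given here, so the following is only a minimal sketch of one common way such a metric can be computed: the fraction of nominal service actually delivered over an observation window spanning the disruption and recovery. All names and numbers are illustrative assumptions.

```python
# Hypothetical sketch (not the dissertation's formulation): survivability as
# the fraction of nominal functionality retained over a recovery window.

def survivability(nominal, delivered):
    """Fraction of nominal service actually delivered over a window.

    nominal   -- per-interval service levels expected with no disruption
    delivered -- per-interval service levels actually observed
    """
    assert len(nominal) == len(delivered)
    total_nominal = sum(nominal)
    if total_nominal == 0:
        return 1.0  # nothing was expected, so nothing was lost
    return sum(delivered) / total_nominal

# Example: a disruptive event at interval 2 degrades service, then recovery.
nominal = [100, 100, 100, 100, 100]
observed = [100, 100, 40, 70, 100]
print(round(survivability(nominal, observed), 2))  # 0.82
```

A value of 1.0 would indicate full functionality retained; lower values quantify the service degradation caused by the disruption.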
A framework for dependability evaluation of PROFIBUS networks
Fieldbus networks have gained wide acceptance in industrial environments, replacing the old centralized control architectures. Due to the time-critical nature of the tasks involved in these environments, the fulfilment of dependability attributes is usually required. Dependability is therefore an important parameter in system design, and it should be evaluated. Several factors can affect system dependability; environmental factors are the most common, and the particular conditions of industrial environments increase this susceptibility. This paper proposes a framework based on fault injection techniques, supported by a hardware platform that emulates a fault set representative of industrial environment scenarios, intended to disturb data communications on a PROFIBUS network. Relevant data is gathered from these fault injection experiments, and further analysis is carried out to evaluate dependability attributes
Applications of Bayesian networks and Petri nets in safety, reliability, and risk assessments: A review
System safety, reliability and risk analysis are important tasks performed throughout the system lifecycle to ensure the dependability of safety-critical systems. Probabilistic risk assessment (PRA) approaches are comprehensive, structured and logical methods widely used for this purpose. PRA approaches include, but are not limited to, Fault Tree Analysis (FTA), Failure Mode and Effects Analysis (FMEA), and Event Tree Analysis (ETA). The growing complexity of modern systems and their capability of behaving dynamically make it challenging for classical PRA techniques to analyse such systems accurately. For a comprehensive and accurate analysis of complex systems, different characteristics such as functional dependencies among components, temporal behaviour of systems, multiple failure modes/states for components/systems, and uncertainty in system behaviour and failure data need to be considered. Unfortunately, classical approaches are not capable of accounting for these aspects. Bayesian networks (BNs) have gained popularity in risk assessment applications due to their flexible structure and capability of incorporating most of the above-mentioned aspects during analysis. Furthermore, BNs have the ability to perform diagnostic analysis. Petri nets are another formal graphical and mathematical tool capable of modelling and analysing the dynamic behaviour of systems; they too are increasingly used for system safety, reliability and risk evaluation. This paper presents a review of the applications of Bayesian networks and Petri nets in system safety, reliability and risk assessments. The review highlights the potential usefulness of the BN- and PN-based approaches over other classical approaches, and their relative strengths and weaknesses in different practical application scenarios. This work was funded by the DEIS H2020 project (Grant Agreement 732242)
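The diagnostic analysis that the review credits to Bayesian networks amounts to reasoning backwards from observed evidence to root causes via Bayes' rule. As a minimal illustration (not taken from the paper, with invented probabilities), consider a two-node network: a component that may fail, and an alarm conditioned on the component's state.

```python
# Toy two-node Bayesian network: Component -> Alarm. Given an observed
# alarm, infer the posterior probability that the component has failed.
# All probabilities are invented for illustration.

p_fail = 0.01              # prior: P(component failed)
p_alarm_given_fail = 0.95  # sensor sensitivity
p_alarm_given_ok = 0.02    # false-alarm rate

# Marginal P(alarm) by the law of total probability
p_alarm = p_alarm_given_fail * p_fail + p_alarm_given_ok * (1 - p_fail)

# Diagnostic query: P(failed | alarm) via Bayes' rule
posterior = p_alarm_given_fail * p_fail / p_alarm
print(round(posterior, 3))  # 0.324
```

Even a highly sensitive alarm yields a modest posterior here because failures are rare, which is exactly the kind of non-obvious diagnostic insight BN inference makes systematic.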
Executable Architectures Using Cuckoo Search Optimization Coupled with OPM and CPN-A Module: A New Meta-Architecture Model for FILA SoS
Understanding System of Systems (SoS) requires novel ways to apply systems engineering processes. Acknowledged SoS have recognized objectives, a designated manager and resources for the SoS. The goal of this research is to develop a proof of concept tool suite for Acknowledged SoS systems simulation. This suite is named flexible, intelligent and learning architectures for System of Systems (FILA-SoS). FILA-SoS assists the SoS manager in architecture generation, selection, and implementation working as an aid for decision making. Binary cuckoo search constrained optimization is used to generate meta-architectures which are evaluated by a fuzzy assessor for quality assurance. The architecture is then converted into an executable structure using Object Process Methodology (OPM) and Colored Petri Nets (CPN). A hybrid methodology comprising of OPM and CPN approach is implemented for simulating the acquisition environment. Initial application for a Search and Rescue (SAR) SoS, consisting of 25 individual systems with ten capabilities gave promising results
Performability of Integrated Networked Control Systems
A direct sensor-to-actuator communication model (S2A) for unmodified Ethernet-based Networked Control Systems (NCSs) is presented in this research. A comparison is made between the S2A model and a previously introduced model that includes an in-loop controller node. OMNeT++ simulations showed the success of the S2A model in meeting system delay requirements with strictly zero packet loss (and no over-delayed packets). The S2A model also reduced the end-to-end delay of control packets from sensor nodes to actuator nodes in both Fast and Gigabit switched Ethernet. Another major improvement of the S2A model is accommodating a greater amount of additional load than the in-loop model. Two different controller-level fault-tolerant models for Ethernet-based NCSs are also presented in this research. These models are studied using unmodified Fast and Gigabit Ethernet. The first is an in-loop fault-tolerant controller model, while the second is a fault-tolerant direct sensor-to-actuator (S2A) model. Both models were shown via OMNeT++ simulations to meet system end-to-end delay requirements with strictly zero packet loss (and no over-delayed packets). Although the S2A model has a lower end-to-end delay than the in-loop controller model, the fault-tolerant in-loop model performs better than the fault-tolerant S2A model in terms of total end-to-end delay in the fault-free situation. In the scenario with failed controller(s), on the other hand, the S2A model was shown to have less total end-to-end delay. Performability of the two fault-tolerant models is analysed and compared using Fast Ethernet links, relating controller failure to a reward that depends on the system state. Meeting the control system's deadline is essential in Networked Control Systems, and failing to meet this deadline represents a failure of the system.
Therefore, the reward is taken to be how far the total end-to-end delay in each state of each model is from the system deadline. A case study is presented that simultaneously investigates failure at the controller level together with reward
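The reward structure described above can be sketched as a Markov-reward computation: each system state (all controllers up, one failed, and so on) has a steady-state probability, and its reward is the slack between that state's total end-to-end delay and the system deadline. The state probabilities, delays, and deadline below are invented for illustration; they are not the study's measured values.

```python
# Hedged sketch of a performability (Markov-reward) computation:
# expected reward = sum over states of P(state) * reward(state),
# with reward taken as the slack before the control deadline.

deadline_us = 1000.0  # illustrative control-loop deadline (microseconds)

# (steady-state probability, total end-to-end delay in that state)
states = [
    (0.97, 400.0),   # fault-free operation
    (0.025, 650.0),  # one controller failed
    (0.005, 900.0),  # two controllers failed
]

# Reward per state: remaining slack before the deadline is violated.
performability = sum(p * (deadline_us - delay) for p, delay in states)
print(round(performability, 2))  # 591.25
```

A model whose degraded states keep more slack before the deadline scores a higher expected reward, which is how the comparison between the two fault-tolerant models can be made quantitative.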
Performance and Security Trade-offs in High-Speed Networks. An investigation into the performance and security modelling and evaluation of high-speed networks based on the quantitative analysis and experimentation of queueing networks and generalised stochastic Petri nets.
Most security mechanisms used in high-speed networks have been adopted without adequate quantification of their impact on performance degradation. Appropriate quantitative network models may be employed for the evaluation and prediction of 'optimal' performance vs. security trade-offs. Several quantitative models introduced in the literature are based on queueing networks (QNs) and generalised stochastic Petri nets (GSPNs). However, these models do not take into consideration Performance Engineering Principles (PEPs) and the adverse impact of traffic burstiness and security protocols on performance.
The contributions of this thesis are based on the development of an effective quantitative methodology for the analysis of arbitrary QN models and GSPNs through discrete-event simulation (DES) and extended applications into performance vs. security trade-offs involving infrastructure and infrastructure-less high-speed networks under bursty traffic conditions. Specifically, investigations are carried out focusing, for illustration purposes, on high-speed network routers subject to Access Control List (ACL) and also Robotic Ad Hoc Networks (RANETs) with Wired Equivalent Privacy (WEP) and Selective Security (SS) protocols, respectively. The Generalised Exponential (GE) distribution is used to model inter-arrival and service times at each node in order to capture the traffic burstiness of the network and predict pessimistic 'upper bounds' of network performance.
In the context of a router with ACL mechanism representing an infrastructure network node, performance degradation is caused due to high-speed incoming traffic in conjunction with ACL security computations making the router a bottleneck in the network. To quantify and predict the trade-off of this degradation, the proposed quantitative methodology employs a suitable QN model consisting of two queues connected in a tandem configuration. These queues have single or quad-core CPUs with multiple-classes and correspond to a security processing node and a transmission forwarding node. First-Come-First-Served (FCFS) and Head-of-the-Line (HoL) are the adopted service disciplines together with Complete Buffer Sharing (CBS) and Partial Buffer Sharing (PBS) buffer management schemes. The mean response time and packet loss probability at each queue are employed as typical performance metrics. Numerical experiments are carried out, based on DES, in order to establish a balanced trade-off between security and performance towards the design and development of efficient router architectures under bursty traffic conditions.
The proposed methodology is also applied to the evaluation of performance vs. security trade-offs of robotic ad hoc networks (RANETs) with mobility, subject to Wired Equivalent Privacy (WEP) and Selective Security (SS) protocols. The WEP protocol is engaged to provide confidentiality and integrity for data exchanged amongst the robotic nodes of a RANET and thus to prevent data capture by unauthorised users. WEP security mechanisms in RANETs, as infrastructure-less networks, are performed at each individual robotic node subject to traffic burstiness as well as nodal mobility. In this context, the proposed quantitative methodology is extended to incorporate an open QN model of a RANET with gated queues (G-Queues), arbitrary topology and multiple classes of data packets with FCFS and HoL disciplines under bursty arrival traffic flows characterised by an Interrupted Compound Poisson Process (ICPP). SS is included in the Gated-QN (G-QN) model in order to establish an 'optimal' performance vs. security trade-off. For this purpose, PEPs, such as the provision of multiple classes with HoL priorities and the availability of dual CPUs, are complemented by the inclusion of the robot's mobility, enabling realistic decisions in mitigating the performance impact on mobile robotic nodes in the presence of security. The mean marginal end-to-end delay was adopted as the performance metric that gives an indication of the security improvement.
The proposed quantitative methodology is further enhanced by formulating an advanced hybrid framework for capturing 'optimal' performance vs. security trade-offs for each node of a RANET by taking security control and battery life more explicitly into consideration. Specifically, each robotic node is represented by a hybrid Gated GSPN (G-GSPN) and a QN model. In this context, the G-GSPN incorporates bursty multiple-class traffic flows, nodal mobility, security processing and control, whilst the QN model has, generally, an arbitrary configuration with finite-capacity channel queues reflecting 'intra'-robot (component-to-component) communication and 'inter'-robot transmissions. Two theoretical case studies from the literature are adapted to illustrate the utility of the QN towards modelling 'intra'- and 'inter'-robot communications. Extensions of the combined performance and security metrics (CPSMs) proposed in the literature are suggested to facilitate investigating and optimising a RANET's performance vs. security trade-offs.
This framework has promising potential for modelling more meaningfully and explicitly the behaviour of security processing and control mechanisms, as well as for capturing a robot's heterogeneity (in terms of robot architecture and application/task context) in the near future (cf. [1]). Moreover, this framework should enable testing robot configurations during the design and development stages of RANETs, as well as modifying and tuning existing configurations of RANETs towards enhanced 'optimal' performance and security trade-offs. Ministry of Higher Education in Libya and the Libyan Cultural Attaché bureau in London
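The thesis models burstiness by using the Generalised Exponential (GE) distribution for inter-arrival and service times. A sketch of how such bursty inter-arrivals can be generated follows: in the standard two-moment GE form with mean 1/nu and squared coefficient of variation C2 >= 1, an inter-arrival time is zero with probability (C2-1)/(C2+1) (a batch arrival) and exponential otherwise. The traffic parameters below are illustrative, not values from the thesis.

```python
# Sketch of GE-distributed inter-arrival times for bursty traffic.
# Standard two-moment parameterisation: mean 1/nu, SCV c2 >= 1.
import random

def ge_sample(rng, nu, c2):
    """One GE inter-arrival time with mean 1/nu and SCV c2."""
    if rng.random() < (c2 - 1.0) / (c2 + 1.0):
        return 0.0                    # batch arrival: zero gap
    rate = 2.0 * nu / (c2 + 1.0)
    return rng.expovariate(rate)      # exponential tail

rng = random.Random(42)
samples = [ge_sample(rng, nu=1.0, c2=9.0) for _ in range(200_000)]
mean = sum(samples) / len(samples)
burst_fraction = sum(s == 0.0 for s in samples) / len(samples)
print(round(mean, 2), round(burst_fraction, 2))  # mean ~1.0, ~80% zero gaps
```

With c2 = 9, roughly 80% of arrivals fall in batches, which is what produces the pessimistic 'upper bound' behaviour relative to smooth Poisson traffic of the same mean rate.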
Addressing Complexity and Intelligence in Systems Dependability Evaluation
Engineering and computing systems are increasingly complex, intelligent, and open-adaptive. When it comes to the dependability evaluation of such systems, certain challenges are posed by the characteristics of "complexity" and "intelligence". The first aspect of complexity is the dependability modelling of large systems with many interconnected components and dynamic behaviours such as priority, sequencing and repairs. To address this, the thesis proposes a novel hierarchical solution to dynamic fault tree analysis using semi-Markov processes. A second aspect of complexity is the environmental conditions that may impact dependability, and their modelling. For instance, weather and logistics can influence maintenance actions and hence the dependability of an offshore wind farm. The thesis proposes a semi-Markov-based maintenance model called the "Butterfly Maintenance Model (BMM)" to model this complexity and accommodate it in dependability evaluation. A third aspect of complexity is the open nature of systems of systems, like swarms of drones, which makes complete design-time dependability analysis infeasible. To address this aspect, the thesis proposes a dynamic dependability evaluation method using fault trees and Markov models at runtime. The challenge of "intelligence" arises because Machine Learning (ML) components do not exhibit programmed behaviour; their behaviour is learned from data. However, in traditional dependability analysis, systems are assumed to be programmed or designed. When a system has learned from data, a distributional shift of operational data from training data may cause the ML to behave incorrectly, e.g., misclassify objects. To address this, a new approach called SafeML is developed that uses statistical distance measures for monitoring the performance of ML against such distributional shifts.
The thesis develops the proposed models and evaluates them on case studies, highlighting improvements to the state of the art, limitations, and future work
Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World
This is the first deliverable of WP5, which covers Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World. As described in the project DOW, in this document we cover the following topics:
• Metrics definition
• Identification of limitations of current V&V approaches and exploration of extensions/refinements/new developments
• Identification of security, privacy and trust models
The WP5 focus is on dependability concerning the peculiar aspects of the project, i.e., the threats deriving from on-the-fly synthesis of CONNECTors. We explore appropriate means for assessing/guaranteeing that the CONNECTed System yields acceptable levels for non-functional properties, such as reliability (e.g., the CONNECTor will ensure continued communication without interruption), security and privacy (e.g., the transactions do not disclose confidential data), and trust (e.g., Networked Systems are put in communication only with parties they trust). After defining a conceptual framework for metrics definition, we present the approaches to dependability in CONNECT, which cover: i) model-based V&V, ii) security enforcement and iii) trust management. The approaches are centered around monitoring, to allow for on-line analysis. Monitoring is performed alongside the functionalities of the CONNECTed System and is used to detect conditions that are deemed relevant by its clients (i.e., the other CONNECT Enablers). A unified lifecycle encompassing dependability analysis, security enforcement and trust management is outlined, spanning discovery time, synthesis time and execution time
A multi-objective flexible manufacturing system design optimization using a hybrid response surface methodology
The present study proposes a hybrid framework combining multiple methods to determine the optimal values of design variables in a flexible manufacturing system (FMS). The framework uses a multi-objective response surface methodology (RSM) to achieve optimum performance. The performance of an FMS is characterized using various measures weighted via the best-worst method (BWM). Subsequently, an RSM approximates the functional relationship between FMS performance and the design variables. A central composite design (CCD) is used for this aim, and a polynomial regression model is fitted over the factors. Eventually, a bi-objective model, including the fitted and cost functions, is formulated and solved. As a result, the optimal percentage for deploying the FMS equipment and machines to achieve optimal performance at the lowest deployment cost is determined. The proposed framework can serve as a guideline for manufacturing organizations in strategic decisions regarding FMS design problems. It significantly increases productivity for the manufacturing system, reduces redundant labor and material handling costs, and facilitates production