
    Reasoning About the Reliability of Multi-version, Diverse Real-Time Systems

    This paper is concerned with the development of reliable real-time systems for use in high-integrity applications. It advocates the use of diverse replicated channels, but does not require the dependencies between the channels to be evaluated. Rather, it develops and extends the approach of Littlewood and Rushby (for general systems) by investigating a two-channel system in which one channel, A, is produced to a high level of reliability (i.e. has a very low failure rate), while the other, B, employs various forms of static analysis to sustain an argument that it is perfect (i.e. it will never miss a deadline). The first channel is fully functional; the second employs a more restricted computational model and contains only the critical computations. Potential dependencies between the channels (and their verification) are evaluated in terms of aleatory and epistemic uncertainty. At the aleatory level the events "A fails" and "B is imperfect" are independent. Moreover, unlike the general case, independence at the epistemic level is also proposed for common forms of implementation and analysis for real-time systems and their temporal requirements (deadlines). As a result, a systematic approach is advocated that can be applied in a real engineering context to produce highly reliable real-time systems, and to support numerical claims about the level of reliability achieved.
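    As a rough illustration (not taken from the paper) of how the claimed aleatory independence supports a numerical reliability claim, the short Python sketch below multiplies an assumed failure probability for channel A by an assumed probability that channel B's perfection argument is wrong, giving a conservative system-level bound; both numbers are invented for the example.

    # Illustrative sketch, not the paper's analysis: bound for a 1-out-of-2 diverse
    # pair when "A fails" and "B is imperfect" are treated as independent events.
    pfd_a = 1e-4   # assumed probability that channel A fails on a demand
    pnp_b = 1e-3   # assumed probability that channel B is in fact imperfect

    # The system fails on a demand only if A fails and B is imperfect, so under
    # independence the product is a conservative upper bound.
    system_bound = pfd_a * pnp_b
    print(f"Upper bound on system failure probability per demand: {system_bound:.1e}")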

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. ``the airbag will always deploy within 20 milliseconds after a crash'' or ``the probability of both sensors failing simultaneously is less than 0.001''. The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
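    The report itself is tool-agnostic; as a rough illustration of the kind of numeric question such analysis answers, the Python sketch below computes a bounded-reachability probability ("what is the chance of entering a failure state within k steps?") on a tiny discrete-time Markov chain whose transition probabilities are invented for the example.

    # Illustrative sketch: bounded reachability on a small discrete-time Markov chain.
    # States: 0 = ok, 1 = degraded, 2 = failed (absorbing). Probabilities are assumed.
    P = [
        [0.95, 0.04, 0.01],
        [0.00, 0.90, 0.10],
        [0.00, 0.00, 1.00],
    ]

    k = 20
    prob = [0.0, 0.0, 1.0]   # probability of having reached "failed" within 0 steps
    for _ in range(k):
        # One backward step: prob[s] becomes P(reach "failed" within one more step).
        prob = [sum(P[s][t] * prob[t] for t in range(3)) for s in range(3)]

    print(f"P(failure within {k} steps | start ok) = {prob[0]:.4f}")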

    Performability of Integrated Networked Control Systems

    A direct sensor-to-actuator communication model (S2A) for unmodified Ethernet-based Networked Control Systems (NCSs) is presented in this research. A comparison is made between the S2A model and a previously introduced model that includes an in-loop controller node. OMNeT++ simulations showed that the S2A model meets the system delay requirement with strictly zero packet loss (and no over-delayed packets). The S2A model also showed a reduction in the end-to-end delay of control packets from sensor nodes to actuator nodes in both Fast and Gigabit switched Ethernet networks. Another major improvement of the S2A model is that it accommodates a larger amount of additional load than the in-loop model. Two different controller-level fault-tolerant models for Ethernet-based NCSs are also presented in this research. These models are studied using unmodified Fast and Gigabit Ethernet. The first is an in-loop fault-tolerant controller model, while the second is a fault-tolerant direct sensor-to-actuator (S2A) model. Both models were shown via OMNeT++ simulations to meet the system end-to-end delay requirement with strictly zero packet loss (and no over-delayed packets). Although the S2A model has a lower end-to-end delay than the in-loop controller model, the fault-tolerant in-loop model performs better than the fault-tolerant S2A model in terms of total end-to-end delay in the fault-free situation; in the scenario with failed controller(s), on the other hand, the S2A model was shown to have a lower total end-to-end delay. A performability analysis of the two fault-tolerant models is carried out and compared using Fast Ethernet links, relating controller failure to a reward that depends on the system state. Meeting the control system's deadline is essential in Networked Control Systems, and failing to meet this deadline represents a failure of the system; therefore, the reward is taken to be how far the total end-to-end delay in each state of each model is from the system deadline. A case study is presented that investigates controller-level failures together with the associated reward.
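    As a loose illustration of the performability idea sketched above, the short Python example below weights an assumed per-state reward (the slack between that state's end-to-end delay and the control deadline) by assumed state probabilities; the state set, probabilities and delays are invented for the example, not taken from the thesis.

    # Illustrative sketch: expected reward over system states, where reward is the
    # slack between the state's end-to-end delay and the deadline (0 if missed).
    deadline_us = 1000.0   # assumed control deadline in microseconds

    # (state, assumed probability of being in the state, assumed end-to-end delay in us)
    states = [
        ("all controllers up",   0.990, 350.0),
        ("one controller down",  0.009, 520.0),
        ("all controllers down", 0.001, deadline_us + 1),   # deadline missed
    ]

    performability = 0.0
    for name, p, delay in states:
        reward = max(0.0, deadline_us - delay)
        performability += p * reward

    print(f"Expected slack per control cycle: {performability:.1f} us")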

    Quantifying the Resiliency of Fail-Operational Real-Time Networked Control Systems

    In time-sensitive, safety-critical systems that must be fail-operational, active replication is commonly used to mitigate transient faults that arise due to electromagnetic interference (EMI). However, designing an effective and well-performing active replication scheme is challenging since replication conflicts with the size, weight, power, and cost constraints of embedded applications. To enable a systematic and rigorous exploration of the resulting tradeoffs, we present an analysis to quantify the resiliency of fail-operational networked control systems against EMI-induced memory corruption, host crashes, and retransmission delays. Since control systems are typically robust to a few failed iterations, e.g., one missed actuation does not crash an inverted pendulum, traditional solutions based on hard real-time assumptions are often too pessimistic. Our analysis reduces this pessimism by modeling a control system's inherent robustness as an (m,k)-firm specification. A case study with an active suspension workload indicates that the analytical bounds closely predict the failure rate estimates obtained through simulation, thereby enabling a meaningful design-space exploration, and also demonstrates the utility of the analysis in identifying non-trivial and non-obvious reliability tradeoffs.
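    As a simplified illustration of how an (m,k)-firm specification captures tolerance to a few failed iterations, the Python sketch below computes, for a single window of k iterations and an assumed independent per-iteration failure probability, the chance that fewer than m iterations succeed; the paper's analysis is considerably more detailed, and these numbers are invented.

    from math import comb

    def mk_violation_prob(m: int, k: int, p_fail: float) -> float:
        """Probability that fewer than m of k iterations succeed, assuming each
        iteration fails independently with probability p_fail (a simplification)."""
        p_ok = 1.0 - p_fail
        return sum(comb(k, s) * p_ok**s * p_fail**(k - s) for s in range(m))

    # e.g. an assumed per-iteration failure probability of 1e-3 and a (2,3)-firm requirement
    print(f"P(violate (2,3)-firm window) = {mk_violation_prob(2, 3, 1e-3):.2e}")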

    Bounds on Worst-Case Deadline Failure Probabilities in Controller Area Networks

    Industrial communication networks like the Controller Area Network (CAN) are often required to operate reliably in harsh environments which expose the communication network to random errors. Probabilistic schedulability analysis can employ rich stochastic error models to capture random error behaviors, but this is most often at the expense of increased analysis complexity. In this paper, an efficient method (of time complexity O(n log n)) to bound the message deadline failure probabilities for an industrial CAN network consisting of n periodic/sporadic message transmissions is proposed. The paper develops bounds for Deadline Minus Jitter Monotonic (DMJM) and Earliest Deadline First (EDF) message scheduling techniques. Both random errors and random bursts of errors can be included in the model. Stochastic simulations and a case study considering DMJM and EDF scheduling of an automotive benchmark message set provide validation of the technique and highlight its application.
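    As a much-simplified illustration of the flavour of such a bound (not the paper's O(n log n) method), the Python sketch below assumes Poisson error arrivals on the bus and bounds the probability that a message sees more error retransmissions than its deadline can absorb; the error rate, window length and slack are invented for the example.

    from math import exp, factorial

    def deadline_failure_bound(error_rate_per_s: float, window_s: float, max_retx: int) -> float:
        """P(more than max_retx errors in the response-time window), Poisson errors assumed."""
        lam = error_rate_per_s * window_s
        p_at_most = sum(exp(-lam) * lam**i / factorial(i) for i in range(max_retx + 1))
        return 1.0 - p_at_most

    # e.g. 10 errors/s on a noisy bus, a 2 ms response-time window, slack for 2 retransmissions
    print(f"Deadline failure probability bound: {deadline_failure_bound(10.0, 0.002, 2):.2e}")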

    Controller Area Network

    Controller Area Network (CAN) is a popular and very well-known bus system, both in academia and in industry. The CAN protocol was introduced in the mid eighties by Robert Bosch GmbH [7] and was internationally standardized in 1993 as ISO 11898-1 [24]. It was initially designed for distributed automotive control systems, as a single digital bus to replace traditional point-to-point cables that were growing in complexity, weight and cost with the introduction of new electrical and electronic systems. Nowadays CAN is still used extensively in automotive applications, with in excess of 400 million CAN-enabled microcontrollers manufactured each year [14]. The widespread and successful use of CAN in the automotive industry, the low cost associated with high-volume production of controllers, and CAN's inherent technical merit have driven CAN adoption in other application domains such as industrial communications, medical equipment, machine tools, robotics and distributed embedded systems in general. CAN provides two layers of the Open Systems Interconnection (OSI) reference model stack: the physical layer and the data link layer. Optionally, it can also provide an additional application layer, not included in the CAN standard. Notice that the CAN physical layer was not defined in Bosch's original specification; only the data link layer was defined. However, the CAN ISO specification filled this gap and the physical layer was then fully specified. CAN is a message-oriented transmission protocol, i.e., it defines message contents rather than nodes and node addresses. Every message has an associated message identifier, which is unique within the whole network, defining both the content and the priority of the message. Transmission rates are defined up to 1 Mbps. The large installed base of CAN nodes with low failure rates over almost two decades led to the use of CAN in some critical applications such as Anti-lock Braking Systems (ABS) and Electronic Stability Program (ESP) in cars. In parallel with the wide dissemination of CAN in industry, academia has also devoted a large effort to CAN analysis and research, making CAN one of the most studied fieldbuses. That is why a large number of books or book chapters describing CAN have been published. The first CAN book, written in French by D. Paret, was published in 1997 and presents the CAN basics [32]. More implementation-oriented approaches, including CAN node implementation and application examples, can be found in Lorenz [28] and in Etschberger [16], while more compact descriptions of CAN can be found in [11] and in some chapters of [31]. Despite its success story, CAN application designers would be happier if CAN could be made faster, cover longer distances, and be more deterministic and more dependable [34]. Over the years, several protocols based on CAN have been presented, taking advantage of some CAN properties and trying to improve on some known CAN drawbacks. This chapter, besides presenting an overview of CAN, also describes some other relevant higher-level protocols based on CAN, such as CANopen [13], DeviceNet [6], FTT-CAN [1] and TTCAN [25].
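    As a small illustrative sketch (not taken from the chapter), the Python snippet below models CAN's identifier-based bitwise arbitration: the wired-AND bus lets a dominant 0 overwrite a recessive 1, so when several nodes start transmitting simultaneously the pending message with the numerically lowest identifier, i.e. the highest priority, wins the bus.

    def arbitrate(ids, bits=11):
        """Return the identifier that wins CAN arbitration (standard 11-bit IDs)."""
        contenders = list(ids)
        for bit in range(bits - 1, -1, -1):
            bus = min((i >> bit) & 1 for i in contenders)   # wired-AND: dominant 0 wins
            # Nodes that sent recessive (1) while the bus reads dominant (0) back off.
            contenders = [i for i in contenders if (i >> bit) & 1 == bus]
            if len(contenders) == 1:
                break
        return contenders[0]

    # Three nodes contend for the bus; the lowest identifier, 0x10, wins.
    print(hex(arbitrate([0x10, 0x1A4, 0x7FF])))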

    Soft real-time communications over Bluetooth under interferences from ISM devices

    Bluetooth is a suitable technology to support soft real-time applications like multimedia streams at the personal area network level. In this paper, we analytically evaluate the worst-case deadline failure probability of Bluetooth packets under co-channel interference as a way to provide statistical guarantees when transmitting soft real-time traffic using ACL links. We consider the interference from independent Bluetooth devices, as well as from other devices operating in the ISM band like 802.11b/g and Zigbee. Finally, we show as an example how to use our model to obtain some results for the transmission of a voice stream. Ministerio de Ciencia y Tecnología TIC2001-1868-C03-0
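    As a crude illustration of the co-channel reasoning involved (a deliberate simplification of the paper's model, with invented numbers), the Python sketch below treats each independent interferer as landing on the same hop channel with probability 1/79 and approximates the worst-case deadline failure probability by the chance that every transmission attempt allowed before the deadline is corrupted.

    N_CHANNELS = 79   # Bluetooth frequency-hopping channels

    def deadline_failure_prob(n_interferers: int, attempts: int) -> float:
        # Probability that a single attempt collides with at least one interferer.
        p_hit = 1.0 - (1.0 - 1.0 / N_CHANNELS) ** n_interferers
        # The deadline is missed only if every allowed attempt is corrupted.
        return p_hit ** attempts

    # e.g. two interfering piconets and room for three ACL transmission attempts
    print(f"Approximate deadline failure probability: {deadline_failure_prob(2, 3):.2e}")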

    Dual protocol performance using WiFi and ZigBee for industrial WLAN

    The purpose of this thesis is to study the performance of a WNCS based on IEEE 802.15.4 and IEEE 802.11 in meeting industrial requirements, as well as the extent of improvement at the network level in terms of latency and interference tolerance when using the two protocols, namely WiFi and ZigBee, in parallel. The study evaluates the optimum performance of a WNCS that utilizes only the unmodified IEEE 802.15.4 protocol (on which ZigBee is based), as an alternative that is low cost and low power compared to other wireless technologies. The study also evaluates the optimum performance of a WNCS that utilizes only the unmodified IEEE 802.11 protocol (WiFi), as a high-bit-rate network. OMNeT++ simulations are used to measure the end-to-end delay and packet loss from the sensors to the controller and from the controller to the actuators. It is demonstrated that the measured delay of the proposed WNCS, including all types of transmission, encapsulation, decapsulation, queuing and propagation, meets real-time control network requirements while guaranteeing correct packet reception with no packet loss. Moreover, it is shown that the performance of the proposed WNCS operating redundantly on both networks in parallel is significantly superior, in terms of measured delay and interference tolerance, to a WNCS operating on either a purely ZigBee or a purely WiFi wireless network. The proposed WNCS thus combines the advantages of the unmodified IEEE 802.15.4 protocol (on which ZigBee is based), namely low cost and low power compared to other wireless technologies, with the advantages of the IEEE 802.11 protocol (WiFi), namely higher bit rate and higher immunity to interference. All results presented in this study were based on a 95% confidence analysis.
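    As a back-of-the-envelope illustration of why parallel operation helps (with invented loss probabilities rather than the thesis's simulation results), the Python sketch below shows that if each control packet is sent over both networks and losses are independent, the redundant configuration loses a packet only when both copies are lost.

    p_loss_wifi   = 0.02   # assumed probability a WiFi copy is lost or over-delayed
    p_loss_zigbee = 0.05   # assumed probability a ZigBee copy is lost or over-delayed

    # The packet is missed only if both copies miss the deadline (independence assumed).
    p_loss_redundant = p_loss_wifi * p_loss_zigbee
    print(f"Combined loss probability with parallel operation: {p_loss_redundant:.4f}")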