121 research outputs found

    Issues of using wireless sensor network to monitor urban air quality

    Frequent monitoring of the urban environment is now mandated in most EU countries. Due to the design and cost of high-quality sensors, the current approach using these sensors may not provide data with an appropriate spatial and temporal resolution. As a result, wireless sensor networks built from large numbers of low-cost sensors are becoming increasingly popular for monitoring urban environments. In practice, however, many issues prevent such networks from being widely adopted. In this paper, we use data and lessons learnt from three real deployments to illustrate those issues. The issues are classified into three main categories and discussed according to the different sensing stages. Finally, we summarise a list of open challenges which we believe are significant for future research.

    Using Safety Contracts to Guide the Maintenance of Systems and Safety Cases

    Changes to safety critical systems are inevitable and can impact the safety confidence in a system, as their effects can refute articulated claims about safety or challenge the supporting evidence on which this confidence relies. In order to maintain safety confidence under change, system developers need to re-analyse and re-verify the system to generate new valid items of evidence. Identifying the effects of a particular change is a crucial step in any change management process, as it enables system developers to estimate the required maintenance effort and reduce cost by avoiding wider analysis and verification than strictly necessary. This paper presents a sensitivity analysis-based technique that measures the ability of a system to contain a change (i.e., its robustness) without the need for a major re-design. The proposed technique exploits the safety margins in the budgeted failure probabilities of events in a probabilistic fault-tree analysis to compensate for unaccounted deficits or changes due to maintenance. The technique utilises safety contracts to provide prescriptive data for what needs to be revisited and verified to maintain system safety when changes happen. We demonstrate the technique on an aircraft wheel braking system.
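    The core idea of exploiting safety margins in budgeted failure probabilities can be illustrated with a minimal sketch. The gate functions, event names, and probability values below are illustrative assumptions, not taken from the paper; they only show how slack between a budgeted and a demonstrated top-event probability quantifies room to absorb a change.

```python
def or_gate(probs):
    """Probability that at least one input event occurs (independent events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """Probability that all input events occur (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Budgeted vs. currently demonstrated failure probabilities for two basic
# events feeding an OR gate (illustrative numbers only).
budgeted = {"valve_stuck": 1e-4, "sensor_drift": 5e-5}
actual   = {"valve_stuck": 2e-5, "sensor_drift": 1e-5}

top_budget = or_gate(budgeted.values())
top_actual = or_gate(actual.values())
margin = top_budget - top_actual  # slack available to absorb a change
```

A positive margin indicates the system can tolerate some increase in an event's failure probability (e.g. after maintenance) before the budgeted top-event probability is violated.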

    Establishing Confidence and Understanding Uncertainty in Real-Time Systems


    Industrial Application of a Partitioning Scheduler to Support Mixed Criticality Systems

    The ever-growing complexity of safety-critical control systems continues to require evolution in control system design, architecture and implementation. At the same time the cost of developing such systems must be controlled and, importantly, quality must be maintained. This paper examines the application of Mixed Criticality System (MCS) research to a DAL-A aircraft engine Full Authority Digital Engine Control (FADEC) system, which includes studying porting the control system's software from a non-preemptive scheduler to a preemptive scheduler. The paper deals with three key challenges as part of the technology transition. Firstly, how to provide an equivalent level of fault isolation to ARINC 653 without the restriction of strict temporal slicing between criticality levels. Secondly, extending the current analysis for Adaptive Mixed Criticality (AMC) scheduling to include the overheads of the system. Finally, the development of clustering algorithms that automatically group tasks into larger super-tasks to reduce overheads while ensuring that the timing requirements, including the important task transaction requirements, are met.

    Schedulability Analysis for Multi-Core Systems Accounting for Resource Stress and Sensitivity

    Timing verification of multi-core systems is complicated by contention for shared hardware resources between co-running tasks on different cores. This paper introduces the Multi-core Resource Stress and Sensitivity (MRSS) task model, which characterizes how much stress each task places on resources and how sensitive it is to such resource stress. This model facilitates a separation of concerns, thus retaining the advantages of the traditional two-step approach to timing verification (i.e. timing analysis followed by schedulability analysis). Response time analysis is derived for the MRSS task model, providing efficient context-dependent and context-independent schedulability tests for both fixed priority preemptive and fixed priority non-preemptive scheduling. Dominance relations are derived between the tests, and proofs of optimal priority assignment are provided. The MRSS task model is underpinned by a proof-of-concept industrial case study.
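    The response-time analysis mentioned above builds on the classical fixed-point iteration for fixed-priority preemptive scheduling. The sketch below shows that standard iteration with an additive term standing in for cross-core resource-stress interference; the function name, task representation as (C, T) pairs, and the simple additive interference model are assumptions for illustration, not the MRSS analysis itself.

```python
import math

def response_time(task, higher_prio, stress_penalty=0.0):
    """Classical fixed-point response-time iteration for fixed-priority
    preemptive scheduling with implicit deadlines (D = T).
    stress_penalty is an illustrative additive term standing in for
    cross-core resource-stress interference."""
    C, T = task  # worst-case execution time, period
    R = C
    while True:
        # Interference from each higher-priority task (Cj, Tj):
        # number of releases within R, times its execution time.
        interference = sum(math.ceil(R / Tj) * Cj for (Cj, Tj) in higher_prio)
        R_next = C + interference + stress_penalty
        if R_next == R:
            return R          # fixed point reached: worst-case response time
        if R_next > T:
            return None       # unschedulable: response exceeds the deadline
        R = R_next
```

For example, a task (C=3, T=20) with higher-priority tasks (1, 4) and (2, 6) converges to a worst-case response time of 10; adding a non-zero stress_penalty inflates this, capturing the sensitivity of schedulability to resource stress.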

    Immune-Inspired Error Detection for Multiple Faulty Robots in Swarm Robotics

    Error detection and recovery are important issues in swarm robotics research, as they are a means by which fault tolerance can be achieved. Our previous work looked at error detection for single failures in a swarm robotics scenario with the Receptor Density Algorithm. Three modes of failure to the wheels of individual robots were investigated, and performance comparable to other statistical methods was achieved. In this paper, we investigate the potential of extending this approach to a robot swarm with multiple faulty robots. Two experiments were conducted: a swarm of ten robots with 1 to 8 faulty robots, and a swarm of 10 to 20 robots with a varying number of faulty robots. Results from the experiments showed that the proposed approach is able to detect errors in multiple faulty robots. The results also suggest the need to further investigate other aspects of the robot swarm that can potentially affect detection performance, such as the communication range.

    Analysis and Optimization of Message Acceptance Filter Configurations for Controller Area Network (CAN)

    Many of the processors used in automotive Electronic Control Units (ECUs) are resource constrained due to the cost pressures of volume production; they have relatively low clock speeds and limited memory. Controller Area Network (CAN) is used to connect the various ECUs; however, the broadcast nature of CAN means that every message transmitted on the network can potentially cause additional processing load on the receiving nodes, whether the message is relevant to that ECU or not. Hardware filters can reduce or even eliminate this unnecessary load by filtering out messages that are not needed by the ECU. Filtering is done on the message IDs, which are primarily used to identify the contents of the message and its priority. In this paper, we consider the problem of selecting filter configurations to minimize the load due to undesired messages. We show that the general problem is NP-complete. We therefore propose and evaluate an approach based on Simulated Annealing. We show that this approach finds near-optimal filter configurations for the interesting case where there are more desired messages than available filters.
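    CAN acceptance filters are commonly specified as code/mask pairs: a message passes when its ID matches the filter code on every bit position the mask marks as "care". The sketch below shows this matching rule and the cost function such an optimisation would minimise; the function names and ID values are illustrative, and the search itself (e.g. Simulated Annealing over code/mask pairs) is omitted.

```python
def accepts(msg_id, code, mask):
    """A code/mask acceptance filter passes a message ID when the ID agrees
    with the filter code on all bit positions set in the mask."""
    return (msg_id & mask) == (code & mask)

def undesired_load(filters, desired, all_ids):
    """Count messages that pass some filter but are not wanted by this ECU.
    Every desired message must pass, so only undesired pass-throughs cost."""
    passed = {i for i in all_ids
              if any(accepts(i, code, mask) for (code, mask) in filters)}
    assert desired <= passed, "every desired message must pass a filter"
    return len(passed - desired)
```

For instance, with desired IDs {0x100, 0x101}, the single filter (code=0x100, mask=0x7FE) ignores only the least-significant ID bit, so it passes exactly the two desired IDs; a coarser mask such as 0x700 also passes unrelated IDs like 0x1FF, each of which adds receive-side load.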

    An Enhanced Bailout Protocol for Mixed Criticality Embedded Software

    Moving mixed criticality research into industrial practice requires models whose run-time behaviour is acceptable to systems engineers. Certain aspects of current models, such as abandoning lower criticality tasks when certain situations arise, do not give the robustness required in application domains such as the automotive and aerospace industries. In this paper a new bailout protocol is developed that still guarantees high criticality software but minimises the negative impact on lower criticality software via a timely return to normal operation. We show how the bailout protocol can be integrated with existing techniques, utilising both offline slack and online gain-time to further improve performance. Static analysis is provided for schedulability guarantees, while scenario-based evaluation via simulation is used to explore the effectiveness of the protocol.

    Extending a Task Allocation Algorithm for Graceful Degradation of Real-Time Distributed Embedded Systems

    Previous research which has considered task allocation and fault-tolerance together has concentrated on constructing schedules which accommodate a fixed number of redundant tasks. Often, all faults are treated as being equally severe. There is little work which combines task allocation with architectural level fault-tolerance issues such as the number of replicas to use and how they should be configured, both of which are tackled by this work. An accepted method for assessing the impact of a combination of faults is to build a system utility model which can be used to assess how the system degrades when components fail. The key challenge addressed here is how to design objective functions based on a utility model which can be incorporated into a search algorithm in order to optimise fault-tolerance properties. Other issues such as how to extend the local search neighbourhood and balance objectives with schedulability constraints are also discussed.
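    A utility-model objective of the kind described can be sketched as an expected utility over weighted failure scenarios, which a local search over allocations would then maximise. Everything below (function names, the per-replica utility, the scenario probabilities) is an illustrative assumption, not the paper's model.

```python
def expected_utility(allocation, scenarios, utility):
    """Objective function: expected system utility over a set of
    (failed_nodes, probability) scenarios. A local search over candidate
    allocations would maximise this value."""
    return sum(p * utility(allocation, failed) for failed, p in scenarios)

# Illustrative degradation model: each task replica still hosted on a
# surviving node contributes one unit of utility.
def utility(allocation, failed_nodes):
    return sum(1 for (task, node) in allocation if node not in failed_nodes)

alloc = [("ctrl", "A"), ("ctrl_replica", "B"), ("log", "A")]
scenarios = [(set(), 0.9), ({"A"}, 0.05), ({"B"}, 0.05)]
score = expected_utility(alloc, scenarios, utility)
```

Placing the control replica on a different node than the primary is rewarded here, because utility degrades gracefully whichever single node fails; a real objective would also be weighed against schedulability constraints, as the abstract notes.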