
    Use of COTS functional analysis software as an IVHM design tool for detection and isolation of UAV fuel system faults

    This paper presents a new approach to the development of health management solutions which can be applied to both new and legacy platforms during the conceptual design phase. The approach involves the qualitative functional modelling of a system in order to perform an Integrated Vehicle Health Management (IVHM) design, i.e. the placement of sensors and the definition of the diagnostic rules used to interrogate their output. Qualitative functional analysis was chosen as a route for early assessment of failures in complex systems. Functional models of system components are required for capturing the available system knowledge used during various stages of system and IVHM design. MADe™ (Maintenance Aware Design environment), a COTS software tool developed by PHM Technology, was used for the health management design. A model was built incorporating the failure diagrams of five failure modes for five different components of a UAV fuel system. From this, an inherent health management solution for the system and an optimised sensor set were defined. The automatically generated sensor set solution also contains a diagnostic rule set, which was validated on the fuel rig for different operation modes, taking into account the predicted fault detection/isolation and ambiguity group coefficients. It was concluded that, when using functional modelling, the IVHM design and the actual system design cannot be done in isolation. The functional approach requires continuous input from the system designers and reliability engineers in order to construct a functional model that qualitatively represents the real system. In other words, the physical insight should not be isolated from the failure phenomena, and the diagnostic analysis tools should be able to adequately capture the experience base. This approach has been verified on a laboratory benchtop test rig which can simulate a range of possible fuel system faults. The rig is fully instrumented in order to allow benchmarking of the various sensing solutions for fault detection/isolation that were identified using functional analysis.
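
    The fault detection/isolation and ambiguity group coefficients mentioned above can be illustrated with a small fault-signature (dependency-matrix) calculation. The sketch below is only a toy example of the underlying idea, not MADe's actual algorithm; the component names, sensor names, and signature values are invented.

```python
# Illustrative only: computes fault detection coverage, isolation coverage, and
# ambiguity groups from a hypothetical binary fault-signature matrix
# (failure modes x sensors). Not MADe's algorithm; all names/values are invented.
from collections import defaultdict

# Column order of each signature tuple.
sensors = ("flow_in", "pressure_pre_valve", "pressure_post_valve", "flow_out")

# 1 = the sensor's diagnostic rule fires for that failure mode, 0 = it does not.
signatures = {
    "pump_degradation":      (1, 1, 0, 0),
    "valve_stuck_closed":    (0, 1, 1, 0),
    "filter_clogging":       (0, 1, 1, 0),   # same signature as the valve fault
    "leak_main_line":        (1, 0, 0, 1),
    "sensor_drift_pressure": (0, 0, 0, 1),
}

# Detection coverage: fraction of failure modes triggering at least one rule.
detected = [f for f, sig in signatures.items() if any(sig)]
detection_coverage = len(detected) / len(signatures)

# Ambiguity groups: failure modes sharing an identical signature cannot be
# isolated from one another with this sensor set.
groups = defaultdict(list)
for fault, sig in signatures.items():
    groups[sig].append(fault)
ambiguity_groups = [g for g in groups.values() if len(g) > 1]

# Isolation coverage: fraction of failure modes with a unique signature.
isolation_coverage = sum(len(g) == 1 for g in groups.values()) / len(signatures)

print(f"detection coverage: {detection_coverage:.2f}")
print(f"isolation coverage: {isolation_coverage:.2f}")
print("ambiguity groups:", ambiguity_groups)
```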

    Development and certification of mixed-criticality embedded systems based on probabilistic timing analysis

    An increasing variety of emerging systems relentlessly replaces or augments the functionality of mechanical subsystems with embedded electronics. Given their quantity, complexity, and use, the safety of such subsystems is an increasingly important matter. Accordingly, those systems are subject to safety certification to demonstrate the system's safety through rigorous development processes and hardware/software constraints. The massive increase in embedded processors' complexity renders the arduous certification task significantly harder to achieve. The focus of this thesis is to address the certification challenges in multicore architectures: despite their potential to integrate several applications on a single platform, their inherent complexity imperils their timing predictability and certification. Recently, the Measurement-Based Probabilistic Timing Analysis (MBPTA) technique emerged as an alternative to deal with hardware/software complexity. The innovation that MBPTA brings about is, however, a major departure from current certification procedures and standards. The particular contributions of this thesis include: (i) the definition of certification arguments for mixed-criticality integration upon multicore processors; in particular, we propose a set of safety mechanisms and procedures as required to comply with functional safety standards. For timing predictability, (ii) we present a quantitative approach to assess the likelihood of execution-time exceedance events with respect to the risk reduction requirements in safety standards. To this end, we build upon the MBPTA approach and present the design of a safety-related source of randomization (SoR), which plays a key role in the platform-level randomization needed by MBPTA. And (iii) we evaluate current certification guidance with respect to emerging high-performance design trends like caches. Overall, this thesis pushes the certification limits in the use of multicore and MBPTA technology in Critical Real-Time Embedded Systems (CRTES) and paves the way towards their adoption in industry.
An increasing variety of emerging systems replaces or augments the functionality of mechanical subsystems with embedded electronic components. The growth in the quantity and complexity of these electronic subsystems, as well as in their responsibilities, makes their safety a matter of increasing importance. So much so that the commercialization of these critical systems is subject to rigorous certification processes, in which system safety is guaranteed through strict constraints on the development and design of their hardware and software. This thesis addresses the new challenges and difficulties introduced by multicore processors in such critical systems: although their higher performance attracts industry interest in integrating multiple applications on a single platform, they entail greater complexity. Their architecture defies timing analysis by traditional methods and, likewise, makes their certification increasingly complex and costly. To cope with these limitations, a novel technique for measurement-based probabilistic timing analysis (MBPTA) has recently been developed. The innovation of this technique, however, represents a major cultural change with respect to traditional certification standards and procedures. Along this line, the contributions of this thesis are grouped into three main axes: (i) the definition of safety arguments for the certification of mixed-criticality applications on multicore platforms; in particular, safety mechanisms and fault diagnosis and reaction techniques are defined in accordance with the IEC 61508 standard on a reference multicore architecture. Regarding timing analysis, (ii) we present the quantification of the probability of exceeding a timing bound and its relation to the risk reduction requirements derived from functional safety standards; to this end, we build on the MBPTA technique and present the design of a safe random number source, a key component for achieving the randomization properties required by MBPTA at platform level. Finally, (iii) we extrapolate current guidance for the certification of multicore architectures to a commercial 8-core solution and evaluate it with respect to emerging high-performance design trends (caches). With these contributions, this thesis addresses the challenges that the use of multicore processors and MBPTA poses to the certification process of safety-critical real-time systems and thus facilitates their adoption by industry.
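
    As a rough illustration of the exceedance-probability idea behind MBPTA, one common route (assumed here for illustration; not necessarily the thesis's exact procedure or tooling) is to fit an extreme-value distribution to execution-time measurements collected on a randomized platform and read off the probability that a candidate timing bound is exceeded. The data below are synthetic.

```python
# Minimal sketch of the MBPTA idea: fit an extreme-value (Gumbel) distribution
# to per-block maxima of measured execution times and estimate the probability
# that a candidate timing bound is exceeded. Synthetic data; illustrative only.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
# Synthetic execution-time measurements (e.g., in cycles) from a randomized platform.
measurements = rng.normal(loc=10_000, scale=150, size=50_000)

# Block maxima: extreme-value theory applies to maxima over blocks of observations.
block_size = 100
usable = measurements[: len(measurements) // block_size * block_size]
maxima = usable.reshape(-1, block_size).max(axis=1)

# Fit a Gumbel distribution to the block maxima.
loc, scale = gumbel_r.fit(maxima)

# Exceedance probability of a candidate bound for the maximum of a block of runs.
candidate_bound = 11_000
p_exceed = gumbel_r.sf(candidate_bound, loc=loc, scale=scale)
print(f"P(block maximum > {candidate_bound}) = {p_exceed:.2e}")
```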

    Multi-core devices for safety-critical systems: a survey

    Multi-core devices are envisioned to support the development of next-generation safety-critical systems, enabling the on-chip integration of functions of different criticality. This integration provides multiple potential system-level benefits such as cost, size, power, and weight reduction. However, safety certification becomes a challenge and several fundamental safety technical requirements must be addressed, such as temporal and spatial independence, reliability, and diagnostic coverage. This survey provides a categorization and overview, at different device abstraction levels (nanoscale, component, and device), of selected key research contributions that support compliance with these fundamental safety requirements. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P, the Basque Government under grant KK-2019-00035, and the HiPEAC Network of Excellence. The Spanish Ministry of Economy and Competitiveness has also partially supported Jaume Abella under a Ramón y Cajal postdoctoral fellowship (RYC-2013-14717).

    The application of Bayesian change point detection in UAV fuel systems

    A significant amount of research has been undertaken in statistics to develop and implement various change point detection techniques for different industrial applications. One of the successful change point detection techniques is the Bayesian approach, because of its strength in coping with uncertainties in the recorded data. The Bayesian Change Point (BCP) detection technique has the ability to overcome the uncertainty in estimating the number and location of change points thanks to its probabilistic underpinning. In this paper we apply the BCP detection technique to a laboratory-based fuel rig system to detect the change in the pre-valve pressure signal due to a failure in the valve. The laboratory test-bed represents an Unmanned Aerial Vehicle (UAV) fuel system and its associated electrical power supply, control system and sensing capabilities. It is specifically designed to replicate a number of component degradation faults with high accuracy and repeatability, so that it can produce benchmark datasets to demonstrate and assess the efficiency of the BCP algorithm. Simulation shows satisfactory results for the proposed BCP approach. However, the computational complexity and the high sensitivity to the prior distribution on the number and location of the change points are the main disadvantages of the BCP approach.
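
    A minimal offline sketch of the Bayesian change-point idea on a synthetic pre-valve pressure trace is given below: with a Gaussian mean-shift model, known noise variance, and a conjugate normal prior on each segment mean, the posterior over the change location has a closed form. This illustrates the general principle only; it is not the paper's BCP implementation, and all signal parameters are invented.

```python
# Offline Bayesian single change-point sketch on a synthetic "pre-valve
# pressure" trace: Gaussian noise with known variance, a normal prior on each
# segment mean (integrated out analytically), uniform prior over the change
# location. Illustrative only; not the paper's BCP implementation.
import numpy as np

def log_marginal(seg, mu0=0.0, s0=10.0, sigma=1.0):
    """Log marginal likelihood of a Gaussian segment with its mean integrated out."""
    m = len(seg)
    d = seg - mu0
    var_term = sigma**2 + m * s0**2
    return (-0.5 * m * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.log(var_term / sigma**2)
            - 0.5 / sigma**2 * (np.sum(d**2) - s0**2 * np.sum(d)**2 / var_term))

rng = np.random.default_rng(1)
# Pressure drops after sample 300, emulating a developing valve fault.
signal = np.concatenate([rng.normal(5.0, 1.0, 300), rng.normal(3.5, 1.0, 200)])

# Posterior over the change location k, with a uniform prior over candidates.
candidates = np.arange(10, len(signal) - 10)
log_post = np.array([log_marginal(signal[:k]) + log_marginal(signal[k:])
                     for k in candidates])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("most probable change point:", candidates[np.argmax(post)])
```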

    Considerations for a design and operations knowledge support system for Space Station Freedom

    Engineering and operations of modern engineered systems depend critically upon detailed design and operations knowledge that is accurate and authoritative. A design and operations knowledge support system (DOKSS) is a modern computer-based information system providing knowledge about the creation, evolution, and growth of an engineered system. The purpose of a DOKSS is to provide convenient and effective access to this multifaceted information. The complexity of Space Station Freedom's (SSF's) systems, elements, interfaces, and organizations makes convenient access to design knowledge especially important when compared to simpler systems. The life cycle length of 30 or more years adds a new dimension to space operations, maintenance, and evolution. Provided here is a review and discussion of design knowledge support systems to be delivered and operated as a critical part of the engineered system. A concept of a DOKSS for Space Station Freedom (SSF) is presented. This is followed by a detailed discussion of a DOKSS for the Lyndon B. Johnson Space Center and Work Package-2 portions of SSF.

    Development of a Security Methodology for Cooperative Information Systems: The CooPSIS Project

    Since networks and computing systems are vital components of today's life, it is of utmost importance to endow them with the capability to survive physical and logical faults, as well as malicious or deliberate attacks. When an information system is obtained by federating pre-existing local systems, a methodology is needed to integrate security policies and mechanisms under a uniform structure. Therefore, in building distributed information systems, a methodology for the analysis, design and implementation of security requirements of data and processes is essential for obtaining mutual trust between cooperating organizations. Moreover, when the information system is built as a cooperative set of e-services, security is related to the type of data, to the sensitivity context of the cooperative processes and to the security characteristics of the communication paradigms. The CoopSIS (Cooperative Secure Information Systems) project aims to develop methods and tools for the analysis, design, implementation and evaluation of secure and survivable distributed information systems of a cooperative type, in particular with experimentation in the Public Administration domain. This paper presents the basic issues of a methodology being conceived to build a trusted cooperative environment, where data sensitivity parameters and the security requirements of processes are taken into account. The milestone phases of the security development methodology in the context of this project are illustrated.

    Quality of Service Contract Specification, Establishment, and Monitoring for Service Level Management

    This paper describes a Quality of Service (QoS) management approach and architecture, as well as a case study, for Service Level Management (SLM). Our approach brings a new perspective to the SLM problem by using QoS management and QoS contract specification, establishment, and monitoring. In SLM, the service consumer side and the service provider side must share a common understanding of QoS characteristics and use a common language for specifying desired QoS parameters in the form of QoS contracts. A service consumer must negotiate with the service provider to establish mutually agreed QoS contracts for an interaction session. When establishing a new QoS contract, the service provider must consider both the QoS contracts already agreed upon with existing consumers and system resource conditions. Similarly, a service consumer must be prepared to revise its contract with the service provider as conditions change over time. Once a QoS contract is established, SLM must monitor the QoS status to make sure that the service quality stays within the agreed range. If necessary, SLM must activate adaptation mechanisms to bring the service quality to the desired level. A case study is presented in this paper to validate the QoS contract management design approach and architecture for SLM.
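
    A small, hypothetical sketch of the contract-and-monitoring loop described above: a QoS contract records the agreed parameter ranges, a monitor compares measured values against them, and a violation triggers an adaptation hook. The class, parameter, and function names are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of QoS contract checking for SLM: a contract holds agreed
# parameter ranges, a monitor flags measurements outside those ranges and calls
# an adaptation hook. Names are illustrative, not the paper's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class QoSContract:
    consumer: str
    provider: str
    # parameter name -> (min acceptable, max acceptable)
    ranges: Dict[str, Tuple[float, float]] = field(default_factory=dict)

    def violations(self, measured: Dict[str, float]) -> Dict[str, float]:
        """Return the measured parameters that fall outside their agreed range."""
        return {p: v for p, v in measured.items()
                if p in self.ranges
                and not (self.ranges[p][0] <= v <= self.ranges[p][1])}

def monitor(contract: QoSContract,
            measured: Dict[str, float],
            adapt: Callable[[Dict[str, float]], None]) -> None:
    """Check one monitoring sample and trigger adaptation on any violation."""
    bad = contract.violations(measured)
    if bad:
        adapt(bad)

# Example session: latency agreed in [0, 200] ms, throughput at least 50 req/s.
contract = QoSContract("consumer_A", "provider_B",
                       {"latency_ms": (0.0, 200.0),
                        "throughput_rps": (50.0, float("inf"))})
monitor(contract,
        {"latency_ms": 250.0, "throughput_rps": 80.0},
        adapt=lambda bad: print("adaptation triggered for:", bad))
```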

    Mixed-Criticality Systems on Commercial-Off-the-Shelf Multi-Processor Systems-on-Chip

    Avionics and space industries are struggling with the adoption of technologies like multi-processor systems-on-chip (MPSoCs) due to strict safety requirements. This thesis proposes a new reference architecture for MPSoC-based mixed-criticality systems (MCS), i.e., systems integrating applications with different levels of criticality, which are a common use case for the aforementioned industries. The proposed system architecture is capable of guaranteeing partitioning, which is, in short, the property of fault containment. It is based on the detection of spatial and temporal interference, and has been named the online detection of interference (ODIn) architecture. Spatial partitioning requires that an application is not able to corrupt resources used by a different application. In the architecture proposed in this thesis, spatial partitioning is implemented using type-1 hypervisors, which allow the definition of resource partitions. An application running in a partition can only access resources granted to that partition, therefore it cannot corrupt resources used by applications running in other partitions. Temporal partitioning requires that an application is not able to unexpectedly change the execution time of other applications. In the proposed architecture, temporal partitioning has been addressed using a bounded interference approach, composed of an offline analysis phase and an online safety net. The offline phase is based on a statistical profiling, performed in nominal conditions, of a metric sensitive to temporal interference, which allows the definition of three thresholds: (1) the detection threshold TD; (2) the warning threshold TW; and (3) the α threshold. Two detection rules are defined using these thresholds: the alarm rule fires when the value of the metric is above TD; the warning rule fires when the value of the metric stays in the warning region [TW, TD] for more than α consecutive times. ODIn's online safety net exploits the performance counters available in many MPSoC architectures; such counters are configured at bootstrap to monitor the selected metric(s) and to raise an interrupt request (IRQ) if the metric value goes above TD, implementing the alarm rule. The warning rule is implemented in a software detection module, which reads the value of the performance counters when the monitored task yields control to the scheduler and resets them if there is no detection. ODIn also uses two additional detection mechanisms: (1) a control flow check technique, based on block signatures defined at compile time, implemented through a set of watchdog processors (WDPs), each monitoring one partition; and (2) a timeout, implemented through a system watchdog timer (SWDT), which sends an external signal when the timeout is violated. The recovery actions implemented in ODIn are: graceful degradation, to react to IRQs of WDPs monitoring non-critical applications or to warning rule violations, which temporarily stops non-critical applications to grant resources to the critical application; and hard recovery, to react to the SWDT, to the WDP of the critical application, or to alarm rule violations, which causes a switch to a hot stand-by spare computer. Experimental validation of ODIn was performed on two hardware platforms: the ZedBoard (dual-core) and the Inventami board (quad-core).
A space benchmark and an avionics benchmark were implemented on both platforms, composed of different modules as shown in Table 1. Each version of the final application was evaluated through fault injection (FI) campaigns, performed using a specifically designed FI system. There were three types of FI campaigns: (1) HW FI, to emulate single event effects; (2) SW FI, to emulate bugs in non-critical applications; and (3) artificial bug FI, to emulate a bug in non-critical applications introducing unexpected interference on the critical application. Experimental results show that ODIn is resilient to all considered types of faults.
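
    The alarm and warning rules described above can be summarized in a few lines of code. The sketch below assumes the metric samples have already been read from the performance counters; the IRQ, hypervisor, and watchdog machinery is omitted, and the threshold values are placeholders rather than profiled ones.

```python
# Sketch of ODIn's two detection rules over a stream of metric samples read
# from performance counters. Alarm rule: a sample is above TD. Warning rule:
# samples stay inside [TW, TD] for more than alpha consecutive times.
# IRQ/hypervisor/watchdog machinery is omitted; thresholds are placeholders.
from typing import Iterable, Optional

def detect(samples: Iterable[float], TD: float, TW: float, alpha: int) -> Optional[str]:
    """Return 'alarm', 'warning', or None for a sequence of metric samples."""
    consecutive = 0
    for value in samples:
        if value > TD:                       # alarm rule
            return "alarm"
        if TW <= value <= TD:                # inside the warning region
            consecutive += 1
            if consecutive > alpha:          # warning rule
                return "warning"
        else:
            consecutive = 0                  # left the warning region: reset
    return None

# Examples with placeholder thresholds (profiled offline in nominal conditions).
print(detect([10, 12, 80, 82, 81, 83], TD=100, TW=75, alpha=2))  # -> 'warning'
print(detect([10, 12, 130], TD=100, TW=75, alpha=2))             # -> 'alarm'
```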