
    Pipeline Monitoring Architecture Based on Observability and Controllability Analysis

    Many techniques with differing applicability have recently been developed for damage detection in pipelines. The pipeline system is modeled as a distributed parameter system, whose state space is infinite-dimensional. This paper is dedicated to the analysis of observability and controllability in pipeline systems. Several theorems are presented for testing the observability and controllability of the system, and the ranks of the controllability and observability matrices are computed in Matlab.
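
    As a rough illustration of the rank test mentioned in the abstract (the paper uses Matlab; the following is a hedged numpy equivalent applied to a small, hypothetical lumped approximation with made-up A, B, and C matrices):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Hypothetical 3-state lumped pipeline approximation (illustrative values only).
A = np.array([[ 0.0,  1.0,  0.0],
              [-2.0, -0.5,  1.0],
              [ 0.0, -1.0, -0.3]])
B = np.array([[0.0], [1.0], [0.0]])   # actuation at one boundary
C = np.array([[1.0, 0.0, 0.0]])       # pressure sensor at one boundary

n = A.shape[0]
print("controllable:", np.linalg.matrix_rank(ctrb(A, B)) == n)
print("observable:  ", np.linalg.matrix_rank(obsv(A, C)) == n)
```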

    Modelling and Analysis of Flow Rate and Pressure Head in Pipelines

    Various approaches with different areas of applicability have been proposed to identify damage in pipelines. The pipeline system is modeled as a distributed parameter system, whose state space is infinite-dimensional. In this paper, a novel technique is proposed to model and analyze the flow in the pipeline, and theorems are given for testing the observability and controllability of the proposed model.
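
    A minimal sketch of the kind of finite-dimensional model such an analysis starts from, assuming the linearized water-hammer equations for pressure head H and flow rate Q are discretized in space (segment count, parameter values, and boundary handling below are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the paper)
N, L = 5, 1000.0            # number of spatial segments, pipe length [m]
dx = L / N
a, g = 1200.0, 9.81         # wave speed [m/s], gravity [m/s^2]
Ar, R = 0.05, 0.02          # pipe cross-section [m^2], linearized friction [1/s]

# State x = [H_1..H_N, Q_1..Q_N]: pressure heads and flow rates per segment.
A = np.zeros((2 * N, 2 * N))
for i in range(N):
    im, ip = max(i - 1, 0), min(i + 1, N - 1)
    # dH_i/dt = -(a^2 / (g*Ar)) * dQ/dx
    A[i, N + ip] += -(a**2 / (g * Ar)) / (2 * dx)
    A[i, N + im] +=  (a**2 / (g * Ar)) / (2 * dx)
    # dQ_i/dt = -(g*Ar) * dH/dx - R*Q_i
    A[N + i, ip] += -(g * Ar) / (2 * dx)
    A[N + i, im] +=  (g * Ar) / (2 * dx)
    A[N + i, N + i] += -R

B = np.zeros((2 * N, 1)); B[N, 0] = 1.0                      # flow actuation at the upstream end
C = np.zeros((2, 2 * N)); C[0, 0] = 1.0; C[1, N - 1] = 1.0   # head sensors at both ends

print(A.shape, B.shape, C.shape)
```

    The same rank tests shown after the previous abstract apply directly to the resulting (A, B, C) triple.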

    Blockage Detection in Pipeline Based on the Extended Kalman Filter Observer

    Numerous approaches of varying applicability have been developed to detect damage in pipe networks. Pipeline faults such as leaks and partial or complete blockages create serious problems for engineers. Model-based leak and blockage detection methods for pipeline systems are receiving more and more attention; among them, state-observer and state-feedback based methods are the most common, with observability and controllability as prerequisites for applying them. In this work, a new technique based on an extended Kalman filter observer is proposed to detect and locate blockages in the pipeline. Furthermore, observability and controllability of the pipe network are analyzed, and theorems are given for testing the observability and controllability of the pipeline system.
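
    For orientation, the extended Kalman filter observer at the core of such an approach follows the usual predict/update recursion. The sketch below is a generic EKF step in Python, with placeholder functions f, h and their Jacobians; it is not the pipeline model or tuning used in the paper:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update step of a standard extended Kalman filter."""
    # Predict: propagate the state estimate and its covariance through f.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z through h.
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

    In observer-based blockage detection, a fault typically shows up as a persistent discrepancy between the estimated states (or an augmented blockage parameter) and their nominal values.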

    An Automated Continuous Integration Multitest Platform for Automotive Systems

    Testing has always been a crucial part of application development. It involves different techniques for verifying and validating the features of the target system. For complicated and/or complex systems, tests should be carried out at different stages of the development process and as early as possible, to avoid the extra cost of errors caught at later stages. With increasing system complexity, the cost of testing also increases in terms of resources and time, which further impacts development constraints such as time-to-market. At the same time, the growing number of interconnected electronic components leads to ever-increasing system complexity in highly reliable applications such as automotive ones, including heterogeneous systems such as advanced driver assistance and sensor fusion systems. In this article, we present a testing framework that combines a continuous integration (CI) solution from software engineering, a commercial virtual platform, and a hardware field-programmable gate array based verification platform, focusing on the engine control unit to demonstrate the feasibility of the proposed method. The efficiency and viability of the CI method are demonstrated on a real heterogeneous automotive system.

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all of these defects is not a trivial task, especially in complex systems such as processor cores. Moreover, safety-critical applications do not tolerate failures, which is why such devices must be tested to guarantee correct behavior at all times. Testing is also a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to improve test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence DfT-based approaches are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system and, unlike the normal functional mode, pass through unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) that is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have mainly focused on the industrial perspective of SBST. The problems of providing an effective development flow and guidelines for integrating SBST into the available operating systems have been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently being employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been an interest of my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
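
    To make the SBST idea concrete: in its simplest form, a test program applies known stimuli to a functional unit and compacts the observed results into a signature that is compared against a fault-free reference. The toy Python sketch below illustrates only this principle; a real SBST routine would be written in the target processor's instruction set and executed from system memory, and the operand patterns and signature scheme here are made up for illustration.

```python
def sbst_alu_add(patterns, golden_signature):
    """Toy SBST-style routine: exercise 32-bit addition with known operand
    pairs, compact every result into a rotating XOR signature, and compare
    the final signature against a precomputed fault-free value."""
    sig = 0
    for a, b in patterns:
        result = (a + b) & 0xFFFFFFFF                   # operation under test
        sig = ((sig << 1) | (sig >> 31)) & 0xFFFFFFFF   # rotate the signature
        sig ^= result                                   # fold in the new result
    return sig == golden_signature                      # pass/fail observation

# Example: the golden signature is computed once on a known-good model.
patterns = [(0x00000000, 0x00000000), (0xFFFFFFFF, 0x00000001),
            (0xAAAAAAAA, 0x55555555), (0x12345678, 0x87654321)]
golden = 0
for a, b in patterns:
    golden = (((golden << 1) | (golden >> 31)) & 0xFFFFFFFF) ^ ((a + b) & 0xFFFFFFFF)
print(sbst_alu_add(patterns, golden))   # True on a fault-free model
```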

    Software-Based Self-Test of Set-Associative Cache Memories

    Embedded microprocessor cache memories suffer from limited observability and controllability, which creates problems during in-system tests. This paper presents a procedure to transform traditional march tests into software-based self-test programs for set-associative cache memories with LRU replacement. Among all the cache blocks in a microprocessor, testing instruction caches represents a major challenge due to limitations in two areas: 1) test patterns, which must be composed of valid instruction opcodes, and 2) test result observability, since results can only be observed through the outcomes of executed instructions. For these reasons, the proposed methodology concentrates on the implementation of test programs for instruction caches. The main contribution of this work lies in the possibility of applying state-of-the-art memory test algorithms to embedded cache memories without introducing any hardware or performance overhead, while guaranteeing the detection of typical faults arising in nanometer CMOS technologies.
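
    For reference, the kind of march test being transformed looks like the following Python sketch, which runs a plain March C- sequence over a simulated word-addressable memory. Mapping each march element onto valid instruction fetches and LRU-controlled cache ways, as the paper proposes, is the difficult part and is not shown here.

```python
def march_c_minus(mem, zero=0x00000000, one=0xFFFFFFFF):
    """Apply March C- {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); any(r0)}
    to a list-backed memory model; return the addresses where a read mismatched."""
    n = len(mem)
    up, down = range(n), range(n - 1, -1, -1)
    errors = []

    def rd(addr, expected):
        if mem[addr] != expected:
            errors.append(addr)

    for a in up:   mem[a] = zero                # up(w0)
    for a in up:   rd(a, zero); mem[a] = one    # up(r0, w1)
    for a in up:   rd(a, one);  mem[a] = zero   # up(r1, w0)
    for a in down: rd(a, zero); mem[a] = one    # down(r0, w1)
    for a in down: rd(a, one);  mem[a] = zero   # down(r1, w0)
    for a in down: rd(a, zero)                  # any(r0)
    return errors

# A fault-free 16-word memory model: no mismatches expected.
print(march_c_minus([0] * 16))   # -> []
```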

    Leakage Detection in Pipeline Based on Second Order Extended Kalman Filter Observer

    In this paper, a new technique is proposed to detect, locate, and estimate fluid leaks in a straight pipeline (without branching), using the pressure and flow measurements at the ends of the pipeline and fusing data from two methods: a steady-state approximation and a second-order extended Kalman filter (SEKF). Unlike the more popular first-order extended Kalman filter (FEKF), the SEKF is based on the second-order Taylor expansion of the nonlinear system. The proposed technique requires only pressure head and flow rate measurements at the pipeline ends, which are subject to intrinsic sensor and process noise. A simulation example demonstrates the validity of the proposed technique and shows that the filter based on the second-order Taylor expansion is effective and performs well in reducing both systematic deviations and running time.
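
    To illustrate what the second-order Taylor terms add over a first-order EKF, here is a hedged sketch of one common scalar formulation of the second-order filter; the functions f, h and their derivatives are placeholders, not the pipeline model used in the paper:

```python
def sekf_step(x, P, z, f, fx, fxx, h, hx, hxx, Q, R):
    """One step of a second-order EKF for a scalar state and measurement.
    f, h: process and measurement models; fx, hx: first derivatives;
    fxx, hxx: second derivatives. The 0.5*(.)xx terms are the second-order
    Taylor corrections that a first-order EKF drops."""
    # Predict
    x_pred = f(x) + 0.5 * fxx(x) * P
    P_pred = fx(x) ** 2 * P + 0.5 * (fxx(x) * P) ** 2 + Q
    # Update
    z_pred = h(x_pred) + 0.5 * hxx(x_pred) * P_pred
    S = hx(x_pred) ** 2 * P_pred + 0.5 * (hxx(x_pred) * P_pred) ** 2 + R
    K = P_pred * hx(x_pred) / S
    x_new = x_pred + K * (z - z_pred)
    P_new = (1.0 - K * hx(x_pred)) * P_pred
    return x_new, P_new
```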

    Deep Learning for Pipeline Damage Detection: An Overview of the Concepts and a Survey of the State-of-the-Art

    Pipelines are widely used to transport oil and gas products over long distances, as they are safe and convenient. However, numerous kinds of damage may occur to a pipeline, for instance erosion, cracks, and dents. If these faults are not properly repaired, they can result in pipeline failures with leakage or rupture, leading to severe environmental risks. Deep learning methods help operators recognize the earliest phases of threats to the pipeline, giving them time and information to handle the problem efficiently. This paper presents the fundamental concepts of deep learning, including convolutional neural networks, and introduces the use of deep learning approaches to limit pipeline damage through early diagnosis of threats.
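
    As a concrete anchor for the CNN concepts surveyed here, the following is a minimal PyTorch sketch of a classifier for pipeline inspection images; the architecture, input size, and class set are illustrative assumptions, not taken from any surveyed work:

```python
import torch
import torch.nn as nn

class PipelineDamageCNN(nn.Module):
    """Tiny CNN: two convolution/pooling stages followed by a classifier head."""
    def __init__(self, num_classes: int = 2):   # e.g. {intact, damaged}
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of four 64x64 RGB patches -> per-class scores.
model = PipelineDamageCNN()
scores = model(torch.randn(4, 3, 64, 64))
print(scores.shape)   # torch.Size([4, 2])
```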

    Quantifiable Assurance: From IPs to Platforms

    Hardware vulnerabilities are generally considered more difficult to fix than software ones because they persist after fabrication. Thus, it is crucial to assess the security and fix the vulnerabilities at earlier design phases, such as the Register Transfer Level (RTL) and gate level. The focus of existing security assessment techniques is mainly twofold. First, they check the security of Intellectual Property (IP) blocks separately. Second, they assess the security against individual threats, assuming the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. Hence, we must develop a methodology to assess platform-level security by considering both the IP-level security and the impact of the additional parameters introduced during platform integration. Another important factor to consider is that the threats are not always orthogonal: improving security against one threat may affect the security against other threats. Hence, to build a secure platform, we must first answer the following questions: What additional parameters are introduced during platform integration? How do we define and characterize the impact of these parameters on security? How do the mitigation techniques for one threat impact others? This paper aims to answer these important questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at the pre-silicon stages. We also touch upon the notion of security optimization and present challenges for future research directions.