102 research outputs found

    Automated Observability Investigation of Analog Electronic Circuits using SPICE

    In this paper, a computer-aided approach to fault observability investigation of linear analog circuits is developed. The method is based on sensitivity analysis of the test characteristics in the frequency domain; the test frequencies are selected by maximizing the sensitivity of the magnitude of the test characteristics. Fault observability of the circuit is then investigated by postprocessing the simulation results with macrodefinitions in the graphical analyzer Probe, where a number of sensitivity measures are defined as pre-built macrodefinitions for observability investigation of multiple faults. The sensitivity of the S-parameters is also obtained in order to investigate fault observability at RF.
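
    The abstract does not give the sensitivity formulas or the Probe macrodefinitions; purely as an illustration of the general idea, the sketch below computes a normalized magnitude sensitivity S = (p/|H|)·∂|H|/∂p for a hypothetical RC low-pass test characteristic by finite differences and picks the test frequency where that sensitivity is largest. The transfer function, component values, and frequency grid are assumptions, not values from the paper.

        # Minimal sketch (not the paper's Probe macrodefinitions): normalized
        # sensitivity of a test characteristic's magnitude to a circuit parameter,
        # evaluated over frequency, with the test frequency chosen at the maximum.
        import numpy as np

        def magnitude(freq_hz, r_ohm, c_farad):
            # |H(jw)| of a hypothetical first-order RC low-pass test characteristic.
            w = 2.0 * np.pi * freq_hz
            return 1.0 / np.sqrt(1.0 + (w * r_ohm * c_farad) ** 2)

        def normalized_sensitivity(freq_hz, r_ohm, c_farad, rel_step=1e-4):
            # S = (R / |H|) * d|H|/dR, approximated with a central finite difference.
            dr = r_ohm * rel_step
            h_plus = magnitude(freq_hz, r_ohm + dr, c_farad)
            h_minus = magnitude(freq_hz, r_ohm - dr, c_farad)
            h0 = magnitude(freq_hz, r_ohm, c_farad)
            return (r_ohm / h0) * (h_plus - h_minus) / (2.0 * dr)

        freqs = np.logspace(1, 6, 500)               # 10 Hz .. 1 MHz sweep (assumed)
        sens = normalized_sensitivity(freqs, r_ohm=1e3, c_farad=100e-9)
        f_test = freqs[np.argmax(np.abs(sens))]      # frequency of maximum sensitivity
        print(f"selected test frequency: {f_test:.1f} Hz")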

    Fault observability in distributed power system

    Fault observability and fault location algorithms in distributed power systems are studied in this thesis. The importance of finding the fault location in a distribution system, with the purpose of increasing reliability and decreasing maintenance time and cost, is discussed. Different existing fault location algorithms and approaches in the literature are then introduced and compared. Subsequently, a new strategy is proposed to achieve fault observability of power systems while requiring the minimum number of Phasor Measurement Units (PMUs) in the network. The method exploits nodal voltage and mesh current analyses, in which the impedance and admittance matrices of the network and its dual circuit are developed and used for fault location. The criterion for determining the number and placement of PMUs is that the fault location and fault impedance can be obtained uniquely, without multiple estimates. In addition, the method considers faults along the lines, as opposed to faults only on system buses as is common in the literature. The proposed approach provides an economical solution to decrease measurement costs for large power networks, distributed generation networks, and microgrids. Simulation results for the IEEE 7-bus, 14-bus, and 30-bus systems verify the effectiveness of the proposed approach.
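
    The thesis relies on nodal voltage analysis through the network admittance matrix. As a hedged illustration of that building block only (not the thesis's PMU placement or dual-circuit formulation), the sketch below assembles the bus admittance matrix Y for a small, made-up 3-bus network from its line impedances and solves for the nodal voltages given assumed current injections; all numbers are invented for the example.

        # Sketch of the nodal-analysis building block used in fault location studies:
        # assemble the bus admittance matrix Y and solve for nodal voltages.
        # The 3-bus line data and injections below are illustrative assumptions.
        import numpy as np

        # (from_bus, to_bus, series impedance in per unit) -- hypothetical lines
        lines = [(0, 1, 0.02 + 0.06j),
                 (1, 2, 0.03 + 0.09j),
                 (0, 2, 0.05 + 0.15j)]

        n_bus = 3
        Y = np.zeros((n_bus, n_bus), dtype=complex)
        for i, j, z in lines:
            y = 1.0 / z                         # series admittance of the line
            Y[i, i] += y
            Y[j, j] += y
            Y[i, j] -= y
            Y[j, i] -= y

        # Bus 0 is taken as the reference bus with a known voltage; the remaining
        # nodal voltages follow from the reduced system Y_red * V = I - Y_col0 * V0.
        V0 = 1.0 + 0.0j
        I = np.array([-0.4 - 0.1j, -0.6 - 0.2j])   # assumed injections at buses 1, 2
        V_rest = np.linalg.solve(Y[1:, 1:], I - Y[1:, 0] * V0)
        print(np.round(V_rest, 4))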

    Low-overhead fault-tolerant logic for field-programmable gate arrays

    While allowing for the fabrication of increasingly complex and efficient circuitry, transistor shrinkage and count-per-device expansion have major downsides: chiefly increased variation, degradation and fault susceptibility. For this reason, design-time consideration of faults will have to be given to increasing numbers of electronic systems in the future to ensure yields, reliabilities and lifetimes remain acceptably high. Many mathematical operators commonly accelerated in hardware are suited to modifications that provide datapath error detection and correction capabilities with far lower area, performance and/or power consumption overheads than those incurred through more established, general-purpose fault tolerance methods such as modular redundancy. Field-programmable gate arrays are uniquely placed to allow further area savings to be made thanks to their dynamic reconfigurability. The majority of the technical work presented within this thesis is based upon a benchmark hardware accelerator---a matrix multiplier---that underwent several evolutions in order to detect and correct faults manifesting along its datapath at runtime. In the first instance, fault detectability in excess of 99% was achieved in return for 7.87% additional area and 45.5% extra latency. In the second, the ability to correct errors caused by those faults was added at the cost of 4.20% more area, while 50.7% of this---and 46.2% of the previously incurred latency overhead---was removed through the introduction of partial reconfiguration in the third. The fourth demonstrates further reductions in both area and performance overheads---of 16.7% and 8.27%, respectively---achieved through systematic data width reduction by allowing errors of less than ±0.5% of the maximum output value to propagate.
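
    The abstract does not spell out the detection scheme. Purely to illustrate the general idea of low-overhead datapath error detection for a matrix multiplier, the sketch below uses algorithm-based fault tolerance (ABFT) style row/column checksums, a well-known technique that is not necessarily the accelerator's actual method; the matrices and the injected error are assumptions.

        # Illustrative ABFT-style checksum check for C = A @ B (not the thesis's
        # FPGA design): augment A with a column-checksum row and B with a
        # row-checksum column, multiply, and compare the checksums of the result.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.integers(0, 10, size=(4, 3)).astype(float)
        B = rng.integers(0, 10, size=(3, 4)).astype(float)

        A_aug = np.vstack([A, A.sum(axis=0)])                  # extra row: column sums of A
        B_aug = np.hstack([B, B.sum(axis=1, keepdims=True)])   # extra col: row sums of B
        C_aug = A_aug @ B_aug                                  # full product with checksums

        C = C_aug[:-1, :-1]                                    # the actual result block
        row_ok = np.allclose(C_aug[:-1, -1], C.sum(axis=1))
        col_ok = np.allclose(C_aug[-1, :-1], C.sum(axis=0))
        print("consistent:", row_ok and col_ok)

        # Inject a single datapath error and check again: the mismatching row and
        # column checksums localise the faulty element, enabling correction.
        C_aug[1, 2] += 5.0
        C = C_aug[:-1, :-1]
        bad_row = np.flatnonzero(~np.isclose(C_aug[:-1, -1], C.sum(axis=1)))
        bad_col = np.flatnonzero(~np.isclose(C_aug[-1, :-1], C.sum(axis=0)))
        print("error detected at:", bad_row, bad_col)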

    Protection Challenges of Distributed Energy Resources Integration In Power Systems

    For over a century, electrical power systems have been the main source of energy for societies and industries. Most of this infrastructure was built long ago; plenty of high-rating, high-voltage equipment designed and manufactured in the mid-20th century is still operating in the United States' power network. These assets remain capable of doing what they were designed to do. The issue arises with the recent trend of DER integration, which causes fundamental changes in electrical power systems and violates traditional network design bases in various ways. Recently, there has been a steep rise in demand for the integration of Distributed Energy Resources (DERs). There are various incentives for such integration and for the employment of distributed and renewable energy resources. However, it violates the most fundamental assumption of traditional power system design: that power flows from generation (upstream) toward the loads (downstream). Currently operating power systems are designed on this assumption, and so are their equipment ratings, operational details, protection schemes, and protection settings. Violating these designs and operational settings reduces reliability and increases outages, which is the opposite of the goals of DER integration. DER integration and its consequences occur at both the transmission and distribution levels, and the effects on both networks are discussed in this dissertation. The transmission-level issues are explained briefly and more analytically, while the distribution network challenges are treated in detail using both field data and simulation results. It is worth mentioning that DER integration is aligned with the goal of moving toward a smart grid; this can be considered the most fundamental network reconfiguration the power system has ever experienced, and it requires various preparations. Both long-term and short-term solutions are proposed for the identified challenges, and corresponding results are provided to illustrate their effectiveness. The author believes that developing and adopting short-term solutions can make the transition period toward the smart grid feasible, while long-term approaches should also be planned for the final smart grid development and its operational details.

    Dynamic Modeling, Sensor Placement Design, and Fault Diagnosis of Nuclear Desalination Systems

    Fault diagnosis of sensors, devices, and equipment is an important topic in the nuclear industry for effective and continuous operation of nuclear power plants. All fault diagnostic approaches depend critically on the sensors that measure important process variables. Whenever a process encounters a fault, its effect propagates to some or all of the process variables. The ability of the sensor network to detect and isolate failure modes and anomalous conditions is crucial for the effectiveness of a fault detection and isolation (FDI) system. However, the emphasis of most fault diagnostic approaches found in the literature is primarily on the procedures for performing FDI using a given set of sensors; little attention has been given to the actual sensor allocation needed to achieve efficient FDI performance. This dissertation presents a graph-based approach for optimizing sensor placement so as to ensure the observability of faults, as well as fault resolution to the maximum possible extent, potentially facilitating an automated sensor allocation procedure. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and to fit a hyperplane to the data. The fault directions for different fault scenarios are obtained from the prediction errors, and fault isolation is then accomplished using new projections onto these fault directions. The effectiveness of using an optimal sensor set versus a reduced set for fault detection and isolation is demonstrated with this technique. Among the variety of desalination technologies, multi-stage flash (MSF) processes contribute substantially to the world's desalination capacity. In this dissertation, both steady-state and dynamic simulation models of an MSF desalination plant are developed. The dynamic MSF model is coupled with a previously developed International Reactor Innovative and Secure (IRIS) model in the SIMULINK environment. The developed sensor placement design and fault diagnostic methods are illustrated with application to the coupled nuclear desalination system. The results demonstrate the effectiveness of the newly developed integrated approach to performance monitoring and fault diagnosis with optimized sensor placement for large industrial systems.
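
    As a hedged sketch of the PCA-based detection step described above (not the dissertation's graph-based sensor placement method), the code below fits a principal component model to synthetic "normal" sensor data, computes the squared prediction error (Q statistic) of new samples against that model, and flags samples whose residual exceeds a simple empirical threshold. The data, the number of retained components, and the threshold rule are all assumptions.

        # Sketch of PCA residual-based fault detection: fit on normal data, then
        # monitor the squared prediction error (Q / SPE) of new samples.
        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic "normal" operating data: 5 correlated sensors, 500 samples.
        latent = rng.normal(size=(500, 2))
        mixing = rng.normal(size=(2, 5))
        X = latent @ mixing + 0.05 * rng.normal(size=(500, 5))

        mean, std = X.mean(axis=0), X.std(axis=0)
        Xs = (X - mean) / std

        # Principal directions from the SVD; keep 2 components (assumed).
        _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
        P = Vt[:2].T                                   # loadings, shape (5, 2)

        def q_statistic(x):
            # Squared prediction error of one sample against the PCA model.
            xs = (x - mean) / std
            residual = xs - (xs @ P) @ P.T
            return float(residual @ residual)

        # Simple empirical threshold: 99th percentile of Q on the training data.
        q_train = np.array([q_statistic(row) for row in X])
        q_limit = np.percentile(q_train, 99)

        faulty = X[0].copy()
        faulty[3] += 5 * std[3]                        # inject a sensor fault
        print(q_statistic(X[0]) > q_limit, q_statistic(faulty) > q_limit)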

    The reliability of single-error protected computer memories

    The lifetimes of computer memories protected with single-error-correcting, double-error-detecting (SEC-DED) codes are studied. The authors assume that there are five possible types of memory chip failure (single-cell, row, column, row-column and whole chip), and make a simplifying assumption (the Poisson assumption) that they have substantiated experimentally. A simple closed-form expression is derived for the system reliability function. Using this formula and chip reliability data taken from published tables, it is possible to compute the mean time to failure for realistic memory systems.
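
    The abstract does not reproduce the closed-form expression. As an assumed, simplified illustration of the kind of computation involved, the sketch below models each chip's lifetime as exponential (the Poisson assumption), treats a SEC-DED-protected word of n one-bit-per-chip devices as failed once two or more of its chips have failed, and integrates the resulting reliability function numerically to estimate the mean time to failure. The chip failure rate and word width are invented numbers, not values from the paper.

        # Simplified illustration (not the paper's exact formula): reliability and
        # MTTF of one SEC-DED word of n one-bit-per-chip devices, where the word
        # survives as long as at most one of its chips has failed.
        import numpy as np

        lam = 1e-6          # assumed per-chip failure rate, failures per hour
        n = 72              # assumed word width: 64 data + 8 check bits

        def word_reliability(t_hours):
            # P(zero failed chips) + P(exactly one failed chip) at time t,
            # with independent exponential chip lifetimes.
            r = np.exp(-lam * t_hours)
            return r**n + n * (1.0 - r) * r**(n - 1)

        # MTTF = integral of the reliability function from 0 to infinity,
        # approximated here by the trapezoidal rule over a long horizon.
        t = np.linspace(0.0, 5e6, 200_000)
        R = word_reliability(t)
        mttf = np.sum(0.5 * (R[1:] + R[:-1]) * np.diff(t))
        print(f"estimated word MTTF: {mttf:.3e} hours")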

    Review of selection criteria for sensor and actuator configurations suitable for internal combustion engines

    This literature review considers the problem of finding a suitable configuration of sensors and actuators for the control of an internal combustion engine. It surveys the methods, algorithms, processes, metrics, applications, research groups and patents relevant to this topic. Several formal metrics have been proposed, but their practical use remains limited. Maximal information criteria are theoretically optimal for selecting sensors, but hard to apply to a system as complex and nonlinear as an engine. We therefore reviewed methods applied to neighboring fields, including nonlinear systems and non-minimum-phase systems. Furthermore, the closed-loop nature of control means that information is not the only consideration: speed, stability and robustness also have to be considered. The optimal use of sensor information also requires the use of models, observers, state estimators or virtual sensors, and practical acceptance of these remains limited. Simple control metrics such as the condition number are popular, mostly because they need fewer assumptions than closed-loop metrics, which require a full plant, disturbance and goal model. Overall, no clear consensus can be found on the choice of metrics to define optimal control configurations, with physical measures, linear algebra metrics and modern control metrics all being used. Genetic algorithms and multi-criteria optimisation were identified as the most widely used methods for optimal sensor selection, although addressing the dimensionality and complexity of formulating the problem remains a challenge. This review does present a number of successful approaches for specific application domains, some of which may be applicable to diesel engines and other automotive applications. For a thorough treatment, nonlinear dynamics and uncertainties need to be considered together, which requires sophisticated (non-Gaussian) stochastic models to establish the value of a control architecture.
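
    As a minimal, assumed illustration of one of the simple control metrics mentioned above, the sketch below computes the condition number of a made-up steady-state gain matrix relating candidate actuators to controlled outputs, and compares two candidate actuator subsets; a lower condition number is commonly read as a better-conditioned input-output pairing. The gain matrix and the candidate sets are invented for the example and are not taken from the review.

        # Condition-number screening of candidate actuator subsets against a
        # hypothetical steady-state gain matrix G (outputs x actuators).
        import numpy as np

        # Rows: controlled outputs (e.g. torque, NOx, boost pressure); columns:
        # candidate actuators. All numbers are illustrative assumptions.
        G = np.array([[2.0, 0.4, 1.1, 0.1],
                      [0.3, 1.5, 0.2, 1.2],
                      [0.1, 0.2, 0.9, 0.8]])

        candidates = {"actuators {0,1,2}": [0, 1, 2],
                      "actuators {1,2,3}": [1, 2, 3]}

        for name, cols in candidates.items():
            kappa = np.linalg.cond(G[:, cols])   # 2-norm condition number
            print(f"{name}: condition number = {kappa:.2f}")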

    Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics

    This project focuses on the fault observability problem and its impact on plant performance and profitability. The study has been conducted along two main directions. First, a technique has been developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of faults typically considered for the Tennessee Eastman Process (TEP), which have been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with Principal Component Analysis (PCA). The CUSUM is used to accentuate faults under conditions of a small fault-to-noise ratio, while PCA facilitates the filtering of noise in the presence of highly correlated data. Multivariate indices, namely T2 and Q statistics based on the cumulative sums of all available measurements, were used for observing these faults, and the out-of-control average run length (ARL) was proposed as a statistical metric to quantify fault observability. Following fault detection, the problem of fault isolation is treated. It is shown that, for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates the use of the CUSUM-based PCA technique, previously used for detection, to unambiguously diagnose the faults. The diagnosis scheme constructs a family of CUSUM-based PCA models, one per fault, and then tests whether the statistical thresholds of a particular fault model are exceeded, thereby indicating the occurrence or absence of the corresponding fault. Although the CUSUM-based techniques were found successful in detecting abnormal situations as well as isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of the resulting delays motivates the second main objective of this project: a methodology to quantify the potential economic loss due to unobserved faults when standard statistical monitoring charts are used. Since most chemical and petrochemical plants are operated under closed-loop control, the interaction with the control system is also explicitly considered. An optimization problem is formulated to search for the optimal tradeoff between fault observability and closed-loop performance. This optimization problem is solved in the frequency domain using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to solve for the optimal controller tuning parameters that minimize an economic cost of the process.
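
    As a hedged sketch of the detection idea only (not the thesis's exact formulation or its TEP models), the code below takes running cumulative sums of standardized measurements, fits a PCA model on fault-free CUSUM data, and monitors T2 and Q indices against empirical limits; a modest sustained mean shift that would be hard to flag from individual samples becomes visible once accumulated by the CUSUM. All signals, dimensions, and thresholds are assumptions.

        # Sketch of CUSUM-based PCA monitoring: cumulative sums of standardized
        # measurements, a PCA model fitted on fault-free CUSUM data, and T2 / Q
        # indices monitored against empirical limits.
        import numpy as np

        rng = np.random.default_rng(2)
        n_train, n_test, n_var = 1000, 300, 4

        # Fault-free training data and its CUSUM representation.
        X = rng.normal(size=(n_train, n_var))
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        C_train = np.cumsum((X - mu) / sigma, axis=0)
        c_mean = C_train.mean(axis=0)

        # PCA model of the CUSUM data; keep 2 principal components (assumed).
        _, s, Vt = np.linalg.svd(C_train - c_mean, full_matrices=False)
        P = Vt[:2].T
        score_var = (s[:2] ** 2) / (n_train - 1)

        def t2_and_q(cusum_rows):
            centred = cusum_rows - c_mean
            scores = centred @ P
            t2 = np.sum(scores**2 / score_var, axis=1)   # Hotelling-style T2
            resid = centred - scores @ P.T
            q = np.sum(resid**2, axis=1)                  # Q (squared prediction error)
            return t2, q

        t2_tr, q_tr = t2_and_q(C_train)
        t2_lim, q_lim = np.percentile(t2_tr, 99), np.percentile(q_tr, 99)

        # Test data with a sustained mean shift on variable 0, modest per sample
        # but accumulated by the CUSUM transformation.
        X_new = rng.normal(size=(n_test, n_var))
        X_new[:, 0] += 1.0
        C_new = np.cumsum((X_new - mu) / sigma, axis=0)
        t2_new, q_new = t2_and_q(C_new)
        alarms = np.flatnonzero((t2_new > t2_lim) | (q_new > q_lim))
        print("first alarm at test sample:", alarms[0] if alarms.size else None)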