
    LOT: Logic Optimization with Testability - new transformations for logic synthesis

    A new approach to optimizing multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced into the synthesis of multilevel circuits, and the method is augmented with transformations that specifically enhance random-pattern testability while reducing area. Testability enhancement is thus an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other comparable tools, but also better testability than available testability-enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits, the EX-OR-gate-based transformations contributed to generating smaller circuits than those produced by other state-of-the-art logic optimization tools.
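    The Reed–Muller expansion underlying the EX-OR transformations has a compact computational form: any Boolean function can be rewritten as an XOR of AND terms, with the coefficients obtained by a butterfly transform of the truth table. The sketch below is a generic illustration of that expansion, not the LOT tool itself; the function name and example are ours.

        def reed_muller_coefficients(truth_table):
            """Return the XOR-of-ANDs (algebraic normal form) coefficients
            of a Boolean function given as a 0/1 truth table of length 2**n."""
            coeffs = list(truth_table)
            n = len(coeffs).bit_length() - 1
            step = 1
            for _ in range(n):
                for i in range(0, len(coeffs), 2 * step):
                    for j in range(i, i + step):
                        coeffs[j + step] ^= coeffs[j]  # XOR in the lower cofactor
                step *= 2
            return coeffs

        # f(x1, x0) = x1 OR x0 has truth table [0, 1, 1, 1] and Reed-Muller
        # form x0 XOR x1 XOR x0*x1, i.e. coefficients [0, 1, 1, 1].
        print(reed_muller_coefficients([0, 1, 1, 1]))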

    A Framework for Robust Assimilation of Potentially Malign Third-Party Data, and its Statistical Meaning

    This paper presents a model-based method for fusing data from multiple sensors, with a hypothesis-test-based component for rejecting potentially faulty or otherwise malign data. Our framework is based on an extension of the classic particle filter algorithm for real-time state estimation of uncertain systems with nonlinear dynamics and partial, noisy observations. This extension, based on classical statistical theories, utilizes statistical tests against the system's observation model. We discuss the application of the two major statistical testing frameworks, Fisherian significance testing and Neyman-Pearsonian hypothesis testing, to the Monte Carlo and sensor fusion settings. The Monte Carlo Neyman-Pearson test we develop is useful when one has a reliable model of faulty data, while the Fisher test is applicable when one may not have a model of faults, as can occur when dealing with third-party data, like GNSS data of transportation system users. These statistical tests can be combined with a particle filter to obtain a Monte Carlo state estimation scheme that is robust to faulty or outlier data. We present a synthetic freeway traffic state estimation problem in which the filters are able to reject simulated faulty GNSS measurements. The fault-model-free Fisher filter, while underperforming the Neyman-Pearson one when the latter has an accurate fault model, outperforms it when the assumed fault model is incorrect.
    Comment: IEEE Intelligent Transportation Systems Magazine, special issue on GNSS-based positioning
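    To make the rejection idea concrete, here is a minimal sketch of a bootstrap particle filter step that runs a Fisher-style significance test of each new measurement against the Monte Carlo predictive distribution before updating. The scalar random-walk dynamics, noise levels, function name, and threshold alpha are all illustrative assumptions, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)

        def pf_step(particles, z, q=0.5, r=1.0, alpha=0.01):
            # Propagate through the (assumed) dynamics x' = x + process noise.
            particles = particles + rng.normal(0.0, q, size=particles.shape)

            # Fisher-style test: empirical two-sided p-value of z under the
            # predictive distribution of the observation y = x + noise.
            y_pred = particles + rng.normal(0.0, r, size=particles.shape)
            p_value = 2 * min(np.mean(y_pred <= z), np.mean(y_pred >= z))
            if p_value < alpha:
                return particles, False  # reject the measurement as faulty

            # Standard bootstrap update: weight by likelihood, then resample.
            w = np.exp(-0.5 * ((z - particles) / r) ** 2)
            w /= w.sum()
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx], True

        particles = rng.normal(0.0, 1.0, size=1000)
        _, ok = pf_step(particles, z=0.3)         # plausible measurement: accepted
        _, ok_fault = pf_step(particles, z=50.0)  # gross fault: rejected
        print(ok, ok_fault)                       # True False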

    Risk analysis of autonomous vehicle and its safety impact on mixed traffic stream

    In 2016, more than 35,000 people died in traffic crashes, and human error was the cause of 94% of these deaths. Researchers and automobile companies are testing autonomous vehicles in mixed traffic streams, aiming to eliminate human error by removing the human driver from behind the steering wheel. However, recent crashes of autonomous vehicles under test indicate the need for a more thorough risk analysis. The objectives of this study were (1) to perform a risk analysis of autonomous vehicles and (2) to evaluate the safety impact of these vehicles in a mixed traffic stream. The research was divided into two phases: (1) risk analysis and (2) simulation of autonomous vehicles. The risk analysis was conducted using the fault tree method: based on failure probabilities of system components, two fault tree models were developed and combined to predict overall system reliability. It was found that an autonomous vehicle system could fail 158 times per one million miles of travel, due either to malfunction of vehicular components or to disruption from infrastructure components. The second phase simulated autonomous vehicles to assess the change in crash frequency after their deployment in a mixed traffic stream. It was found that average travel time could be reduced by about 50%, and that 74% of conflicts (a surrogate measure for traffic crashes) could be avoided, by replacing 90% of the human drivers with autonomous vehicles.
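    The fault-tree combination step reduces to standard gate arithmetic for independent events: an OR gate yields P = 1 - prod(1 - p_i) and an AND gate yields P = prod(p_i). The sketch below shows the mechanics with invented placeholder probabilities; it does not reproduce the study's data or its 158-failures-per-million-miles result.

        from math import prod

        def or_gate(*probs):
            # Any input event causes the output event (independent inputs).
            return 1.0 - prod(1.0 - p for p in probs)

        def and_gate(*probs):
            # All input events must occur together (independent inputs).
            return prod(probs)

        # Hypothetical per-mile basic-event probabilities.
        lidar_fault, camera_fault, software_fault = 4e-5, 6e-5, 3e-5
        gps_outage, map_error = 2e-4, 1e-4

        vehicle = or_gate(lidar_fault, camera_fault, software_fault)
        infrastructure = and_gate(gps_outage, map_error)
        top = or_gate(vehicle, infrastructure)  # merged tree's top event
        print(f"{top * 1e6:.0f} failures per million miles")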

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
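    Of the statistical techniques the survey introduces, importance sampling is the easiest to show in a few lines: rare faults are drawn from a biased proposal distribution that makes them frequent, and each sample is reweighted by the likelihood ratio so the estimator remains unbiased. The toy rare event below is our own choice, not one of the survey's case studies.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        threshold = 5.0  # "failure" occurs when the stress X exceeds this

        # Naive Monte Carlo: P(X > 5) for X ~ N(0, 1) is about 2.9e-7, so
        # 1e5 samples almost surely observe zero failures.
        naive = np.mean(rng.standard_normal(n) > threshold)

        # Importance sampling: draw from N(5, 1) so failures are common,
        # then reweight by the density ratio phi(x) / phi(x - 5).
        x = rng.normal(threshold, 1.0, size=n)
        weights = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - threshold) ** 2)
        is_estimate = np.mean((x > threshold) * weights)

        print(naive, is_estimate)  # naive is typically 0.0; IS is ~2.9e-7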

    Fault Detection for Systems with Multiple Unknown Modes and Similar Units

    This dissertation considers fault detection for large-scale practical systems with many nearly identical units operating in a shared environment. A special class of hybrid system model is introduced to describe such multi-unit systems, and a general approach for estimation and change detection is proposed. A novel fault detection algorithm is developed based on estimating a common Gaussian-mixture distribution for unit parameters, whereby observations are mapped into a common parameter space and clusters corresponding to different modes of operation are then identified via the Expectation-Maximization algorithm. The estimated common distribution incorporates and generalizes information from all units and is utilized for fault detection in each individual unit. The proposed algorithm takes into account unit mode switching and parameter drift, and can handle sudden, incipient, and preexisting faults. It can be applied to fault detection in various industrial, chemical, or manufacturing processes, sensor networks, and other settings. Several illustrative examples are presented, and a discussion of the pros and cons of the proposed methodology is provided. The proposed algorithm is applied specifically to fault detection in Heating, Ventilation, and Air Conditioning (HVAC) systems. Reliable and timely fault detection is a significant (and still open) practical problem in the HVAC industry: commercial buildings waste an estimated 15% to 30% ($20.8B to $41.6B annually) of their energy due to degraded, improperly controlled, or poorly maintained equipment. Results are presented from an extensive performance study based on both Monte Carlo simulations and real data collected from three operational large HVAC systems. The results demonstrate the capabilities of the new methodology in a more realistic setting and provide insights that can facilitate the design and implementation of practical fault detection for systems of similar type in other industrial applications.
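    A minimal sketch of the core detection idea, on synthetic data: per-unit parameter estimates are pooled in a common space, a Gaussian mixture is fitted to them with EM, and units whose parameters have anomalously low likelihood under the fitted common distribution are flagged. The two-mode data, the 1% cutoff, and the use of scikit-learn's GaussianMixture in place of the dissertation's estimator are all our assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)

        # Synthetic per-unit parameters: two normal operating modes plus one
        # faulty unit (index 100) that belongs to neither cluster.
        mode_a = rng.normal([0.0, 0.0], 0.3, size=(60, 2))
        mode_b = rng.normal([3.0, 3.0], 0.3, size=(40, 2))
        faulty = np.array([[1.5, -2.5]])
        params = np.vstack([mode_a, mode_b, faulty])

        # EM fit of the common two-component mixture over all units.
        gmm = GaussianMixture(n_components=2, random_state=0).fit(params)

        # Flag units whose log-likelihood under the common model is low.
        log_lik = gmm.score_samples(params)
        cutoff = np.percentile(log_lik, 1)  # illustrative 1% threshold
        print(np.where(log_lik <= cutoff)[0])  # should include index 100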