Design, Analysis and Test of Logic Circuits under Uncertainty.
Integrated circuits are increasingly susceptible to uncertainty caused by soft
errors, inherently probabilistic devices, and manufacturing variability. As device technologies
scale, these effects become detrimental to circuit reliability. In order to address
this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic
effects. Our main contributions are: 1) a fast, soft-error rate (SER) analyzer
that uses functional-simulation signatures to capture error effects, 2) novel design techniques
that improve reliability using little area and performance overhead, 3) a matrix-based
reliability-analysis framework that captures many types of probabilistic faults, and
4) test-generation/compaction methods aimed at probabilistic faults in logic circuits.
SER analysis must account for the main error-masking mechanisms in ICs: logic,
timing, and electrical masking. We relate logic masking to node testability of the circuit
and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute
testability (signal probability and observability). To account for timing masking, we compute
error-latching windows (ELWs) from timing analysis information. Electrical masking
is incorporated into our estimates through derating factors for gate error probabilities. The
SER of a circuit is computed by combining the effects of all three masking mechanisms
within our SER analyzer called AnSER.
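As a rough illustration of the signature-based computation described above (this is not AnSER itself; the two-gate circuit, the uniform random input vectors, the signature length, and the per-gate error probability are all assumptions, and timing and electrical masking are omitted), signal probability and observability can be estimated with bit-parallel simulation:

```python
import random

N = 1 << 12                     # number of random input vectors per signature
MASK = (1 << N) - 1             # all-ones N-bit mask

def rand_sig():
    """Random N-bit functional-simulation signature (a partial truth table)."""
    return random.getrandbits(N)

def popcount(x):
    return bin(x).count("1")

def simulate(a, b, c, flip_g1=False):
    """Toy circuit out = (a AND b) OR c, evaluated bit-parallel on signatures.
    Setting flip_g1 forces an error on internal node g1 in every vector."""
    g1 = a & b
    if flip_g1:
        g1 ^= MASK
    return (g1 | c) & MASK

a, b, c = rand_sig(), rand_sig(), rand_sig()
good = simulate(a, b, c)
bad = simulate(a, b, c, flip_g1=True)

signal_prob = popcount(a & b) / N         # probability that node g1 evaluates to 1
observability = popcount(good ^ bad) / N  # fraction of vectors where an error on g1 reaches the output

p_gate_error = 1e-5                       # assumed per-cycle fault probability of g1
ser_contribution = p_gate_error * observability   # logic-masking-only SER contribution of g1
print(f"P(g1=1)={signal_prob:.3f}  obs(g1)={observability:.3f}  SER~{ser_contribution:.2e}")
```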
Using AnSER, we develop several low-overhead techniques that increase reliability,
including: 1) an SER-aware design method that uses redundancy already present within
the circuit, 2) a technique that resynthesizes small logic windows to improve area and
reliability, and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs.
We develop the probabilistic transfer matrix (PTM) modeling framework to analyze
effects beyond soft errors. PTMs are compressed into algebraic decision diagrams (ADDs)
to improve computational efficiency. Several ADD algorithms are developed to extract
reliability and error susceptibility information from PTMs representing circuits.
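The following is a minimal sketch of PTM composition for illustration only: it uses dense NumPy matrices rather than the ADD compression the abstract describes, and the two-gate example circuit and the gate error probability are assumed values.

```python
import numpy as np

def gate_ptm(truth_table, p_err):
    """Probabilistic transfer matrix of a gate: rows index input combinations,
    columns index the output value; the output is flipped with probability p_err."""
    m = np.zeros((len(truth_table), 2))
    for row, out in enumerate(truth_table):
        m[row, out] = 1.0 - p_err
        m[row, 1 - out] = p_err
    return m

# 2-input gates as truth tables indexed by (a << 1) | b
AND = [0, 0, 0, 1]
OR = [0, 1, 1, 1]
p = 0.05

ptm_and = gate_ptm(AND, p)           # 4x2: (a,b) -> g1
ptm_or = gate_ptm(OR, p)             # 4x2: (g1,c) -> out

# Circuit: out = (a AND b) OR c.
wire = np.eye(2)                     # ideal fan-through for input c
stage1 = np.kron(ptm_and, wire)      # 8x4: parallel composition, (a,b,c) -> (g1,c)
circuit_ptm = stage1 @ ptm_or        # 8x2: serial composition

# Error-free PTM of the same circuit for comparison
ideal_ptm = np.kron(gate_ptm(AND, 0.0), wire) @ gate_ptm(OR, 0.0)

# Probability of a correct output per input combination, and under uniform inputs
p_correct = (circuit_ptm * ideal_ptm).sum(axis=1)
print(p_correct, p_correct.mean())
```

Serial stages compose by matrix multiplication and parallel stages by the Kronecker product, so the whole circuit's reliability can be read off by comparing the composed PTM with its error-free counterpart.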
We propose new algorithms for circuit testing under probabilistic faults, which require
a reformulation of existing test techniques. For instance, a test vector may need to be
repeated many times to detect a fault. Also, different vectors detect the same fault with
different probabilities. We develop test generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets.
Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
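One plausible ILP formulation for compacting probabilistic test sets (a sketch under assumptions, not necessarily the formulation developed in the thesis) chooses how many times to repeat each vector so that every fault's aggregate detection probability reaches a target; the fault list, the per-vector detection probabilities, the 0.99 target, and the use of the PuLP solver are all illustrative.

```python
# pip install pulp
import math
import pulp

# p[f][v]: probability that one application of vector v detects probabilistic fault f
p = {
    "f1": {"v1": 0.40, "v2": 0.05, "v3": 0.00},
    "f2": {"v1": 0.00, "v2": 0.30, "v3": 0.10},
    "f3": {"v1": 0.10, "v2": 0.00, "v3": 0.25},
}
vectors = ["v1", "v2", "v3"]
target = 0.99    # required overall detection probability per fault

prob = pulp.LpProblem("probabilistic_test_compaction", pulp.LpMinimize)
n = {v: pulp.LpVariable(f"n_{v}", lowBound=0, cat="Integer") for v in vectors}

# Objective: total number of vector applications
prob += pulp.lpSum(n[v] for v in vectors)

# Independent applications miss fault f with probability prod_v (1 - p[f][v])**n_v;
# taking logarithms turns the coverage requirement into a linear constraint.
for f, row in p.items():
    prob += pulp.lpSum(-math.log(1.0 - row[v]) * n[v]
                       for v in vectors if row[v] > 0.0) >= -math.log(1.0 - target)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: int(n[v].value()) for v in vectors}, "total =", int(pulp.value(prob.objective)))
```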
Software reliability through fault-avoidance and fault-tolerance
Strategies and tools for the testing, risk assessment, and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example, the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage, and time-based models are being developed to provide an additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
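The abstract names time-based reliability models without giving their form; as one standard stand-in (not necessarily a model developed in this project), the Goel-Okumoto reliability-growth model m(t) = a(1 - e^{-bt}) can be fitted to cumulative failure data to estimate residual faults, as sketched below with invented data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative failures observed at the end of each test week (illustrative data)
t = np.arange(1, 13, dtype=float)
failures = np.array([4, 9, 13, 18, 21, 24, 26, 28, 29, 30, 31, 31], dtype=float)

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP model:
    a = expected total number of faults, b = per-fault detection rate."""
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, failures, p0=(40.0, 0.1))

residual = a_hat - failures[-1]                       # faults estimated to remain
intensity = a_hat * b_hat * np.exp(-b_hat * t[-1])    # current failure intensity (per week)
print(f"a={a_hat:.1f}, b={b_hat:.3f}, residual faults ~ {residual:.1f}, "
      f"intensity ~ {intensity:.2f}/week")
```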
Keeping up with the Joneses: An international asset pricing model
We derive an international asset pricing model that assumes local investors have preferences of the type "keeping up with the Joneses." In an international setting, investors compare their current wealth with that of their peers who live in the same country. In the process of inferring the country's average wealth, investors incorporate information from the domestic market portfolio. In equilibrium, this gives rise to a multifactor CAPM where, together with the world market price of risk, there exist country-specific prices of risk associated with deviations from the country's average wealth level. The model performs significantly better, in terms of explaining the cross-section of returns, than the international CAPM. Moreover, the results are robust, both for conditional and unconditional tests, to the inclusion of currency risk, macroeconomic sources of risk, and the Fama and French HML factor.
Keywords: consumption externalities, multifactor asset pricing model
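A hedged sketch of the pricing relation the abstract describes, written in generic notation of my own rather than the paper's, is the two-factor form:

```latex
\[
  E[r_i] - r_f \;=\; \beta_{i,w}\,\lambda_w \;+\; \beta_{i,c}\,\lambda_c ,
\]
% where $\lambda_w$ is the world market price of risk, $\lambda_c$ is the
% country-specific price of risk tied to deviations from country $c$'s average
% wealth, and $\beta_{i,w}$, $\beta_{i,c}$ are asset $i$'s exposures to the two factors.
```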
System Qualities Ontology, Tradespace and Affordability (SQOTA) Project – Phase 4
This task was proposed and established as a result of a pair of 2012 workshops sponsored by the DoD Engineered Resilient Systems technology priority area and by the SERC. The workshops focused on how best to strengthen DoD's capabilities in dealing with its systems' non-functional requirements, often also called system qualities, properties, levels of service, and -ilities. The term -ilities was often used during the workshops, and became the title of the resulting SERC research task: "ilities Tradespace and Affordability Project (iTAP)." As the project progressed, the term "ilities" often became a source of confusion, as in "Do your results include considerations of safety, security, resilience, etc., which don't have 'ility' in their names?" Also, as our ontology, methods, processes, and tools became of interest across the DoD and across international and standards communities, we found that the term "System Qualities" was most often used. As a result, we are changing the name of the project to "System Qualities Ontology, Tradespace, and Affordability (SQOTA)." Some of this year's university reports still refer to the project as "iTAP." This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) under Contract HQ0034-13-D-0004.
Improving System Reliability for Cyber-Physical Systems
Cyber-physical systems (CPS) are systems featuring a tight combination of, and coordination between, the system's computational and physical elements. Cyber-physical systems range from critical infrastructure, such as the power grid and transportation systems, to health and biomedical devices. System reliability, i.e., the ability of a system to perform its intended function under a given set of environmental and operational conditions for a given period of time, is a fundamental requirement of cyber-physical systems. An unreliable system often leads to disruption of service, financial loss, and even loss of human life. An important and prevalent type of cyber-physical system meets the following criteria: it processes large amounts of data; it employs software as a system component; it runs online continuously; and it keeps an operator in the loop because of the human-judgment and accountability requirements of safety-critical systems. This thesis aims to improve system reliability for this type of cyber-physical system. To that end, I present a system evaluation approach entitled automated online evaluation (AOE): a data-centric runtime monitoring and reliability evaluation approach that works in parallel with the cyber-physical system, continuously conducting automated evaluation along the system's workflow using computational intelligence and self-tuning techniques, and providing operator-in-the-loop feedback on reliability improvement. For example, abnormal input and output data at or between the multiple stages of the system can be detected and flagged through data quality analysis, and alerts can be sent to the operator-in-the-loop. The operator can then take actions and make changes to the system based on the alerts in order to achieve minimal system downtime and increased system reliability. One technique used by the approach is data quality analysis using computational intelligence, which evaluates data quality in an automated and efficient way to make sure the running system performs reliably as expected. Another technique is self-tuning, which automatically self-manages and self-configures the evaluation system so that it adapts to changes in the system and to feedback from the operator. To implement the proposed approach, I further present a system architecture called the autonomic reliability improvement system (ARIS). This thesis investigates three hypotheses. First, I claim that automated online evaluation empowered by data quality analysis using computational intelligence can effectively improve system reliability for cyber-physical systems in the domain of interest indicated above. To prove this hypothesis, a prototype system must be developed and deployed in various cyber-physical systems, and reliability metrics are required to measure the system reliability improvement quantitatively. Second, I claim that self-tuning can effectively self-manage and self-configure the evaluation system, based on changes in the system and feedback from the operator-in-the-loop, to improve system reliability. Third, I claim that the approach is efficient: it should not have a large impact on overall system performance, introducing only minimal extra overhead to the cyber-physical system. Performance metrics are used to measure the efficiency and added overhead quantitatively.
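As a rough, self-contained illustration of the data-quality analysis and self-tuning ideas described above (this is not the ARIS implementation; the z-score detector, the synthetic stage outputs, and the feedback rule are assumptions), a minimal stage monitor with operator feedback might look like:

```python
from collections import deque
import random
import statistics

class StageMonitor:
    """Flags anomalous stage outputs with a z-score test and self-tunes its
    alert threshold from operator-in-the-loop feedback."""

    def __init__(self, window=200, threshold=3.0):
        self.history = deque(maxlen=window)   # recent "good" observations
        self.threshold = threshold            # z-score alert threshold (self-tuned)

    def observe(self, value):
        """Return True if the value should be flagged to the operator."""
        if len(self.history) < 30:            # warm-up: just collect data
            self.history.append(value)
            return False
        mean = statistics.fmean(self.history)
        std = statistics.stdev(self.history) or 1e-9
        if abs(value - mean) / std > self.threshold:
            return True                       # alert; keep the suspect value out of history
        self.history.append(value)
        return False

    def feedback(self, was_real_problem):
        """Self-tuning: tighten after confirmed problems, relax after false alarms."""
        self.threshold *= 0.95 if was_real_problem else 1.05

# Usage: monitor the record count produced by one workflow stage (synthetic data)
monitor = StageMonitor()
for i in range(500):
    records_out = 300 if (i and i % 97 == 0) else random.gauss(1000, 20)
    if monitor.observe(records_out):
        confirmed = records_out < 500          # stand-in for the operator's judgment
        monitor.feedback(confirmed)
        print(f"step {i}: flagged {records_out:.0f}, confirmed={confirmed}, "
              f"threshold now {monitor.threshold:.2f}")
```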
Additionally, in order to conduct efficient and cost-effective automated online evaluation for data-intensive CPS, which involve large volumes of data and devote much of their processing time to I/O and data manipulation, this thesis presents COBRA, a cloud-based reliability assurance framework. COBRA provides automated multi-stage runtime reliability evaluation along the CPS workflow, using data relocation services, a cloud data store, data quality analysis, and process scheduling with self-tuning to achieve scalability, elasticity, and efficiency. Finally, in order to provide a generic way to compare and benchmark system reliability for CPS and to extend the approach described above, this thesis presents FARE, a reliability benchmark framework that employs a CPS reliability model and a set of methods and metrics for evaluation environment selection, failure analysis, and reliability estimation. The main contributions of this thesis include validation of the above hypotheses and empirical studies of the ARIS automated online evaluation system, the COBRA cloud-based reliability assurance framework for data-intensive CPS, and the FARE framework for benchmarking the reliability of cyber-physical systems. This work has advanced the state of the art in CPS reliability research, expanded the body of knowledge in this field, and provided useful studies for further research.
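FARE's reliability estimation is described only at a high level; as a generic illustration (the metric choices and the failure log below are invented, not taken from the framework), basic figures such as MTBF, MTTR, and steady-state availability can be computed from an observation window:

```python
# Failure log: (time_of_failure_hours, time_restored_hours) over a 30-day window
failures = [(50.0, 50.5), (220.0, 221.0), (410.0, 410.2), (650.0, 652.0)]
observation_hours = 30 * 24

downtime = sum(restored - failed for failed, restored in failures)
uptime = observation_hours - downtime

mtbf = uptime / len(failures)          # mean time between failures (hours)
mttr = downtime / len(failures)        # mean time to repair (hours)
availability = mtbf / (mtbf + mttr)    # steady-state availability

print(f"MTBF={mtbf:.1f} h, MTTR={mttr:.2f} h, availability={availability:.4f}")
```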
Third Conference on Artificial Intelligence for Space Applications, part 2
Topics related to the application of artificial intelligence to space operations are discussed, including new technologies for space station automation, design data capture, computer vision, neural nets, automatic programming, and real-time applications.
Quantitative Measures for Software Independent Verification and Validation
As software is maintained or reused, it undergoes an evolution which tends to increase the overall complexity of the code. To understand the effects of this, we brought in statistics experts and leading researchers in software complexity, reliability, and their interrelationships. Their work has given us the ability to statistically correlate specific code complexity attributes, in orthogonal domains, with errors found over time in the HAL/S flight software which flies in the Space Shuttle. Although only a prototype-tools experiment, the result of this research appears to be extendable to all other NASA software, given appropriate data similar to that logged for the Shuttle onboard software. Our research has demonstrated that more complete domain coverage can be shown mathematically with the approach we have applied, thereby ensuring full insight into the cause-and-effect relationship between the complexity of a software system and the fault density of that system. By applying the operational profile, we can characterize the dynamic effects of software path complexity under this same approach. We now have the ability to measure specific attributes which have been statistically demonstrated to correlate with increased error probability, and to know which actions to take for each complexity domain. Shuttle software verifiers can now monitor changes in software complexity, assess the added or decreased risk of software faults in modified code, and determine necessary corrections. The reports, tool documentation, user's guides, and new approach that have resulted from this research effort represent advances in the state of the art of software quality and reliability assurance. Details describing how to apply this technique to other NASA code are contained in this document.
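To illustrate the kind of statistical correlation the report describes (the metric names and module data below are invented; the HAL/S results are not reproduced here), per-module complexity attributes can be rank-correlated with fault counts:

```python
import numpy as np
from scipy.stats import spearmanr

# Per-module measurements (illustrative, not Shuttle HAL/S data)
cyclomatic   = np.array([ 4, 12,  7, 25,  9, 31, 15,  6, 22, 11])
fanout       = np.array([ 2,  8,  3, 14,  5, 17,  9,  2, 12,  6])
faults_found = np.array([ 0,  3,  1,  7,  2,  9,  4,  1,  6,  2])

for name, metric in [("cyclomatic complexity", cyclomatic), ("module fan-out", fanout)]:
    rho, p_value = spearmanr(metric, faults_found)
    print(f"{name}: Spearman rho={rho:.2f} (p={p_value:.3f})")
```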