On the diagnostic emulation technique and its use in the AIRLAB
An aid is presented for understanding and judging the relevance of the diagnostic emulation technique to studies of highly reliable, digital computing systems for aircraft. A short review is presented of the need for and the use of the technique, as well as an explanation of its principles of operation and implementation. Details that would be needed for operational control or modification of existing versions of the technique are not described.
Reliability and Maintenance
Amid a plethora of challenges, technological advances in science and engineering are affecting an ever-wider spectrum of modern life. Yet across all supplied products and services, the robustness of processes, methods, and techniques remains a major factor in promoting safety. This book on systems reliability, which also covers maintenance-related policies, presents fundamental reliability concepts applied in a number of industrial cases. Furthermore, to alleviate potential cost- and time-specific bottlenecks, software engineering and systems engineering incorporate approximation models, also referred to as meta-models or surrogate models, to reproduce a predefined set of problems aimed at enhancing safety while minimizing detrimental outcomes to society and the environment.
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves the IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
Optimization techniques for prognostics of on-board electromechanical servomechanisms affected by progressive faults
In relatively recent years, electromechanical actuators (EMAs) have gradually replaced systems based on hydraulic power for flight control applications. EMAs are typically operated by electrical machines that transfer rotational power to the controlled elements (e.g. the aerodynamic control surfaces) by means of gearings and mechanical transmissions. Compared to electrohydraulic systems, EMAs offer several advantages, such as reduced weight, simplified maintenance, and the complete elimination of contaminant, flammable, or polluting hydraulic fluids. On-board actuators are often safety-critical; hence, the practice of monitoring and analyzing the system response through electrical acquisitions, with the aim of estimating fault evolution, has gradually become an essential system engineering task. For this purpose, a discipline called Prognostics has been developed in recent years. Its aim is to study methodologies and algorithms capable of identifying such failures and foreseeing the moment when a particular component loses functionality and is no longer able to meet the desired performance. In this paper, the authors introduce the use of optimization techniques in prognostic methods (e.g. model-based parametric estimation algorithms) and propose a new model-based fault detection and identification (FDI) method, based on a Genetic Algorithm (GA) optimization approach, able to perform an early identification of the aforesaid progressive failures; its ability to identify, in a timely manner, symptoms that a component is degrading is also investigated.
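The model-based parametric estimation the abstract describes can be sketched as follows: a GA searches for the model parameter that best reproduces the measured actuator response, and a drift of that parameter away from nominal signals a progressive fault. The first-order lag model, the degradation of a gain `k`, and all names and thresholds below are illustrative assumptions, not the paper's actual method.

```python
# Sketch of GA-based fault identification: fit a model parameter to
# measurements; a fitted value far from nominal indicates degradation.
# Model and parameter values are hypothetical, for illustration only.
import random

def response(k, steps=50, dt=0.01, tau=0.05):
    """Simulated first-order actuator step response; the gain `k`
    degrades as a fault progresses (nominal k = 1.0)."""
    x, out = 0.0, []
    for _ in range(steps):
        x += dt / tau * (k - x)
        out.append(x)
    return out

def fitness(k, measured):
    # Negative squared error between model and measured response.
    return -sum((m - s) ** 2 for m, s in zip(measured, response(k)))

def ga_estimate(measured, pop_size=30, gens=40, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(0.2, 1.2) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda k: fitness(k, measured), reverse=True)
        elite = pop[: pop_size // 2]          # keep the fittest half
        # Crossover (parent average) plus Gaussian mutation.
        pop = elite + [
            (rng.choice(elite) + rng.choice(elite)) / 2 + rng.gauss(0, 0.02)
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=lambda k: fitness(k, measured))

measured = response(0.7)        # "faulty" unit: gain dropped to 0.7
k_hat = ga_estimate(measured)   # GA recovers the degraded gain
assert abs(k_hat - 0.7) < 0.1   # flag a fault when k_hat < threshold
```

Monitoring the fitted parameter over successive flights, rather than the raw response, is what allows early detection: the gain drifts gradually long before the actuator misses its performance requirements.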
Susceptible Workload Evaluation and Protection using Selective Fault Tolerance
Low power fault tolerance design techniques trade reliability to reduce the area cost and the power overhead of integrated circuits by protecting only a subset of their workload or their most vulnerable parts. However, in the presence of faults, not all workloads are equally susceptible to errors. In this paper, we present a low power fault tolerance design technique that selects and protects the most susceptible workload. We propose to rank workload susceptibility as the likelihood of any error bypassing the logic masking of the circuit and propagating to its outputs. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. We evaluate the proposed technique on timing-independent and timing-dependent errors induced by permanent and transient faults. In comparison with an unranked selective fault tolerance approach, we demonstrate (a) similar error coverage with a 39.7% average reduction of the area overhead or (b) an 86.9% average error coverage improvement for a similar area overhead. For the same area overhead, we observe an error coverage improvement of 53.1% and 53.5% against permanent stuck-at and transition faults, respectively, and an average error coverage improvement of 151.8% and 89.0% against timing-dependent and timing-independent transient faults, respectively. Compared to TMR, the proposed technique achieves an area and power overhead reduction of 145.8% to 182.0%.
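The TMR protection mechanism the abstract builds on can be illustrated with a small sketch: three replicas compute the same function, a majority voter masks any single-replica error, and a fault-injection mask models a stuck-at fault. The replica/fault names below are hypothetical, not from the paper.

```python
# Minimal sketch of Triple Modular Redundancy (TMR) majority voting.
# `stuck_bits` injects a stuck-at-1 fault (OR mask) into a replica's
# output to show how the voter masks a single faulty copy.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three replica outputs: an error in any
    single replica is outvoted by the other two."""
    return (a & b) | (a & c) | (b & c)

def tmr(module, stuck_bits=(0, 0, 0)):
    """Run three copies of `module`; each entry of `stuck_bits`
    forces the corresponding replica's output bits high."""
    def protected(x):
        outs = [module(x) | fault for fault in stuck_bits]
        return majority_vote(*outs)
    return protected

# Example: a toy combinational "circuit" with one faulty replica.
circuit = lambda x: x ^ 0b1010
protected = tmr(circuit, stuck_bits=(0b0100, 0, 0))  # replica 0 faulty
assert protected(0b0011) == circuit(0b0011)          # error is masked
```

Full TMR pays this triplication cost for every gate; the selective scheme described above applies it only where ranked susceptibility says an error would actually propagate, which is where the reported area savings come from.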
Simulation of dynamic systems with uncertain parameters
This dissertation describes numerical methods for the representation and simulation of dynamic systems with time-invariant uncertain parameters. Simulation is defined as computing a boundary of the system response that contains all possible behaviors of an uncertain system. This problem poses many challenges, especially those associated with minimizing the computational cost of global optimization. To reduce computational cost, an approximation, or surrogate, of the original system model is constructed by employing the Moving Least Squares (MLS) response surface method for non-convex global optimization. For more complicated systems, a gradient-enhanced moving least squares (GEMLS) response surface is used to construct the surrogate model more accurately and efficiently. This method takes advantage of the fact that the parametric sensitivity of an ODE system can be calculated as a by-product, at little additional cost, when solving the original system. Furthermore, global sensitivity analysis for monotonicity testing can be introduced in some cases to further reduce the number of samples. The proposed method has been applied to two engineering applications. The first is hybrid system verification by reachable set computation/approximation. First, the computational burden of using polyhedra for reachable set approximation is reviewed. It is then proven that the boundary of a reachable set is formed only by trajectories originating from the boundary of the initial state region. This result reduces the search space from R^n to R^(n-1). Finally, the proposed GEMLS method is integrated with an oriented rectangular hull for reachable set representation, and an approximation with improved accuracy and efficiency can be achieved. The other engineering application is model-based fault detection. In this case, a fault-free system is modeled as a parametric uncertain system whose parameters belong to a given bounded set. The performance boundary of a fault-free system can be acquired using the proposed approach and then employed as an adaptive threshold. A fault is declared when the system parameters no longer belong to the set due to malfunction or degradation. Once such a fault occurs, the monitored system performance will extend beyond the predicted normal system boundary.
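The core surrogate idea in the abstract, a moving least squares response surface standing in for an expensive system model, can be sketched in one dimension: at each query point, fit a locally weighted linear model through nearby samples. The basis, Gaussian weight function, and bandwidth below are illustrative choices, not the dissertation's specific formulation.

```python
# Sketch of a 1-D Moving Least Squares (MLS) surrogate: a locally
# weighted linear fit evaluated at each query point, used to cheaply
# approximate an expensive model inside an optimization loop.
import math

def mls_predict(x, xs, ys, h=0.4):
    """At query point x, solve the 2x2 weighted normal equations
    for the basis p(x) = [1, x] with Gaussian weights of width h."""
    w = [math.exp(-((x - xi) / h) ** 2) for xi in xs]
    a11 = sum(w)
    a12 = sum(wi * xi for wi, xi in zip(w, xs))
    a22 = sum(wi * xi * xi for wi, xi in zip(w, xs))
    b1 = sum(wi * yi for wi, yi in zip(w, ys))
    b2 = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    det = a11 * a22 - a12 * a12
    c0 = (b1 * a22 - b2 * a12) / det   # local intercept
    c1 = (a11 * b2 - a12 * b1) / det   # local slope
    return c0 + c1 * x

# Build the surrogate from a few samples of an "expensive" model,
# then query it cheaply wherever the optimizer asks.
f = lambda x: math.sin(x)              # stand-in for the true system
xs = [i * 0.5 for i in range(13)]      # samples on [0, 6]
ys = [f(x) for x in xs]
assert abs(mls_predict(2.3, xs, ys) - f(2.3)) < 0.1
```

The gradient-enhanced variant (GEMLS) would additionally match sampled derivatives, which the dissertation obtains nearly for free from the ODE sensitivity equations; that is omitted here for brevity.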
Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz. the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques for pseudo-testing systems is discussed as a way to evaluate bounds on the parameter values needed for the analytical techniques.
A Review of Bayesian Methods in Electronic Design Automation
The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure rate estimation, and circuit optimization. As the post-Moore era brings about new technologies (such as silicon photonics and quantum circuits), many of the associated issues are similar to those encountered in electronic IC design and can be addressed using Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods in solving stochastic problems in electronic circuits and beyond.
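Yield estimation, one of the EDA tasks listed above, has a particularly compact Bayesian form: treat each Monte Carlo sample of the circuit under process variation as a Bernoulli pass/fail trial, and place a Beta prior on the yield. The toy "circuit" spec and all parameter values below are illustrative assumptions, not from the review.

```python
# Sketch of Bayesian yield estimation under process variation:
# a conjugate Beta-Binomial model over Monte Carlo pass/fail trials.
# The delay spec and distribution parameters are hypothetical.
import random

def passes_spec(rng):
    """One Monte Carlo die: a delay parameter under Gaussian
    process variation passes if it stays under the spec limit."""
    delay = rng.gauss(1.0, 0.05)   # nominal 1.0 ns, sigma 50 ps
    return delay < 1.1             # spec limit: 1.1 ns

rng = random.Random(42)
n = 2000
k = sum(passes_spec(rng) for _ in range(n))   # passing dies

# Beta(1, 1) prior on yield; the Binomial likelihood gives a
# Beta(k + 1, n - k + 1) posterior with mean (k + 1) / (n + 2).
post_mean = (k + 1) / (n + 2)
print(f"posterior mean yield: {post_mean:.3f}")
```

Unlike a bare frequency estimate k/n, the posterior also quantifies uncertainty in the yield itself, which is what makes Bayesian formulations attractive when each circuit simulation is expensive and n must stay small.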
Artificial Intelligence in Process Engineering
In recent years, the field of Artificial Intelligence (AI) has experienced a boom, driven by breakthroughs in computing power, AI techniques, and software architectures. Among the many fields affected by this paradigm shift, process engineering has begun to reap the benefits of AI. However, the published methods and applications in process engineering are diverse, and there is still much unexploited potential. Herein, a systematic overview of the current state of AI and its applications in process engineering is provided. Current applications are described and classified according to a broader systematic framework. Current techniques and types of AI, as well as pre- and postprocessing, are examined similarly and assigned to the previously discussed applications. Given the importance of mechanistic models in process engineering, as opposed to the pure black-box nature of most AI, reverse-engineering strategies as well as hybrid modeling are highlighted. Furthermore, a holistic strategy is formulated for the application of the current state of AI in process engineering.