Yield and Reliability Analysis for Nanoelectronics
As technology continues to advance and more breakthroughs emerge, semiconductor devices with dimensions in nanometers have entered all spheres of our lives. Accordingly, high reliability and high yield are central concerns in guaranteeing the advancement and utilization of nanoelectronic products. However, the field of nanoelectronics reliability faces several major challenges: identification of failure mechanisms, enhancement of the low yields of nano products, and management of the scarcity and secrecy of available data [34]. Therefore, this dissertation investigates four issues related to the yield and reliability of nanoelectronics.
Yield and reliability of nanoelectronics are affected by defects generated during the manufacturing processes. An automatic method using model-based clustering has been developed to detect defect clusters and identify their patterns; the distribution of the clustered defects is modeled by a new mixture of multivariate normal distributions and principal curves. The new mixture model can capture defect clusters with amorphous, curvilinear, and linear patterns. We evaluate the proposed method on both simulated and experimental data, with promising results.
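As a rough, simplified stand-in for the clustering step described above, the sketch below runs a plain two-means clustering on simulated wafer-defect coordinates (a tight cluster plus scattered background defects). The abstract's actual method — a mixture of multivariate normals and principal curves — is considerably richer; the data, seed, and starting centroids here are all illustrative assumptions.

```python
# Hedged sketch: separating a simulated wafer defect cluster from
# background defects with plain 2-means. NOT the paper's mixture model
# of multivariate normals and principal curves; data are made up.
import math
import random

random.seed(0)
# Simulated defects on a unit wafer: a tight cluster near (0.2, 0.2)
# plus uniformly scattered background defects.
cluster = [(random.gauss(0.2, 0.03), random.gauss(0.2, 0.03)) for _ in range(40)]
background = [(random.random(), random.random()) for _ in range(20)]
points = cluster + background

def kmeans2(points, c1=(0.0, 0.0), c2=(1.0, 1.0), iters=20):
    """Plain 2-means: alternate nearest-centroid assignment and mean update."""
    for _ in range(iters):
        g1 = [p for p in points if math.dist(p, c1) <= math.dist(p, c2)]
        g2 = [p for p in points if math.dist(p, c1) > math.dist(p, c2)]
        if not g1 or not g2:  # degenerate split; keep last centroids
            break
        c1 = (sum(x for x, _ in g1) / len(g1), sum(y for _, y in g1) / len(g1))
        c2 = (sum(x for x, _ in g2) / len(g2), sum(y for _, y in g2) / len(g2))
    return c1, c2

c1, c2 = kmeans2(points)  # one centroid should land near the defect cluster
```

A model-based approach replaces the hard assignment with mixture-component responsibilities, which is what lets the paper's method also represent curvilinear patterns via principal curves.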
Yield is one of the most important performance indices for measuring the success of nano fabrication and manufacturing. Accurate yield estimation and prediction are essential for evaluating productivity and estimating production cost. This research studies advanced yield modeling approaches that account for the spatial variation of defects or defect counts. Results on real wafer map data show that the new yield models significantly improve yield estimation compared with the traditional Poisson model and negative binomial model.
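The two traditional baselines mentioned above have simple closed forms: the Poisson model gives yield Y = exp(-A·D0), while the negative binomial (Stapper) model gives Y = (1 + A·D0/α)^(-α), where α is a clustering parameter and the Poisson model is recovered as α → ∞. A minimal sketch, with illustrative values for die area A and defect density D0:

```python
# Hedged sketch of the two classical yield models the abstract compares
# against; A (die area) and D0 (defect density) values are illustrative.
import math

def poisson_yield(A, D0):
    """Poisson yield model: Y = exp(-A * D0), i.e. defects land independently."""
    return math.exp(-A * D0)

def negbin_yield(A, D0, alpha):
    """Negative binomial (Stapper) model: Y = (1 + A*D0/alpha)^(-alpha).
    The clustering parameter alpha captures spatial defect clustering;
    as alpha -> infinity this recovers the Poisson model."""
    return (1 + A * D0 / alpha) ** (-alpha)

A, D0 = 1.0, 0.5  # cm^2 and defects/cm^2, illustrative only
y_poisson = poisson_yield(A, D0)
y_negbin = negbin_yield(A, D0, alpha=2.0)  # clustered defects -> higher yield
```

At the same average defect density, clustering concentrates defects on fewer dies, so the negative binomial model predicts higher yield than the Poisson model — which is why ignoring spatial variation biases yield estimates.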
Ultra-thin SiO2 is a major factor limiting the scaling of semiconductor devices, and high-k gate dielectric materials such as HfO2 will replace SiO2 in future generations of MOS devices. This study investigates the two-step breakdown mechanisms and breakdown sequences of double-layered high-k gate stacks by monitoring the relaxation of the dielectric films.
The hazard rate is a widely used metric for measuring the reliability of electronic products. This dissertation studies the hazard rate function of gate dielectric breakdown. A physically feasible failure-time distribution is used to model the time-to-breakdown data, and a Bayesian approach is adopted in the statistical analysis.
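The hazard rate is defined as h(t) = f(t) / (1 − F(t)), the instantaneous failure rate given survival to time t. As an illustration only — the dissertation's specific "physically feasible" distribution is not reproduced here — the sketch below uses a Weibull distribution, whose shape parameter > 1 gives the increasing (wear-out) hazard typical of dielectric breakdown:

```python
# Hedged illustration of the hazard-rate metric, using a Weibull
# failure-time distribution as a stand-in; parameters are illustrative.
import math

def weibull_hazard(t, shape, scale):
    """h(t) = f(t) / (1 - F(t)) = (shape/scale) * (t/scale)^(shape-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# shape > 1 -> hazard increases with time (wear-out behavior)
h_early = weibull_hazard(1.0, shape=2.0, scale=5.0)
h_late = weibull_hazard(4.0, shape=2.0, scale=5.0)
```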
Bayesian Analysis for Cardiovascular Risk Factors in Ischemic Heart Disease
Ischemic heart disease (IHD, or coronary artery disease, CAD) is the most common cause of death in many countries and is characterized by reduced blood supply to the heart. Statistical models help evaluate the risk factors responsible for mortality and morbidity in IHD. In general, geometric or Poisson distributions can underestimate the zero-count probability and hence make it difficult to identify significant covariate effects for improving heart disease conditions due to regional wall motion abnormalities. In this work, a flexible class of zero-inflated models is introduced, and a Bayesian estimation method is developed as an alternative to the traditionally used maximum-likelihood methods for analyzing such data. Simulation studies show that the proposed method has better small-sample performance than the classical method, with tighter interval estimates and better coverage probabilities. Although the prevention of CAD has long been a focus of public health policy, clinical medicine, and biomedical scientific investigation, its prevalence remains high despite current strategies for prevention and treatment. Comprehensive searches of the MEDLINE, HealthSTAR, and Global Health databases provide insight into the effects of traditional and emerging risk factors for CAD. The proposed method is illustrated on a real-life data set using WinBUGS. This research was funded by the Spanish Government through grant RTI2018-094336-B-100 (MCIU/AEI/FEDER, UE) and by the Basque Government through grant IT1207-19.
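The zero inflation the abstract refers to can be made concrete with the zero-inflated Poisson (ZIP) probability mass function, which mixes a point mass at zero (probability π) with a Poisson(λ) component: P(Y=0) = π + (1−π)e^(−λ) and P(Y=k) = (1−π)λ^k e^(−λ)/k! for k ≥ 1. A minimal sketch with illustrative parameter values (not estimates from the study's data):

```python
# Hedged sketch of the zero-inflated Poisson pmf; pi and lam are
# illustrative values, not estimates from the abstract's data set.
import math

def zip_pmf(k, pi, lam):
    """P(Y = k) under ZIP(pi, lam): a point mass at zero (prob. pi)
    mixed with a Poisson(lam) count component."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# The extra zero mass that a plain Poisson model would miss:
p0_zip = zip_pmf(0, pi=0.3, lam=2.0)  # 0.3 + 0.7 * exp(-2)
p0_poisson = math.exp(-2.0)           # Poisson(2) alone
```

Because p0_zip exceeds p0_poisson, a plain Poisson fit forced through such data underestimates the zero-count probability, which is exactly the problem the zero-inflated class addresses.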
Investigative model of rail accident and incident causes using statistical modelling approach
Railway transportation has become a popular choice among commuters for travelling from one place to another, and the industry is therefore growing quickly, especially in urban areas. The complexity of a rail network requires a high level of safety features to prevent any interruption. To that end, this thesis presents a procedure for building an accident prediction model using regression. Root cause analysis identifies the factors that most strongly influence accidents; the Ishikawa diagram is a popular tool for tracing a problem back to its root. The process requires accident and incident investigation reports spanning at least five years; this thesis used data from 1999 to 2014, taken from several sources on Australian railway websites. The Ishikawa analysis identifies ten main factors that influence accidents: "train driver mistake", "other's human mistake", "weather influence", "track problem", "train problem", "signaling error", "maintenance error", "communication error", "procedure error", and "others". Each factor with a positive correlation coefficient with the type of accident or incident was taken as a parameter. Before the prediction model is finalized, hypotheses are tested to determine which regression model fits the data best and gives better predictions. A dispersion test calculates the dispersion value: a value below 1 indicates underdispersion (a Poisson model is appropriate), while a value above 1 indicates overdispersion (a negative binomial model is appropriate). The Vuong test is then applied to determine which of the two models performs better. From these hypothesis tests, the thesis shows that a zero-inflated model best fits accident and incident cases of collision, derailment, and SPAD (signal passed at danger). Other countries may have different rail systems and geography, and hence different factors influencing accidents and incidents; nevertheless, the same method and procedure can be applied to identify and predict the factors that contribute most to accident occurrence.
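The dispersion check described above amounts to comparing the sample variance of the counts to their mean. A minimal sketch on hypothetical incident counts (not the thesis's 1999–2014 data), illustrating why a count series with many zeros and occasional large values points away from the plain Poisson model:

```python
# Hedged sketch of the dispersion test described above, on hypothetical
# periodic incident counts (NOT the thesis's actual data).
from statistics import mean, variance

incident_counts = [0, 0, 3, 1, 0, 7, 0, 2, 0, 0, 5, 0, 1, 0, 9, 0]

m = mean(incident_counts)
dispersion = variance(incident_counts) / m  # sample variance / mean

if dispersion > 1:
    choice = "negative binomial (overdispersed)"
else:
    choice = "Poisson (equi- or underdispersed)"
```

A Poisson model would force variance = mean; here the variance is several times the mean, so the negative binomial (or, with this many zeros, a zero-inflated model, as the thesis concludes) is the better candidate. The Vuong test then compares the competing fits formally.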
A Comprehensive Analysis of Proportional Intensity-based Software Reliability Models with Covariates (New Developments on Mathematical Decision Making Under Uncertainty)
The black-box approach based on stochastic software reliability models is a simple methodology that uses only software fault data to describe the temporal behavior of fault-detection processes, but it fails to incorporate significant development-metrics data observed during the development process. In this paper we develop proportional intensity-based software reliability models with time-dependent metrics and propose a statistical framework to assess software reliability from the time-dependent covariates as well as the software fault data. The resulting models are similar to the usual proportional hazards model but have a somewhat different covariate structure from the existing one. We compare these metrics-based software reliability models with eleven well-known non-homogeneous Poisson process models, which are special cases of our models, and quantitatively evaluate their goodness-of-fit and prediction. As an important result, the accuracy of reliability assessment depends strongly on the kind of software metrics used for analysis and can be improved by incorporating time-dependent metrics data in the modeling.
An Empirical Validation of Object-Oriented Design Metrics for Fault Prediction
Object-oriented design has become a dominant method in the software industry, and many design metrics for object-oriented programs have been proposed for quality prediction, but there is no well-accepted statement on how significant those metrics are. In this study, an empirical analysis is carried out to validate object-oriented design metrics for defect estimation. Approach: The Chidamber and Kemerer metrics suite is adopted to estimate the number of defects in programs extracted from a public NASA data set. The techniques involved are statistical analysis and a neuro-fuzzy approach. Results: The results indicate that SLOC, WMC, CBO, and RFC are reliable metrics for defect estimation; overall, SLOC has the most significant impact on the number of defects. Conclusions/Recommendations: The design metrics are closely related to the number of defects in OO classes, but we cannot jump to a conclusion by using one analysis technique. We recommend using the neuro-fuzzy approach together with statistical techniques to reveal the relationship between metrics and the dependent variable; the correlations among the metrics themselves must also be considered.
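The statistical side of such a validation can be illustrated with a one-variable least-squares fit of defect counts on SLOC and the corresponding Pearson correlation — the kind of per-metric association the study checks. The module data below are made up for illustration; the study's NASA data set is not reproduced here:

```python
# Hedged illustration: least-squares fit of defect counts on SLOC and
# the Pearson correlation. Module data are made up, NOT the NASA data.
import math

sloc    = [120, 300, 450, 800, 1500, 2200]  # hypothetical module sizes
defects = [  1,   2,   4,   6,   11,   15]  # hypothetical defect counts

n = len(sloc)
mean_x = sum(sloc) / n
mean_y = sum(defects) / n
sxx = sum((x - mean_x) ** 2 for x in sloc)
syy = sum((y - mean_y) ** 2 for y in defects)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(sloc, defects))

slope = sxy / sxx                    # defects gained per extra line of code
intercept = mean_y - slope * mean_x
r = sxy / math.sqrt(sxx * syy)       # Pearson correlation of SLOC vs. defects
```

A single strong correlation like this is exactly what the abstract warns against over-interpreting: CK metrics such as WMC, CBO, and RFC are themselves correlated with SLOC, so a multivariate or neuro-fuzzy analysis is needed before attributing the defect count to any one metric.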