3 research outputs found
Data driven predictive model to compact a production stop-on-fail test set for an electronic device
The decision tree is a popular machine learning algorithm used for fault detection and classification in industry. In this paper, the modelling technique is used to compact a production test set defined for quality assurance of an electronic asset. The novelty of this work lies in the proposed method, which iteratively builds decision trees until it obtains an accurate predictive model that meets the classification accuracy target in a stop-on-fail test scenario. The generated test data is characterized by missing values, which pose a major challenge to the traditional use of decision trees. The developed computational procedure handles this application-specific data attribute. Exemplary results show that the method is able to significantly reduce a production test set containing parametric and non-parametric tests, and to generate a truthful prognostic model. In addition, the method is computationally efficient and easy to implement. It could also be combined with other test compaction strategies, such as variable association analysis. Furthermore, the proposed method offers the flexibility of exploring the trade-off between the number of tests removed from the production test set and the prediction accuracy. The results can enable production cost reduction without impacting quality detection accuracy. The paper details the proposed algorithm and discusses its advantages and limitations.
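As a rough illustration of the kind of procedure this abstract describes (a minimal sketch under assumed details, not the paper's exact algorithm), the Python fragment below greedily drops test columns while a decision tree trained on the remaining tests still meets a pass/fail accuracy target. The column names, the 99% accuracy target, and the sentinel handling of stop-on-fail missing values are all assumptions made for illustration.

```python
# Illustrative sketch only: greedy compaction of a stop-on-fail test set
# using decision trees, under assumed data layout and accuracy target.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def compact_test_set(df: pd.DataFrame, label_col: str, target_acc: float = 0.99):
    """Remove test columns while a tree on the remaining tests still
    meets the pass/fail classification accuracy target."""
    kept = [c for c in df.columns if c != label_col]
    y = df[label_col].values
    while True:
        removed_any = False
        for col in list(kept):
            candidate = [c for c in kept if c != col]
            if not candidate:
                continue
            # Stop-on-fail data leaves tests after the first failure unmeasured;
            # here missing values are crudely replaced with a sentinel so the
            # tree can still split on those columns.
            X = df[candidate].fillna(-1e9).values
            acc = cross_val_score(DecisionTreeClassifier(max_depth=5, random_state=0),
                                  X, y, cv=5).mean()
            if acc >= target_acc:
                kept.remove(col)      # this test's outcome is predictable from the rest
                removed_any = True
        if not removed_any:
            return kept               # the compacted test set
```

Lowering target_acc removes more tests, which is one way to explore the trade-off between test-set size and prediction accuracy mentioned above.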
Test Data Analytics — Exploring Spatial and Test-Item Correlations in Production Test Data
The discovery of patterns and correlations hidden in test data can help reduce test time and cost. In this paper, we propose a methodology and supporting statistical regression tools that exploit both spatial and inter-test-item correlations in the test data for test time and cost reduction. We first describe a statistical regression method, called group lasso, which can identify inter-test-item correlations from test data. After learning such correlations, some test items can be identified for removal from the test program without compromising test quality. An extended version of this method, weighted group lasso, takes the distinct test time/cost of each individual test item into account by formulating a weighted optimization problem. As a result, its solution favors more costly test items for removal from the test program. We further integrate weighted group lasso with another statistical regression technique, virtual probe, which can learn spatial correlations of test data across a wafer. The integrated method can then utilize both spatial and inter-test-item correlations to maximize the number of test items whose values can be predicted without measurement. Experimental results on a high-volume industrial device show that utilizing both spatial and inter-test-item correlations can help reduce test time by up to 55%.
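For reference, a standard weighted group lasso objective can be written as follows; the notation is chosen here for illustration and is not taken from the paper. Here y collects the measured values of a candidate test item across devices, X_g the measurements of the g-th predictor test item (group), and a larger cost weight w_g discourages a costly item from being used as a predictor, making it a better candidate for removal.

```latex
% Weighted group lasso (standard form, notation mine): the l2 norm on each
% coefficient group \beta_g selects or drops an entire test item at once,
% and w_g can encode that item's test time/cost.
\min_{\beta}\; \frac{1}{2}\,\Bigl\| y - \sum_{g=1}^{G} X_g \beta_g \Bigr\|_2^2
  \;+\; \lambda \sum_{g=1}^{G} w_g \,\| \beta_g \|_2
```

Setting all w_g = 1 recovers the unweighted group lasso described first in the abstract.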
Data Analytics in Test: Recognizing and Reducing Subjectivity
Applying data analytics in production test has become a widely adopted industrial practice in recent years. As the complexity of semiconductor devices scales and the amount of available test data continues to grow, research in this field is forced to shift away from solving specific problems with ad hoc approaches and demands a deeper understanding of the fundamental issues. Two data-driven test applications where this shift is apparent are production yield optimization and defect screening, whose underlying data analytics approaches are correlation analysis and outlier analysis, respectively. A core issue present in both approaches stems from the subjectivity that is inherent to data analytics. This dissertation delves into how subjectivity manifests itself and what can be done to reduce it with respect to these two test applications.

Outlier analysis is an approach used for identifying anomalies. The main goal of outlier analysis in test is to capture statistically outlying parts, with the hope that their abnormal behavior is attributable to some defectivity. During the creation of an outlier model, decisions about outlying behavior in the existing data are made by utilizing known failures and the test engineer's best judgment. In practice, outlier screening methods are simply used for transforming data into an outlier score space. Even if outlier analysis techniques are able to successfully classify a dataset into inliers and outliers, outlier models still require thresholds to be decided. A concept called Consistency is introduced to provide an objective, data-driven way to evaluate outlier models by utilizing all available data. The key observation underlying this concept is that outlier analysis should be immune to noise introduced by sources of systematic variation.

Correlation analysis is a process comprising a search for related variables. The application of production yield optimization involves searching for correlations between the yield and various controllable parameters. The goal of this process is to uncover parameters that, when adjusted, can result in yield improvement. This analytics process is subjective to the perspective of the analyst, and the quality of the result is highly dependent on the analyst's previous experiences. In order to reduce the subjectivity in this application, a process mining methodology is introduced to learn from the experiences of analysts. The key advantage of this methodology is that, in addition to being able to record and reproduce these analyses, it can also generalize to analytics processes not contained in the learned experiences.
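As a generic illustration of the "outlier score space" idea described above (not the dissertation's Consistency metric), the sketch below transforms parametric test values into robust per-wafer z-scores, so that systematic wafer-to-wafer variation is normalized away before a threshold is applied. The measurements DataFrame, the column names, and the threshold of 6 are hypothetical.

```python
# Generic outlier-scoring illustration: median/MAD z-scores computed within
# each wafer, so systematic wafer-level shifts do not look like outliers.
import pandas as pd

def robust_outlier_scores(df: pd.DataFrame, test_cols, group_col="wafer_id"):
    """Return robust z-scores of test_cols, normalized per wafer group."""
    def score(group):
        med = group[test_cols].median()
        mad = (group[test_cols] - med).abs().median() + 1e-12
        return (group[test_cols] - med) / (1.4826 * mad)
    return df.groupby(group_col, group_keys=False).apply(score)

# Hypothetical usage: flag parts whose score exceeds a chosen threshold.
scores = robust_outlier_scores(measurements, test_cols=["idd", "vth"])
flagged = (scores.abs() > 6.0).any(axis=1)   # the threshold itself remains a judgment call
```

The final threshold is exactly the kind of subjective decision the dissertation's Consistency concept is meant to evaluate in an objective, data-driven way.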