34 research outputs found

    Identification of initial fault time for bearing based on monitoring indicator, WEMD and Infogram

    Rolling element bearings are core components of rotating machines, and the condition of the whole machine is largely determined by the condition of its bearings. The Initial Fault Time (IFT) marks the onset of a bearing's unhealthy state. To identify the IFT accurately and rapidly despite weak fault signatures and heavy background noise, an identification method is proposed that combines a monitoring indicator with envelope analysis based on Weighted Empirical Mode Decomposition (WEMD) and the Infogram. The monitoring indicator is constructed from the variation coefficient of the sum of multiple standardized statistical features of the vibration signal. An approximate IFT is obtained as the minimum of the indicator just before the early stage of its continuous increase. A more accurate IFT is then located by envelope analysis with WEMD and the Infogram, using an interval-halving backtracking strategy. The proposed method is verified on the test dataset provided by the Intelligent Maintenance System (IMS), and the results show that it is efficient, rapid and simple for identifying the IFT.
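    The abstract describes the indicator only at a high level. The Python sketch below shows one plausible reading of it: z-scored time-domain features summed per vibration segment, a sliding-window coefficient of variation, and a simple "sustained rise" rule for the approximate IFT. The feature set, window lengths and rise criterion are illustrative assumptions, not the authors' exact choices, and the WEMD/Infogram refinement step is omitted.

```python
# Illustrative sketch only: feature set, window sizes and the rise rule
# are assumptions, not the published method's exact construction.
import numpy as np
from scipy.stats import kurtosis

def segment_features(segment):
    """Basic time-domain statistics for one vibration segment."""
    rms = np.sqrt(np.mean(segment ** 2))
    peak = np.max(np.abs(segment))
    return np.array([
        rms,
        peak,
        kurtosis(segment),   # sensitive to impulsive fault signatures
        peak / rms,          # crest factor
        np.std(segment),
    ])

def monitoring_indicator(segments, window=20):
    """Coefficient of variation of the summed, standardized features,
    evaluated over a sliding window of recent segments."""
    feats = np.array([segment_features(s) for s in segments])
    z = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize each feature
    summed = z.sum(axis=1)                                # one value per segment
    indicator = np.full(len(summed), np.nan)
    for i in range(window, len(summed)):
        w = summed[i - window:i]
        indicator[i] = np.std(w) / np.abs(np.mean(w))     # variation coefficient
    return indicator

def approximate_ift(indicator, rise_len=5):
    """Approximate IFT: the minimum just before a sustained increase
    (here, rise_len consecutive rising steps) of the indicator."""
    for i in range(1, len(indicator) - rise_len):
        window = indicator[i:i + rise_len + 1]
        if np.all(np.diff(window) > 0):
            return i   # index where the continuous rise begins
    return None
```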

    Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving

    Deep Neural Networks (DNNs) are rapidly being adopted by the automotive industry, due to their impressive performance in tasks that are essential for autonomous driving. Object segmentation is one such task: its aim is to precisely locate boundaries of objects and classify the identified objects, helping autonomous cars to recognise the road environment and the traffic situation. Not only is this task safety critical, but developing a DNN-based object segmentation module presents a set of challenges that are significantly different from traditional development of safety-critical software. The development process in use consists of multiple iterations of data collection, labelling, training, and evaluation. Among these stages, training and evaluation are computation intensive, while data collection and labelling are manual-labour intensive. This paper shows how development of DNN-based object segmentation can be improved by exploiting the correlation between Surprise Adequacy (SA) and model performance. The correlation allows us to predict model performance for inputs without manually labelling them. This, in turn, enables understanding of model performance, more guided data collection, and informed decisions about further training. In our industrial case study the technique allows cost savings of up to 50% with negligible evaluation inaccuracy. Furthermore, engineers can trade off cost savings versus the tolerable level of inaccuracy depending on different development phases and scenarios. Comment: to be published in Proceedings of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.
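    The abstract does not spell out which SA variant or regression the case study uses. The Python sketch below illustrates the general mechanism with a simplified distance-based Surprise Adequacy (DSA) and a linear fit between SA scores and a per-input performance metric on a small labelled subset; shapes, the layer choice and the regression model are assumptions, and the paper's segmentation pipeline is more involved.

```python
# Simplified sketch: DSA over activation traces plus an SA-vs-performance
# regression used to estimate performance on unlabelled inputs.
import numpy as np
from sklearn.linear_model import LinearRegression

def dsa(train_acts, train_labels, acts, preds):
    """Distance-based Surprise Adequacy for each row of `acts`.

    train_acts:   (N, D) activation traces of the training set
    train_labels: (N,)   training labels
    acts:         (M, D) activation traces of the inputs under analysis
    preds:        (M,)   classes predicted for those inputs
    """
    scores = np.empty(len(acts))
    for i, (a, c) in enumerate(zip(acts, preds)):
        same = train_acts[train_labels == c]
        other = train_acts[train_labels != c]
        d_same = np.linalg.norm(same - a, axis=1)
        x_a = same[np.argmin(d_same)]                  # nearest same-class trace
        dist_a = d_same.min()
        dist_b = np.linalg.norm(other - x_a, axis=1).min()
        scores[i] = dist_a / dist_b                    # higher = more surprising
    return scores

def fit_sa_performance_model(sa_scores, per_input_metric):
    """Fit the SA -> performance correlation on a small labelled subset."""
    model = LinearRegression()
    model.fit(sa_scores.reshape(-1, 1), per_input_metric)
    return model

def predict_performance(model, sa_scores_unlabelled):
    """Estimate per-input performance for inputs that were never labelled."""
    return model.predict(sa_scores_unlabelled.reshape(-1, 1))
```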

    Robust engineering in maintainability of building

    The process of designing a building depends on many requirements. Maintainability is an important design aspect that affects the cost of managing and maintaining a building over its expected life cycle. As a result, there is now a need for a multidimensional diagnostic system that integrates maintainability while accounting for the user environment and other design elements. In Malaysia, however, building maintainability receives little attention and is often neglected, as more focus is given to constructability and compliance with current regulations and laws. To meet this challenge, this study establishes a model that integrates maintainability as an important principle during the design process, using Robust Engineering (RE) principles to capture the interaction between design elements and the user environment. The study seeks 1) to evaluate current limitations of the design process in capturing maintenance requirements; 2) to evaluate the potential of using Robust Engineering principles to capture maintainability considerations in building design; 3) to examine the structural relationship between maintainability considerations and highly maintainable buildings for a robust design outcome; and 4) to develop a Robust Maintainability Integrated Design (R-MInD) guideline that evaluates the incorporation of maintainability at the design stage. Concentrating on a single-function building type (educational institution buildings), the study used the Partial Least Squares Structural Equation Modelling technique to identify the factors that improve the incorporation of maintainability in the design process. Eleven (n=11) experts, including designers, project managers, company directors and facility managers from the government and private sectors, were interviewed, and one hundred and eleven (n=111) respondents were surveyed to evaluate current practice and propose improvements to building design practice. The study establishes a positive correlation between conformance and compliance with regulations and standards, integration of systems, space planning, and materials and equipment selection for robust, maintainable building design. It also finds that RE principles are suitable for incorporation during the design process to improve a building's maintainability. The study further proposes a new process model and guidelines that building designers can adopt to improve the maintainability of a building. In conclusion, the findings reveal that a realistic maintainability evaluation during the design process depends on a complex system of subsystems comprising many materials and pieces of equipment.

    A Hierarchical, Fuzzy Inference Approach to Data Filtration and Feature Prioritization in the Connected Manufacturing Enterprise

    The current big data landscape is one in which the technology and capability to capture and store data have preceded and outpaced the corresponding capability to analyze and interpret it. This has naturally led to the development of elegant and powerful algorithms for data mining, machine learning, and artificial intelligence to harness the potential of the big data environment. A competing reality, however, is that limitations exist in how, and to what extent, human beings can process complex information. The convergence of these realities is a tension between the technical sophistication or elegance of a solution and its transparency or interpretability for the human data scientist or decision maker. This dissertation, contextualized in the connected manufacturing enterprise, presents an original Fuzzy Approach to Feature Reduction and Prioritization (FAFRAP) designed to assist the data scientist in filtering and prioritizing data for inclusion in supervised machine learning models. A set of sequential filters reduces the initial set of independent variables, and a fuzzy inference system outputs a crisp numeric value for each remaining feature, which is used to rank and prioritize features for inclusion in model training. Additionally, the fuzzy inference system outputs a descriptive label to assist in interpreting each feature's usefulness with respect to the problem of interest. Model testing is performed using three publicly available datasets from an online machine learning data repository, and the approach is later applied to a case study in electronic assembly manufacture. Consistency of model results is experimentally verified using Fisher's Exact Test, and results of filtered models are compared to results obtained with the unfiltered feature sets using a proposed novel performance-size ratio (PSR) metric.
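    As a rough illustration of the two-stage idea (sequential filters, then a fuzzy inference system that emits a crisp priority score and a descriptive label per feature), here is a minimal Python sketch. The filters, membership functions, rules and thresholds are placeholders for illustration and do not reproduce the dissertation's actual FAFRAP design.

```python
# Illustrative two-stage sketch: crude sequential filters, then a small
# Mamdani-style fuzzy inference producing a crisp priority plus a label.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def sequential_filters(X, var_thresh=1e-8, corr_thresh=0.95):
    """Drop near-constant features and one of each highly correlated pair."""
    candidates = [j for j in range(X.shape[1]) if X[:, j].var() > var_thresh]
    kept = []
    for j in candidates:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_thresh for k in kept):
            kept.append(j)
    return kept    # indices of surviving features

def fuzzy_priority(relevance, redundancy):
    """Crisp priority in [0, 100] plus a descriptive label, via min-max
    inference and centroid defuzzification. Inputs assumed scaled to [0, 1]."""
    rel_lo, rel_hi = tri(relevance, -0.5, 0.0, 0.5), tri(relevance, 0.5, 1.0, 1.5)
    red_lo, red_hi = tri(redundancy, -0.5, 0.0, 0.5), tri(redundancy, 0.5, 1.0, 1.5)
    # Rule strengths (Mamdani min for AND).
    w_high = min(rel_hi, red_lo)   # relevant and non-redundant -> high priority
    w_med = min(rel_hi, red_hi)    # relevant but redundant     -> medium priority
    w_low = rel_lo                 # irrelevant                 -> low priority
    grid = np.linspace(0, 100, 201)
    agg = np.maximum.reduce([
        np.minimum(w_low, tri(grid, -50, 0, 50)),
        np.minimum(w_med, tri(grid, 25, 50, 75)),
        np.minimum(w_high, tri(grid, 50, 100, 150)),
    ])
    score = float(np.sum(grid * agg) / (np.sum(agg) + 1e-12))
    label = "high" if score > 66 else "medium" if score > 33 else "low"
    return score, label
```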

    Combining adaptive and designed statistical experimentation: process improvement, data classification, experimental optimization and model building

    Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references. By Chad Ryan Foster, Sc.D.
    Research interest in the use of adaptive experimentation has returned recently. This historic technique adapts and learns from each experimental run, but requires quick runs and large effects. The renewed interest stems from the desire to improve experimental response and is supported by fast, deterministic computer experiments and better post-experiment data analysis. The unifying concept of this thesis is to present and evaluate new ways of combining adaptive experimentation with the traditional statistical experiment. The first application uses an adaptive experiment as a preliminary step to a more traditional experimental design. This provides experimental redundancy as well as greater model robustness; the number of extra runs is minimal because some runs are shared, yet both methods provide estimates of the best setting. The second use of adaptive experimentation is in evolutionary operation: during regular system operation, small, nearly unnoticeable variable changes can be used to improve production dynamically. If these small changes follow an adaptive procedure, there is a high likelihood of improvement and of integration into the larger process development. Outside the experimentation framework, the adaptive procedure is shown to combine with other procedures and yield benefit; two examples used here are an unconstrained numerical optimization procedure and classification parameter selection. The final area of new application is to create models that combine an adaptive experiment with a traditional statistical experiment. Two distinct areas are examined: first, the use of the adaptive experiment to determine the covariance structure, and second, the direct incorporation of both data sets in an augmented model. Both of these applications are Bayesian, with a heavy reliance on numerical computation and simulation to determine the combined model. The two experiments investigated could be performed on the same physical or analytical model, but the approach is also extended to situations with models of different fidelity. The potential for including non-analytical, even human, models is also discussed. The evaluative portion of this thesis begins with an analytic foundation that outlines the usefulness as well as the limitations of the procedure. This is followed by a demonstration using a simulated model, and finally specific examples are drawn from the literature and reworked using the method. The utility of the final result is to provide a foundation for integrating adaptive experimentation with traditional designed experiments. Giving industrial practitioners a solid background and a demonstrated foundation should help to codify this integration. The final procedures represent a minimal departure from current practice but offer significant modeling and analysis improvement.
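    As a toy illustration of pairing an adaptive experiment with a traditional designed experiment, the Python sketch below runs a simple adaptive one-factor-at-a-time pass that keeps beneficial changes and then a small two-level full factorial analysed with a main-effects model. The response function, factor coding and run budget are invented for illustration and do not reproduce the thesis's experiments or its Bayesian combined models.

```python
# Toy sketch: adaptive OFAT pass followed by a 2^k factorial with a
# least-squares main-effects fit. All settings are illustrative.
import itertools
import numpy as np

def adaptive_ofat(f, start):
    """Toggle each +/-1 coded factor once, keeping a change only if it
    increases the response; returns the best setting found and its value."""
    x = np.array(start, dtype=float)
    best = f(x)
    for j in range(len(x)):
        trial = x.copy()
        trial[j] = -trial[j]          # flip factor j to its other level
        y = f(trial)
        if y > best:                  # keep the change only if it helps
            x, best = trial, y
    return x, best

def two_level_factorial(f, k):
    """Run all 2^k corner points (coded -1/+1) and fit a main-effects model."""
    X = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    y = np.array([f(x) for x in X])
    A = np.column_stack([np.ones(len(X)), X])      # intercept + main effects
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                    # [intercept, effect_1, ..., effect_k]

# Example on an invented noisy response surface.
rng = np.random.default_rng(0)
def response(x):
    return 3.0 + 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[0] * x[1] + rng.normal(0, 0.1)

best_x, best_y = adaptive_ofat(response, start=[-1.0, -1.0])
effects = two_level_factorial(response, k=2)
print("adaptive OFAT setting:", best_x, "factorial main effects:", effects[1:])
```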