
    Production trend identification and forecast for shop-floor business intelligence

    The paper introduces a methodology to define production trend classes and presents results that support trend prognosis in a given manufacturing situation. The prognosis is valid for one selected production measure (e.g. a quality dimension of one product, such as diameter, angle, surface roughness, pressure, or basis position), but the applied model also takes into account the past values of many other related production data, typically collected on the shop floor. Consequently, it is useful in batch or (customized) mass production environments. The proposed solution can be applied to control production within the tolerance limits, proactively preventing the production process from drifting outside the given upper and lower tolerance bounds. The solution was developed and validated on real data collected on the shop floor; the paper also summarizes the validated application results of the proposed methodology. © 2016, IMEKO-International Measurement Federation Secretariat. All rights reserved

    A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. First, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Second, fuzzy c-means is used to cluster the decomposed components, and a deep belief network (DBN) is then used to predict each cluster. Finally, the predicted IMFs and residual are reconstructed to give the final prediction. Six prediction models are compared: the DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN models, and the model proposed in this paper. The same sunspot time series is predicted with all six models. The experimental results show that the proposed model achieves better prediction accuracy and smaller errors
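    The decompose, cluster, and predict pipeline described above can be sketched with simple stand-ins. Everything here is an assumption for illustration: a moving-average multi-scale split stands in for ESMD, a zero-crossing-rate threshold stands in for fuzzy c-means clustering, and an AR(1) forecaster stands in for the DBN.

```python
import numpy as np

def decompose(signal, widths=(3, 11, 31)):
    """Placeholder multi-scale decomposition (stand-in for ESMD): successive
    moving-average smoothing yields detail components plus a residual."""
    components, residual = [], signal.astype(float)
    for w in widths:
        kernel = np.ones(w) / w
        smooth = np.convolve(residual, kernel, mode="same")
        components.append(residual - smooth)   # detail at this scale
        residual = smooth
    return components, residual

def zero_crossing_rate(x):
    return np.mean(np.diff(np.sign(x)) != 0)

def ar1_forecast(x, steps):
    """One-parameter autoregressive forecast as a stand-in for the DBN."""
    phi = np.dot(x[:-1], x[1:]) / (np.dot(x[:-1], x[:-1]) + 1e-12)
    preds, last = [], x[-1]
    for _ in range(steps):
        last = phi * last
        preds.append(last)
    return np.array(preds)

# toy nonlinear signal
t = np.linspace(0, 8 * np.pi, 400)
signal = (np.sin(t) + 0.5 * np.sin(5 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

components, residual = decompose(signal)
# group components by oscillation speed (the role fuzzy c-means plays above)
fast = [c for c in components if zero_crossing_rate(c) > 0.2]
slow = [c for c in components if zero_crossing_rate(c) <= 0.2] + [residual]

# forecast each group separately, then reconstruct by summation
steps = 10
forecast = sum(ar1_forecast(np.sum(group, axis=0), steps)
               for group in (fast, slow) if group)
print(forecast.shape)  # (10,)
```

The point of the structure is that each cluster of components is smoother and more regular than the raw signal, so the per-cluster predictor has an easier task; the final forecast is the sum of the cluster forecasts.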

    An improved extreme-point symmetric mode decomposition method and its application to rolling bearing fault diagnosis

    HHT (Hilbert-Huang Transform), which consists of EMD (Empirical Mode Decomposition) and HT (Hilbert Transform), is now the most widely used time-frequency analysis technique for rolling element bearing fault diagnosis; however, the accuracy with which it extracts fault characteristic information is usually limited by the problem of mode mixing in EMD. ESMD (extreme-point symmetric mode decomposition) is a novel development of HHT that promises to alleviate this limitation and has been applied successfully in several fields, but its application to rolling bearing fault diagnosis has rarely been reported in the literature. In this paper, ESMD is applied to extract bearing fault characteristics for rolling bearing fault detection, and the results show that ESMD achieves a better diagnostic effect than EMD with HT. Furthermore, to improve the accuracy of fault characteristic extraction from rolling bearing vibration signals, a sifting scheme is proposed for selecting the sensitive, fault-related IMFs (intrinsic mode functions) generated by ESMD: a weighted kurtosis index is introduced for automatic selection and reconstruction of the fault-related IMFs, and the Hilbert transform of the original and reconstructed bearing fault vibration signals is then used to diagnose the incipient rolling bearing fault. ESMD combined with the proposed sifting scheme is applied to simulated and experimental signals, and the results confirm that the sifting-scheme-based ESMD is superior to the conventional methods in rolling bearing fault diagnosis
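    The weighted kurtosis sifting idea can be sketched in a few lines. The common form of this index, kurtosis of an IMF weighted by its correlation with the raw signal, is assumed here; the toy IMFs (a periodic impulse train for the fault, a sinusoid for shaft rotation, broadband noise) are illustrative, not from the paper.

```python
import numpy as np

def kurt(x):
    """Kurtosis (non-excess): a measure of impulsiveness."""
    x = x - x.mean()
    return np.mean(x**4) / (np.mean(x**2)**2 + 1e-12)

def weighted_kurtosis_select(signal, imfs, top_k=2):
    """Score each IMF by kurtosis weighted by its correlation with the raw
    signal, then reconstruct from the top-scoring (fault-related) IMFs."""
    scores = []
    for imf in imfs:
        k = kurt(imf)                              # impulsiveness
        r = abs(np.corrcoef(signal, imf)[0, 1])    # relevance to the raw signal
        scores.append(k * r)
    keep = np.argsort(scores)[::-1][:top_k]
    return np.sum([imfs[i] for i in keep], axis=0), scores

rng = np.random.default_rng(1)
n = 1000
impulses = np.zeros(n)
impulses[::100] = 5.0                                  # periodic fault impacts
imfs = [impulses + 0.1 * rng.normal(size=n),           # fault-related component
        np.sin(np.linspace(0, 20 * np.pi, n)),         # shaft rotation component
        0.3 * rng.normal(size=n)]                      # broadband noise
signal = np.sum(imfs, axis=0)

recon, scores = weighted_kurtosis_select(signal, imfs, top_k=1)
print(int(np.argmax(scores)))  # 0: the impulsive IMF scores highest
```

Envelope analysis via the Hilbert transform would then be applied to `recon` to read off the fault characteristic frequency.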

    Novel chemometric approaches towards handling biospectroscopy datasets

    Chemometrics allows one to identify chemical patterns using spectrochemical information from biological materials, such as tissues and biofluids. This is fundamentally important for overcoming limitations of traditional bioanalytical analysis, such as the need for laborious and extremely invasive procedures, high consumption of reagents, and expensive instrumentation. In biospectroscopy, a beam of light, usually in the infrared region, is projected onto the surface of a biological sample and, as a result, a chemical signature is generated containing the vibrational information of most of the molecules in that material. This can be performed in a single-spectrum or hyperspectral imaging fashion, where a spectrum is generated for each position (pixel) on the surface of a biological material segment, allowing extraction of both spatial and spectrochemical information simultaneously. As an advantage, these methodologies are non-destructive, relatively low-cost, and require minimal sample preparation. However, in biospectroscopy, large datasets containing complex spectrochemical signatures are generated. These datasets are processed by computational tools in order to resolve their signal complexity and provide useful information for decision making, such as the identification of clustering patterns distinguishing disease samples from healthy controls; differentiation of tumour grades; prediction of the categories of unknown samples; or identification of key molecular fragments (biomarkers) associated with the appearance of certain diseases, such as cancer. In this PhD thesis, new computational tools are developed to improve the processing of bio-spectrochemical data, providing better clinical outcomes for both spectral and hyperspectral datasets
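    A minimal sketch of the kind of chemometric pattern-finding described above: principal component analysis separating two classes of synthetic spectra. The wavenumber range, the Gaussian absorbance bands, and the class labels are all invented for illustration; they are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(900, 1800, 200)   # a typical bio-fingerprint region

def spectrum(peak):
    """Synthetic absorbance spectrum with one class-specific band plus noise."""
    return (np.exp(-((wavenumbers - peak) / 30) ** 2)
            + 0.05 * rng.normal(size=wavenumbers.size))

X = np.vstack([spectrum(1200) for _ in range(20)] +   # "control" class
              [spectrum(1400) for _ in range(20)])    # "disease" class

# mean-center and project onto the first two principal components (via SVD)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# the first PC separates the two simulated classes
sep = scores[:20, 0].mean() - scores[20:, 0].mean()
print(abs(sep) > 1.0)  # True
```

In practice the loading vector `Vt[0]` is also inspected: the wavenumbers it weights most heavily point at the candidate biomarker bands driving the separation.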

    A Design Process Centric Application of State Space Modeling as a Function of Communications and Cognitive Skills Assessments.

    Humans have a reliable basic probabilistic intuition. We use our probabilistic intuition in many day-to-day activities, such as driving. In fact, any interaction that occurs in the presence of other independent actors requires some probabilistic assessment. While we are good at sorting between rare and common events, determining whether these events are statistically significant is always subject to scrutiny. Quite often the bounds of statistical significance are at odds with the 'common sense' expectation. While our probabilistic intuition is good for first-moment effects such as driving a car, throwing a football, and understanding simplistic mathematical models, it fails when we need to evaluate secondary effects such as high-speed turns, playing golf, or understanding complex mathematical models. When our probabilistic intuition is challenged, misinterpretation of results and skewed perspectives of possible outcomes occur. The work presented in this dissertation provides a mathematical formulation that serves as a guide to when our probabilistic intuition will be challenged. This dissertation discusses the development of the Process Failure Estimation Technique (ProFET). A multitude of potential team parameters could have been selected; interpersonal communication effectiveness and cognitive skill assessments seemed the most obvious first steps, given the prolific discussion of communication and the general acceptance of cognitive testing as an indicator of performance potential. The team's skill set must be variable with respect to time in order to accomplish the required objectives of each phase of the design process. ProFET develops a metric for the design process that is sensitive to team composition and structure. This metric is applied to a domain that is traditionally devoid of objective scoring.
    With the use of ProFET, more informed decisions on team structure and composition can be made at critical junctions of the design process. Specifically, ProFET looks at how variability propagates through the design activities, as opposed to attempting to quantify the actual values of design activities, which is the focus of the majority of other design research.
    PhD. Naval Architecture and Marine Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116679/1/jdstrick_1.pd
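    The core idea of propagating variability (rather than point values) through sequential design activities can be sketched with a linear state-space covariance update. The matrices A and Q below are hypothetical placeholders, not taken from the dissertation.

```python
import numpy as np

# Illustrative variance propagation through a linear state-space process, in
# the spirit of tracking how variability moves through design activities.
A = np.array([[1.0, 0.5],
              [0.0, 0.9]])        # how one phase's state feeds the next (assumed)
Q = 0.01 * np.eye(2)              # variability injected at each phase (assumed)

P = 0.1 * np.eye(2)               # initial uncertainty about the team state
for _ in range(10):               # propagate across ten design phases
    P = A @ P @ A.T + Q           # covariance update: variance, not values

print(np.all(np.linalg.eigvals(P) > 0))  # True: covariance stays positive definite
```

Watching how the entries of P grow or shrink across phases is exactly the kind of question point-value models of the design process cannot answer.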

    Advanced Fault Diagnosis and Health Monitoring Techniques for Complex Engineering Systems

    Over the last few decades, the field of fault diagnostics and structural health management has been experiencing rapid development. The reliability, availability, and safety of engineering systems can be significantly improved by implementing multifaceted strategies of in situ diagnostics and prognostics. With the development of intelligent algorithms, smart sensors, and advanced data collection and modeling techniques, this challenging research area has been receiving ever-increasing attention in both fundamental research and engineering applications. This has been strongly supported by extensive applications ranging from the aerospace, automotive, transport, manufacturing, and processing industries to the defense and infrastructure industries

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added; the purpose of these comments is to show where, how, and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication), and the language of the document. After a description of the scope of the review, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification, and comments.
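    For readers unfamiliar with the technique the report surveys: a decision table maps combinations of condition outcomes to actions. The order-handling conditions and actions below are a hypothetical example, not drawn from the report.

```python
# A decision table: every combination of condition outcomes maps to an action,
# making the rule set exhaustive and easy to audit for gaps or contradictions.
decision_table = {
    # (credit_ok, in_stock): action
    (True,  True):  "ship",
    (True,  False): "backorder",
    (False, True):  "reject",
    (False, False): "reject",
}

def decide(credit_ok, in_stock):
    return decision_table[(credit_ok, in_stock)]

print(decide(True, False))  # backorder
```

Representing the rules as data rather than nested if-statements is what makes completeness and consistency checks on decision tables mechanical.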

    Machine Learning

    Machine learning can be defined in various ways, but broadly it is a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behavior. More specifically, machine learning addresses the ability of systems to improve automatically through experience.