
    Warranty Data Analysis: A Review

    Warranty claims and supplementary data contain useful information about product quality and reliability. Analysing such data can therefore benefit manufacturers: identifying early warnings of abnormalities in their products, providing useful information about failure modes to aid design modification, estimating product reliability to support decisions on warranty policy, and forecasting the future warranty claims needed for preparing fiscal plans. Over the last two decades, considerable research has been conducted in warranty data analysis (WDA) from several different perspectives. This article summarises and reviews the research and developments in WDA, with emphasis on models, methods and applications. It concludes with a brief discussion of current practices and possible future trends in WDA.

    Reasoning about the Reliability of Diverse Two-Channel Systems in which One Channel is "Possibly Perfect"

    This paper considers the problem of reasoning about the reliability of fault-tolerant systems with two "channels" (i.e., components), of which one, A, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of "perfection". We begin with the case where either channel can bring the system to a safe state. We show that, conditional upon knowing pA (the probability that A fails on a randomly selected demand) and pB (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply pA × pB. That is, there is conditional independence between the events "A fails" and "B is imperfect". The second step of the reasoning involves epistemic uncertainty about (pA, pB), and we show that, under quite plausible assumptions, a conservative bound on the system probability of failure on demand (pfd) can be constructed from point estimates for just three parameters. We discuss the feasibility of establishing credible estimates for these parameters. We extend our analysis from faults of omission to those of commission, and then combine these to yield an analysis for monitored architectures of a kind proposed for aircraft.
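    As a toy illustration of the aleatory step above (not code from the paper), the following sketch computes the conservative bound pA × pB from hypothetical point estimates:

```python
# Minimal sketch of the conservative bound described above: if pA is the
# probability that channel A fails on a random demand and pB is the
# probability that channel B is imperfect, then pA * pB conservatively
# bounds the system probability of failure on demand (pfd).
# The numbers below are hypothetical illustrations, not from the paper.

p_A = 1e-4   # assumed claim: A fails on a random demand with prob 1e-4
p_B = 1e-3   # assumed claim: B is imperfect with probability 1e-3

system_pfd_bound = p_A * p_B
print(f"Conservative system pfd bound: {system_pfd_bound:.1e}")  # 1.0e-07
```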

    Finding the direction of disturbance propagation in a chemical process using transfer entropy

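    Since no abstract is available for this entry, a brief note on the underlying technique: transfer entropy TE(X → Y) measures how much the past of X reduces uncertainty about the next value of Y beyond what Y's own past provides, and comparing TE(X → Y) with TE(Y → X) indicates the direction of disturbance propagation. The sketch below is a generic binned estimator of this standard definition, not the paper's implementation:

```python
# Generic binned estimator of transfer entropy (standard definition),
# used here only to illustrate direction-of-propagation analysis.
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Estimate TE(x -> y) in bits using histogram probabilities."""
    # Discretize both series into equal-width bins (indices 0..bins-1).
    xd = np.digitize(x, np.linspace(x.min(), x.max(), bins + 1)[1:-1])
    yd = np.digitize(y, np.linspace(y.min(), y.max(), bins + 1)[1:-1])
    # Joint distribution over (y_{t+1}, y_t, x_t).
    joint = np.zeros((bins, bins, bins))
    np.add.at(joint, (yd[1:], yd[:-1], xd[:-1]), 1)
    joint /= joint.sum()
    p_yy = joint.sum(axis=2)       # p(y_{t+1}, y_t)
    p_yx = joint.sum(axis=0)       # p(y_t, x_t)
    p_y = joint.sum(axis=(0, 2))   # p(y_t)
    # TE = sum p(y', y, x) * log2[ p(y'|y, x) / p(y'|y) ]
    mask = joint > 0
    num = (joint * p_y[None, :, None])[mask]
    den = (p_yy[:, :, None] * p_yx[None, :, :])[mask]
    return float(np.sum(joint[mask] * np.log2(num / den)))

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = np.concatenate(([0.0], x[:-1])) + 0.3 * rng.normal(size=n)  # x drives y

print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")
print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")  # should be near zero
```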

    Standardization of multivariate Gaussian mixture models and background adjustment of PET images in brain oncology

    In brain oncology, it is routine to evaluate the progress or remission of the disease based on the differences between a pre-treatment and a post-treatment Positron Emission Tomography (PET) scan. Background adjustment is necessary to reduce confounding by tissue-dependent changes not related to the disease. When modeling the voxel intensities for the two scans as a bivariate Gaussian mixture, background adjustment translates into standardizing the mixture at each voxel, while tumor lesions present themselves as outliers to be detected. In this paper, we address the question of how to standardize the mixture to a standard multivariate normal distribution, so that the outliers (i.e., tumor lesions) can be detected using a statistical test. We show theoretically and numerically that the tail distribution of the standardized scores is favorably close to standard normal in a wide range of scenarios while being conservative at the tails, validating voxelwise hypothesis testing based on standardized scores. To address standardization in spatially heterogeneous image data, we propose a spatial and robust multivariate expectation-maximization (EM) algorithm, where prior class membership probabilities are provided by transformation of spatial probability template maps and the estimates of the class means and covariances are robust to outliers. Simulations in both univariate and bivariate cases suggest that standardized scores with soft assignment have tail probabilities that are either very close to or more conservative than standard normal. The proposed methods are applied to a real data set from a PET phantom experiment, yet they are generic and can be used in other contexts.
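    A much-simplified sketch of the voxelwise standardization idea (not the paper's spatial robust EM algorithm): collapse a fitted bivariate Gaussian mixture into a per-voxel mean and covariance using soft class memberships, whiten the intensities, and test the squared norm of the standardized score against a chi-squared null. All data and mixture parameters below are synthetic placeholders:

```python
# Sketch: voxelwise standardization of a bivariate Gaussian mixture and
# outlier detection by a chi-squared test. All inputs are synthetic.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_voxels, n_classes = 10_000, 3

# Hypothetical fitted mixture parameters (2-D: pre- and post-treatment).
means = rng.normal(size=(n_classes, 2))
covs = np.stack([np.eye(2) * s for s in (0.5, 1.0, 2.0)])
memberships = rng.dirichlet(np.ones(n_classes), size=n_voxels)  # soft assignment
x = rng.normal(size=(n_voxels, 2))  # stand-in for voxel intensity pairs

# Per-voxel mean and covariance of the membership-weighted mixture.
mu = memberships @ means                                    # (n_voxels, 2)
second = np.einsum("vk,kij->vij", memberships,
                   covs + np.einsum("ki,kj->kij", means, means))
sigma = second - np.einsum("vi,vj->vij", mu, mu)            # (n_voxels, 2, 2)

# Standardize: z = L^{-1} (x - mu), with L the Cholesky factor of sigma.
L = np.linalg.cholesky(sigma)
z = np.linalg.solve(L, (x - mu)[..., None])[..., 0]

# Voxelwise test: under the null, ||z||^2 ~ chi-squared with 2 dof.
outliers = chi2.sf((z ** 2).sum(axis=1), df=2) < 1e-4
print(f"{outliers.sum()} voxels flagged as potential lesions")
```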

    FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks

    Wireless sensor networks (WSNs) consist of large numbers of sensor nodes densely and randomly distributed over a geographical region for monitoring, identifying, and analyzing physical events. A crucial challenge in WSNs is the nodes' heavy dependence on limited, non-rechargeable battery power for exchanging information wirelessly, which makes managing these nodes and monitoring them for abnormal changes very difficult. Such anomalies arise from faults, including hardware and software failures, as well as attacks by intruders, all of which compromise the integrity of the data collected by the network. Hence, effective measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute fault detection accuracy at different node densities under two scenarios in regions of interest, comparing MB-FLEACH, the one-class support vector machine (SVM), the fuzzy one-class SVM, and the combined FCS-MBFLEACH method. It should be noted that, in studies to date, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
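    As an illustration of the one-class SVM component mentioned above (the data, features, and parameters are assumptions for demonstration, not the paper's setup), one can train on readings from healthy nodes only and flag readings outside the learned region as faults:

```python
# Sketch: one-class SVM fault detection on synthetic sensor readings.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Healthy node readings, e.g. (temperature, voltage) pairs.
healthy = rng.normal(loc=[25.0, 3.0], scale=[0.5, 0.05], size=(500, 2))

# Test set: some healthy readings plus a few faulty ones.
faulty = rng.normal(loc=[40.0, 2.2], scale=[1.0, 0.1], size=(10, 2))
test = np.vstack([healthy[:50], faulty])

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(healthy)
pred = clf.predict(test)  # +1 = normal, -1 = fault

print(f"Flagged {np.sum(pred == -1)} of {len(test)} readings as faulty")
```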

    GIS-Based Landslide Susceptibility Mapping Using Machine Learning and Assessment of Alternative Forest Roads in Protection Forests

    Forestry activities should be carried out within the purview of sustainable forestry while reaping the benefits of the forest. Accordingly, the construction of forest roads through forests should be carefully planned, especially in protection forests. Forest areas in Turkey are generally widespread in mountainous, steeply sloping terrain that is susceptible to landslides; landslide susceptibility is one of the most important criteria for the selection of protection forests. As such, it is important to evaluate detailed and applicable alternatives regarding special areas and private forests. The aim of this study is to determine alternative routes for forest roads in protection forests through the use of geographic information systems (GIS), particularly in areas with high landslide susceptibility. To this end, a landslide susceptibility map (LSM) was created using logistic regression (LR) and random forest (RF) modeling methods, which are widely used in machine learning (ML). Ten conditioning factors were used (slope, elevation, lithology, distance to road, distance to fault, distance to river, curvature, stream power index, topographic position index, and topographic wetness index), and the models were evaluated by their receiver operating characteristic (ROC) curves and area under the curve (AUC) values. The AUC value was 90.6% with the RF approach and 80.3% with the LR approach. The generated LSMs were then used to determine alternative routes through cost path analysis. It is hoped that the landslide susceptibility assessment and the selection of alternative forest road routes determined through the approaches and techniques in this study will benefit forest road planning as well as planners and decision makers.
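    A minimal sketch of the model comparison described above: fit LR and RF classifiers on landslide conditioning factors and compare them by ROC AUC. The synthetic data and label construction are placeholders for illustration only:

```python
# Sketch: compare logistic regression and random forest by ROC AUC
# on synthetic landslide conditioning factors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
# Ten conditioning factors (slope, elevation, lithology, distances, etc.).
X = rng.normal(size=(n, 10))
# Synthetic landslide labels with a nonlinear dependence on the factors.
y = ((X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```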

    Wind turbine condition monitoring strategy through multiway PCA and multivariate inference

    This article presents a condition monitoring strategy for wind turbines using a statistical data-driven modeling approach based on supervisory control and data acquisition (SCADA) data. Initially, a baseline data-based model is obtained from the healthy wind turbine by means of multiway principal component analysis (MPCA). Then, when the wind turbine is monitored, new data is acquired and projected into the baseline MPCA model space. The acquired SCADA data are treated as a random process given the random nature of the turbulent wind. The objective is to decide whether the multivariate distribution obtained from the wind turbine under analysis (healthy or not) is related to the baseline one. To achieve this goal, a test for the equality of population means is performed. Finally, the results of the test determine either that the hypothesis is rejected (and the wind turbine is faulty) or that there is no evidence to suggest that the two means are different, so the wind turbine can be considered healthy. The methodology is evaluated on a wind turbine fault detection benchmark that uses a 5 MW high-fidelity wind turbine model and a set of eight realistic fault scenarios. It is noteworthy that, for the presented methodology, the percentage of correct decisions is kept at 100% over a wide range of significance levels, α ∈ [1%, 13%]; thus it is a promising tool for real-time wind turbine condition monitoring.
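    A simplified sketch of the monitoring strategy (ordinary PCA stands in for the paper's multiway PCA, and the SCADA data are synthetic placeholders): build a baseline from healthy data, project new data into the baseline space, and test for equality of the projected means with a two-sample Hotelling T-squared test:

```python
# Sketch: PCA baseline from healthy data plus a two-sample Hotelling
# T^2 test for a shift in the mean of the projected scores.
import numpy as np
from scipy.stats import f
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
baseline = rng.normal(size=(400, 20))        # healthy SCADA samples
new = rng.normal(loc=0.3, size=(200, 20))    # possibly faulty samples

pca = PCA(n_components=5).fit(baseline)
s1, s2 = pca.transform(baseline), pca.transform(new)

# Two-sample Hotelling T^2 on the projected scores.
n1, n2, p = len(s1), len(s2), s1.shape[1]
diff = s1.mean(axis=0) - s2.mean(axis=0)
pooled = ((n1 - 1) * np.cov(s1.T) + (n2 - 1) * np.cov(s2.T)) / (n1 + n2 - 2)
t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
f_stat = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
p_value = f.sf(f_stat, p, n1 + n2 - p - 1)

alpha = 0.05  # significance level (the paper explores a range of alphas)
print("faulty" if p_value < alpha else "no evidence of fault",
      f"(p = {p_value:.3g})")
```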

    Finite Bivariate and Multivariate Beta Mixture Models Learning and Applications

    Finite mixture models have proven to provide flexibility for data clustering, demonstrating high competence and potential to capture hidden structure in data. Modern technological progress, growing volumes and varieties of generated data, ever more powerful computers and other related factors are contributing to the production of large-scale data. This fact enhances the significance of finding reliable and adaptable models that can analyze bigger, more complex data to identify latent patterns, deliver faster and more accurate results, and make decisions with minimal human interaction. Adopting the most accurate distribution that appropriately represents the mixture components is critical. The most widely adopted generative model has been the Gaussian mixture; in numerous real-world applications, however, when the nature and structure of the data are non-Gaussian, this modelling fails. Another crucial issue when using mixtures is the determination of model complexity, i.e., the number of mixture components. Minimum message length (MML) is one of the main techniques in frequentist frameworks to tackle this challenging issue. In this work, we have designed and implemented a finite mixture model, using bivariate and multivariate Beta distributions, for cluster analysis and demonstrated its flexibility in describing the intrinsic characteristics of the observed data. In addition, we have applied our estimation and model selection algorithms to synthetic and real datasets. Most importantly, we considered interesting applications such as image segmentation, software module defect prediction, spam detection and occupancy estimation in smart buildings.
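    A toy sketch in the spirit of the mixtures described above, fitting a univariate Beta mixture with EM for brevity (the thesis treats bivariate and multivariate Beta mixtures and selects the number of components with MML, both omitted here). The M-step uses a weighted method-of-moments update rather than exact maximum likelihood:

```python
# Sketch: EM for a two-component univariate Beta mixture with a
# weighted method-of-moments M-step (a common simplification).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(4)
x = np.concatenate([beta.rvs(2, 8, size=300, random_state=rng),
                    beta.rvs(8, 2, size=300, random_state=rng)])

K = 2
w = np.full(K, 1.0 / K)                              # mixing weights
a = np.array([1.0, 5.0]); b = np.array([5.0, 1.0])   # initial Beta params

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = np.stack([w[k] * beta.pdf(x, a[k], b[k]) for k in range(K)])
    r = dens / dens.sum(axis=0)

    # M-step: weights, then Beta params via weighted method of moments.
    w = r.mean(axis=1)
    m = (r * x).sum(axis=1) / r.sum(axis=1)                        # means
    v = (r * (x - m[:, None]) ** 2).sum(axis=1) / r.sum(axis=1)    # variances
    common = m * (1 - m) / v - 1
    a, b = m * common, (1 - m) * common

print("weights:", np.round(w, 3), "a:", np.round(a, 2), "b:", np.round(b, 2))
```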