136 research outputs found

    Piecewise deterministic Markov process for condition-based imperfect maintenance models

    In this paper, a condition-based imperfect maintenance model built on a piecewise deterministic Markov process (PDMP) is constructed. The degradation of the system has two components: natural degradation and random shocks. The natural degradation is deterministic and can be nonlinear. The damage increment caused by a random shock follows a given distribution whose parameters depend on the current degradation state. Maintenance actions include corrective maintenance and imperfect maintenance; imperfect maintenance reduces the degradation level of the system by a random proportion. Maintenance is delayed, so the system continues to suffer natural degradation and random shocks while waiting for repair. At each inspection time, the decision-maker chooses among planning no maintenance, imperfect maintenance, and perfect maintenance so as to minimize the total discounted cost of the system. The impulse optimal control theory of PDMPs is used to determine the optimal maintenance strategy. A numerical study of a component coating maintenance problem is presented, and the relationship with the optimal threshold strategy is discussed. Sensitivity analyses of the influence of the discount factor, observation interval, and maintenance cost on the discounted cost and optimal actions are presented. (Comment: 34 pages, 28 figures.)
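    The degradation dynamics sketched in this abstract (deterministic drift, state-dependent random shocks, delayed imperfect maintenance) can be illustrated with a small simulation. The sketch below is not the paper's model: the drift function, the gamma shock-increment distribution, the shock rate, the maintenance threshold, and the residual-degradation fraction are all assumed placeholders, and the decision rule is a fixed threshold with a single imperfect repair rather than the impulse-control policy computed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical model ingredients (illustrative, not from the paper) ---
def natural_degradation(x, dt):
    """Deterministic, nonlinear degradation over a small time step dt (assumed drift)."""
    return x + 0.05 * (1.0 + x**2) * dt

def shock_increment(x):
    """Random-shock damage whose distribution depends on the current state x (assumed gamma)."""
    return rng.gamma(shape=2.0, scale=0.1 * (1.0 + x))

def imperfect_maintenance(x):
    """Reduce degradation by a random proportion (imperfect repair, assumed uniform fraction)."""
    return x * rng.uniform(0.2, 0.6)

def simulate(x0=0.0, horizon=10.0, dt=0.01, shock_rate=0.5, delay=1.0, threshold=1.0):
    """Simulate one degradation path with delayed imperfect maintenance."""
    x, t, t_request = x0, 0.0, None
    while t < horizon:
        x = natural_degradation(x, dt)
        if rng.random() < shock_rate * dt:      # Poisson shock arrivals, thinned per step
            x += shock_increment(x)
        if t_request is None and x >= threshold:
            t_request = t                       # maintenance requested, but delayed
        if t_request is not None and t - t_request >= delay:
            x = imperfect_maintenance(x)        # repair only after the delay elapses
            t_request = None
        t += dt
    return x

print(simulate())
```

    Such simulated paths are only sample trajectories; in the paper, the choice between no maintenance, imperfect maintenance, and perfect maintenance at each inspection is optimized via impulse control of the PDMP.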

    Unsupervised Domain Adaptation with Progressive Adaptation of Subspaces

    Unsupervised Domain Adaptation (UDA) aims to classify an unlabeled target domain by transferring knowledge from a labeled source domain under domain shift. Most existing UDA methods try to mitigate the adverse impact of the shift by reducing the domain discrepancy. However, such approaches easily suffer from a notorious mode collapse issue due to the lack of labels in the target domain. Naturally, one effective way to mitigate this issue is to reliably estimate pseudo labels for the target domain, which is itself hard. To overcome this, we propose a novel UDA method named the Progressive Adaptation of Subspaces approach (PAS), built on the intuition that reliable pseudo labels are best obtained gradually. Specifically, we progressively and steadily refine the shared subspaces that serve as the bridge for knowledge transfer by adaptively anchoring/selecting and leveraging those target samples with reliable pseudo labels. The refined subspaces can in turn provide more reliable pseudo labels for the target domain, so that mode collapse is strongly mitigated. Our thorough evaluation demonstrates that PAS is not only effective for common UDA but also outperforms the state of the art in the more challenging Partial Domain Adaptation (PDA) setting, where the source label set subsumes the target one.
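    A minimal sketch of the progressive anchoring idea follows, assuming a PCA subspace as the shared bridge, a k-nearest-neighbor classifier to produce pseudo labels, and a linearly growing anchoring schedule. The function name pas_sketch, the confidence rule, and the number of rounds are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def pas_sketch(Xs, ys, Xt, n_components=10, n_rounds=5):
    """Progressively refine a shared subspace using confident target pseudo labels."""
    X_anchor, y_anchor = Xs, ys                      # start from labeled source data only
    for r in range(n_rounds):
        # shared subspace fitted on the anchored samples plus the whole target domain
        pca = PCA(n_components=n_components).fit(np.vstack([X_anchor, Xt]))
        Za, Zt = pca.transform(X_anchor), pca.transform(Xt)
        clf = KNeighborsClassifier(n_neighbors=5).fit(Za, y_anchor)
        proba = clf.predict_proba(Zt)
        conf = proba.max(axis=1)
        pseudo = clf.classes_[proba.argmax(axis=1)]
        # anchor a growing fraction of the most confident target samples
        k = int(len(Xt) * (r + 1) / n_rounds)
        idx = np.argsort(-conf)[:k]
        X_anchor = np.vstack([Xs, Xt[idx]])
        y_anchor = np.concatenate([ys, pseudo[idx]])
    return pseudo                                    # final pseudo labels for the target domain
```

    The design point being illustrated is that only the most confident target samples influence the subspace early on, and the anchored set grows as pseudo labels become more reliable.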

    Learning Capacity: A Measure of the Effective Dimensionality of a Model

    We exploit a formal correspondence between thermodynamics and inference, where the number of samples can be thought of as the inverse temperature, to define a "learning capacity" which is a measure of the effective dimensionality of a model. We show that the learning capacity is a tiny fraction of the number of parameters for many deep networks trained on typical datasets, depends upon the number of samples used for training, and is numerically consistent with notions of capacity obtained from the PAC-Bayesian framework. The test error as a function of the learning capacity does not exhibit double descent. We show that the learning capacity of a model saturates at very small and very large sample sizes; this provides guidelines as to whether one should procure more data or search for new architectures to improve performance. We show how the learning capacity can be used to understand effective dimensionality even for non-parametric models such as random forests and k-nearest neighbor classifiers.
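    The thermodynamic analogy (sample size playing the role of inverse temperature) can be probed numerically with a crude finite-difference proxy, sketched below on synthetic data. This is not the paper's estimator of learning capacity; the logistic-regression model, the synthetic dataset, and the d(loss)/d(1/n) slope are assumptions made purely to illustrate measuring how the average loss responds to the "temperature" 1/n.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative only)
X = rng.normal(size=(4000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true + 0.5 * rng.normal(size=4000) > 0).astype(int)
X_train, y_train, X_test, y_test = X[:3000], y[:3000], X[3000:], y[3000:]

def avg_loss(n):
    """Average test log-loss of a model trained on the first n samples."""
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    return log_loss(y_test, model.predict_proba(X_test))

# Treat the sample size n as an inverse temperature and probe how the average
# loss ("energy") responds to the temperature T = 1/n; the slope dU/dT is a
# crude, heat-capacity-like proxy for effective dimensionality.
ns = np.array([100, 200, 400, 800, 1600, 3000])
U = np.array([avg_loss(int(n)) for n in ns])
capacity_proxy = np.gradient(U, 1.0 / ns)
for n, u, c in zip(ns, U, capacity_proxy):
    print(f"n={n:5d}  loss={u:.3f}  capacity proxy={c:.3f}")
```

    How such a quantity saturates at small and large n is the kind of behavior the abstract refers to; the paper's actual definition and estimator of learning capacity differ from this toy proxy.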