    Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary

    Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, the threshold state, and the time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research.
    Comment: Published at http://dx.doi.org/10.1214/088342306000000330 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
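    The first-hitting-time idea above can be illustrated with a short simulation. The sketch below is not from the paper: it assumes a Wiener process with negative drift starting at strength x0, failing when it first reaches zero, and checks the Monte Carlo mean against the known inverse Gaussian mean x0/|mu|.

```python
import math
import random

def first_hitting_time(x0, mu, sigma, dt=0.01, t_max=1000.0, rng=None):
    """Simulate X(t) = x0 + mu*t + sigma*W(t) on a time grid and return the
    first grid time at which it reaches the zero boundary (None if the run
    is right-censored at t_max)."""
    rng = rng or random.Random()
    x, t = x0, 0.0
    step = math.sqrt(dt)
    while t < t_max:
        x += mu * dt + sigma * step * rng.gauss(0.0, 1.0)
        t += dt
        if x <= 0.0:
            return t
    return None  # boundary not reached: a censored survival observation

rng = random.Random(42)
samples = [first_hitting_time(10.0, -1.0, 1.0, rng=rng) for _ in range(2000)]
observed = [t for t in samples if t is not None]
mean_fht = sum(observed) / len(observed)
# With negative drift the hitting time is inverse Gaussian, mean x0/|mu| = 10.
```

    In a threshold regression, the drift mu and starting level x0 would be linked to covariates (e.g., through log-linear link functions) rather than fixed as here.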

    A non-Gaussian continuous state space model for asset degradation

    The degradation model plays an essential role in asset life prediction and condition-based maintenance. Various degradation models have been proposed. Among these, the state space model has the ability to combine degradation data and failure event data. The state space model is also an effective approach to dealing with multiple observations and missing-data issues. Using the state space degradation model, the deterioration process of assets is represented by a system state process which can be revealed by a sequence of observations. Current research largely assumes that the underlying system development process is discrete in time or states. Although some models have been developed to consider continuous time and space, these state space models are based on the Wiener process with the Gaussian assumption. This paper proposes a Gamma-based state space degradation model in order to remove the Gaussian assumption. Both condition monitoring observations and failure events are considered in the model so as to improve the accuracy of asset life prediction. A simulation study is carried out to illustrate the application procedure of the proposed model.
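    A gamma process is the natural non-Gaussian alternative the abstract alludes to: its increments are non-negative, so degradation paths are monotone, unlike a Wiener path. The sketch below is an illustrative simulation (parameters invented, not the paper's model) of lifetimes as first passage of a gamma process over a fixed threshold.

```python
import random

def gamma_process_life(alpha, beta, threshold, dt=0.1, t_max=500.0, rng=None):
    """Gamma-process degradation: independent Gamma(alpha*dt, beta) increments,
    so the path is non-decreasing and non-Gaussian. Returns the first grid
    time at which cumulative degradation exceeds `threshold`."""
    rng = rng or random.Random()
    level, t = 0.0, 0.0
    while t < t_max:
        level += rng.gammavariate(alpha * dt, beta)
        t += dt
        if level >= threshold:
            return t
    return None

rng = random.Random(1)
lives = [gamma_process_life(2.0, 0.5, 20.0, rng=rng) for _ in range(1000)]
mean_life = sum(lives) / len(lives)
# Mean degradation rate is alpha*beta = 1 per unit time, so mean life is ~20.
```

    In a full state space formulation, this monotone level would be the hidden state and condition-monitoring readings would be noisy observations of it.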

    Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model

    In this paper, we consider the situation under a life test in which the failure times of the test units are not related deterministically to an observable stochastic time-varying covariate. In such a case, the joint distribution of failure time and a marker value would be useful for modeling the step stress life test. The problem of accelerating such an experiment is considered as the main aim of this paper. We present a step stress accelerated model based on a bivariate Wiener process with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, the degradation values of which are recorded at times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.
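    The data structure this model generates, a failure time paired with a marker value read off at that instant, can be sketched as follows. This is a generic bivariate Wiener simulation with invented parameters, not the paper's step-stress likelihood; the correlation is introduced through a Cholesky-style mixing of the two Gaussian innovations.

```python
import math
import random

def failure_with_marker(mu_d, mu_m, sig_d, sig_m, rho, threshold,
                        dt=0.01, rng=None):
    """Bivariate Wiener process: latent degradation D(t) triggers failure on
    first reaching `threshold`; the correlated marker M(t) is read off at
    that moment, as in a marker recorded at the failure time."""
    rng = rng or random.Random()
    d = m = t = 0.0
    s = math.sqrt(dt)
    c = math.sqrt(1.0 - rho * rho)
    while True:
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        d += mu_d * dt + sig_d * s * z1
        m += mu_m * dt + sig_m * s * (rho * z1 + c * z2)
        t += dt
        if d >= threshold:
            return t, m

rng = random.Random(7)
pairs = [failure_with_marker(1.0, 0.8, 0.5, 0.5, 0.9, 5.0, rng=rng)
         for _ in range(500)]
mean_t = sum(t for t, _ in pairs) / len(pairs)
mean_m = sum(m for _, m in pairs) / len(pairs)
# E[T] = threshold/mu_d = 5; the marker at failure averages near mu_m*E[T].
```

    A step-stress version would change the drift pair (mu_d, mu_m) at the stress-change time; the optimal choice of that time is what the paper's variance criterion addresses.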

    Oscillator Phase Noise and Small-Scale Channel Fading in Higher Frequency Bands

    This paper investigates the effect of oscillator phase noise and channel variations due to fading on the performance of communication systems at frequency bands higher than 10 GHz. Phase noise and channel models are reviewed, and technology-dependent bounds on the phase noise quality of radio oscillators are presented. Our study shows that, in general, both channel variations and phase noise can have severe effects on system performance at high frequencies. Importantly, their relative severity depends on the application scenario and system parameters such as center frequency and bandwidth. Channel variations are seen to be more severe than phase noise when the relative velocity between the transmitter and receiver is high. On the other hand, performance degradation due to phase noise can be more severe when the center frequency is increased and the bandwidth is kept constant, or when oscillators based on low-power CMOS technology are used, as opposed to high-power GaN HEMT based oscillators.
    Comment: IEEE Global Telecommun. Conf. (GLOBECOM), Austin, TX, Dec. 201
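    A free-running oscillator's phase noise is often modeled as a Wiener (random-walk) phase process; the sketch below uses the common Lorentzian-spectrum model in which the per-symbol phase innovation variance is 4*pi*f3dB/Rs. The specific numbers are illustrative, not taken from the paper, but the qualitative point matches it: since f3dB scales up sharply with carrier frequency while the symbol rate Rs stays fixed, the accumulated phase error grows at higher bands.

```python
import math
import random

def wiener_phase_noise(f3db, sym_rate, n, rng):
    """Sample n symbols of Wiener phase noise for a free-running oscillator
    with 3 dB linewidth `f3db` (Hz) at symbol rate `sym_rate` (Hz).
    Per-symbol innovation variance: 4*pi*f3db/sym_rate (Lorentzian model)."""
    var = 4.0 * math.pi * f3db / sym_rate
    phi, path = 0.0, []
    for _ in range(n):
        phi += rng.gauss(0.0, math.sqrt(var))
        path.append(phi)
    return path

rng = random.Random(0)
path = wiener_phase_noise(1e3, 1e6, 10000, rng)
incs = [path[0]] + [b - a for a, b in zip(path, path[1:])]
sample_var = sum(v * v for v in incs) / len(incs)
theory_var = 4.0 * math.pi * 1e3 / 1e6
# The random walk means phase variance grows linearly with the number of
# symbols, which is why tracking loops or pilots are needed at high bands.
```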

    A practical degradation based method to predict long-term moisture incursion and colour change in high power LEDs

    The effect of relative humidity on LEDs, and how moisture incursion is associated with the color shift, is studied. This paper proposes a different approach to describe the lumen degradation of LEDs due to the long-term effects of humidity. Using the lumen degradation data of different types of LEDs under varying conditions of relative humidity, a humidity based degradation model (HBDM) is developed. A practical estimation method from the degradation behaviour is proposed to quantitatively gauge the effect of moisture incursion by means of a humidity index. This index demonstrates a high correlation with the color shift indicated by the LED's yellow-to-blue output intensity ratio. Physical analyses of the LEDs provide a qualitative validation of the model, which provides good accuracy over longer periods of moisture exposure. The results demonstrate that the HBDM is an effective indicator to predict the extent of the long-term impact of humidity and the associated relative color shift.
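    The paper's HBDM form and humidity index are not reproduced here; the sketch below only shows the generic exponential lumen-maintenance fit (TM-21 style) that such degradation projections typically build on, with entirely invented data points.

```python
import math

# Hypothetical lumen-maintenance readings (normalized flux vs. hours of
# humid-environment exposure); the numbers are illustrative only.
hours = [0, 500, 1000, 2000, 3000, 4000]
flux = [1.00, 0.97, 0.94, 0.89, 0.84, 0.79]

# Exponential decay L(t) = B*exp(-a*t), fitted by log-linear least squares.
n = len(hours)
logf = [math.log(v) for v in flux]
xbar = sum(hours) / n
ybar = sum(logf) / n
sxx = sum((x - xbar) ** 2 for x in hours)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(hours, logf))
a = -sxy / sxx                 # decay rate per hour
B = math.exp(ybar + a * xbar)  # fitted initial constant

# L70 lifetime: hours until the projected flux falls to 70% of initial.
L70 = math.log(B / 0.70) / a
```

    A humidity-dependent model would let the decay rate `a` vary with relative humidity; the correlation the abstract reports is between such a moisture-driven rate and the yellow-to-blue color shift.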

    Effects of Multirate Systems on the Statistical Properties of Random Signals

    In multirate digital signal processing, we often encounter time-varying linear systems such as decimators, interpolators, and modulators. In many applications, these building blocks are interconnected with linear filters to form more complicated systems. It is often necessary to understand the way in which the statistical behavior of a signal changes as it passes through such systems. While some issues in this context have an obvious answer, the analysis becomes more involved with complicated interconnections. For example, consider this question: if we pass a cyclostationary signal with period K through a fractional sampling rate-changing device (implemented with an interpolator, a nonideal low-pass filter, and a decimator), what can we say about the statistical properties of the output? How does the behavior change if the filter is replaced by an ideal low-pass filter? In this paper, we answer questions of this nature. As an application, we consider a new adaptive filtering structure, which is well suited for the identification of band-limited channels. This structure exploits the band-limited nature of the channel and embeds the adaptive filter into a multirate system. The advantages are that the adaptive filter has a smaller length, and the adaptation as well as the filtering are performed at a lower rate. Using the theory developed in this paper, we show that a matrix adaptive filter (dimension determined by the decimator and interpolator) gives better performance in terms of lower error energy at convergence than a traditional adaptive filter. Even though matrix adaptive filters are, in general, computationally more expensive, they offer a performance bound that can be used as a yardstick to judge more practical "scalar multirate adaptation" schemes.
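    The simplest instance of the question posed above can be checked numerically: decimating a cyclostationary signal of period K by a factor M yields an output whose statistics repeat with period K/gcd(K, M). The sketch below is an illustrative Monte Carlo check with invented parameters (K = 6, M = 2), using white noise with a periodically varying standard deviation as the cyclostationary input.

```python
import random

K, M = 6, 2                           # input period and decimation factor
amp = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]  # std-dev pattern, period K
rng = random.Random(3)

def realization(n):
    """One realization of zero-mean cyclostationary noise: white Gaussian
    samples whose standard deviation cycles through `amp`."""
    return [amp[i % K] * rng.gauss(0.0, 1.0) for i in range(n)]

trials, out_len = 5000, 9
var = [0.0] * out_len
for _ in range(trials):
    y = realization(out_len * M)[::M]  # decimation keeps every M-th sample
    for i, v in enumerate(y):
        var[i] += v * v
var = [s / trials for s in var]
# Output variance at index i is amp[(M*i) % K]**2, so the pattern repeats
# with period K/gcd(K, M) = 3 instead of the original period 6.
```

    With a filter inserted before the decimator, as in the fractional rate changer the abstract describes, the output cyclostationarity period is what the paper's analysis characterizes; this toy case is the filter-free baseline.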