6 research outputs found
EM Approach on Influence Measures in Competing Risks via Proportional Hazard Regression Model
In a conventional competing risks model, the time to failure of a particular experimental unit might be censored and the cause of failure may be known or unknown. In this thesis the analysis of this model was based on the cause-specific hazard of the Cox model. The Expectation-Maximization (EM) algorithm was used to obtain the parameter estimates, and these estimates were then compared with those from the Newton-Raphson iteration method. Simulated data in which the failure times were exponentially distributed were used to further compare the two estimation methods. From the simulation study for this particular case, we can conclude that the EM algorithm was superior in terms of the mean value of the parameter estimates, the bias, and the root mean square error.

To detect irregularities and peculiarities in the data set, the residuals, Cook's distance, and the likelihood distance were computed. Unlike the residuals, the perturbation method of Cook's distance and the likelihood distance were effective in detecting observations that influence the parameter estimates. Both the EM approach and the ordinary maximum likelihood estimation (MLE) approach were considered in computing the influence measures, and for the final influence measures a one-step approach was adopted. The EM one-step and the maximum likelihood (ML) one-step gave conclusions analogous to those of the full-iteration distance measures. In comparison, the EM one-step gave better results than the ML one-step with respect to the values of Cook's distance and the likelihood distance. It was also found that Cook's distance performed better than the likelihood distance with respect to the number of observations detected.
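As an illustration of the estimation step described above, the following is a minimal sketch of an EM iteration for two competing exponential risks in which some failures have an unknown (masked) cause. It omits covariates and the Cox cause-specific structure used in the thesis, and the cause coding (1/2 for known causes, 0 for unknown, -1 for censored) is a hypothetical convention chosen for this example only.

```python
import numpy as np

def em_exponential_competing_risks(time, cause, max_iter=200, tol=1e-8):
    """EM estimates of two constant cause-specific hazards when some causes are masked.

    time  : failure or censoring times
    cause : 1 or 2 for a known failure cause, 0 for a failure with unknown cause,
            -1 for a right-censored unit (hypothetical coding for this sketch)
    """
    time = np.asarray(time, dtype=float)
    cause = np.asarray(cause)
    total_exposure = time.sum()           # every unit contributes its time at risk
    lam = np.array([1.0, 1.0])            # initial hazard rates

    for _ in range(max_iter):
        # E-step: for a masked failure (non-informative masking assumed),
        # P(cause 1 | failed) = lam1 / (lam1 + lam2) under constant hazards
        p1 = lam[0] / lam.sum()
        n1 = np.sum(cause == 1) + p1 * np.sum(cause == 0)
        n2 = np.sum(cause == 2) + (1.0 - p1) * np.sum(cause == 0)
        # M-step: rate = expected number of failures / total time at risk
        new_lam = np.array([n1, n2]) / total_exposure
        if np.max(np.abs(new_lam - lam)) < tol:
            return new_lam
        lam = new_lam
    return lam
```

The E-step reduces to a simple posterior cause probability because the hazards are constant; the M-step is the usual occurrence/exposure estimator. With covariates, as in the thesis, both steps would instead involve the cause-specific Cox partial likelihood.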
Regression analysis of masked competing risks data under cumulative incidence function framework
In studies that involve competing risks, masking issues may arise; that is, the cause of failure for some subjects is known only to belong to a subset of the possible causes. In this study, a Bayesian analysis is developed to assess the effect of risk factors on the Cumulative Incidence Function (CIF) by adopting the proportional subdistribution hazard model. A simulation study is conducted to evaluate the performance of the proposed model and shows that the model is feasible for practical applications.
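For reference, under a proportional subdistribution hazard model the CIF for the cause of interest satisfies F1(t | x) = 1 - {1 - F10(t)}^exp(x'beta). The sketch below only evaluates this relation with NumPy; the Bayesian estimation itself is not shown, and the baseline CIF, covariates, and coefficients are hypothetical placeholders.

```python
import numpy as np

def cif_subdistribution(t, x, beta, baseline_cif):
    """CIF for the cause of interest under a proportional subdistribution hazard model.

    F1(t | x) = 1 - (1 - F10(t)) ** exp(x @ beta),
    where baseline_cif is the baseline CIF F10 (a user-supplied callable).
    """
    eta = np.exp(np.dot(x, beta))
    return 1.0 - (1.0 - baseline_cif(t)) ** eta

# Example with a hypothetical baseline CIF that plateaus at 0.6
baseline = lambda t: 0.6 * (1.0 - np.exp(-0.5 * np.asarray(t)))
print(cif_subdistribution(t=2.0, x=[1.0, 0.0], beta=[0.4, -0.2], baseline_cif=baseline))
```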
Classification of Atoms
This article is devoted to giving a self-contained presentation of the classification of atoms of a probability space as equivalent or non-equivalent. It will be established that an event, i.e., a member of the σ-field of a probability space, can contain uncountably many equivalent atoms. We will show that the relation of being equivalent atoms is an equivalence relation. An independent proof will enable us to state that an event of a probability space with a σ-finite probability measure can contain at most countably many non-equivalent atoms. We will also establish that, for a purely atomic probability space with a σ-finite probability measure, the probability measure of every event is equal to the sum of the probability measures of its non-equivalent atoms. We will also justify that in some of the results the probability space and the respective probability measure can be replaced by a measure space and the respective measure.
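For readers unfamiliar with the terminology, the following are the standard definitions assumed here; the article's exact formulation may differ slightly.

```latex
% Standard definitions; the article's exact formulation may differ.
\textbf{Atom.} An event $A \in \mathcal{F}$ of a probability space $(\Omega,\mathcal{F},P)$
is an \emph{atom} if $P(A) > 0$ and, for every $B \in \mathcal{F}$ with $B \subseteq A$,
either $P(B) = 0$ or $P(B) = P(A)$.

\textbf{Equivalent atoms.} Two atoms $A_1, A_2$ are \emph{equivalent} if
$P(A_1 \,\triangle\, A_2) = 0$, i.e.\ they differ only by a $P$-null set;
otherwise they are \emph{non-equivalent}.
```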
Nonlinear finite element analysis of axially crushed cotton fibre composite corrugated tubes
It has been proven experimentally that introducing corrugation along a shell generator, together with a suitable advanced composite material, enhances the crashworthiness performance of energy-absorbing device units. This is because corrugation along the shell generator forces the initial crushing to occur at a predetermined region along the tube generator. On the other hand, a suitable composite material offers vast potential for optimally tailoring a design to meet crashworthiness performance requirements. In this paper, the energy absorption characteristics of cotton fibre/propylene corrugated tubes are studied numerically. Finite element simulations using ABAQUS/Explicit were carried out to examine the effects of parametric modifications on the tube's energy absorption capability. The results showed that the tube's energy absorption capability was affected significantly by varying the number of corrugations and the aspect ratio. It was found that as the number of corrugations increases, the amount of absorbed energy increases significantly.
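Since the outcome of interest is absorbed energy, a minimal post-processing sketch is given below: it integrates a crush force-displacement history (for example, one exported from an ABAQUS/Explicit run) with the trapezoidal rule and derives the mean crush force and specific energy absorption. The function name and the kN/mm/kg unit conventions are assumptions for this example, not taken from the paper.

```python
import numpy as np

def crush_metrics(force_kN, displacement_mm, mass_kg):
    """Energy absorption metrics from a simulated crush force-displacement curve."""
    F = np.asarray(force_kN, dtype=float)
    x = np.asarray(displacement_mm, dtype=float)
    # Absorbed energy = area under the F-x curve (trapezoidal rule); kN * mm = J
    energy_J = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(x))
    mean_crush_force_kN = energy_J / (x[-1] - x[0])   # J per mm of crush = kN
    sea_J_per_kg = energy_J / mass_kg                 # specific energy absorption
    return energy_J, mean_crush_force_kN, sea_J_per_kg
```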
Parametric model based on imputation techniques for partly interval censored data
The term 'survival analysis' is used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs. The time to failure of a specific experimental unit might be censored, and the censoring can be right, left, interval, or partly interval censored (PIC). In this paper, the analysis of this model was conducted based on the parametric Cox model with PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median imputation. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, namely the Turnbull and Cox models, using clinical trial data (breast cancer data), which demonstrated the validity of the proposed model. The results for this data set indicated that the parametric Cox model was superior in terms of the estimation of the survival functions, the likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to the estimation of the survival function.
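A minimal sketch of the imputation step is given below, assuming each partly interval censored observation is stored as an interval (L, R] with L equal to R for exactly observed times. The exact definitions of the mean and median imputations in the paper may differ from the convention used here (replacing each interval by the mean or median of the interval midpoints).

```python
import numpy as np

def impute_intervals(left, right, method="midpoint", rng=None):
    """Impute exact failure times for interval-censored observations (L, R]."""
    L = np.asarray(left, dtype=float)
    R = np.asarray(right, dtype=float)
    mid = (L + R) / 2.0
    interval = L < R                    # exactly observed times have L == R
    t = L.copy()
    if method == "midpoint":
        t[interval] = mid[interval]
    elif method == "left":
        t[interval] = L[interval]
    elif method == "right":
        t[interval] = R[interval]
    elif method == "random":            # uniform draw inside each interval
        rng = np.random.default_rng() if rng is None else rng
        t[interval] = rng.uniform(L[interval], R[interval])
    elif method == "mean":              # one common convention; may differ from the paper
        t[interval] = mid[interval].mean()
    elif method == "median":
        t[interval] = np.median(mid[interval])
    return t
```

The imputed times can then be treated as exact observations and fed to an ordinary parametric likelihood, which is what makes the approach attractive computationally.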
Cramer-Rao Lower Bound for Parameter Estimation of Multiexponential Signals
The Cramer-Rao Lower Bound on the mean square error of unbiased estimators is widely used as a measure of the accuracy of parameter estimates obtained from given data. In this paper, a derivation of the Cramer-Rao bound on the real decay rates of multiexponential signals buried in white Gaussian noise is presented. It is then used to compare the efficiencies of some of the techniques used in the analysis of such signals. Specifically, two eigendecomposition-based techniques as well as the SVD-ARMA (Singular Value Decomposition Autoregressive Moving Average) method are tested and evaluated. The two eigenvector methods were found to outperform SVD-ARMA, with the minimum-norm method being the most reliable at very low SNRs (signal-to-noise ratios).
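As an illustration of how such a bound is computed numerically, the sketch below forms the Fisher information matrix for the model y(t_n) = sum_k a_k exp(-lambda_k t_n) + w(t_n) with white Gaussian noise of known variance sigma^2, treating both amplitudes and decay rates as unknown, and returns the CRLB entries for the decay rates. The sampling grid and parameter values in the example are arbitrary and not taken from the paper, whose derivation may be parameterized differently.

```python
import numpy as np

def crlb_decay_rates(amplitudes, decays, t, sigma):
    """CRLB on the decay rates of y(t) = sum_k a_k * exp(-lambda_k * t) + w(t),
    with w ~ N(0, sigma^2) white and amplitudes/decay rates all unknown."""
    a = np.asarray(amplitudes, dtype=float)
    lam = np.asarray(decays, dtype=float)
    t = np.asarray(t, dtype=float)
    E = np.exp(-np.outer(t, lam))             # N x K matrix of exp(-lambda_k * t_n)
    # Jacobian of the noiseless signal w.r.t. [a_1..a_K, lambda_1..lambda_K]
    J = np.hstack([E, -a * t[:, None] * E])
    fim = J.T @ J / sigma**2                  # Fisher information for Gaussian noise
    crlb = np.diag(np.linalg.inv(fim))        # variance lower bounds for all parameters
    return crlb[len(a):]                      # entries corresponding to the decay rates

# Example: two-component signal sampled at 100 points on [0, 10]
print(crlb_decay_rates([1.0, 0.5], [0.3, 1.2], np.linspace(0, 10, 100), sigma=0.05))
```

An estimator of the decay rates is then judged efficient when its empirical mean square error approaches these diagonal bounds, which is how the eigenvector and SVD-ARMA methods are compared in the paper.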