Modeling longitudinal data with interval censored anchoring events
Indiana University-Purdue University Indianapolis (IUPUI)
In many longitudinal studies, the time scales upon which we assess the primary outcomes
are anchored by pre-specified events. However, these anchoring events are
often not observable, and they are randomly distributed with an unknown distribution.
Without direct observations of the anchoring events, the time scale used for analysis
is not available, and analysts cannot use traditional longitudinal
models to describe the temporal changes as desired. Existing methods often make
either ad hoc or strong assumptions about the anchoring events, which are unverifiable
and prone to biased estimation and invalid inference.
Although the anchoring events cannot be directly observed, researchers can often
ascertain an interval that includes them, i.e., the anchoring events are
interval censored. In this research, we proposed a two-stage method to fit commonly
used longitudinal models with interval censored anchoring events. In the first stage,
we obtain an estimate of the anchoring event distribution by a nonparametric method
using the interval censored data; in the second stage, we obtain the parameter estimates
as stochastic functionals of the estimated distribution. The construction of the
stochastic functional depends on model settings. In this research, we considered two
types of models. The first model was a distribution-free model, in which no parametric
assumption was made on the distribution of the error term. The second model was
likelihood based, which extended the classic mixed-effects models to the situation that the origin of the time scale for analysis was interval censored. For the purpose
of large-sample statistical inference in both models, we studied the asymptotic
properties of the proposed functional estimator using empirical process theory. Theoretically,
our method provided a general approach to study semiparametric maximum
pseudo-likelihood estimators in similar data situations. Finite sample performance of
the proposed method was examined through a simulation study. Computationally
efficient algorithms for computing the parameter estimates were provided. We applied
the proposed method to a real data analysis and obtained new findings that could
not be obtained using traditional mixed-effects models.
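The first stage described above can be sketched as a self-consistency (Turnbull-type) iteration that allocates each interval-censored event's probability mass across a discrete grid. This is a minimal sketch under simplifying assumptions — the fixed grid, uniform initialization, and fixed iteration count are illustrative choices, not the dissertation's exact algorithm:

```python
import numpy as np

def npmle_interval_censored(intervals, grid, n_iter=200):
    """Self-consistency (EM) estimate of an event-time distribution when
    each event is only known to lie in an interval [l, r].
    A simplified Turnbull-style sketch on a fixed grid."""
    p = np.full(len(grid), 1.0 / len(grid))      # start from uniform mass
    for _ in range(n_iter):
        new_p = np.zeros_like(p)
        for l, r in intervals:
            mask = (grid >= l) & (grid <= r)     # grid points inside the interval
            w = p * mask
            new_p += w / w.sum()                 # expected allocation of this event
        p = new_p / len(intervals)               # average over all events
    return p
```

In the second stage, the fitted mass function would be plugged into the chosen longitudinal model as a stochastic functional; that step is model-specific and is not shown here.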
A Review of Atrial Fibrillation Detection Methods as a Service
Atrial Fibrillation (AF) is a common heart arrhythmia that often goes undetected, and even if it is detected, managing the condition may be challenging. In this paper, we review how the RR interval and Electrocardiogram (ECG) signals, incorporated into a monitoring system, can be useful to track AF events. Were such an automated system to be implemented, it could be used to help manage AF and thereby reduce patient morbidity and mortality. The main impetus behind the idea of developing a service is that analyzing a greater volume of data can lead to better patient outcomes. Based on the literature review, which we present herein, we introduce the methods that can be used to detect AF efficiently and automatically via the RR interval and ECG signals. A cardiovascular disease monitoring service that incorporates one or more of these detection methods could extend event observation to all times, and could therefore help establish any AF occurrence. The development of an automated and efficient method that monitors AF in real time would likely become a key component for meeting public health goals regarding the reduction of fatalities caused by the disease. Yet, at present, significant technological and regulatory obstacles remain, which prevent the development of any proposed system. Establishment of the scientific foundation for monitoring is important to provide effective service to patients and healthcare professionals.
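RR-interval-based detection, as reviewed above, typically exploits the irregularity of successive beat-to-beat intervals during AF. A minimal sketch thresholds a normalized RMSSD statistic over a window of RR intervals; the statistic choice and the 0.10 cutoff are illustrative assumptions, not a clinically validated rule:

```python
import numpy as np

def rr_irregularity_flag(rr_ms, threshold=0.10):
    """Flag a window of RR intervals (in ms) as possibly AF when the
    normalized RMSSD exceeds a threshold.  The 0.10 cutoff is an
    illustrative assumption, not a validated clinical value."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))   # root mean square of successive differences
    nrmssd = rmssd / rr.mean()             # normalize out overall heart rate
    return nrmssd > threshold, nrmssd
```

In a monitoring service, such a window statistic would run continuously over streamed RR data, with flagged windows escalated for ECG-based confirmation.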
Multiple imputation for continuous variables using a Bayesian principal component analysis
We propose a multiple imputation method based on principal component analysis
(PCA) to deal with incomplete continuous data. To reflect the uncertainty of
the parameters from one imputation to the next, we use a Bayesian treatment of
the PCA model. Using a simulation study and real data sets, the method is
compared to two classical approaches: multiple imputation based on joint
modelling and on fully conditional modelling. Contrary to the others, the
proposed method can be easily used on data sets where the number of individuals
is less than the number of variables and when the variables are highly
correlated. In addition, it provides unbiased point estimates of quantities of
interest, such as an expectation, a regression coefficient or a correlation
coefficient, with a smaller mean squared error. Furthermore, the widths of the
confidence intervals built for the quantities of interest are often smaller
whilst ensuring a valid coverage.
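The deterministic core of PCA-based imputation — alternating a low-rank fit with re-imputation of the missing cells — can be sketched as follows. This omits the paper's Bayesian treatment, which additionally perturbs the PCA parameters to produce multiple imputations reflecting their uncertainty:

```python
import numpy as np

def iterative_pca_impute(X, rank=1, n_iter=300):
    """Fill NaNs by alternating a rank-k PCA (SVD) fit with re-imputation
    of the missing cells.  Non-Bayesian single-imputation sketch only."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # start from column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
        X[miss] = low_rank[miss]                      # overwrite only missing cells
    return X
```

Because the fit uses an SVD rather than an inverted covariance matrix, this scheme also runs when there are fewer individuals than variables, which is the regime the abstract highlights.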
A Quantile Variant of the EM Algorithm and Its Applications to Parameter Estimation with Interval Data
The expectation-maximization (EM) algorithm is a powerful computational
technique for finding the maximum likelihood estimates for parametric models
when the data are not fully observed. The EM is best suited for situations
where the expectation in each E-step and the maximization in each M-step are
straightforward. A difficulty with the implementation of the EM algorithm is
that each E-step requires the integration of the log-likelihood function in
closed form. The explicit integration can be avoided by using what is known as
the Monte Carlo EM (MCEM) algorithm. The MCEM uses a random sample to estimate
the integral at each E-step. However, the problem with the MCEM is that it
often converges to the integral quite slowly and the convergence behavior can
also be unstable, which causes a computational burden. In this paper, we
propose what we refer to as the quantile variant of the EM (QEM) algorithm. We
prove that the proposed QEM method attains a higher order of accuracy than
the MCEM method. Thus, the proposed QEM method
possesses faster and more stable convergence properties when compared with the
MCEM algorithm. The improved performance is illustrated through the numerical
studies. Several practical examples illustrating its use in interval-censored
data problems are also provided.
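The contrast between the two E-steps can be sketched on a toy interval-censored model: MCEM averages the complete-data quantity over random draws from the conditional distribution, while QEM averages over its deterministic mid-quantiles. The truncated-exponential setup below is an illustrative assumption, not an example from the paper:

```python
import numpy as np

def trunc_exp_quantile(u, rate, l, r):
    """Quantile function of an Exp(rate) variable truncated to [l, r]."""
    Fl, Fr = 1 - np.exp(-rate * l), 1 - np.exp(-rate * r)
    return -np.log(1 - (Fl + u * (Fr - Fl))) / rate

def e_step_mcem(rate, l, r, m, rng):
    """Monte Carlo E-step: average over m random draws (noisy, slow)."""
    u = rng.random(m)
    return trunc_exp_quantile(u, rate, l, r).mean()

def e_step_qem(rate, l, r, m):
    """Quantile E-step: average over the m mid-quantiles (i - 0.5)/m,
    a deterministic rule with faster, stable convergence."""
    u = (np.arange(m) + 0.5) / m
    return trunc_exp_quantile(u, rate, l, r).mean()
```

For the same number of evaluations m, the quantile rule replaces Monte Carlo noise with a deterministic quadrature of the quantile function, which is what makes its convergence stable across iterations.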
Lean back and wait for the alarm? Testing an automated alarm system for nosocomial outbreaks to provide support for infection control professionals
INTRODUCTION:
Outbreaks of communicable diseases in hospitals need to be quickly detected in order to enable immediate control. The increasing digitalization of hospital data processing offers potential solutions for automated outbreak detection systems (AODS). Our goal was to assess a newly developed AODS.
METHODS:
Our AODS was based on the diagnostic results of routine clinical microbiological examinations. The system prospectively counted detections per bacterial pathogen over time for the years 2016 and 2017. The baseline covered data from 2013-2015. The comparative analysis was based on six different mathematical algorithms (normal/Poisson and score prediction intervals, the early aberration reporting system, negative binomial CUSUMs, and the Farrington algorithm). The clusters automatically detected were then compared with the results of our manual outbreak detection system.
RESULTS:
During the analysis period, 14 different hospital outbreaks were detected as a result of conventional manual outbreak detection. Based on the pathogens' overall incidence, outbreaks were divided into two categories: outbreaks with rarely detected pathogens (sporadic) and outbreaks with often detected pathogens (endemic). For outbreaks with sporadic pathogens, the detection rate of our AODS ranged from 83% to 100%. Every algorithm detected 6 of 7 outbreaks with a sporadic pathogen. The AODS identified outbreaks with an endemic pathogen at a detection rate of 33% to 100%. For endemic pathogens, the results varied based on the epidemiological characteristics of each outbreak and pathogen.
CONCLUSION:
AODS for hospitals based on routine microbiological data is feasible and can provide relevant benefits for infection control teams. It offers timely automated notification of suspected pathogen clusters, especially for sporadically occurring pathogens. However, outbreaks of endemically detected pathogens need further individual pathogen-specific and setting-specific adjustments.
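A minimal version of such an alarm can be sketched as a threshold on pathogen counts derived from the historical baseline. The mean-plus-k-standard-deviations Poisson rule below is a simplified stand-in for the prediction-interval algorithms compared in the study, and the k = 3 cutoff is an illustrative assumption:

```python
import math

def poisson_upper_limit(baseline_counts, k_sd=3.0):
    """Alarm threshold from historical period counts: baseline mean plus
    k_sd Poisson standard deviations.  A simplified stand-in for the
    prediction-interval algorithms compared in the study."""
    mean = sum(baseline_counts) / len(baseline_counts)
    return mean + k_sd * math.sqrt(mean)   # Poisson variance equals the mean

def detect_clusters(baseline_counts, current_counts):
    """Flag the time points whose pathogen count exceeds the threshold."""
    limit = poisson_upper_limit(baseline_counts)
    return [i for i, c in enumerate(current_counts) if c > limit]
```

This simple rule already hints at the endemic-pathogen problem the study reports: a high baseline mean inflates the threshold, so moderate clusters of common pathogens slip under it unless the rule is tuned per pathogen and setting.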