199,728 research outputs found
Fusing Continuous-valued Medical Labels using a Bayesian Model
With the rapid increase in volume of time series medical data available
through wearable devices, there is a need to employ automated algorithms to
label data. Examples of labels include interventions, changes in activity (e.g.
sleep) and changes in physiology (e.g. arrhythmias). However, automated
algorithms tend to be unreliable, resulting in lower-quality care. Expert
annotations are scarce, expensive, and prone to significant inter- and
intra-observer variance. To address these problems, a Bayesian
Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable
estimate of the aggregated label while accurately inferring the precision and bias
of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic
indicator) estimation from the electrocardiogram using labels from the 2006
PhysioNet/Computing in Cardiology Challenge database. It was compared with the
mean, the median, and a previously proposed Expectation Maximization (EM) label
aggregation approach. While accurately predicting each labelling algorithm's
bias and precision, the root-mean-square error of the BCLA was
11.78±0.63ms, significantly outperforming the best Challenge entry
(15.37±2.13ms) as well as the EM, mean, and median voting strategies
(14.76±0.52ms, 17.61±0.55ms, and 14.43±0.57ms respectively).
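The paper's full Bayesian formulation is not reproduced in the abstract, but the underlying idea of fusing continuous labels while learning each labeller's bias and precision can be sketched with a simple iterative scheme. The snippet below is an illustrative EM-style approximation, not the authors' BCLA; the synthetic QT values, the three simulated labelling algorithms, and the function name fuse_labels are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" QT intervals in ms -- purely illustrative.
true_qt = rng.normal(400.0, 20.0, size=500)

# Three hypothetical labelling algorithms, each with its own bias and noise level.
biases = np.array([5.0, -8.0, 2.0])           # ms
noise_sd = np.array([4.0, 10.0, 6.0])         # ms
labels = true_qt[:, None] + biases + rng.normal(0.0, noise_sd, size=(500, 3))

def fuse_labels(labels, n_iter=20):
    """Fuse continuous labels while estimating each algorithm's bias and precision.

    A simple fixed-point (EM-like) scheme, not the paper's BCLA:
      1. fuse with a precision-weighted mean of bias-corrected labels,
      2. re-estimate each algorithm's bias and noise variance from the residuals.
    """
    n, k = labels.shape
    bias = np.zeros(k)
    var = np.ones(k)
    fused = labels.mean(axis=1)
    for _ in range(n_iter):
        w = 1.0 / var                                  # per-algorithm precision
        fused = ((labels - bias) * w).sum(axis=1) / w.sum()
        resid = labels - fused[:, None]
        bias = resid.mean(axis=0)
        var = resid.var(axis=0) + 1e-9
    return fused, bias, np.sqrt(var)

fused, est_bias, est_sd = fuse_labels(labels)
rmse_fused = np.sqrt(np.mean((fused - true_qt) ** 2))
rmse_mean = np.sqrt(np.mean((labels.mean(axis=1) - true_qt) ** 2))
print("estimated biases   :", np.round(est_bias, 2))
print("estimated noise SDs:", np.round(est_sd, 2))
print(f"RMSE fused = {rmse_fused:.2f} ms, RMSE plain mean = {rmse_mean:.2f} ms")
```

On the synthetic data the precision-weighted fusion recovers the relative biases and noise levels of the simulated algorithms and yields a lower RMSE than the plain mean, which mirrors the comparison reported in the abstract.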
Nearly Optimal Computations with Structured Matrices
We estimate the Boolean complexity of multiplication of structured matrices
by a vector and the solution of nonsingular linear systems of equations with
these matrices. We study the four most popular basic classes, that is, Toeplitz,
Hankel, Cauchy, and Vandermonde matrices, for which the cited computational
problems are equivalent to the task of polynomial multiplication and division
and polynomial and rational multipoint evaluation and interpolation. The
Boolean cost estimates for the latter problems have been obtained by Kirrinnis
in \cite{kirrinnis-joc-1998}, except for rational interpolation, which we
supply now. All known Boolean cost estimates for these problems rely on using
Kronecker product. This implies a precision increase proportional to the degree
of the output, but we avoid such an increase by relying on distinct techniques
based on employing FFT. Furthermore, we simplify the analysis and make it more
transparent by combining the representation of our tasks and algorithms in
terms of both structured matrices and polynomials and rational functions. This
also enables further extensions of our estimates to cover Trummer's important
problem and computations with the popular classes of structured matrices that
generalize the four cited basic matrix classes.
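The stated equivalence between structured matrix-vector products and polynomial arithmetic can be made concrete: a Toeplitz matrix applied to a vector is a truncated polynomial product, so it can be computed with FFTs after embedding the matrix in a circulant. The sketch below works in floating point rather than the exact Boolean-cost model analysed in the paper, and the function name toeplitz_matvec and the small random test are illustrative assumptions.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the n-by-n Toeplitz matrix T (first column `col`, first row `row`)
    by the vector x via FFT, i.e. via polynomial multiplication.

    T is embedded into a circulant matrix of size 2n; a circulant acts as a
    cyclic convolution, which the FFT diagonalises.
    """
    n = len(x)
    # First column of the 2n-sized circulant embedding of T.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    # Cyclic convolution of c with the zero-padded x = coefficient product of polynomials.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Small check against the naive dense product.
rng = np.random.default_rng(1)
n = 8
col, row, x = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
row[0] = col[0]  # Toeplitz consistency: both describe T[0, 0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))
```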
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not a good estimate of accuracy as they do not take into account
the magnitude of the computed values. Relative errors, which compute errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to
alleviate the inherent difficulty of estimating relative errors for values close
to zero.
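As a rough illustration of why computing relative errors directly can be much tighter than deriving them from an absolute bound, the sketch below compares a single-precision evaluation of x*x + 0.1*x against a double-precision reference over a sampled domain. This is only a sampled experiment, not a sound static analysis of the kind the paper studies; the expression, the domain, and the variable names are assumptions chosen for the example.

```python
import numpy as np

# Domain [1, 100]; chosen so f(x) stays away from zero and relative error is defined.
xs = np.linspace(1.0, 100.0, 100_001)

# Double-precision reference and a single-precision evaluation of x*x + 0.1*x.
ref = xs * xs + 0.1 * xs
x32 = xs.astype(np.float32)
lo_prec = (x32 * x32 + np.float32(0.1) * x32).astype(np.float64)

abs_err = np.abs(ref - lo_prec)

# Relative bound derived from the absolute error: worst absolute error anywhere
# on the domain divided by the smallest magnitude the result can take.
rel_via_abs = abs_err.max() / np.abs(ref).min()

# Relative error evaluated directly: worst pointwise ratio |err(x)| / |f(x)|.
rel_direct = (abs_err / np.abs(ref)).max()

print(f"relative bound via absolute error: {rel_via_abs:.2e}")
print(f"relative error computed directly:  {rel_direct:.2e}")
```

In this toy setting the directly computed relative error is several orders of magnitude smaller than the bound obtained by dividing the worst absolute error by the smallest result magnitude, which is the effect the abstract describes.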
Data Processing Protocol for Regression of Geothermal Time Series with Uneven Intervals
Regression of data generated in simulations or experiments has important
implications in sensitivity studies, uncertainty analysis, and prediction
accuracy. Depending on the nature of the physical model, data points may not be
evenly distributed. It is often not practical to use all of the points for
regression of a model, because doing so does not guarantee a better fit. Fitness
of the model is highly dependent on the number of data points and the
distribution of the data along the curve. In this study, the effect of the
number of points selected for regression is investigated and various schemes
aimed at processing regression data points are explored. Time series data, i.e.,
output varying with time, is our prime interest, chiefly the temperature profile
from an enhanced geothermal system. The objective of the research is to find a
scheme for choosing a fraction of the data points from the entire set that gives
a better fit of the model without losing any features or trends in the
data. A workflow is provided to summarize the entire protocol of data
preprocessing, regression of mathematical model using training data, model
testing, and error analysis. Six different schemes are developed to process
data by setting criteria such as equal spacing along the axes (X and Y), equal
distance between two consecutive points on the curve, a constraint on the angle
of curvature, etc. As an example of the application of the proposed schemes, 1
to 20% of the data generated from the temperature change of a typical
geothermal system is chosen from a total of 9939 points. It is shown that the
number of data points, to a degree, has negligible effect on the fitted model
depending on the scheme. The proposed data processing schemes are ranked in
terms of R² and NRMSE values.
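Two of the kinds of selection criteria mentioned above (equal spacing along the time axis and equal distance along the curve) can be sketched as follows. The synthetic temperature-decline curve, the cubic fit, and the function names are illustrative assumptions; they are not the paper's six schemes or its 9939-point data set.

```python
import numpy as np

# Synthetic, unevenly sampled temperature-decline curve (illustrative stand-in
# for an enhanced-geothermal-system profile).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 30.0, 2000))                 # years, uneven intervals
temp = 200.0 - 60.0 * (1.0 - np.exp(-t / 8.0)) + rng.normal(0.0, 0.3, t.size)

def select_equal_x(t, y, n):
    """Scheme A: points closest to n targets equally spaced along the time axis.
    (y is unused; kept for a uniform interface.)"""
    targets = np.linspace(t[0], t[-1], n)
    return np.unique(np.searchsorted(t, targets).clip(0, t.size - 1))

def select_equal_arclength(t, y, n):
    """Scheme B: points equally spaced along the normalised curve arc length."""
    tn = (t - t.min()) / np.ptp(t)
    yn = (y - y.min()) / np.ptp(y)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(tn), np.diff(yn)))])
    targets = np.linspace(0.0, s[-1], n)
    return np.unique(np.searchsorted(s, targets).clip(0, t.size - 1))

def fit_and_score(idx):
    """Fit a cubic polynomial to the selected points; score it on all points."""
    coeff = np.polyfit(t[idx], temp[idx], deg=3)
    pred = np.polyval(coeff, t)
    ss_res = np.sum((temp - pred) ** 2)
    ss_tot = np.sum((temp - temp.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    nrmse = np.sqrt(ss_res / t.size) / np.ptp(temp)        # RMSE normalised by range
    return r2, nrmse

for name, selector in [("equal X spacing", select_equal_x),
                       ("equal arc length", select_equal_arclength)]:
    r2, nrmse = fit_and_score(selector(t, temp, n=100))    # roughly 5% of the points
    print(f"{name:16s}  R2 = {r2:.4f}  NRMSE = {nrmse:.4f}")
```

Ranking the candidate selection schemes by the resulting R² and NRMSE on the full data set, as this loop does for the two toy schemes, is the same kind of comparison the abstract describes for its six schemes.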