On central tendency and dispersion measures for intervals and hypercubes
The uncertainty or variability of the data may be treated by considering,
rather than a single value for each observation, the interval of values in
which it may fall. This paper studies the derivation of basic descriptive
statistics for interval-valued datasets. We propose a geometrical approach to
determining summary statistics (central tendency and dispersion measures) for
interval-valued variables.
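As an informal illustration (the function name and the midpoint/radius
decomposition below are assumptions of this sketch, not the paper's exact
geometric definitions), summary statistics for interval-valued data can be
computed from the intervals' midpoints and radii:

```python
import statistics

def interval_summaries(intervals):
    """Sketch of summary statistics for interval-valued data.

    Each datum is a pair (lo, hi). We summarise via midpoints (location)
    and radii (imprecision); the combined dispersion below is one common
    choice, not necessarily the paper's geometric measure.
    """
    mids = [(lo + hi) / 2 for lo, hi in intervals]  # interval centres
    rads = [(hi - lo) / 2 for lo, hi in intervals]  # interval half-widths
    center = statistics.mean(mids)       # central tendency of the centres
    half_width = statistics.mean(rads)   # average imprecision
    # Dispersion combining spread of centres and spread of half-widths.
    dispersion = statistics.pvariance(mids) + statistics.pvariance(rads)
    return (center - half_width, center + half_width), dispersion

data = [(1.0, 2.0), (1.5, 3.5), (0.5, 1.5)]
mean_interval, disp = interval_summaries(data)
# mean_interval is itself an interval: centre +/- average half-width
```

The mean of an interval-valued sample is naturally again an interval, which is
why the sketch returns a (lo, hi) pair rather than a single number.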
A Recursive Algorithm for Computing Inferences in Imprecise Markov Chains
We present an algorithm that can efficiently compute a broad class of
inferences for discrete-time imprecise Markov chains, a generalised type of
Markov chain that allows one to take into account partially specified
probabilities and other types of model uncertainty. The class of inferences
that we consider contains, as special cases, tight lower and upper bounds on
expected hitting times, on hitting probabilities, and on expectations of
functions that are a sum or product of simpler ones. Our algorithm exploits
the specific structure that is inherent in all these inferences: they admit a
general recursive decomposition. This allows us to achieve a computational
complexity that scales linearly in the number of time points on which the
inference depends, instead of the exponential scaling that is typical of a
naive approach.
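The recursive decomposition can be sketched for one simple inference and one
common imprecise model: probability intervals on each transition row (the
function names and the example chain are my own illustration; the paper's
inference class and model are more general). A lower expectation of a function
of the final state then takes one backward step per time point, which is where
the linear scaling comes from:

```python
def lower_expectation(f, lo, hi):
    """Minimise sum_j p[j] * f[j] over probability mass functions p with
    lo[j] <= p[j] <= hi[j], assuming sum(lo) <= 1 <= sum(hi).

    Greedy solution for probability intervals: start from the lower
    bounds and spend the remaining mass on the smallest values of f.
    """
    p = list(lo)
    budget = 1.0 - sum(lo)
    for j in sorted(range(len(f)), key=lambda j: f[j]):
        extra = min(hi[j] - lo[j], budget)
        p[j] += extra
        budget -= extra
    return sum(p[j] * f[j] for j in range(len(f)))

def lower_expected_final(f, lo_T, hi_T, steps):
    """Lower bound on the expectation of f at the final time point, for an
    imprecise Markov chain whose row x has transition-probability intervals
    [lo_T[x][y], hi_T[x][y]]. One backward recursion step per time point,
    so the cost is linear in the number of steps."""
    g = list(f)
    for _ in range(steps):
        g = [lower_expectation(g, lo_T[x], hi_T[x]) for x in range(len(g))]
    return g  # g[x] = lower expectation of f at the final time, given X_0 = x

# Two-state example: f is the indicator of state 1, so the result is a
# lower bound on the probability of ending in state 1.
lo_T = [[0.5, 0.2], [0.3, 0.4]]
hi_T = [[0.8, 0.5], [0.6, 0.7]]
bounds = lower_expected_final([0.0, 1.0], lo_T, hi_T, steps=1)
```

A naive approach would enumerate every selection of a transition matrix at
every time point; the recursion instead takes the lower envelope locally at
each step, which is what makes the bounds both tight and cheap to compute.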
p53 overexpression is a predictor of local recurrence after treatment for both in situ and invasive ductal carcinoma of the breast
Background. Several biological markers have been related to prognosis in mammary ductal carcinoma. The aim of the study was to determine biological markers that could predict local recurrence (LR) following treatment for all stages of primary operable ductal carcinoma of the breast.

Materials and methods. A consecutive series of patients treated for pure ductal carcinoma in situ (DCIS, n = 110) and invasive ductal carcinoma (IDC, n = 243) was studied. Twenty-three patients with DCIS were excluded because of a lack of original paraffin-embedded tissue. All patients had been treated between July 1996 and December 2001. Median follow-up was 49.8 mo. From the original paraffin-embedded tumors, tissue microarrays (TMAs) were constructed. On these TMAs, immunohistochemistry was performed for estrogen receptor (ER), progesterone receptor (PR), Her2/neu, p53, and cyclin D1. The main outcome was the event of LR. All analyses were stratified for diagnosis (DCIS or IDC) and pathological grade.

Results. In univariate analyses, Her2/neu overexpression (hazard ratio [HR] 3.1, 95% confidence interval [CI] 1.1-8.7, P = 0.032) and p53 overexpression (HR 3.5, 95% CI 1.3-9.3, P = 0.014) were associated with LR in patients treated for both DCIS and IDC. In multivariate analysis, p53 overexpression (HR 3.0, 95% CI 1.1-8.2, P = 0.036 and HR 4.4, 95% CI 1.5-12.9, P = 0.008) and adjuvant radiotherapy (HR 0.2, 95% CI 0.1-0.8, P = 0.026) were independent common predictors of LR in patients who had received treatment for both DCIS and IDC.

Conclusions. p53 overexpression is a common predictor of LR following treatment for all stages of primary operable ductal carcinoma of the breast. This marker may help in planning optimal treatment and follow-up.
Computable randomness is about more than probabilities
We introduce a notion of computable randomness for infinite sequences that
generalises the classical version in two important ways. First, our definition
of computable randomness is associated with imprecise probability models, in
the sense that we consider lower expectations (or sets of probabilities)
instead of classical 'precise' probabilities. Second, instead of binary
sequences, we consider sequences whose elements take values in some finite
sample space. Interestingly, we find that every sequence is computably random
with respect to at least one lower expectation, and that more informative
lower expectations have fewer computably random sequences. This leads to the
intriguing question of whether every sequence is computably random with
respect to a unique most informative lower expectation. We study this question
in some detail and provide a partial answer.
Investigating the trade-off between the effectiveness and efficiency of process modeling
Despite recent efforts to improve the quality of process models, we still observe significant differences in quality between models. This paper focuses on the syntactic quality of process models and how it is achieved. To this end, a dataset of 121 modeling sessions was investigated. By going through each of these sessions step by step, a separate ‘revision’ phase was identified for 81 of them. Next, by cutting the modeling process off at the start of the revision phase, a partial process model was exported for each of these modeling sessions. Finally, each partial model was compared with its corresponding final model in terms of time, effort, and the number of syntactic errors made or solved, in search of a possible trade-off between the effectiveness and efficiency of process modeling. Based on the findings, we give a provisional explanation for the differences in syntactic quality of process models.
Computing Inferences for Large-Scale Continuous-Time Markov Chains by Combining Lumping with Imprecision
If the state space of a homogeneous continuous-time Markov chain is too
large, making inferences - here limited to determining marginal or limit
expectations - becomes computationally infeasible. Fortunately, the state space
of such a chain is usually too detailed for the inferences we are interested
in, in the sense that a less detailed - smaller - state space suffices to
unambiguously formalise the inference. However, in general this so-called
lumped state space inhibits computing exact inferences because the
corresponding dynamics are unknown and/or intractable to obtain. We address
this issue by considering an imprecise continuous-time Markov chain. In this
way, we are able to provide guaranteed lower and upper bounds for the
inferences of interest, without suffering from the curse of dimensionality.

Comment: 9th International Conference on Soft Methods in Probability and
Statistics (SMPS 2018)
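One way to picture why a lumped state space induces imprecision: the exact
lumped transition rates depend on which fine-grained state within a block the
chain currently occupies, so after lumping one can only bound them. A minimal
sketch (the function name, the example chain, and the simple min/max bounds
are my own illustration; the paper's construction of the imprecise lumped
chain is more refined):

```python
def lumped_rate_bounds(Q, partition):
    """Bound the transition rates of a lumped continuous-time Markov chain.

    Q is the fine-grained rate matrix (list of rows); partition is a list
    of blocks, each a list of fine-grained state indices. The lumped rate
    from block A to block B lies between the minimum and maximum, over
    x in A, of sum(Q[x][y] for y in B).
    """
    k = len(partition)
    lo = [[0.0] * k for _ in range(k)]
    hi = [[0.0] * k for _ in range(k)]
    for a, A in enumerate(partition):
        for b, B in enumerate(partition):
            rates = [sum(Q[x][y] for y in B) for x in A]
            lo[a][b], hi[a][b] = min(rates), max(rates)
    return lo, hi

# Three fine-grained states lumped into blocks {0} and {1, 2}.
Q = [[-2.0, 1.0, 1.0],
     [1.0, -2.0, 1.0],
     [0.5, 0.5, -1.0]]
lo, hi = lumped_rate_bounds(Q, [[0], [1, 2]])
# From block {1, 2} to block {0}: rate 1.0 from state 1 but 0.5 from
# state 2, so the lumped rate is only known to lie in [0.5, 1.0].
```

These rate intervals are exactly the kind of partially specified dynamics an
imprecise continuous-time Markov chain accepts, which is how guaranteed lower
and upper bounds on the inferences survive the move to the smaller state space.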