Topological Susceptibility under Gradient Flow
We study the impact of the Gradient Flow on the topology in various models of
lattice field theory. The topological susceptibility is measured
directly, and by the slab method, which is based on the topological content of
sub-volumes ("slabs") and estimates even when the system remains
trapped in a fixed topological sector. The results obtained by both methods are
essentially consistent, but the impact of the Gradient Flow on the
characteristic quantity of the slab method seems to be different in 2-flavour
QCD and in the 2d O(3) model. In the latter model, we further address the
question whether or not the Gradient Flow leads to a finite continuum limit of
the topological susceptibility (rescaled by the correlation length squared, χ_t ξ²). This ongoing study is based on direct measurements of χ_t ξ² in lattices at a sequence of couplings approaching the continuum limit.
Comment: 8 pages, LaTeX, 5 figures; talk presented at the 35th International Symposium on Lattice Field Theory, June 18-24, 2017, Granada, Spain
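For orientation only, these are the standard textbook definitions of the quantities named above, not specifics of the discretization used in the proceedings:

\[
  \chi_t \;=\; \frac{\langle Q^2 \rangle}{V}, \qquad \chi_t\,\xi^2 ,
\]

where Q is the global topological charge, V the space-time volume, and ξ the correlation length. The question raised in the abstract is whether the dimensionless combination χ_t ξ², measured after Gradient Flow, remains finite as the lattice spacing is taken to zero.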
tsungtingkuo/explorerchain v1.0.0
EXpectation Propagation LOgistic REgRession on permissioned blockCHAIN (ExplorerChain): Decentralized privacy-preserving online healthcare/genomics predictive model learning
Details of the training and test datasets in our second experiment.
Discrimination plots (ROC curves) and Calibration plots for simulated models.
(a) Perfect discrimination (i.e., AUC = 1) requires a classifier with perfect dichotomous predictions, which in the calibration plot has only one point (0,0) for negative observations and one point (1,1) for positive observations. (b) Poor discrimination (i.e., AUC = 0.53±0.02) and poor calibration (i.e., HL = 251.27±65.2, p < 1e−10). (c) Good discrimination (i.e., AUC = 0.83±0.03) and excellent calibration (i.e., HL = 10.02±4.42, p = 0.26±0.82). (d) Excellent discrimination (i.e., AUC = 0.96±0.01) and mediocre calibration (i.e., HL = 34.46±2.77, p = 0±0.95). Note that an HL statistic smaller than 13.36 indicates that the model fits well at the significance level of 0.1.
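For reference, the Hosmer-Lemeshow (HL) statistic quoted in the caption has the following standard form (this is the usual textbook definition; the exact grouping used for the figure is not restated here):

\[
  \mathrm{HL} \;=\; \sum_{g=1}^{G} \frac{(O_g - E_g)^2}{N_g\,\bar{\pi}_g\,(1-\bar{\pi}_g)},
  \qquad \bar{\pi}_g = \frac{E_g}{N_g},
\]

where O_g and E_g are the observed and expected numbers of positive outcomes in group g and N_g is the group size. With the usual G = 10 groups the statistic is compared against a χ² distribution with G − 2 = 8 degrees of freedom, whose 0.1-level critical value is 13.36, which is where the threshold mentioned in the caption comes from.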
Performance comparisons between three different models using breast cancer datasets.
Details of the training and test datasets in our first experiment.
ROC, AUC and its calculation.
The horizontal line shows the sorted probabilistic estimates, or "scores". In (a) and (b), we show the ROC and the AUC for a classifier built from an artificial dataset. In (c) and (d), we show concordant and discordant pairs, where concordant means that the estimate for a positive observation is higher than the estimate for a negative one. The AUC can be interpreted in the same way as the c-index: the proportion of concordant pairs. Each observation has a predicted score and an observed class label, i.e., the gold standard. The AUC is calculated as the fraction of concordant pairs out of the total number of instance pairs in which one element is positive and the other is negative; in formula form, an indicator function counts the concordant pairs.
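The concordant-pair reading of the AUC described above can be made concrete with a short sketch. This is an illustrative helper (the function name and the toy scores/labels are made up, not taken from the paper); ties between a positive and a negative score are counted as half-concordant, matching the usual rank-based c-index.

import numpy as np

def auc_concordance(scores, labels):
    # AUC as the fraction of concordant (positive, negative) score pairs.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive score with every negative score.
    diff = pos[:, None] - neg[None, :]
    concordant = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return concordant / (len(pos) * len(neg))

# Toy illustration: 3 of the 4 positive/negative pairs are concordant.
print(auc_concordance([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))  # 0.75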
Reliability diagrams and two types of HL-test.
In (a), (b), and (c), we visually illustrate the reliability diagram, and the groupings used for the HL-H test and the HL-C test, respectively.
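The difference between the two HL variants is only in how observations are grouped. The sketch below assumes the common conventions (HL-C groups by deciles of risk, i.e., equal-sized groups after sorting by predicted probability; HL-H groups by fixed probability intervals); the function name and these grouping choices are illustrative assumptions, not code or definitions taken from the paper.

import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(probs, labels, groups=10, strategy="C"):
    # strategy="C": deciles of risk (equal-sized groups sorted by predicted probability).
    # strategy="H": fixed probability intervals [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0].
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    if strategy == "C":
        order = np.argsort(probs)
        bins = np.array_split(order, groups)          # equal-sized groups
    else:
        edges = np.linspace(0.0, 1.0, groups + 1)
        idx = np.clip(np.digitize(probs, edges) - 1, 0, groups - 1)
        bins = [np.where(idx == g)[0] for g in range(groups)]
    stat = 0.0
    for b in bins:
        if len(b) == 0:
            continue
        observed = labels[b].sum()                    # observed positives in group
        expected = probs[b].sum()                     # expected positives in group
        n = len(b)
        pbar = expected / n
        if pbar in (0.0, 1.0):
            continue                                  # avoid division by zero
        stat += (observed - expected) ** 2 / (n * pbar * (1 - pbar))
    p_value = chi2.sf(stat, df=groups - 2)
    return stat, p_value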
Confusion matrix of a classifier based on the gold standard of class labels.
Performance comparison between using GSE2034 and GSE2990.