
    Non-Gaussian numerical errors versus mass hierarchy

    We probe the numerical errors made in renormalization group calculations by slightly varying the rescaling factor of the fields and rescaling back, in order to obtain the same (if there were no round-off errors) zero-momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distances carry a larger weight than those made at larger distances. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
    Comment: 26 pages, 7 figures, uses Revte
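    The rescale-and-rescale-back probe can be illustrated with a minimal stdlib-only sketch, using a single floating-point number as a stand-in for the computed susceptibility (the actual probe acts on the hierarchical-model calculation itself; the value 0.1 and the range of rescaling factors here are illustrative assumptions):

```python
import random
import statistics

def roundtrip_error(x, s):
    """Rescale x by s, then rescale back; return the round-off residual."""
    return (x * s) / s - x

random.seed(0)
x = 0.1  # stand-in for a computed zero-momentum 2-point function
errors = [roundtrip_error(x, random.uniform(0.5, 2.0)) for _ in range(10000)]

# Ideally every residual would be zero; in floating point many are not,
# and their distribution over the sample of rescaling factors can be
# compared against a Gaussian reference.
nonzero = [e for e in errors if e != 0.0]
mean_err = statistics.mean(errors)
std_err = statistics.stdev(errors)
```

In this toy version the residuals sit at a few units in the last place; the abstract's point is that in the full calculation the analogous sample is measurably non-Gaussian and its mean is displaced from zero.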

    High-Accuracy Calculations of the Critical Exponents of Dyson's Hierarchical Model

    We calculate the critical exponent gamma of Dyson's hierarchical model by direct fits of the zero-momentum two-point function, calculated with an Ising and a Landau-Ginzburg measure, and by linearization about the Koch-Wittwer fixed point. We find gamma = 1.299140730159 plus or minus 10^(-12). We extract three types of subleading corrections (in other words, a parametrization of the way the two-point function depends on the cutoff) from the fits and check the value of the first subleading exponent from the linearized procedure. We suggest that all the non-universal quantities entering the subleading corrections can be calculated systematically from the non-linear contributions about the fixed point, and that this procedure would provide an alternative way to introduce the bare parameters in a field theory model.
    Comment: 15 pages, 9 figures, uses revte
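    The "direct fit" idea can be sketched on synthetic data: near criticality the susceptibility behaves as chi ~ A*(beta_c - beta)^(-gamma), so gamma is (minus) the slope of ln(chi) against ln(beta_c - beta). The data below are generated from a known power law rather than from the hierarchical model, purely to show the extraction step:

```python
import math

def fit_gamma(deltas, chis):
    """Least-squares slope of ln(chi) vs ln(delta); gamma = -slope."""
    xs = [math.log(d) for d in deltas]
    ys = [math.log(c) for c in chis]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope

# Synthetic susceptibility chi = A * (beta_c - beta)^(-gamma), known gamma.
gamma_true = 1.299140730159
deltas = [10.0 ** (-k) for k in range(1, 6)]   # values of beta_c - beta
chis = [2.5 * d ** (-gamma_true) for d in deltas]
gamma_est = fit_gamma(deltas, chis)
```

On real data the fit must also model the subleading corrections the abstract describes, which is what limits the attainable precision.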

    A Guide to Precision Calculations in Dyson's Hierarchical Scalar Field Theory

    The goal of this article is to provide a practical method to calculate, in a scalar theory, accurate numerical values of the renormalized quantities which could be used to test any kind of approximate calculation. We use finite truncations of the Fourier transform of the recursion formula for Dyson's hierarchical model in the symmetric phase to perform high-precision calculations of the unsubtracted Green's functions at zero momentum in dimensions 3, 4, and 5. We use the well-known correspondence between statistical mechanics and field theory, in which the large-cutoff limit is obtained by letting beta reach a critical value beta_c (with up to 16 significant digits in our actual calculations). We show that the round-off errors on the magnetic susceptibility grow like (beta_c - beta)^{-1} near criticality. We show that the systematic errors (finite truncations and volume) can be controlled with an exponential precision and reduced to a level lower than the numerical errors. We justify the use of the truncation for calculations of the high-temperature expansion. We calculate the dimensionless renormalized coupling constant corresponding to the 4-point function and show that when beta -> beta_c, this quantity tends to a fixed value which can be determined accurately when D=3 (hyperscaling holds), and goes to zero like (Ln(beta_c - beta))^{-1} when D=4.
    Comment: Uses revtex with psfig, 31 pages including 15 figure
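    The (beta_c - beta)^{-1} growth of round-off errors has a simple mechanism: since chi diverges like C/(beta_c - beta), a fixed relative error on beta_c - beta becomes an absolute error on chi that is amplified by the same inverse power. A toy demonstration (the susceptibility, the value of eps, and the sample points are all illustrative assumptions, not the paper's recursion):

```python
def chi(delta):
    """Toy susceptibility chi = 1 / (beta_c - beta), with delta = beta_c - beta."""
    return 1.0 / delta

eps = 1e-10  # fixed relative error on delta, standing in for round-off

def abs_error(delta):
    """Absolute error in chi when delta carries a relative error eps."""
    return abs(chi(delta * (1.0 + eps)) - chi(delta))

# Moving 100x closer to criticality amplifies the absolute error ~100x,
# i.e. the error grows like (beta_c - beta)^{-1}.
e1 = abs_error(1e-2)
e2 = abs_error(1e-4)
ratio = e2 / e1
```

This is why the paper pushes beta toward beta_c with up to 16 significant digits: the closer the approach, the more digits round-off consumes.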

    Multiple landmark detection using multi-agent reinforcement learning

    The detection of anatomical landmarks is a vital step for medical image analysis and applications for diagnosis, interpretation and guidance. Manual annotation of landmarks is a tedious process that requires domain-specific expertise and introduces inter-observer variability. This paper proposes a new detection approach for multiple landmarks based on multi-agent reinforcement learning. Our hypothesis is that the position of all anatomical landmarks is interdependent and non-random within the human anatomy, thus finding one landmark can help to deduce the location of others. Using a Deep Q-Network (DQN) architecture we construct an environment and agent with implicit inter-communication such that we can accommodate K agents acting and learning simultaneously, while they attempt to detect K different landmarks. During training the agents collaborate by sharing their accumulated knowledge for a collective gain. We compare our approach with state-of-the-art architectures and achieve significantly better accuracy by reducing the detection error by 50%, while requiring fewer computational resources and time to train compared to the naïve approach of training K agents separately. Code and visualizations available: https://github.com/thanosvlo/MARL-for-Anatomical-Landmark-Detectio
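    The idea of K agents learning landmark search policies in parallel can be sketched far below the paper's scale: the stand-in below uses tabular Q-learning on a 1D grid instead of a DQN on 3D images, and gives each agent an independent Q-table, deliberately omitting the implicit weight-sharing communication that the paper's architecture provides. Grid size, rewards, and hyperparameters are all assumptions for the toy:

```python
import random

random.seed(0)
GRID = 10
landmarks = [2, 7]          # target positions for K = 2 agents
K = len(landmarks)
ACTIONS = [-1, +1]          # move left / move right

# One Q-table per agent: Q[agent][state][action_index]
Q = [[[0.0, 0.0] for _ in range(GRID)] for _ in range(K)]
alpha, discount, eps_greedy = 0.5, 0.9, 0.2

def step(pos, a, target):
    """Clamped move on the grid; +1 on reaching the landmark, small step cost."""
    new = max(0, min(GRID - 1, pos + a))
    done = (new == target)
    reward = 1.0 if done else -0.1
    return new, reward, done

for episode in range(300):
    pos = [random.randrange(GRID) for _ in range(K)]
    done = [False] * K
    for _ in range(2 * GRID):
        for k in range(K):          # all K agents act in the same environment loop
            if done[k]:
                continue
            s = pos[k]
            a_idx = (random.randrange(2) if random.random() < eps_greedy
                     else max((0, 1), key=lambda i: Q[k][s][i]))
            s2, r, done[k] = step(s, ACTIONS[a_idx], landmarks[k])
            Q[k][s][a_idx] += alpha * (r + discount * max(Q[k][s2]) - Q[k][s][a_idx])
            pos[k] = s2
        if all(done):
            break

# Greedy rollout from position 0: each agent should walk to its landmark.
final = []
for k in range(K):
    s = 0
    for _ in range(2 * GRID):
        if s == landmarks[k]:
            break
        a_idx = max((0, 1), key=lambda i: Q[k][s][i])
        s, _, _ = step(s, ACTIONS[a_idx], landmarks[k])
    final.append(s)
```

The paper's contribution is precisely what this sketch lacks: sharing accumulated knowledge between the K agents so that finding one landmark constrains the search for the others.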

    A persistent homology-based topological loss function for multi-class CNN segmentation of cardiac MRI

    With respect to spatial overlap, CNN-based segmentation of short-axis cardiovascular magnetic resonance (CMR) images has achieved a level of performance consistent with inter-observer variation. However, conventional training procedures frequently depend on pixel-wise loss functions, limiting optimisation with respect to extended or global features. As a result, inferred segmentations can lack spatial coherence, including spurious connected components or holes. Such results are implausible, violating the anticipated topology of image segments, which is frequently known a priori. Addressing this challenge, published work has employed persistent homology, constructing topological loss functions for the evaluation of image segments against an explicit prior. Building a richer description of segmentation topology by considering all possible labels and label pairs, we extend these losses to the task of multi-class segmentation. These topological priors allow us to resolve all topological errors in a subset of 150 examples from the ACDC short-axis CMR training data set, without sacrificing overlap performance.
    Comment: To be presented at the STACOM workshop at MICCAI 202
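    The simplest topological feature such a prior constrains is the number of connected components (the Betti number b0): a spurious island in a predicted mask is exactly a deviation of b0 from its anticipated value. The sketch below counts b0 by flood fill on a tiny binary mask; it is only an illustration of the kind of error persistent homology detects, not the differentiable persistent-homology loss the paper constructs:

```python
from collections import deque

def betti0(mask):
    """Count connected components (Betti number b0) of a binary mask
    under 4-connectivity, via breadth-first flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                components += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return components

# A segmentation with a spurious island; the anatomical prior expects b0 = 1.
pred = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
topological_error = betti0(pred) - 1   # deviation from the expected b0 = 1
```

A hard count like this is not differentiable; persistence-based losses recover a usable gradient by measuring how robustly each component exists across thresholds of the soft prediction.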

    Progress in finite temperature lattice QCD

    I review recent progress in finite temperature lattice calculations, including the determination of the transition temperature, equation of state, screening of static quarks and meson spectral functions.
    Comment: 8 pages, LaTeX, uses iopart.cls, invited talk presented at Strangeness in Quark Matter 2007 (SQM 2007), Levoca, Slovakia, June 24-29, 200

    An example of secondary fault activity along the North Anatolian Fault on the NE Marmara Sea Shelf, NW Turkey

    Seismic data on the NE Marmara Sea Shelf indicate that a NNE-SSW-oriented buried basin and ridge system exists on the sub-marine extension of the Paleozoic rocks delimited by the northern segment of the North Anatolian Fault (NS-NAF), while seismic and multi-beam bathymetric data imply that four NW-SE-oriented strike-slip faults also exist on the shelf area. Seismic data indicate that the NW-SE-oriented strike-slip faults are the youngest structures that dissect the basin-ridge system. One of the NW-SE-oriented faults (F1) is aligned with a rupture of the North Anatolian Fault (NAF) cutting the northern slope of the Cinarcik Basin. This observation indicates that these faults have characteristics similar to the NS-NAF along the Marmara Sea. Therefore, they may have a secondary relation to the NAF, since the principal deformation zone of the NAF follows the Marmara Trough in that region. The seismic energy recorded on these secondary faults is much less than that on the NAF in the Marmara Sea. These faults may, however, produce a large earthquake in the long term.

    Full nonperturbative QCD simulations with 2+1 flavors of improved staggered quarks

    Dramatic progress has been made over the last decade in the numerical study of quantum chromodynamics (QCD) through the use of improved formulations of QCD on the lattice (improved actions), the development of new algorithms, and the rapid increase in computing power available to lattice gauge theorists. In this article we describe simulations of full QCD using the improved staggered quark formalism, "asqtad" fermions. These simulations were carried out with two degenerate flavors of light quarks (up and down) and with one heavier flavor, the strange quark. Several light quark masses, down to about 3 times the physical light quark mass, and six lattice spacings have been used. These enable controlled continuum and chiral extrapolations of many low-energy QCD observables. We review the improved staggered formalism, emphasizing both advantages and drawbacks. In particular, we review the procedure for removing unwanted staggered species in the continuum limit. We then describe the asqtad lattice ensembles created by the MILC Collaboration. All MILC lattice ensembles are publicly available, and they have been used extensively by a number of lattice gauge theory groups. We review physics results obtained with them, and discuss the impact of these results on phenomenology. Topics include the heavy quark potential, the spectrum of light hadrons, quark masses, decay constants of light and heavy-light pseudoscalar mesons, semileptonic form factors, nucleon structure, scattering lengths, and more. We conclude with a brief look at highly promising future prospects.
    Comment: 157 pages; prepared for Reviews of Modern Physics. v2: some rewriting throughout; references update

    Automatically Segmenting the Left Atrium from Cardiac Images Using Successive 3D U-Nets and a Contour Loss

    Radiological imaging offers effective measurement of anatomy, which is useful in disease diagnosis and assessment. Previous studies have shown that left atrial wall remodeling can provide information to predict treatment outcome in atrial fibrillation. Nevertheless, the segmentation of the left atrial structures from medical images is still very time-consuming. Current advances in neural networks may help create automatic segmentation models that reduce the workload for clinicians. In this preliminary study, we propose an automated, two-stage pipeline of three-dimensional convolutional U-Nets for the challenging task of left atrial segmentation. Unlike previous two-dimensional image segmentation methods, we use 3D U-Nets to obtain the heart cavity directly in 3D. The dual 3D U-Net structure consists of a first U-Net to coarsely segment and locate the left atrium, and a second U-Net to accurately segment the left atrium at higher resolution. In addition, we introduce a Contour loss based on additional distance information to adjust the final segmentation. We randomly split the data into training datasets (80 subjects) and validation datasets (20 subjects) to train multiple models with different augmentation settings. Experiments show that the average Dice coefficients for the validation datasets are around 0.91-0.92, the sensitivity around 0.90-0.94, and the specificity around 0.99. Compared with the traditional Dice loss, models trained with the Contour loss in general offer a smaller Hausdorff distance with a similar Dice coefficient, and produce fewer connected components in their predictions. Finally, we integrate several trained models in an ensemble prediction to segment the testing datasets.
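    The Dice coefficient reported above is the standard overlap measure 2*|A intersect B| / (|A| + |B|) between a predicted and a ground-truth mask. A minimal sketch on flattened binary masks (the example masks are made up for illustration; in practice the masks are 3D volumes):

```python
def dice(pred, truth):
    """Dice coefficient 2*|A intersect B| / (|A| + |B|) for flat binary masks.
    Returns 1.0 for two empty masks by convention."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

truth = [0, 1, 1, 1, 1, 0, 0, 0]
pred  = [0, 1, 1, 1, 0, 0, 0, 0]
score = dice(pred, truth)   # 2*3 / (3 + 4) = 6/7
```

Dice rewards overlap only, which is why it can stay high while the prediction contains extra connected components; that gap is what the Contour loss and the Hausdorff distance comparison in the abstract address.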