
    Numerical Stochastic Perturbation Theory. Convergence and features of the stochastic process. Computations at fixed (Landau) Gauge

    Concerning Numerical Stochastic Perturbation Theory, we discuss the convergence of the stochastic process (the idea of the proof, features of the limit distribution, and the rate of convergence to equilibrium). We then discuss the expected fluctuations in the observables and give some ideas on how to reduce them. Finally, we show that the computation of quantities at fixed (Landau) gauge is now also possible.
    Comment: 3 pages. Contributed to the 17th International Symposium on Lattice Field Theory (LATTICE 99), Pisa, Italy, 29 Jun - 3 Jul 1999
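
    The abstract does not spell out the algorithm, but the basic NSPT mechanism can be illustrated on a toy model: expand the field in powers of the coupling, insert the expansion into the Langevin equation, and evolve the orders as a coupled hierarchy in which the Gaussian noise enters only at lowest order. The sketch below is an illustrative assumption (a single-site quartic "theory", not the lattice gauge setup of the paper); it measures the perturbative coefficients of <phi^2> as stochastic-time averages.

    ```python
    import numpy as np

    # Toy NSPT: single-site action S(phi) = phi^2/2 + g*phi^4/4, with the
    # field expanded as phi = phi0 + g*phi1 + g^2*phi2.  The Langevin drift
    # -dS/dphi = -phi - g*phi^3 is matched order by order in g; the noise
    # enters only at order g^0.  Results carry an O(dt) stepsize error.
    rng = np.random.default_rng(0)
    dt, n_steps = 0.01, 100_000
    phi = np.zeros(3)   # phi[k] is the O(g^k) coefficient of the field
    acc = np.zeros(3)   # accumulators for the O(g^k) coefficients of <phi^2>

    for step in range(n_steps):
        d0 = -phi[0]
        d1 = -phi[1] - phi[0] ** 3
        d2 = -phi[2] - 3.0 * phi[0] ** 2 * phi[1]
        phi += dt * np.array([d0, d1, d2])
        phi[0] += np.sqrt(2.0 * dt) * rng.standard_normal()  # noise at lowest order only
        acc += np.array([phi[0] ** 2,
                         2.0 * phi[0] * phi[1],
                         2.0 * phi[0] * phi[2] + phi[1] ** 2])

    # Exact expansion for comparison: <phi^2> = 1 - 3 g + O(g^2).
    print("<phi^2> coefficients ~", acc / n_steps)
    ```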

    Beta-function, Renormalons and the Mass Term from Perturbative Wilson Loops

    Several Wilson loops on several lattice sizes are computed in Perturbation Theory via a stochastic method. Applications include renormalons, the mass term in Heavy Quark Effective Theory and (possibly) the beta-function.
    Comment: 3 pages, 1 eps figure. Contributed to the 17th International Symposium on Lattice Field Theory (LATTICE 99), Pisa, Italy, 29 Jun - 3 Jul 1999
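
    For reference, the observable itself (independently of the stochastic perturbative machinery used in the paper) is simple to state: an R x T Wilson loop is the path-ordered product of link variables around an R x T rectangle. A minimal sketch for a two-dimensional compact U(1) lattice with links stored as phases follows; the function name and the random "configuration" are illustrative assumptions.

    ```python
    import numpy as np

    def wilson_loop(theta, x0, t0, R, T):
        """R x T Wilson loop on a 2D periodic U(1) lattice.

        theta[mu, x, t] is the phase of the link leaving site (x, t) in
        direction mu (0: spatial, 1: temporal).  For an abelian group the
        path-ordered product reduces to a sum of signed phases.
        """
        L = theta.shape[1]
        phase = 0.0
        for r in range(R):                       # bottom edge, +x direction
            phase += theta[0, (x0 + r) % L, t0 % L]
        for t in range(T):                       # right edge, +t direction
            phase += theta[1, (x0 + R) % L, (t0 + t) % L]
        for r in range(R):                       # top edge, -x direction
            phase -= theta[0, (x0 + r) % L, (t0 + T) % L]
        for t in range(T):                       # left edge, -t direction
            phase -= theta[1, x0 % L, (t0 + t) % L]
        return np.cos(phase)

    # Illustrative use on a random (unphysical) configuration.
    rng = np.random.default_rng(1)
    L = 8
    theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))
    W = np.mean([wilson_loop(theta, x, t, 2, 3) for x in range(L) for t in range(L)])
    print("volume-averaged 2x3 Wilson loop:", W)
    ```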

    New issues for Numerical Stochastic Perturbation Theory

    First attempts at applying Numerical Stochastic Perturbation Theory (NSPT) to the problem of pushing the computation of the SU(3) (SU(2)) perturbative beta function (in different schemes) one loop further are reviewed, and the relevance of such a computation is discussed. Other issues include the proposal of a different strategy for gauge-fixed NSPT computations in lattice QCD.
    Comment: 3 pages, LaTeX, LATTICE98 (algorithms)
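
    Gauge-fixed NSPT presupposes a lattice gauge-fixing step. As a generic illustration of that ingredient only (a compact U(1) toy with made-up names, not the SU(3) strategy proposed in this contribution), the sketch below drives a configuration towards lattice Landau gauge by relaxation sweeps that maximize the gauge functional, the sum over links of cos(theta); each local update is an exact maximization, so the functional is non-decreasing and converges to a local maximum (a Gribov copy).

    ```python
    import numpy as np

    def landau_gauge_sweep(theta):
        """One relaxation sweep towards lattice Landau gauge for compact U(1).

        theta[mu, x, t] are link phases on a 2D periodic lattice.  At each
        site the local gauge rotation alpha that maximizes the sum of
        cos(theta) over the links touching that site is applied exactly.
        """
        L = theta.shape[1]
        for x in range(L):
            for t in range(L):
                out = np.exp(1j * theta[0, x, t]) + np.exp(1j * theta[1, x, t])
                inc = np.exp(1j * theta[0, (x - 1) % L, t]) + np.exp(1j * theta[1, x, (t - 1) % L])
                alpha = -np.angle(out + np.conj(inc))
                theta[0, x, t] += alpha            # links leaving the site
                theta[1, x, t] += alpha
                theta[0, (x - 1) % L, t] -= alpha  # links arriving at the site
                theta[1, x, (t - 1) % L] -= alpha
        return theta

    def gauge_functional(theta):
        # Non-decreasing under the sweeps; its local maxima are Gribov copies.
        return np.mean(np.cos(theta))

    rng = np.random.default_rng(2)
    theta = rng.uniform(-np.pi, np.pi, size=(2, 8, 8))
    for sweep in range(50):
        theta = landau_gauge_sweep(theta)
    print("gauge functional after 50 sweeps:", gauge_functional(theta))
    ```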

    The role of center vortices in Gribov's confinement scenario

    The connection of Gribov's confinement scenario in Coulomb gauge with the center vortex picture of confinement is investigated. For this purpose we assume a vacuum wave functional which models the infrared properties of the theory and, in particular, shows strict confinement, i.e. an area law for the Wilson loop. We isolate the center vortex content of this wave functional by standard lattice methods and investigate its contribution to various static propagators of the Hamiltonian approach to Yang-Mills theory in Coulomb gauge. We find that the infrared properties of these quantities, in particular the infrared divergence of the ghost form factor, are dominated by center vortices.
    Comment: 18 pages, 5 figures
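
    The "standard lattice methods" for isolating the vortex content are, in essence, center gauge fixing followed by center projection: each SU(2) link is replaced by the center element closest to it, and P-vortices are located on plaquettes where the projected links multiply to -1. The sketch below is an illustration with random links (it assumes the maximal-center-gauge step has already been performed, and all names are made up); on random links roughly half of the plaquettes are pierced, whereas gauge-fixed physical configurations show a far lower vortex density.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    L = 8  # 2D slice for illustration; actual studies use 4D lattices

    # Random SU(2) links as unit quaternions (a0, a1, a2, a3); Tr U = 2*a0.
    # In a real calculation these would be maximal-center-gauge-fixed links.
    a = rng.standard_normal(size=(2, L, L, 4))
    a /= np.linalg.norm(a, axis=-1, keepdims=True)

    # Center projection: Z_mu(x) = sign(Tr U_mu(x)) in {+1, -1}.
    Z = np.sign(a[..., 0])

    def projected_plaquette(Z, x, t):
        # Plaquette of the projected (center-valued) links; -1 marks a P-vortex.
        return (Z[0, x, t] * Z[1, (x + 1) % L, t]
                * Z[0, x, (t + 1) % L] * Z[1, x, t])

    n_vortex = sum(projected_plaquette(Z, x, t) < 0 for x in range(L) for t in range(L))
    print("P-vortex plaquettes:", n_vortex, "of", L * L)
    ```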

    A holistic multimodal approach to the non-invasive analysis of watercolour paintings

    A holistic approach using non-invasive multimodal imaging and spectroscopic techniques to study the materials (pigments, drawing materials and paper) and painting techniques of watercolour paintings is presented. The non-invasive imaging and spectroscopic techniques include VIS-NIR reflectance spectroscopy and multispectral imaging, micro-Raman spectroscopy, X-ray fluorescence spectroscopy (XRF) and optical coherence tomography (OCT). The three spectroscopic techniques complement each other in pigment identification. Multispectral imaging (near-infrared bands), OCT and micro-Raman complement each other in the visualisation and identification of the drawing material. OCT probes the microstructure and light-scattering properties of the substrate, while XRF detects the elemental composition that indicates the sizing methods and the filler content. The multiple techniques were applied in a study of forty-six 19th-century Chinese export watercolours from the Victoria & Albert Museum (V&A) and the Royal Horticultural Society (RHS) to examine to what extent the non-invasive analysis techniques employed complement each other and how much useful information about the paintings can be extracted to address art conservation and history questions.

    Artificial Neural Networks: The Missing Link Between Curiosity and Accuracy

    Artificial Neural Networks, as the name itself suggests, are biologically inspired algorithms designed to simulate the way the human brain processes information. Like neurons, which consist of a cell nucleus that receives input from other neurons through a web of input terminals, an Artificial Neural Network includes hundreds of single units, artificial neurons or processing elements, connected by coefficients (weights) and organized in layers. The power of neural computation comes from connecting neurons in a network: an Artificial Neural Network can process many pieces of information at the same time. What is not fully understood is the most efficient way to train an Artificial Neural Network, and in particular what the best mini-batch size is for maximizing accuracy while minimizing training time. The idea developed in this study has its roots in the biological world that inspired the creation of Artificial Neural Networks in the first place. Humans have altered the face of the world through extraordinary adaptive and technological advances: those changes were made possible by our cognitive structure, particularly the ability to reason and to build causal models of external events. This dynamism is sustained by a high degree of curiosity. In the biological world, and especially in human beings, curiosity arises from the constant search for knowledge and information: the behaviours that support this information-sampling mechanism range from the very small (the initial mini-batch size, in our analogy) to the very elaborate and sustained (an increasing mini-batch size). The goal of this project is to train an Artificial Neural Network by increasing the mini-batch size dynamically and adaptively (using a validation set); our hypothesis is that this training method will be more efficient (in terms of time and cost) than the ones implemented so far.
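
    The abstract does not spell out the schedule, but the idea it describes, growing the mini-batch size during training whenever a validation set signals stalled progress, can be sketched as follows. This is a hypothetical NumPy logistic-regression example; the doubling rule, the patience parameter and all names are illustrative assumptions, not the authors' actual procedure.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def log_loss(w, X, y):
        p = sigmoid(X @ w)
        eps = 1e-12
        return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    # Synthetic binary-classification data and a train/validation split.
    rng = np.random.default_rng(4)
    n, d = 2000, 20
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = (sigmoid(X @ w_true) > rng.uniform(size=n)).astype(float)
    X_tr, y_tr, X_va, y_va = X[:1600], y[:1600], X[1600:], y[1600:]

    w = np.zeros(d)
    lr, batch, max_batch, patience = 0.1, 8, 512, 2
    best_val, stale = np.inf, 0

    for epoch in range(50):
        perm = rng.permutation(len(X_tr))
        for start in range(0, len(X_tr), batch):
            idx = perm[start:start + batch]
            p = sigmoid(X_tr[idx] @ w)
            grad = X_tr[idx].T @ (p - y_tr[idx]) / len(idx)   # logistic-loss gradient
            w -= lr * grad
        val = log_loss(w, X_va, y_va)
        if val < best_val - 1e-4:
            best_val, stale = val, 0
        else:
            stale += 1
            if stale >= patience and batch < max_batch:
                # "curiosity" analogue: sample more information per step
                batch = min(2 * batch, max_batch)
                stale = 0
        print(f"epoch {epoch:2d}  batch {batch:4d}  val loss {val:.4f}")
    ```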