Optimized Negative Dimensional Integration Method (NDIM) and multiloop Feynman diagram calculation
We present an improved form of the integration technique known as NDIM
(Negative Dimensional Integration Method), which is a powerful tool in the
analytical evaluation of Feynman diagrams. Using this technique we study a
theory in an arbitrary number of dimensions, considering generic topologies
with any number of loops and independent external momenta, and where the
propagator powers are arbitrary. The method transforms the Schwinger
parametric integral associated with the diagram into a multiple series expansion,
whose main characteristic is that the argument contains several Kronecker
deltas which appear naturally in the application of the method, and which we
call diagram presolution. The optimization we present here consists in a
procedure that minimizes the series multiplicity, through appropriate
factorizations in the multinomials that appear in the parametric integral, and
which maximizes the number of Kronecker deltas that are generated in the
process. The solutions are presented in terms of generalized hypergeometric
functions, obtained once the Kronecker deltas have been used in the series.
Although the technique is general, we apply it to cases in which there are two
or three different energy scales (masses or kinematic variables associated with
the external momenta), obtaining solutions in terms of a finite sum of
generalized hypergeometric series of one and two variables respectively, each of
them expressible as ratios between the different energy scales that
characterize the topology. The main result is a method capable of solving
Feynman integrals, expressing the solutions as hypergeometric series whose
multiplicity equals the number of energy scales present in the diagram.

Comment: 49 pages, 14 figures
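The way a Kronecker delta in the summand collapses a multiple series to one of lower multiplicity can be seen in a toy calculation. The sketch below is illustrative only (a simple binomial identity, not the authors' algorithm; all function names are made up):

```python
from math import comb

def double_series_with_delta(x, y, N):
    """Double series whose summand carries a Kronecker delta delta_{n+m, N}."""
    total = 0.0
    for n in range(N + 1):
        for m in range(N + 1):
            if n + m == N:                  # the Kronecker delta constraint
                total += comb(N, n) * x ** n * y ** m
    return total

def collapsed_series(x, y, N):
    """Using the delta to eliminate m leaves a single (multiplicity-one) sum."""
    return sum(comb(N, n) * x ** n * y ** (N - n) for n in range(N + 1))
```

Both functions evaluate the multinomial (x + y)^N; the point is that the delta removes one summation index entirely, which is the effect the optimization tries to maximize.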
Ensemble Transport Adaptive Importance Sampling
Markov chain Monte Carlo methods are a powerful and commonly used family of
numerical methods for sampling from complex probability distributions. As
applications of these methods increase in size and complexity, the need for
efficient methods increases. In this paper, we present a particle ensemble
algorithm. At each iteration, an importance sampling proposal distribution is
formed using an ensemble of particles. A stratified sample is taken from this
distribution and weighted under the posterior; a state-of-the-art ensemble
transport resampling method is then used to create an evenly weighted sample
ready for the next iteration. We demonstrate that this ensemble transport
adaptive importance sampling (ETAIS) method outperforms MCMC methods with
equivalent proposal distributions for low dimensional problems, and in fact
shows better-than-linear improvements in convergence rates with respect to the
number of ensemble members. We also introduce a new resampling strategy,
multinomial transformation (MT), which while not as accurate as the ensemble
transport resampler, is substantially less costly for large ensemble sizes, and
can then be used in conjunction with ETAIS for complex problems. We also focus
on how algorithmic parameters regarding the mixture proposal can be quickly
tuned to optimise performance. In particular, we demonstrate this methodology's
superior sampling for multimodal problems, such as those arising from inference
for mixture models, and for problems with expensive likelihoods requiring the
solution of a differential equation, for which speed-ups of orders of magnitude
are demonstrated. Likelihood evaluations of the ensemble could be computed in a
distributed manner, suggesting that this methodology is a good candidate for
parallel Bayesian computations.
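One iteration of the scheme described above can be sketched for a one-dimensional bimodal target. This is a minimal illustration, not the paper's algorithm: plain multinomial resampling stands in for the ensemble transport step, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_logpdf(x):
    # Unnormalized bimodal "posterior": mixture of N(-2, 0.5^2) and N(2, 0.5^2).
    return np.logaddexp(-0.5 * ((x + 2.0) / 0.5) ** 2,
                        -0.5 * ((x - 2.0) / 0.5) ** 2)

def ensemble_is_step(particles, sigma=1.0):
    """One toy iteration: ensemble mixture proposal -> importance weights
    -> multinomial resampling (in place of ensemble transport)."""
    M = len(particles)
    # Propose by jittering uniformly chosen ensemble members.
    idx = rng.integers(0, M, size=M)
    proposals = particles[idx] + sigma * rng.standard_normal(M)
    # Density of the equal-weight Gaussian-mixture proposal.
    diffs = proposals[:, None] - particles[None, :]
    q = np.mean(np.exp(-0.5 * (diffs / sigma) ** 2), axis=1) / (sigma * np.sqrt(2 * np.pi))
    # Importance weights under the unnormalized target.
    logw = target_logpdf(proposals) - np.log(q)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resampling returns an evenly weighted ensemble for the next iteration.
    return proposals[rng.choice(M, size=M, p=w)]

particles = rng.standard_normal(400)
for _ in range(15):
    particles = ensemble_is_step(particles)
```

After a few iterations the ensemble concentrates on the two modes at x = ±2, so the mean absolute value of the particles is close to 2.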
Topically Driven Neural Language Model
Language models are typically applied at the sentence level, without access
to the broader document context. We present a neural language model that
incorporates document context in the form of a topic model-like architecture,
thus providing a succinct representation of the broader document context
outside of the current sentence. Experiments over a range of datasets
demonstrate that our model outperforms a pure sentence-based model in terms of
language model perplexity, and leads to topics that are potentially more
coherent than those produced by a standard LDA topic model. Our model also has
the ability to generate related sentences for a topic, providing another way to
interpret topics.

Comment: 11 pages, Proceedings of the 55th Annual Meeting of the Association
for Computational Linguistics (ACL 2017), to appear.
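For readers unfamiliar with the evaluation metric, perplexity is the exponential of the average negative log-likelihood per token. A toy add-alpha unigram model (a stand-in for illustration, not the paper's neural model) makes the definition concrete:

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, alpha=1.0):
    """Perplexity of an add-alpha smoothed unigram model: the exponential
    of the average negative log-likelihood per test token."""
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total, V = len(train_tokens), len(vocab)
    log_prob = 0.0
    for tok in test_tokens:
        log_prob += math.log((counts[tok] + alpha) / (total + alpha * V))
    return math.exp(-log_prob / len(test_tokens))
```

A model that assigns every test token probability 1/2 has perplexity exactly 2; lower perplexity means the model is less "surprised" by the test data.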
Frustrated spin Heisenberg magnet on a square-lattice bilayer: High-order study of its quantum critical behavior
The zero-temperature phase diagram of a frustrated Heisenberg spin
model on a stacked square-lattice
bilayer is studied using the coupled cluster method implemented to very high
orders. Both nearest-neighbor (NN) and frustrating next-nearest-neighbor
Heisenberg exchange interactions are included in each layer, and the two layers
are coupled via a NN interlayer Heisenberg exchange interaction.
The magnetic order parameter (viz.,
the sublattice magnetization) is calculated directly in the thermodynamic
(infinite-lattice) limit for the two cases when both layers have
antiferromagnetic ordering of either the N\'{e}el or the striped kind, and with
the layers coupled so that NN spins between them are aligned either parallel
or antiparallel to one another. Calculations
are performed at successive orders in a well-defined sequence of approximations,
which exactly preserve both the Goldstone linked cluster theorem and the
Hellmann-Feynman theorem. The sole approximation made is to
extrapolate such sequences of finite-order results for the order parameter to
the exact, infinite-order limit. By thus locating the points where the
extrapolated order parameter vanishes, we calculate
the full phase boundaries of the two collinear antiferromagnetic (AFM) phases
in the relevant half-plane of the coupling parameters. In particular, we
provide an accurate estimate for the position of the quantum triple point (QTP)
in one region of the phase diagram. We also show that there is no counterpart
of such a QTP in the complementary region, where the two quasiclassical phase
boundaries show instead an "avoided crossing" behavior, such that the entire
region that contains the nonclassical paramagnetic phases is singly connected.
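The extrapolation step described above can be sketched generically: fit finite-order estimates of the order parameter against inverse powers of the truncation order and read off the constant term. The inverse-order ansatz below is a generic illustration, not necessarily the paper's exact extrapolation scheme:

```python
import numpy as np

def extrapolate_to_infinite_order(orders, values):
    """Least-squares fit of M(n) ~ a0 + a1/n + a2/n^2; the constant a0
    estimates the n -> infinity limit of the order parameter.
    (Generic inverse-order ansatz, purely for illustration.)"""
    n = np.asarray(orders, dtype=float)
    A = np.vstack([np.ones_like(n), 1.0 / n, 1.0 / n ** 2]).T
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return coeffs[0]
```

On synthetic data generated from that very ansatz, the fit recovers the infinite-order limit essentially exactly; real finite-order sequences require a careful choice of fitting form.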
Resumming QCD perturbation series
Since the advent of Quantum Field Theory (QFT) in the late 1940s, perturbation theory has become one of the most developed and successful means of extracting phenomenologically useful information from a QFT. In the ever-increasing enthusiasm for new phenomenological predictions, the mechanics of perturbation theory itself have often taken a back seat. It is in this light that this thesis aims to investigate some of the more fundamental properties of perturbation theory.

The benefits of resumming perturbative series are highlighted by the explicit calculation of the three-jet rate in e+e- annihilation, resummed to all orders in leading and next-to-leading large logarithms. It is found that the result can be expressed simply in terms of exponentials and error functions. In general it is found that perturbative expansions in QED and QCD diverge at large orders. The nature of these divergences has been explored and found to come from two sources. The first are instanton singularities, which correspond to the combinatoric factors involved in counting Feynman diagrams at large orders. The second are renormalon singularities, which are closely linked to non-perturbative effects through the operator product expansion (OPE).

By using Borel transform techniques, the singularity structure in the Borel plane for the QCD vacuum polarization is studied in detail. The renormalon singularity structure is as expected from OPE considerations. These results, and existing exact large-N_f results for the QCD Adler D-function and deep inelastic scattering sum rules, are used to resum to all orders the portion of the QCD perturbative coefficients which is leading in b, the first coefficient of the QCD beta-function. This part is expected asymptotically to dominate the coefficients in a large-N_f expansion. Resummed results are also obtained for the e+e- R-ratio and the tau-lepton decay ratio. The renormalization scheme dependence of these resummed results is discussed in some detail.
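The Borel-transform machinery mentioned above can be illustrated on the textbook divergent series sum_n (-1)^n n! x^n, whose Borel transform (dividing the n-th coefficient by n!) is the geometric series 1/(1 + x t). This is a standard toy example, not a calculation from the thesis:

```python
import math
import numpy as np

def partial_sums(x, n_terms):
    """Partial sums of the divergent series sum_n (-1)^n n! x^n."""
    s, out = 0.0, []
    for n in range(n_terms):
        s += (-1) ** n * math.factorial(n) * x ** n
        out.append(s)
    return out

def borel_sum(x, t_max=60.0, num=200_000):
    """Borel resummation: the Borel transform of the series is 1/(1 + x t),
    so the resummed value is  integral_0^infinity exp(-t) / (1 + x t) dt."""
    t = np.linspace(0.0, t_max, num)
    f = np.exp(-t) / (1.0 + x * t)
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
```

For small x the optimally truncated partial sums track the Borel sum closely, while the late partial sums blow up factorially, which is exactly the large-order behavior the thesis analyzes. Here the Borel transform has no singularity on the positive axis; renormalon singularities would obstruct the integral.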
Multiresolution analysis of electronic structure: semicardinal and wavelet bases
This article reviews recent developments in multiresolution analysis which
make it a powerful tool for the systematic treatment of the multiple
length-scales inherent in the electronic structure of matter. Although the
article focuses on electronic structure, the advances described are useful for
non-linear problems in the physical sciences in general. The new language and
notations introduced are well-suited for both formal manipulations and the
development of computer software using higher-level languages such as C++. The
discussion is self-contained, and all needed algorithms are specified
explicitly in terms of simple operators and illustrated with straightforward
diagrams which show the flow of data. Among the reviewed developments is the
construction of exact multiresolution representations from extremely limited
samples of physical fields in real space. This new and profound result is the
critical advance in finally allowing systematic, all electron calculations to
compete in efficiency with state-of-the-art electronic structure calculations
which depend for their celerity upon freezing the core electronic degrees of
freedom. This review presents the theory of wavelets from a physical
perspective, provides a unified and self-contained treatment of non-linear
couplings and physical operators and introduces a modern framework for
effective single-particle theories of quantum mechanics.

Comment: A "how-to from-scratch" book presently in press at Reviews of Modern
Physics: 88 pages, 31 figures, 5 tables, 88 references. Significantly
IMPROVED version, including (a) new diagrams illustrating algorithms; (b)
careful proof-reading of equations and text; (c) expanded bibliography; (d)
cosmetic changes including lists of figures and tables and a more reasonable
font. Latest changes (Dec. 11, 1998): a more descriptive abstract, and minor
lexicographical changes.
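The simplest instance of the multiresolution splitting the review describes is a one-level Haar wavelet transform, which separates a sampled field into coarse averages and fine-scale details with exact reconstruction. The sketch below is a minimal illustration, not the semicardinal bases of the review:

```python
import numpy as np

def haar_step(signal):
    """One level of the (orthonormal) Haar wavelet transform: split a
    sampled field into coarse averages and fine-scale details."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    coarse = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exact reconstruction of the signal from the two channels."""
    out = np.empty(2 * len(coarse))
    out[0::2] = (coarse + detail) / np.sqrt(2.0)
    out[1::2] = (coarse - detail) / np.sqrt(2.0)
    return out
```

Iterating `haar_step` on the coarse channel yields the full multiresolution hierarchy; smoother wavelet families replace the pairwise averages with longer filters.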
NITPICK: peak identification for mass spectrometry data
Background: The reliable extraction of features from mass spectra is a fundamental step in the automated analysis of proteomic mass spectrometry (MS) experiments.

Results: This contribution proposes a sparse template regression approach to peak picking called NITPICK. NITPICK is a Non-greedy, Iterative Template-based peak PICKer that deconvolves complex overlapping isotope distributions in multicomponent mass spectra. NITPICK is based on fractional averagine, a novel extension to Senko's well-known averagine model, and on a modified version of sparse, non-negative least angle regression, for which a suitable, statistically motivated early stopping criterion has been derived. The strength of NITPICK is the deconvolution of overlapping mixture mass spectra.

Conclusion: Extensive comparative evaluation has been carried out, and results are provided for simulated and real-world data sets. NITPICK outperforms pepex, to date the only alternative, publicly available, non-greedy feature extraction routine. NITPICK is available as a software package for the R programming language and can be downloaded from http://hci.iwr.uni-heidelberg.de/mip/proteomics/.
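The template-regression idea can be sketched in a few lines: build isotope-pattern templates and recover their amplitudes from a mixed spectrum by least squares. Everything below is an illustrative simplification (Gaussian peak trains with made-up intensity ratios, ordinary least squares); NITPICK itself uses fractional averagine and sparse non-negative least angle regression:

```python
import numpy as np

def isotope_template(mz, center, width=0.3, spacing=1.0,
                     ratios=(1.0, 0.6, 0.3, 0.1)):
    """Toy isotope pattern: a train of Gaussians with decaying intensities
    (a crude stand-in for an averagine-style template)."""
    t = np.zeros_like(mz)
    for k, r in enumerate(ratios):
        t += r * np.exp(-0.5 * ((mz - center - k * spacing) / width) ** 2)
    return t

mz = np.linspace(98.0, 108.0, 2000)
# Two overlapping isotope patterns half an m/z unit apart.
A = np.column_stack([isotope_template(mz, 100.0),
                     isotope_template(mz, 100.5)])
true_amps = np.array([2.0, 0.8])
spectrum = A @ true_amps                       # noiseless mixed spectrum
amps, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

In this noiseless toy setting the amplitudes are recovered exactly; real spectra require the non-negativity constraint and the early-stopping criterion the abstract describes.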
Uncertainty Quantification for Electromagnetic Analysis via Efficient Collocation Methods.
Electromagnetic (EM) devices and systems are often fraught with uncertainty in their geometry, configuration, and excitation. These uncertainties (often termed "random variables") strongly and nonlinearly impact voltages and currents on mission-critical circuits or receivers (often termed "observables"). To ensure the functionality of such circuits or receivers, this dependency should be statistically characterized.
In this thesis, efficient collocation methods for uncertainty quantification in EM analysis are presented. First, a Stroud-based stochastic collocation method is introduced to statistically characterize electromagnetic compatibility and interference (EMC/EMI) phenomena on electrically large and complex platforms. Second, a multi-element probabilistic collocation (ME-PC) method suitable for characterizing rapidly varying and/or discontinuous observables is presented. Its applications to the statistical characterization of EMC/EMI phenomena on electrically large and complex platforms and of transverse magnetic wave propagation in complex mine environments are demonstrated. In addition, the ME-PC method is applied to the statistical characterization of EM wave propagation in complex mine environments with the aid of a novel fast multipole method and fast Fourier transform-accelerated surface integral equation solver, the first-ever full-wave solver capable of characterizing EM wave propagation in mine tunnels hundreds of wavelengths long. Finally, an iterative high-dimensional model representation technique is proposed to statistically characterize EMC/EMI observables that involve a large number of random variables. The application of this technique to the genetic-algorithm-based optimization of EM devices is presented as well.

PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/100086/1/acyucel_1.pd
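The core collocation idea, evaluating the observable only at deterministic quadrature nodes of the random variable and combining the results into statistical moments, can be sketched for a single Gaussian random input. This is a one-dimensional toy, not the Stroud or ME-PC schemes of the thesis:

```python
import numpy as np

def collocation_moments(observable, n_nodes=10):
    """Mean and variance of observable(X), X ~ N(0, 1), via Gauss-Hermite
    collocation: the (possibly expensive) observable is evaluated only at
    deterministic quadrature nodes, never sampled randomly."""
    # Probabilists' Hermite nodes/weights; weights sum to sqrt(2*pi).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure
    vals = observable(nodes)
    mean = float(np.sum(weights * vals))
    second = float(np.sum(weights * vals ** 2))
    return mean, second - mean ** 2

# For observable(x) = x^2 the exact answers are E = 1, Var = 2.
m, v = collocation_moments(lambda x: x ** 2)
```

Ten nodes reproduce these low-order moments to machine precision because the observable is polynomial; rapidly varying or discontinuous observables are exactly the cases that motivate the multi-element refinement in the thesis.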