Collective fields in the functional renormalization group for fermions, Ward identities, and the exact solution of the Tomonaga-Luttinger model
We develop a new formulation of the functional renormalization group (RG) for
interacting fermions. Our approach unifies the purely fermionic formulation
based on the Grassmannian functional integral, which has been used in recent
years by many authors, with the traditional Wilsonian RG approach to quantum
systems pioneered by Hertz [Phys. Rev. B 14, 1165 (1976)], which attempts to
describe the infrared behavior of the system in terms of an effective bosonic
theory associated with the soft modes of the underlying fermionic problem. In
our approach, we decouple the interaction by means of a suitable
Hubbard-Stratonovich transformation (following the Hertz approach), but do not
eliminate the fermions; instead, we derive an exact hierarchy of RG flow
equations for the irreducible vertices of the resulting coupled field theory
involving both fermionic and bosonic fields. The freedom of choosing a momentum
transfer cutoff for the bosonic soft modes in addition to the usual band cutoff
for the fermions opens the possibility of new RG schemes. In particular, we
show how the exact solution of the Tomonaga-Luttinger model emerges from the
functional RG if one works with a momentum transfer cutoff. In that case the Ward
identities associated with local particle conservation at each Fermi point
remain valid at every stage of the RG flow and provide a solution of the infinite
hierarchy of flow equations for the irreducible vertices. The RG flow equation
for the irreducible single-particle self-energy can then be closed and reduced
to a linear integro-differential equation whose solution yields
the result familiar from bosonization. We suggest new truncation schemes of the
exact hierarchy of flow equations, which might be useful even outside the weak
coupling regime. Comment: 27 pages, 15 figures; published version, some typos corrected
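As a schematic illustration of the Hubbard-Stratonovich decoupling referred to in this abstract, consider a density-density interaction f_q; the identity below is the standard Gaussian one, with normalization and sign conventions chosen purely for illustration (they are not specified in the abstract):
\[
\exp\Big(-\tfrac{1}{2}\sum_{q} f_{q}\,\rho_{-q}\,\rho_{q}\Big)
\;\propto\;
\int \mathcal{D}\phi\;
\exp\Big(-\tfrac{1}{2}\sum_{q} f_{q}^{-1}\,\phi_{-q}\,\phi_{q}
\;+\; i\sum_{q} \phi_{-q}\,\rho_{q}\Big),
\]
where \rho_q is the fermionic density. The mixed Bose-Fermi theory obtained in this way is the one whose irreducible vertices flow under the combined band and momentum-transfer cutoffs described above.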
Dynamical Dark Matter: II. An Explicit Model
In a recent paper (arXiv:1106.4546), we introduced "dynamical dark matter," a
new framework for dark-matter physics, and outlined its underlying theoretical
principles and phenomenological possibilities. Unlike most traditional
approaches to the dark-matter problem, which hypothesize the existence of one or
more stable dark-matter particles, our dynamical dark-matter framework is
characterized by the fact that the requirement of stability is replaced by a
delicate balancing between cosmological abundances and lifetimes across a vast
ensemble of individual dark-matter components. This setup therefore
collectively produces a time-varying cosmological dark-matter abundance, and
the different dark-matter components can interact and decay throughout the
current epoch. While the goal of our previous paper was to introduce the broad
theoretical aspects of this framework, the purpose of the current paper is to
provide an explicit model of dynamical dark matter and demonstrate that this
model satisfies all collider, astrophysical, and cosmological constraints. The
results of this paper therefore constitute an "existence proof" of the
phenomenological viability of our overall dynamical dark-matter framework, and
demonstrate that dynamical dark matter is indeed a viable alternative to the
traditional paradigm of dark-matter physics. Dynamical dark matter must
therefore be considered alongside other approaches to the dark-matter problem,
particularly in scenarios involving large extra dimensions or string theory in
which there exist large numbers of particles which are neutral under
Standard-Model symmetries. Comment: 45 pages, LaTeX, 10 figures. Replaced to match published version
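A schematic way to picture the balancing described in this abstract (the notation is ours, not the paper's) is to write the total dark-matter abundance as a sum over the ensemble, with each component decaying on its own timescale,
\[
\Omega_{\rm tot}(t)\;=\;\sum_{i}\Omega_{i}(t),
\qquad
\Omega_{i}(t)\;\sim\;\Omega_{i}(t_{*})\,e^{-\Gamma_{i}\,(t-t_{*})},
\]
so that a time-varying Omega_tot remains observationally acceptable only if components with larger decay widths Gamma_i carry correspondingly smaller abundances Omega_i. This sketch ignores Hubble expansion and any regeneration of components, which the explicit model of the paper treats in full.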
About one long-range contribution to K+ -> pi+ l+ l- decays
We investigate the mechanism of K+ -> pi+ l+ l- (l= e, mu) decays in which a
virtual photon is emitted either from the incoming K+ or the outgoing pi+. We
point out some inconsistencies with and between two previous calculations,
discuss the possible experimental inputs, and estimate the branching fractions.
This mechanism alone falls short of the existing experimental data by more
than an order of magnitude. However, it may reveal itself through its interference with
the leading long-range mechanism dominated by the a_1^+ and rho^0 mesons. Comment: 12 pages, RevTeX, epsf.sty, 2 embedded figures
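The role of interference mentioned at the end of this abstract can be made explicit schematically (the amplitude labels are ours):
\[
\big|A_{\rm lead}+A_{\gamma}\big|^{2}
\;=\;\big|A_{\rm lead}\big|^{2}
\;+\;2\,\mathrm{Re}\!\big(A_{\rm lead}^{*}A_{\gamma}\big)
\;+\;\big|A_{\gamma}\big|^{2},
\]
where A_gamma denotes the photon-emission mechanism studied here and A_lead the dominant long-range amplitude. Since |A_gamma|^2 alone is more than an order of magnitude below the measured rate, the mechanism's main observable effect would be expected to come from the interference term, which is suppressed by only one power of the small amplitude ratio.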
Classification of the Nuclear Multifragmentation Phase Transition
Using a recently proposed classification scheme for phase transitions in
finite systems [Phys. Rev. Lett. 84, 3511 (2000)], we show that within the
statistical standard model of nuclear multifragmentation, the predicted phase
transition is of first order. Comment: 5 pages, 4 eps figures, accepted for publication in Phys. Rev. C (in press)
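For orientation, the classification scheme cited above is, as we understand it, based on the distribution of zeros of the canonical partition function in the complex inverse-temperature plane; the sketch below is our paraphrase and not part of the original abstract:
\[
Z(\mathcal{B})=0 \quad\text{at}\quad \mathcal{B}_{k}=\beta_{k}+i\,\tau_{k},
\qquad
\phi(\tau)\;\sim\;\tau^{\alpha}\ \ (\tau\to 0),
\]
where phi(tau) is the density of zeros approaching the real axis. In that scheme a first-order transition corresponds to alpha close to zero, i.e. zeros reaching the real axis with essentially uniform density, which is the criterion applied here to the statistical multifragmentation model.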
Predictions of Heat Transfer and Flow Circulations in Differentially Heated Liquid Columns With Applications to Low-Pressure Evaporators
Numerical computations are presented for the temperature and velocity distributions of two differentially heated liquid columns with liquor depths of 0.1 m and 2.215 m, respectively. One-dimensional (1D) thermal resistance networks show that the temperatures in the liquid columns vary considerably with position for the pure-conduction, free-convection, and nucleate-boiling cases. In these networks the solutions are not sensitive to the type of condensing and boiling heat transfer coefficients used. However, the networks are limited and give no indication of the velocity distributions occurring within the liquor. To address this limitation, two-dimensional (2D) axisymmetric and three-dimensional (3D) computational fluid dynamics (CFD) simulations of the test rigs have been performed. The axisymmetric conditions of the 2D simulations produce unphysical solutions; however, the full 3D simulations do not exhibit these behaviors. There is reasonable agreement between the predicted temperatures, heat fluxes, and heat transfer coefficients for the boiling case of the 1D thermal resistance networks and the CFD simulations.
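A 1D thermal resistance network of the kind used in this study can be sketched in a few lines of Python; every coefficient and dimension below is an illustrative assumption, not a value taken from the paper.

```python
# Minimal sketch of a 1D thermal resistance network for a heated liquid column.
# All coefficients below are illustrative placeholders, not values from the study.

def series_heat_flux(t_hot, t_cold, resistances):
    """Heat flux (W/m^2) through area-specific resistances connected in series."""
    r_total = sum(resistances)
    return (t_hot - t_cold) / r_total

# Assumed film and wall properties.
h_condensing = 8000.0      # condensing-side heat transfer coefficient, W/(m^2*K)
h_boiling = 3000.0         # boiling/free-convection coefficient, W/(m^2*K)
wall_thickness = 0.003     # m
wall_conductivity = 16.0   # W/(m*K), e.g. stainless steel

# Series resistances per unit area: condensing film, wall conduction, liquid-side film.
resistances = [1.0 / h_condensing,
               wall_thickness / wall_conductivity,
               1.0 / h_boiling]

q = series_heat_flux(t_hot=110.0, t_cold=60.0, resistances=resistances)
print(f"heat flux ~ {q:.0f} W/m^2")
```

One design note: in a series network, whichever resistance dominates controls the flux, which is one way the reported insensitivity to the particular condensing and boiling coefficients could arise.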
Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review
Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual
participant data. For continuous outcomes, especially those with naturally skewed distributions, summary
information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal,
we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis.
Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with
missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane
Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited
reference searching and emailed topic experts to identify recent methodological developments. Details recorded
included the description of the method, the information required to implement the method, any underlying
assumptions and whether the method could be readily applied in standard statistical software. We provided a
summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios.
Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in
addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis
level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical
approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following
screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and
three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when
replacing a missing SD the approximation using the range minimised loss of precision and generally performed better
than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile
performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials
gave superior results.
Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median)
reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or
variability summary statistics within meta-analyses.
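The range-based and quartile-based estimators referred to in the Results can be illustrated as follows; the specific formulas (range/4 for a missing SD, IQR/1.35 as an alternative, and the average of the three quartiles for a missing mean) are common rough approximations used here as stand-ins, since the abstract does not spell out the exact estimators assessed.

```python
# Sketch of simple summary-statistic imputations for meta-analysis inputs.
# The formulas are standard rough approximations (assumed here), not
# necessarily the exact estimators evaluated in the review.

def sd_from_range(minimum, maximum):
    """Approximate SD from the sample range (roughly range/4 for moderate n)."""
    return (maximum - minimum) / 4.0

def sd_from_iqr(q1, q3):
    """Approximate SD from the interquartile range, assuming near-normal data."""
    return (q3 - q1) / 1.35

def mean_from_quartiles(q1, median, q3):
    """Approximate mean as the average of lower quartile, median, and upper quartile."""
    return (q1 + median + q3) / 3.0

# Example: a trial reporting only median 12.0, quartiles 8.0 and 18.0, range 2.0-35.0.
print(mean_from_quartiles(8.0, 12.0, 18.0))  # ~12.7
print(sd_from_iqr(8.0, 18.0))                # ~7.4
print(sd_from_range(2.0, 35.0))              # ~8.3
```

In practice, trials included via such imputations would be examined in sensitivity analyses, consistent with the finding above that omitting trials sometimes gives superior results.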