Evidence accumulation in a Laplace domain decision space
Evidence accumulation models of simple decision-making have long assumed that
the brain estimates a scalar decision variable corresponding to the
log-likelihood ratio of the two alternatives. Typical neural implementations of
this algorithmic cognitive model assume that large numbers of neurons are each
noisy exemplars of the scalar decision variable. Here we propose a neural
implementation of the diffusion model in which many neurons construct and
maintain the Laplace transform of the distance to each of the decision bounds.
As in classic findings from brain regions including LIP, the firing rate of
neurons coding for the Laplace transform of net accumulated evidence grows to a
bound during random dot motion tasks. However, rather than noisy exemplars of a
single mean value, this approach makes the novel prediction that firing rates
grow to the bound exponentially and that, across neurons, there should be a
distribution of different growth rates. A second set of neurons records an
approximate inversion of the Laplace transform; these neurons directly estimate
net accumulated
evidence. In analogy to time cells and place cells observed in the hippocampus
and other brain regions, the neurons in this second set have receptive fields
along a "decision axis." This finding is consistent with recent findings from
rodent recordings. This theoretical approach places simple evidence
accumulation models in the same mathematical language as recent proposals for
representing time and space in cognitive models for memory.
Comment: Revised for CB
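A minimal numerical sketch of the two populations described above (all parameters are hypothetical, and the inversion shown is a simple Post-style approximation chosen for illustration, not the paper's construction): one set of units carries Laplace coefficients exp(-s*d) of the distance d to a bound, and a second set has receptive fields tiling a decision axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diffusion-model parameters, for illustration only.
drift, noise, dt, n_steps = 0.2, 1.0, 0.01, 400
bound = 2.0                                   # upper decision bound

# Net accumulated evidence x(t) from a standard drift-diffusion process.
x = np.cumsum(drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_steps))
d = np.clip(bound - x, 1e-6, None)            # distance to the upper bound

# Population 1: unit i fires at exp(-s_i * d), a Laplace coefficient of the
# distance d. As evidence accumulates and d shrinks, every rate grows toward
# the bound exponentially, but with a different rate s_i per unit -- the
# distribution of growth rates predicted in the abstract.
s = np.linspace(0.5, 4.0, 8)                  # Laplace "rates" across units
laplace_pop = np.exp(-np.outer(s, d))         # shape (n_units, n_steps)

# Population 2: an approximate (Post-style, order k) inversion. A unit with
# rate s_i responds as (s_i * d)^k * exp(-s_i * d), a receptive field over d
# peaked near d = k / s_i, so units tile positions along a "decision axis",
# analogous to place cells and time cells.
k = 4
decision_axis_pop = (np.outer(s, d) ** k) * np.exp(-np.outer(s, d))
```

The receptive-field form (s d)^k e^{-s d} peaks at d = k/s, so varying s across units spreads the tuning curves along the decision axis.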
Logic analysis of complex systems by characterizing failure phenomena to achieve diagnosis and fault-isolation
A recent result shows that, for a certain class of systems, the elements of such a system together with their interdependencies constitute a mathematical structure, a partially ordered set, called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result applies.
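As a toy illustration of the loop-free idea (the graph, names, and diagnosis rule below are invented for illustration and are not the paper's formalism): model the elements as a directed acyclic graph whose reachability relation is the partial order, let a test signal at an element read bad when that element or anything upstream of it has failed, and isolate candidate failures as the bad elements with no bad ancestor.

```python
# Toy loop-free logic model: a DAG of elements, edges read as "feeds into".
# All names and the diagnosis rule are illustrative only.
upstream = {
    "A": [],
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
    "E": ["D"],
}

def ancestors(node):
    """All elements upstream of `node` in the partial order."""
    seen, stack = set(), list(upstream[node])
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(upstream[u])
    return seen

def candidate_failures(bad_signals):
    """Elements whose test signal is bad but whose ancestors all test good:
    under the propagation assumption, these must themselves have failed."""
    bad = set(bad_signals)
    return {e for e in bad if not (ancestors(e) & bad)}

# A failure at A makes every downstream test bad, and A is isolated:
print(candidate_failures({"A", "B", "C", "D", "E"}))   # -> {'A'}
```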
Sparse Estimation with the Swept Approximated Message-Passing Algorithm
Approximate Message Passing (AMP) has been shown to be a superior method for
inference problems, such as the recovery of signals from sets of noisy,
lower-dimensionality measurements, both in terms of reconstruction accuracy and
in computational efficiency. However, AMP suffers from serious convergence
issues in contexts that do not exactly match its assumptions. We propose a new
approach to stabilizing AMP in these contexts by applying AMP updates to
individual coefficients rather than in parallel. Our results show that this
change to the AMP iteration can provide theoretically expected, but hitherto
unobtainable, performance for problems on which the standard AMP iteration
diverges. Additionally, we find that the computational cost of this swept
coefficient update scheme is not unduly burdensome, allowing it to be applied
efficiently to signals of large dimensionality.
Comment: 11 pages, 3 figures, implementation available at
https://github.com/eric-tramel/SwAMP-Dem
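The sweep idea, updating one coefficient at a time against a running residual rather than updating all coefficients in parallel, can be illustrated with plain coordinate descent for the LASSO. This is a stand-in for intuition only: it is not the paper's AMP update equations (those also propagate variance terms), and all problem sizes below are arbitrary.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding operator, the scalar denoiser for an L1 penalty."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def swept_lasso(A, y, lam, n_sweeps=50):
    """Sequential ("swept") coordinate updates for
    min_x 0.5*||y - A x||^2 + lam*||x||_1. Each coefficient is updated
    immediately against the current residual, instead of all coefficients
    being updated in parallel."""
    n = A.shape[1]
    x = np.zeros(n)
    r = y - A @ x                       # running residual
    col_sq = np.sum(A * A, axis=0)      # per-column squared norms
    for _ in range(n_sweeps):
        for j in range(n):
            r += A[:, j] * x[j]         # remove coefficient j's contribution
            x[j] = soft(A[:, j] @ r / col_sq[j], lam / col_sq[j])
            r -= A[:, j] * x[j]         # restore with the updated value
    return x

# Toy problem: recover a sparse vector from noisy, undersampled measurements.
rng = np.random.default_rng(1)
n, m, k = 100, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = swept_lasso(A, y, lam=0.02)
```

Because each update sees the residual left by all earlier updates in the sweep, the iteration is more robust than a fully parallel update at the cost of losing easy vectorization.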
Hard Matching for Boosted Tops at Two Loops
Cross sections for top quarks provide very interesting physics opportunities,
being both sensitive to new physics and also perturbatively tractable due to
the large top quark mass. Rigorous factorization theorems for top cross
sections can be derived in several kinematic scenarios, including the boosted
regime in the peak region that we consider here. In the context of the
corresponding factorization theorem for e+e- collisions we extract the last
missing ingredient that is needed to evaluate the cross section differential in
the jet-mass at two-loop order, namely the matching coefficient at the scale
mu ~ m. Our extraction also yields the final ingredients needed to
carry out logarithmic resummation at next-to-next-to-leading logarithmic order
(or N^3LL if we ignore the missing 4-loop cusp anomalous dimension). This
coefficient exhibits an amplitude level rapidity logarithm starting at
O(alpha_s^2) due to virtual top quark loops, which we treat using
rapidity renormalization group (RG) evolution. Interestingly, this rapidity RG
evolution appears in the matching coefficient between two effective theories
around the heavy quark mass scale mu ~ m.
Comment: 35 pages, 3 figures, v2: added extraction of 3-loop anomalous
dimension, journal version
Infrared Renormalization Group Flow for Heavy Quark Masses
A short-distance heavy quark mass depends on two parameters, the
renormalization scale mu controlling the absorption of ultraviolet fluctuations
into the mass, and a scale R controlling the absorption of infrared
fluctuations. 1/R can be thought of as the radius for perturbative corrections
that build up the mass beyond its point-like definition in the pole scheme.
Treating R as a variable gives a renormalization group equation. We argue that
the sign of this anomalous dimension is universal: increasing R to add IR modes
decreases m(R). The flow improves the stability of conversions between mass
schemes, allowing us to avoid large logs and the renormalon. The flow in R can
be used to study IR renormalons without using bubble chains, and we use it to
determine the coefficient of the LambdaQCD renormalon ambiguity of the pole
mass with a convergent sum-rule.
Comment: 4 pages, 2 figures, Added explicit result for the top MSbar mass with
uncertainties
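Schematically, the R-evolution equation referred to above takes the following form (conventions for signs and the expansion parameter vary between treatments; this is a sketch consistent with the abstract's statements, not a quotation of the paper):

```latex
R \, \frac{d}{dR} \, m(R) \;=\; - R \, \gamma^{R}\big[\alpha_s(R)\big],
\qquad
\gamma^{R}[\alpha_s] \;=\; \sum_{n \ge 0} \gamma^{R}_{n}
  \left(\frac{\alpha_s}{4\pi}\right)^{\!n+1}.
```

A positive anomalous dimension here encodes the universal sign discussed in the abstract: enlarging R absorbs additional infrared modes into the mass and therefore lowers m(R).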
Factorization Approach for Top Mass Reconstruction at High Energies
Using effective theories for jets and heavy quarks it is possible to prove
that the double differential top-antitop invariant mass distribution for the
process e+e- -> t tbar in the resonance region, for c.m. energies much
larger than the top mass, can be factorized into perturbatively computable hard
coefficients and jet functions and a non-perturbative soft function. For
invariant mass prescriptions based on hemispheres defined with respect to the
thrust axis the soft function can be extracted from massless jet event shape
distributions. This approach in principle allows for top mass determinations
without hadronization uncertainties using the reconstruction method, and makes
it possible to quantify the top mass scheme dependence of the measured top
quark mass value.
Comment: Talk given at 2007 International Linear Collider Workshop (LCWS07 and
ILC07), Hamburg, Germany, 30 May - 3 Jun 2007, 7 pages, 4 figures, title
modified
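Schematically, the factorization described above has the following structure (a sketch only: normalizations, function arguments, and the precise convolution variables are simplified, and the symbols are generic stand-ins for the hard, jet, and soft functions named in the abstract):

```latex
\frac{d\sigma}{dM_t^2 \, dM_{\bar t}^2}
\;=\; \sigma_0 \, H_Q(Q,\mu) \, H_m\!\left(m,\tfrac{Q}{m},\mu\right)
\int d\ell^{+} d\ell^{-} \,
J\!\left(\hat s_t - \tfrac{Q\ell^{+}}{m},\mu\right)
J\!\left(\hat s_{\bar t} - \tfrac{Q\ell^{-}}{m},\mu\right)
S(\ell^{+},\ell^{-},\mu),
```

with hard matching coefficients H, perturbative jet functions J for the top and antitop hemispheres, and the non-perturbative soft function S which, for thrust-axis hemisphere mass prescriptions, can be extracted from massless jet event shape distributions as stated above.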