Euler Integration of Gaussian Random Fields and Persistent Homology
In this paper we extend the notion of the Euler characteristic to persistent
homology and give the relationship between the Euler integral of a function and
the Euler characteristic of the function's persistent homology. We then proceed
to compute the expected Euler integral of a Gaussian random field using the
Gaussian kinematic formula and obtain a simple closed form expression. This
results in the first explicitly computable mean of a quantitative descriptor
for the persistent homology of a Gaussian random field.
Comment: 21 pages, 1 figure
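As orientation for the two quantities being related, one standard formulation (a sketch only; the paper's precise conventions may differ) takes, for a tame non-negative function f on a compact space X,

\[
\int_X f \,\lfloor d\chi \rfloor \;=\; \int_0^{\infty} \chi\bigl(\{x \in X : f(x) > s\}\bigr)\, ds,
\qquad
\chi_{\mathrm{pers}}(f) \;=\; \sum_{k \ge 0} (-1)^k \sum_{(b,d) \in \mathrm{Dgm}_k(f)} (d - b),
\]

i.e. the Euler integral of f and the alternating sum of barcode lengths over the persistence diagrams Dgm_k(f). Relating these two expressions, and then averaging the Euler integral of a Gaussian random field via the Gaussian kinematic formula, is the shape of the argument summarized above.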
Comparing D-Bar and Common Regularization-Based Methods for Electrical Impedance Tomography
Objective: To compare D-bar difference reconstruction with regularized linear reconstruction in electrical impedance tomography. Approach: A standard regularized linear approach using a Laplacian penalty and the GREIT method were used for comparison with the D-bar difference images. Simulated data were generated using a circular phantom with small objects, as well as a 'Pac-Man' shaped conductivity target. An L-curve method was used for parameter selection in both the D-bar and the regularized methods. Main results: We found that the D-bar method had a more position-independent point spread function, was less sensitive to errors in electrode position, and behaved differently with respect to additive noise than the regularized methods. Significance: The results open a novel pathway for comparison between traditional regularized algorithms and the D-bar algorithm.
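For readers unfamiliar with the regularized linear baseline, the following is a minimal sketch of a one-step Tikhonov-style reconstruction with a Laplacian smoothing penalty; the forward matrix, grid, noise level, and lambda values are hypothetical stand-ins, not the phantoms, GREIT, or D-bar implementations compared in the paper. The printed (residual norm, solution norm) pairs are the two axes an L-curve plot would use to pick the regularization parameter.

    import numpy as np

    def laplacian_penalty_reconstruction(A, b, lam):
        """Solve min_x ||A x - b||^2 + lam^2 * ||L x||^2 via the normal equations."""
        n = A.shape[1]
        # Discrete 1-D Laplacian used as the smoothing penalty operator L.
        L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 30))                 # stand-in linearized sensitivity matrix
    x_true = np.zeros(30)
    x_true[10:15] = 1.0                               # small conductivity perturbation
    b = A @ x_true + 0.01 * rng.standard_normal(40)   # noisy difference data

    # Sweep lambda; the corner of the resulting L-curve would select the parameter.
    for lam in (1e-3, 1e-1, 1e1):
        x = laplacian_penalty_reconstruction(A, b, lam)
        print(lam, np.linalg.norm(A @ x - b), np.linalg.norm(x))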
On Measurement of Helicity Parameters in Top Quark Decay
To enable an evaluation of future measurements of the helicity parameters for
"t --> W b" decay with regard to "T_FS violation", this paper considers the
effects of an additional pure-imaginary coupling, (i g/2 Lambda) or (i g),
associated with a specific, single additional Lorentz structure, i = S, P, S +
P, ... Sizable "T_FS violation" signatures can occur for low effective-mass
scales (< 320 GeV), but in most cases they can be more simply excluded by a 10%
precision measurement of the probabilities P(W_L) and P(b_L). Signatures for
excluding the presence of "T_FS violation" associated with the two dynamical
phase-type ambiguities are investigated.
Comment: 15 pages, 1 table, 7 figures, no macros
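For orientation on the baseline such probabilities are measured against: in the Standard Model at tree level, neglecting the b-quark mass, the fraction of longitudinal W bosons in t --> W b decay is (a standard textbook result, not taken from this paper)

\[
P(W_L) \;\simeq\; \frac{m_t^2}{m_t^2 + 2 m_W^2} \;\approx\; 0.70
\qquad (m_t \approx 173\ \mathrm{GeV},\ m_W \approx 80.4\ \mathrm{GeV}),
\]

so a 10% precision measurement of P(W_L), together with P(b_L), is the kind of benchmark against which the anomalous-coupling signatures above would be tested.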
A mathematical optimisation model of a New Zealand dairy farm: The integrated dairy enterprise (IDEA) framework
Optimisation models are a key tool for the analysis of emerging policies, price sets, and technologies within grazing systems. A detailed nonlinear optimisation model of a New Zealand dairy farming system is described. The framework is notable for its rich portrayal of pasture and cow biology, which adds substantial descriptive power to standard approaches. Key processes incorporated in the model include: (1) pasture growth and digestibility that differ with residual pasture mass and rotation length, (2) pasture utilisation that varies by stocking rate, and (3) different levels of intake regulation. Model output is shown to closely match data from a more detailed simulation model (deviations between 0 and 5 per cent) and survey data (deviations between 1 and 11 per cent), providing confidence in its predictive capacity. Use of the model is demonstrated in an empirical application investigating the relative profitability of production systems involving different amounts of imported feed under price variation. The case study indicates superior profitability associated with the use of a moderate level of imported supplement, with Operating Profit (NZ$ ha^-1) of 934, 926, 1186, 1314, and 1093 when imported feed makes up 0, 5, 10, 20 and 30 per cent of the diet, respectively. Stocking rate and milk production per cow increase by 35 and 29 per cent, respectively, as the proportion of imported feed increases from 0 to 30 per cent of the diet. Pasture utilisation increases with stocking rate. Accordingly, pasture eaten and nitrogen fertiliser application increase by 20 and 213 per cent, respectively, as the proportion of imported feed increases from 0 to 30 per cent of the diet.
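To make the kind of trade-off the case study quantifies concrete, here is a deliberately toy profit-maximisation sketch in the spirit of, but far simpler than, the IDEA framework: every coefficient below is hypothetical, chosen only to illustrate how an optimiser trades imported feed against stocking rate, and nothing here reproduces the model equations or the reported results.

    import numpy as np
    from scipy.optimize import minimize

    def negative_operating_profit(x):
        """x = (stocking rate in cows/ha, imported feed share of diet); toy coefficients only."""
        stocking_rate, feed_share = x
        # Hypothetical responses: pasture eaten rises with stocking rate, per-cow
        # milk rises with imported feed share, both with diminishing returns.
        pasture_eaten = 12.0 * (1.0 - np.exp(-0.35 * stocking_rate))            # t DM/ha
        milk_per_cow = 350.0 * (1.0 + 0.8 * feed_share - 0.6 * feed_share**2)   # kg MS/cow
        revenue = 6.5 * stocking_rate * milk_per_cow                            # $/ha
        feed_cost = 1400.0 * feed_share * stocking_rate                         # $/ha
        other_costs = 3000.0 + 120.0 * pasture_eaten                            # $/ha
        return -(revenue - feed_cost - other_costs)

    res = minimize(negative_operating_profit, x0=[2.5, 0.1],
                   bounds=[(1.5, 4.5), (0.0, 0.3)])
    print("stocking rate, feed share:", res.x, "operating profit $/ha:", -res.fun)

With these made-up coefficients the optimiser settles on a moderate, nonzero feed share, which is the qualitative pattern the case study reports; the actual IDEA framework resolves the same trade-off with detailed pasture and cow biology rather than two scalar response curves.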
High transverse momentum suppression and surface effects in Cu+Cu and Au+Au collisions within the PQM model
We study parton suppression effects in heavy-ion collisions within the Parton
Quenching Model (PQM). After a brief summary of the main features of the model,
we present comparisons of calculations of the nuclear modification factor and
the away-side suppression factor with data from Au+Au and Cu+Cu collisions at
200 GeV. We discuss properties of light hadron probes and their sensitivity to
the medium density within the PQM Monte Carlo framework.
Comment: 6 pages, 8 figures. To appear in the proceedings of Hot Quarks 2006:
Workshop for Young Scientists on the Physics of Ultrarelativistic
Nucleus-Nucleus Collisions, Villasimius, Italy, 15-20 May 2006
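For context, the nuclear modification factor referred to above is conventionally defined as the ratio of the particle yield in nucleus-nucleus collisions to the binary-collision-scaled yield in proton-proton collisions (the standard definition, not spelled out in the abstract):

\[
R_{AA}(p_T) \;=\; \frac{dN_{AA}/dp_T}{\langle N_{\mathrm{coll}} \rangle \; dN_{pp}/dp_T},
\]

with R_{AA} well below 1 at high p_T signalling suppression of hard partons by energy loss in the medium.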
Normalization of Collisional Decoherence: Squaring the Delta Function, and an Independent Cross-Check
We show that when the Hornberger--Sipe calculation of collisional decoherence
is carried out with the squared delta function taken as a delta of energy,
instead of a delta of the absolute value of momentum, following a method
introduced by Diósi, the corrected formula for the decoherence rate is simply
obtained. The results of Hornberger and Sipe and of Diósi are shown to be in
agreement. As an independent cross-check, we calculate the mean squared
coordinate diffusion of a hard sphere implied by the corrected decoherence
master equation, and show that it agrees precisely with the same quantity as
calculated by a classical Brownian motion analysis.
Comment: TeX, 14 pages. 7/30/06: revisions to introduction, and references
added. 9/29/06: further minor revisions and references added
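The "squaring the delta function" step named in the title is, in outline, the standard golden-rule regularization (sketched here under the usual conventions; the paper's own normalization may differ): over a long interaction time T,

\[
\bigl[\delta(E_f - E_i)\bigr]^2 \;=\; \delta(0)\,\delta(E_f - E_i) \;\simeq\; \frac{T}{2\pi\hbar}\,\delta(E_f - E_i),
\]

so squared transition amplitudes grow linearly in T and yield a well-defined rate; the abstract's point is that this regularization should be applied to a delta function of energy rather than of the absolute value of momentum.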
Schwinger Algebra for Quaternionic Quantum Mechanics
It is shown that the measurement algebra of Schwinger, a characterization of
the properties of Pauli measurements of the first and second kinds, forming the
foundation of his formulation of quantum mechanics over the complex field, has
a quaternionic generalization. In this quaternionic measurement algebra some of
the notions of quaternionic quantum mechanics are clarified. The conditions
imposed on the form of the corresponding quantum field theory are studied, and
the quantum fields are constructed. It is shown that the resulting quantum
fields coincide with the fermion or boson annihilation-creation operators
obtained by Razon and Horwitz in the limit in which the number of particles in
physical states goes to infinity.
Comment: 20 pages, Plain TeX
Breaking quantum linearity: constraints from human perception and cosmological implications
Resolving the tension between quantum superpositions and the uniqueness of
the classical world is a major open problem. One possibility, which is
extensively explored both theoretically and experimentally, is that quantum
linearity breaks above a given scale. Theoretically, this possibility is
predicted by collapse models. They provide quantitative information on where
violations of the superposition principle become manifest. Here we show that
the lower bound on the collapse parameter lambda, coming from the analysis of
the human visual process, is ~ 7 +/- 2 orders of magnitude stronger than the
original bound, in agreement with more recent analysis. This implies that the
collapse becomes effective with systems containing ~ 10^4 - 10^5 nucleons, and
thus falls within the range of testability with present-day technology. We also
compare the spectrum of the collapsing field with those of known cosmological
fields, showing that a typical cosmological random field can yield an efficient
wave function collapse.
Comment: 13 pages, LaTeX, 3 figures
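As a rough indication of why a bound on lambda translates into a nucleon-number threshold: in mass-proportional collapse models of the CSL type, the collapse rate for a superposition involving N nucleons held within the model's correlation length is amplified roughly as

\[
\lambda_{\mathrm{eff}} \;\sim\; N^2 \lambda, \qquad \tau_{\mathrm{collapse}} \;\sim\; \lambda_{\mathrm{eff}}^{-1},
\]

so a lower bound on lambda that is several orders of magnitude stronger pulls the nucleon number at which the collapse time drops below a perceptual timescale down towards the ~ 10^4 - 10^5 range quoted above. This is a heuristic scaling under standard CSL assumptions, not a result stated in the abstract.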
Probability distribution of the maximum of a smooth temporal signal
We present an approximate calculation for the distribution of the maximum of
a smooth stationary temporal signal X(t). As an application, we compute the
persistence exponent associated with the probability that the process remains
below a non-zero level M. When X(t) is a Gaussian process, our results are
expressed explicitly in terms of the two-time correlation function,
f(t) = <X(0)X(t)>.
Comment: Final version (1 major typo corrected; better introduction). Accepted
in Phys. Rev. Lett.
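As a quick way to build intuition for, or numerically check, such a result, one can sample a smooth stationary Gaussian process on a fine grid and histogram its maximum; the Gaussian correlation function below is an arbitrary illustrative choice, not the one treated in the paper.

    import numpy as np

    def sample_maxima(T=10.0, n=200, n_samples=2000, seed=0):
        """Monte Carlo samples of max_{0 <= t <= T} X(t) for a smooth stationary
        Gaussian process with correlation f(t) = exp(-t^2 / 2) (arbitrary choice)."""
        rng = np.random.default_rng(seed)
        t = np.linspace(0.0, T, n)
        cov = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)   # two-time correlation matrix
        L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))        # small jitter for stability
        paths = L @ rng.standard_normal((n, n_samples))       # each column is a sample path
        return paths.max(axis=0)

    maxima = sample_maxima()
    # Empirical probability that the process stays below the level M = 1 over [0, T].
    print("P(max < 1) ~", np.mean(maxima < 1.0))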
Rain estimation from satellites: An examination of the Griffith-Woodley technique
The Griffith-Woodley Technique (GWT) is an approach to estimating precipitation using infrared observations of clouds from geosynchronous satellites. It is examined in three ways: an analysis of the terms in the GWT equations; a case study of infrared imagery portraying convective development over Florida; and the comparison of a simplified equation set and the resultant rain map to results using the GWT. The objective is to determine the dominant factors in the calculation of GWT rain estimates. Analysis of a single day's convection over Florida produced a number of significant insights into various terms in the GWT rainfall equations. Because clouds are defined by a threshold isotherm, the majority of clouds on this day did not go through an idealized life cycle before losing their identity through merger, splitting, etc. As a result, 85% of the clouds had a defined lifetime of 0.5 or 1 h. For these clouds the terms in the GWT which depend on cloud life history become essentially constant. The empirically derived ratio of radar echo area to cloud area takes a single value (0.02) for 43% of the sample, while the rainrate term is 20.7 mm h^-1 for 61% of the sample. For 55% of the sampled clouds the temperature weighting term is identically 1.0. Cloud area itself is highly correlated (r = 0.88) with GWT-computed rain volume. An important discriminating parameter in the GWT is the temperature defining the coldest 10% of cloud area. The analysis further shows that the two dominant parameters in rainfall estimation are the existence of cold cloud and the duration of cloud over a point.
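To make the structure of such an estimate concrete, here is a toy rain-volume calculation in the spirit of the simplified equation set discussed above; the cloud areas and time step are invented, the 0.02 echo-area ratio and 20.7 mm h^-1 rainrate are the typical values quoted in the text, and the life-history and temperature-weighting terms of the full GWT are simply set to 1 here (as the analysis found them to be for most clouds).

    # Toy GWT-style rain-volume estimate for one cloud tracked over several
    # half-hourly IR images. Not the operational Griffith-Woodley equations.
    ECHO_AREA_RATIO = 0.02      # radar-echo area / cloud area (typical value from the text)
    RAINRATE_MM_PER_H = 20.7    # rainrate term (typical value from the text)
    TEMP_WEIGHT = 1.0           # temperature weighting, ~1.0 for most sampled clouds

    cloud_areas_km2 = [800.0, 1500.0, 1200.0]   # hypothetical cold-cloud areas per image
    dt_hours = 0.5                              # image interval

    rain_volume_mm_km2 = sum(
        ECHO_AREA_RATIO * area * RAINRATE_MM_PER_H * TEMP_WEIGHT * dt_hours
        for area in cloud_areas_km2
    )
    # 1 mm of rain over 1 km^2 is 1000 m^3 of water.
    print("rain volume:", rain_volume_mm_km2 * 1000.0, "m^3")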