Eigenvalue Separation in Some Random Matrix Models
The eigenvalue density for members of the Gaussian orthogonal and unitary
ensembles follows the Wigner semi-circle law. If the Gaussian entries are all
shifted by a constant amount c/√(2N), where N is the size of the matrix, in
the large N limit a single eigenvalue will separate from the support of the
Wigner semi-circle provided c > 1. In this study, using an asymptotic analysis
of the secular equation for the eigenvalue condition, we compare this effect to
analogous effects occurring in general variance Wishart matrices and matrices
from the shifted mean chiral ensemble. We undertake an analogous comparative
study of eigenvalue separation properties when the size of the matrices is
fixed and c goes to infinity, together with higher-rank analogues of this setting. This
is done using exact expressions for eigenvalue probability densities in terms
of generalized hypergeometric functions, and using the interpretation of the
latter as a Green function in the Dyson Brownian motion model. For the shifted
mean Gaussian unitary ensemble and its analogues an alternative approach is to
use exact expressions for the correlation functions in terms of classical
orthogonal polynomials and associated multiple generalizations. By using these
exact expressions to compute and plot the eigenvalue density, illustrations of
the various eigenvalue separation effects are obtained.
Comment: 25 pages, 9 figures included
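The separation effect described above is easy to check numerically. A minimal sketch, assuming one common normalization in which the semicircle bulk occupies [-2, 2]; the function name and the c + 1/c outlier prediction are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_eig_shifted_goe(c, N=400, trials=4):
    """Mean largest eigenvalue of a GOE-like matrix whose Gaussian
    entries are all shifted by c/sqrt(2N), scaled so the bulk follows
    the Wigner semi-circle on [-2, 2]."""
    tops = []
    for _ in range(trials):
        G = rng.standard_normal((N, N)) + c / np.sqrt(2 * N)
        H = (G + G.T) / np.sqrt(2 * N)   # symmetrize; entry variance 1/N
        tops.append(np.linalg.eigvalsh(H)[-1])
    return float(np.mean(tops))
```

For c < 1 the largest eigenvalue sticks to the bulk edge at 2; for c > 1 it separates, landing near c + 1/c (about 3.33 for c = 3).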
Expansion Potential for Irrigation within the Mississippi Delta Region
17.6 million acres, or 73 percent, of the Mississippi Delta Region are currently cropland and possess the physical characteristics of slope, texture and soil type recommended for irrigation. The economic feasibility of expanding irrigation by flood, furrow and center pivot methods was examined under 24 scenarios representing two sets of crop prices, yield levels, production costs, opportunity costs and six crop rotations. Irrigation was economically feasible for 56 to 100 percent of the cropland across all scenarios. Approximately 88 percent of the cropland can be economically irrigated with flood or furrow in its present form, 8 percent yields the highest net returns if furrow irrigated following land forming, and 4 percent can be economically irrigated only with center pivot systems.
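The headline figures imply a few back-of-the-envelope quantities. A sketch; the 24.1-million-acre regional total is derived here, not stated in the abstract:

```python
# Acreage implied by the abstract's percentages (illustrative arithmetic).
suitable_acres = 17.6e6            # cropland suited to irrigation (73% of the region)
region_acres = suitable_acres / 0.73   # implied regional total, ~24.1M acres

# Shares of the suitable cropland by most economical irrigation method.
shares = {
    "flood or furrow, present form": 0.88,
    "furrow after land forming": 0.08,
    "center pivot only": 0.04,
}
acres_by_method = {k: share * suitable_acres for k, share in shares.items()}
```

The three shares sum to 100 percent of the suitable cropland, consistent with the abstract's breakdown.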
Weighing Neutrinos with Galaxy Cluster Surveys
Large future galaxy cluster surveys, combined with cosmic microwave
background observations, can achieve a high sensitivity to the masses of
cosmologically important neutrinos. We show that a weak lensing selected sample
of ~100,000 clusters could tighten the current upper bound on the sum of masses
of neutrino species by an order of magnitude, to a level of 0.03 eV. Since this
statistical sensitivity is below the best existing lower limit on the mass of
at least one neutrino species, a future detection is likely, provided that
systematic errors can be controlled to a similar level.
Comment: 4 pages, 1 figure, version accepted for publication in PR
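The claim that the 0.03 eV sensitivity sits below a guaranteed lower limit follows from the oscillation mass splittings. A sketch using approximate present-day splitting values; the numbers are illustrative assumptions, not taken from the paper:

```python
import math

# Approximate neutrino mass-squared splittings from oscillation data (eV^2).
dm2_solar = 7.5e-5
dm2_atm = 2.5e-3

# Minimal normal-hierarchy spectrum: lightest state taken massless.
m1 = 0.0
m2 = math.sqrt(dm2_solar)          # ~0.009 eV
m3 = math.sqrt(dm2_atm)            # ~0.05 eV
min_sum = m1 + m2 + m3             # guaranteed lower bound on the summed mass

survey_sensitivity = 0.03          # eV, the forecast quoted above
detection_likely = min_sum > survey_sensitivity
```

Since the guaranteed minimum of the sum (~0.06 eV) exceeds the forecast sensitivity, a detection rather than an upper limit is the expected outcome.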
Construction of the Soudan 2 detector
Progress in the construction of the Soudan 2 nucleon decay detector, which is being built at the Soudan iron mine in Minnesota, is reported. The expected event rate and the characteristics of low-energy neutrino events, muon events, multiple-muon events, and other cosmic-ray phenomena are discussed.
Modular networks emerge from multiconstraint optimization
Modular structure is ubiquitous among complex networks. We note that most
such systems are subject to multiple structural and functional constraints,
e.g., minimizing the average path length and the total number of links, while
maximizing robustness against perturbations in node activity. We show that the
optimal networks satisfying these three constraints are characterized by the
existence of multiple subnetworks (modules) sparsely connected to each other.
In addition, these modules have distinct hubs, resulting in an overall
heterogeneous degree distribution.
Comment: 5 pages, 4 figures; Published version
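As a toy illustration of the trade-off described above, a minimal sketch in pure Python; the two-clique topology and single bridging link are illustrative assumptions, not the paper's optimized networks:

```python
from collections import deque
from itertools import combinations

def avg_path_length(adj):
    """Mean shortest-path length over ordered node pairs (BFS; assumes connected)."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Two 5-node modules (cliques) joined by one link between hub nodes 0 and 5:
# few links, short paths, and a heterogeneous degree sequence with clear hubs.
adj = {i: set() for i in range(10)}
for module in (range(5), range(5, 10)):
    for a, b in combinations(module, 2):
        adj[a].add(b)
        adj[b].add(a)
adj[0].add(5)
adj[5].add(0)

n_links = sum(len(nbrs) for nbrs in adj.values()) // 2
apl = avg_path_length(adj)
hub_degree = max(len(nbrs) for nbrs in adj.values())
```

With only 21 links this 10-node network keeps the mean path length under 2 while the bridging nodes acquire a higher degree than their module-mates, the hub signature noted above.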
Some Like It Hot: Linking Diffuse X-ray Luminosity, Baryonic Mass, and Star Formation Rate in Compact Groups of Galaxies
We present an analysis of the diffuse X-ray emission in 19 compact groups of
galaxies (CGs) observed with Chandra. The hottest, most X-ray luminous CGs
agree well with the galaxy cluster X-ray scaling relations, even in CGs
where the hot gas is associated with only the
brightest galaxy. Using Spitzer photometry, we compute stellar masses and
classify HCGs 19, 22, 40, and 42 and RSCGs 32, 44, and 86 as fossil groups
using a new definition for fossil systems that includes a broader range of
masses. We find that CGs with large total stellar and HI masses are often
X-ray luminous, while lower-mass CGs only sometimes exhibit
faint, localized X-ray emission. Additionally, we compare the diffuse X-ray
luminosity against both the total UV and 24 μm star formation rates of each
CG and optical colors of the most massive galaxy in each of the CGs. The most
X-ray luminous CGs have the lowest star formation rates, likely because there
is no cold gas available for star formation, either because the majority of the
baryons in these CGs are in stars or the X-ray halo, or due to gas stripping
from the galaxies in CGs with hot halos. Finally, the optical colors that trace
recent star formation histories of the most massive group galaxies do not
correlate with the X-ray luminosities of the CGs, indicating that perhaps the
current state of the X-ray halos is independent of the recent history of
stellar mass assembly in the most massive galaxies.
Comment: 20 pages, 7 figures, accepted for publication in Ap
Rethinking the patient: using Burden of Treatment Theory to understand the changing dynamics of illness
Background: In this article we outline Burden of Treatment Theory, a new model of the relationship between sick people, their social networks, and healthcare services. Health services face the challenge of growing populations with long-term and life-limiting conditions; they have responded by delegating to sick people and their networks routine work aimed at managing symptoms and at retarding, and sometimes preventing, disease progression. This is the new proactive work of patienthood for which patients are increasingly accountable: it is founded on ideas about self-care, self-empowerment, and self-actualization, and on new technologies and treatment modalities that can be shifted from the clinic into the community. These place new demands on sick people, which they may experience as burdens of treatment.
Discussion: As the burdens accumulate, some patients are overwhelmed, and the likely consequences are poor healthcare outcomes for individual patients, increasing strain on caregivers, and rising demand for and costs of healthcare services. In the face of these challenges we need to better understand the resources that patients draw upon as they respond to the demands of both burdens of illness and burdens of treatment, and the ways those resources interact with healthcare utilization.
Summary: Burden of Treatment Theory is oriented to understanding how capacity for action interacts with the work that stems from healthcare. It is a structural model that focuses on the work that patients and their networks do, and it thus helps us understand variations in healthcare utilization and adherence in different healthcare settings and clinical contexts.
The Clumping Transition in Niche Competition: a Robust Critical Phenomenon
We show analytically and numerically that the appearance of lumps and gaps in
the distribution of n competing species along a niche axis is a robust
phenomenon whenever the finiteness of the niche space is taken into account. In
this case, depending on whether the niche width σ of the species is above or
below a threshold σ_c, which for large n coincides with 2/n, there are
two different regimes. For σ > σ_c the lumpy pattern emerges
directly from the dominant eigenvector of the competition matrix because its
corresponding eigenvalue becomes negative. For σ < σ_c the lumpy
pattern disappears. Furthermore, this clumping transition exhibits critical
slowing down as σ_c is approached from above. We also find that the number
of lumps of species vs. σ displays a stair-step structure. The positions
of these steps are distributed according to a power-law. It is thus
straightforward to predict the number of groups that can be packed along a
niche axis, and it coincides with field measurements for a wide range of the
model parameters.
Comment: 16 pages, 7 figures;
http://iopscience.iop.org/1742-5468/2010/05/P0500
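The eigenvalue mechanism can be illustrated numerically. A minimal sketch using a top-hat competition kernel on a periodic niche axis; both the kernel and the periodic boundary are illustrative assumptions (the paper analyses a finite niche axis, and its threshold is not exactly reproduced by this kernel):

```python
import numpy as np

n = 60
idx = np.arange(n)
steps = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(steps, n - steps) / n      # distances on a periodic niche axis

def clumping_mode(sigma):
    """Most negative eigenvalue and its eigenvector for a top-hat
    competition (niche-overlap) matrix of width sigma."""
    A = (dist <= sigma).astype(float)
    w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
    return w[0], V[:, 0]

w_wide, mode_wide = clumping_mode(0.2)       # wide niches: negative mode appears
w_narrow, _ = clumping_mode(0.01)            # narrow niches: all modes positive
```

When the minimum eigenvalue goes negative, the corresponding eigenvector is oscillatory along the niche axis, which is the lumpy pattern; below threshold the competition matrix is close to the identity and no such mode exists.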
On the Prior Sensitivity of Thompson Sampling
The empirically successful Thompson Sampling algorithm for stochastic bandits
has drawn much interest in understanding its theoretical properties. One
important benefit of the algorithm is that it allows domain knowledge to be
conveniently encoded as a prior distribution to balance exploration and
exploitation more effectively. While it is generally believed that the
algorithm's regret is low (high) when the prior is good (bad), little is known
about the exact dependence. In this paper, we fully characterize the
algorithm's worst-case dependence of regret on the choice of prior, focusing on
a special yet representative case. These results also provide insights into the
general sensitivity of the algorithm to the choice of priors. In particular,
with p denoting the prior probability mass of the true reward-generating model,
we prove regret upper bounds for both the bad- and good-prior cases, as well as
matching lower bounds. Our proofs rely on the discovery of a fundamental
property of Thompson Sampling and make heavy use of martingale theory, both of
which appear novel in the literature, to the best of our knowledge.
Comment: Appears in the 27th International Conference on Algorithmic Learning
Theory (ALT), 201
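The prior-sensitivity phenomenon is easy to reproduce in the simplest model-based setting. A minimal sketch with two candidate two-armed Bernoulli models; the models, means, and horizon are illustrative assumptions, not the paper's construction:

```python
import random

def ts_regret(prior_on_truth, T=2000, seed=0):
    """Cumulative regret of Thompson Sampling over two candidate
    Bernoulli-bandit models; model 0 is the true one."""
    rng = random.Random(seed)
    models = [(0.7, 0.3), (0.3, 0.7)]             # arm means under each model
    truth = models[0]
    post = [prior_on_truth, 1.0 - prior_on_truth]  # prior over the two models
    regret = 0.0
    for _ in range(T):
        m = 0 if rng.random() < post[0] else 1     # sample a model from the posterior
        arm = 0 if models[m][0] >= models[m][1] else 1  # play its best arm
        reward = 1 if rng.random() < truth[arm] else 0
        regret += max(truth) - truth[arm]
        # Exact Bayes update of the two-point model posterior.
        lik = [mdl[arm] if reward else 1.0 - mdl[arm] for mdl in models]
        z = lik[0] * post[0] + lik[1] * post[1]
        post = [lik[0] * post[0] / z, lik[1] * post[1] / z]
    return regret

def avg_regret(prior_on_truth, seeds=range(10)):
    return sum(ts_regret(prior_on_truth, seed=s) for s in seeds) / len(list(seeds))
```

Averaged over seeds, a small prior mass on the true model (a bad prior) incurs markedly higher regret than a large one, consistent with the good-/bad-prior dependence the paper characterizes.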