On the intersection of free subgroups in free products of groups
Let (G_i | i in I) be a family of groups, let F be a free group, and let G =
F *(*I G_i), the free product of F and all the G_i. Let FF denote the set of
all finitely generated subgroups H of G which have the property that, for each
g in G and each i in I, H \cap G_i^{g} = {1}. By the Kurosh Subgroup Theorem,
every element of FF is a free group. For each free group H, the reduced rank of
H is defined as r(H) = max{rank(H) -1, 0} in \naturals \cup {\infty} \subseteq
[0,\infty]. To avoid the vacuous case, we make the additional assumption that
FF contains a non-cyclic group, and we define sigma := sup{r(H\cap
K)/(r(H)r(K)) : H, K in FF and r(H)r(K) \ne 0}, sigma in [1,\infty]. We are
interested in precise bounds for sigma. In the special case where I is empty,
Hanna Neumann proved that sigma in [1,2], and conjectured that sigma = 1;
almost fifty years later, this interval has not been reduced. With the
understanding that \infty/(\infty -2) = 1, we define theta := max{|L|/(|L|-2) :
L is a subgroup of G and |L| > 2}, theta in [1,3]. Generalizing Hanna Neumann's
theorem, we prove that sigma in [theta, 2 theta], and, moreover, sigma = 2
theta if G has 2-torsion. Since sigma is finite, FF is closed under finite
intersections. Generalizing Hanna Neumann's conjecture, we conjecture that
sigma = theta whenever G does not have 2-torsion.
Comment: 28 pages, no figures
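The quantities in this abstract can be tabulated numerically. The sketch below is purely illustrative (the subgroup ranks and finite subgroup orders are hypothetical inputs, not data from the paper); it encodes the reduced rank r(H), the quantity theta with the convention inf/(inf - 2) = 1, and the paper's bounds sigma in [theta, 2*theta]:

```python
import math

def reduced_rank(rank):
    """Reduced rank of a free group: r(H) = max(rank(H) - 1, 0)."""
    return max(rank - 1, 0)

def theta(subgroup_orders):
    """theta = max{|L|/(|L|-2) : L a subgroup of G, |L| > 2},
    with the convention inf/(inf - 2) = 1.

    `subgroup_orders` lists the orders |L| (use math.inf for infinite L)."""
    ratios = [1.0 if n == math.inf else n / (n - 2)
              for n in subgroup_orders if n > 2]
    return max(ratios, default=1.0)

# Hypothetical example: G contains subgroups of orders 3 and 4, plus
# infinite ones.  The order-3 subgroup dominates: 3/(3-2) = 3.
t = theta([3, 4, math.inf])
bounds = (t, 2 * t)   # the paper's interval: sigma in [theta, 2*theta]
```

With only infinite subgroups (the torsion-free case) theta collapses to 1, recovering Hanna Neumann's original interval [1, 2].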
Quantification of myeloperoxidase from human granulocytes as an inflammation marker by enzyme-linked immunosorbent assay
Measurement of the total energy of an isolated system by an internal observer
We consider the situation in which an observer internal to an isolated system
wants to measure the total energy of the isolated system (this includes his own
energy, that of the measuring device and clocks used, etc.). We show that he
can do this in an arbitrarily short time, as measured by his own clock. This
measurement is not subjected to a time-energy uncertainty relation. The
properties of such measurements are discussed in detail with particular
emphasis on the relation between the duration of the measurement as measured by
internal clocks versus external clocks.
Comment: 7 pages, 1 figure
Is MS1054-03 an exceptional cluster? A new investigation of ROSAT/HRI X-ray data
We reanalyzed the ROSAT/HRI observation of MS1054-03, optimizing the HRI
channel selection and including a new exposure of 68 ksec. From a wavelet analysis
of the HRI image we identify the main cluster component and find evidence for
substructure in the west, which might either be a group of galaxies falling
onto the cluster or a foreground source. Our 1-D and 2-D analysis of the data
show that the cluster can be fitted well by a classical beta-model centered
only 20 arcsec away from the central cD galaxy. The core radius and beta values
derived from the spherical model (beta = 0.96^{+0.48}_{-0.22}) and the elliptical
model (beta = 0.73+/-0.18) are consistent. We derived the gas mass and total
mass of the cluster from the betamodel fit and the previously published ASCA
temperature (12.3^{+3.1}_{-2.2} keV). The gas mass fraction at the virial
radius is fgas = (14^{+2.5}_{-3} +/- 3)% for Omega_0 = 1, where the errors in brackets
come from the uncertainty on the temperature and the remaining errors from the
HRI imaging data. The gas mass fraction computed for the best fit ASCA
temperature is significantly lower than found for nearby hot clusters,
fgas = (20.1 +/- 1.6)%. This local value can be matched if the actual virial
temperature of MS1054-03 were close to the lower ASCA limit (~10 keV), with an
even lower value of 8 keV giving the best agreement. Such a bias between the
virial and measured temperature could be due to the presence of shock waves in
the intracluster medium stemming from recent mergers. Another possibility, that
reconciles a high temperature with the local gas mass fraction, is the
existence of a nonzero cosmological constant.
Comment: 12 pages, 5 figures, accepted for publication in Ap
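The gas mass quoted above comes from integrating a beta-model density profile, n(r) = n0 (1 + (r/rc)^2)^(-3*beta/2). A minimal numerical sketch of that step (all parameter values below are placeholders, not the paper's fit, and units are left abstract):

```python
import math

def beta_density(r, n0, rc, beta):
    """Beta-model gas density: n(r) = n0 * (1 + (r/rc)^2)^(-3*beta/2)."""
    return n0 * (1.0 + (r / rc) ** 2) ** (-1.5 * beta)

def enclosed_gas_mass(n0, rc, beta, rmax, steps=100000):
    """Integrate 4*pi*r^2*n(r) dr from 0 to rmax (trapezoidal rule).

    Multiply by the mean particle mass to convert to a physical mass."""
    h = rmax / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * 4.0 * math.pi * r * r * beta_density(r, n0, rc, beta)
    return total * h

# Placeholder parameters, with beta in the range quoted in the abstract:
m = enclosed_gas_mass(n0=1.0, rc=1.0, beta=0.96, rmax=10.0)
```

For beta = 2/3 the integral has the closed form 4*pi*n0*rc^3*(u - arctan u) with u = rmax/rc, which makes a convenient correctness check for the quadrature.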
Unexpected reemergence of von Neumann theorem
It is shown here that the "simple test of quantumness for a single system" of
arXiv:0704.1962 (for a recent experimental realization see arXiv:0804.1646) has
exactly the same relation to the problem of describing the
quantum system via a classical probabilistic scheme (that is in terms of hidden
variables, or within a realistic theory) as the von Neumann theorem (1932). The
latter was shown by Bell (1966) to stem from the assumption that the hidden
variable values for a sum of two non-commuting observables (which is an
observable too) have to be, for each individual system, equal to sums of
eigenvalues of the two operators. One cannot find a physical justification for
such an assumption to hold for non-comeasurable variables. On the positive
side, the criterion may be useful in rejecting models which are based on
stochastic classical fields. Nevertheless, the example used by the authors has a
classical optical realization
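Bell's objection has a standard concrete instance (my illustration, not taken from this paper): for the Pauli observables sigma_x and sigma_z, the eigenvalues of the sum sigma_x + sigma_z are +/-sqrt(2), which never equal a sum of individual eigenvalues (those sums can only be -2, 0, or 2):

```python
import math

# sigma_x + sigma_z as a 2x2 matrix [[1, 1], [1, -1]]
a, b, c, d = 1.0, 1.0, 1.0, -1.0
trace, det = a + d, a * d - b * c   # trace = 0, det = -2

# Eigenvalues from the characteristic polynomial x^2 - trace*x + det = 0
disc = math.sqrt(trace * trace - 4.0 * det)
eig_sum = sorted([(trace - disc) / 2.0, (trace + disc) / 2.0])  # -/+ sqrt(2)

# Sums of eigenvalues of sigma_x (+/-1) and sigma_z (+/-1)
naive_sums = sorted({x + z for x in (-1, 1) for z in (-1, 1)})  # [-2, 0, 2]

# +/-sqrt(2) appears nowhere in naive_sums, so von Neumann's additivity
# assumption fails for individual (dispersion-free) hidden-variable values.
```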
Back action of graphene charge detectors on graphene and carbon nanotube quantum dots
We report on devices based on graphene charge detectors (CDs) capacitively
coupled to graphene and carbon nanotube quantum dots (QDs). We focus on back
action effects of the CD on the probed QD. A strong influence of the bias
voltage applied to the CD on the current through the QD is observed. Depending
on the charge state of the QD, the current through the QD can either strongly
increase or completely reverse in response to the applied voltage on the CD.
To describe the observed behavior we employ two simple models based on single
electron transport in QDs with asymmetrically broadened energy distributions of
the source and the drain leads. The models successfully explain the back action
effects. The extracted distribution broadening shows a linear dependence on the
bias voltage applied to the CD. We discuss possible mechanisms mediating the
energy transfer between the CD and QD and give an explanation for the origin of
the observed asymmetry.
Comment: 6 pages, 4 figures
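A minimal caricature of the kind of single-level transport model invoked above (my simplification with hypothetical parameters, not the authors' model): give the source and drain leads Fermi distributions with different effective broadenings (temperatures). Even with no chemical-potential difference, a net current flows, and its sign depends on where the dot level sits relative to the chemical potential, i.e. on the charge state:

```python
import math

def fermi(E, mu, kT):
    """Fermi-Dirac occupation at energy E for chemical potential mu."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

def net_current(level, mu, kT_source, kT_drain):
    """Net current (arbitrary units) through a single dot level,
    proportional to the occupation difference of the two leads."""
    return fermi(level, mu, kT_source) - fermi(level, mu, kT_drain)

# Hypothetical numbers: source lead strongly broadened ("hot"), drain sharp.
# A level above mu carries current one way, a level below mu the other --
# the current reverses with the dot's charge state.
i_above = net_current(level=0.5, mu=0.0, kT_source=0.2, kT_drain=0.05)
i_below = net_current(level=-0.5, mu=0.0, kT_source=0.2, kT_drain=0.05)
```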
Bayesian value-of-information analysis: an application to a policy model of Alzheimer's disease
A framework is presented that distinguishes the conceptually separate decisions of which treatment strategy is optimal from the question of whether more information is required to inform this choice in the future. The authors argue that the choice of treatment strategy should be based on expected utility, and the only valid reason to characterize the uncertainty surrounding outcomes of interest is to establish the value of acquiring additional information. A Bayesian decision theoretic approach is demonstrated through a probabilistic analysis of a published policy model of Alzheimer’s disease. The expected value of perfect information is estimated for the decision to adopt a new pharmaceutical for the population of patients with Alzheimer’s disease in the United States. This provides an upper bound on the value of additional research. The value of information is also estimated for each of the model inputs. This analysis can focus future research by identifying those parameters where more precise estimates would be most valuable and indicating whether an experimental design would be required. We also discuss how this type of analysis can be used to design experimental research efficiently (identifying optimal sample size and optimal sample allocation) based on the marginal cost and marginal benefit of sample information. Value-of-information analysis can provide a measure of the expected payoff from proposed research, which can be used to set priorities in research and development. It can also inform an efficient regulatory framework for new healthcare technologies: an analysis of the value of information would define when a claim for a new technology should be deemed substantiated, and when evidence should be considered competent and reliable, namely when it is not cost-effective to gather any more information
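The expected value of perfect information (EVPI) described above is the gap between deciding with perfect information in every realization and deciding once on expected net benefit. A generic two-strategy Monte Carlo sketch (the priors and willingness-to-pay below are made-up numbers, not the paper's Alzheimer's model):

```python
import random

random.seed(1)
N = 50000
WTP = 50000.0  # willingness to pay per unit of effect (assumed)

# Monte Carlo samples of net benefit for "adopt" vs. "standard care";
# the priors on incremental effect and cost are purely illustrative.
nb_pairs = []
for _ in range(N):
    d_effect = random.gauss(0.6, 0.2)        # incremental effectiveness
    d_cost = random.gauss(10000.0, 2000.0)   # incremental cost
    nb_adopt = WTP * d_effect - d_cost       # net monetary benefit of adopting
    nb_standard = 0.0                        # baseline strategy as reference
    nb_pairs.append((nb_adopt, nb_standard))

# Decide now, on expected net benefit ...
ev_current = max(sum(a for a, s in nb_pairs) / N,
                 sum(s for a, s in nb_pairs) / N)
# ... versus deciding with perfect information in each realization.
ev_perfect = sum(max(a, s) for a, s in nb_pairs) / N
evpi = ev_perfect - ev_current  # upper bound on the value of more research
```

EVPI is nonnegative by construction: per-realization maximization can never do worse in expectation than a single up-front choice.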
Solution Structures of Mycobacterium tuberculosis Thioredoxin C and Models of Intact Thioredoxin System Suggest New Approaches to Inhibitor and Drug Design
Here, we report the NMR solution structures of Mycobacterium tuberculosis (M. tuberculosis) thioredoxin C in both oxidized and reduced states, with discussion of structural changes that occur in going between redox states. The NMR solution structure of the oxidized TrxC corresponds closely to that of the crystal structure, except in the C-terminal region. It appears that crystal packing effects have caused an artifactual shift in the α4 helix in the previously reported crystal structure, compared with the solution structure. On the basis of these TrxC structures, chemical shift mapping, a previously reported crystal structure of the M. tuberculosis thioredoxin reductase (not bound to a Trx) and structures for intermediates in the E. coli thioredoxin catalytic cycle, we have modeled the complete M. tuberculosis thioredoxin system for the various steps in the catalytic cycle. These structures and models reveal pockets at the TrxR/TrxC interface in various steps in the catalytic cycle, which can be targeted in the design of uncompetitive inhibitors as potential anti-mycobacterial agents, or as chemical genetic probes of function