Evidence for a Weak Galactic Center Magnetic Field from Diffuse Low Frequency Nonthermal Radio Emission
New low-frequency 74 and 330 MHz observations of the Galactic center (GC)
region reveal the presence of a large-scale (6° × 2°) diffuse source of
nonthermal synchrotron emission. A minimum energy analysis of this
emission yields a total energy of ergs
and a magnetic field strength of μG (where is
the proton to electron energy ratio and is the filling factor of the
synchrotron emitting gas). The equipartition particle energy density is
eV cm⁻³, a value consistent with cosmic-ray data. However,
the derived magnetic field is several orders of magnitude below the 1 mG field
commonly invoked for the GC. With this field the source can be maintained with
the SN rate inferred from GC star formation. Furthermore, a strong magnetic
field implies an abnormally low GC cosmic-ray energy density. We conclude that
the mean magnetic field in the GC region must be weak, of order 10 μG (at
least on size scales ≳ 125″).
Comment: 12 pages, 1 JPEG figure, uses aastex.sty; accepted for publication, ApJL (2005)
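The equipartition argument behind the abstract's ~10 μG conclusion can be sketched numerically. This is a minimal illustration (not the authors' full minimum-energy analysis): in Gaussian units the magnetic energy density is B²/8π, so setting it equal to a given particle energy density and solving for B shows that an energy density of order 1 eV cm⁻³ corresponds to a field of a few microgauss, far below 1 mG.

```python
import math

EV_TO_ERG = 1.602e-12  # 1 eV expressed in erg

def equipartition_field_uG(u_eV_per_cm3: float) -> float:
    """Field (microgauss) for which B^2 / 8*pi equals the energy density u."""
    u = u_eV_per_cm3 * EV_TO_ERG            # erg cm^-3
    b_gauss = math.sqrt(8.0 * math.pi * u)  # gauss, from u_B = B^2 / (8*pi)
    return b_gauss * 1e6                    # convert gauss -> microgauss

# ~1 eV cm^-3 (the order of magnitude typical of cosmic rays, as noted in
# the abstract) gives a field of roughly 6 uG -- consistent with the ~10 uG
# conclusion, and far below the commonly invoked 1 mG.
print(f"{equipartition_field_uG(1.0):.1f} uG")  # ~6.3 uG
```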
Data informed physical models for district heating grids with distributed heat sources to understand thermal and hydraulic aspects
The aim of this study was to develop data-informed physical models for simulating district heating (DH) grids, to better represent hydraulic and thermal behaviour in DH grids that integrate heat prosumers. A DH grid organized as a ring and integrating a heat prosumer at a data center was analyzed. An extensive analysis of the thermal and hydraulic aspects of the DH grid under different configurations of distributed sources was performed. Different prosumer connection configurations, return-to-return and return-to-supply, together with pressure and temperature control, were investigated. The results showed that increasing the share of renewable heat supplied by the prosumer caused a pressure imbalance at substations close to it. Variable-speed pump control resolved these issues and yielded up to 34% electricity savings. Lowering temperature levels in the DH network decreased DH heat losses by up to 14%. The return-to-supply configuration showed advantages for integrating the prosumer, with lower return temperatures and better waste heat utilization. The results indicate the main hydraulic and thermal features of integrating a prosumer into a DH grid.
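Why variable-speed pumping yields such large electricity savings can be illustrated with the pump affinity laws (this sketch is generic pump theory, not the paper's simulation model): for a centrifugal pump, flow scales with speed, head with speed squared, and shaft power with speed cubed, so a modest reduction in required flow cuts power draw disproportionately.

```python
# Illustrative sketch only: the cube law for centrifugal pump power.
# A fixed-speed pump throttled to a lower flow wastes the excess head,
# while a variable-speed drive follows demand and keeps the cube-law saving.

def affinity_power_fraction(speed_fraction: float) -> float:
    """Power draw relative to full speed, per the affinity (cube) law."""
    return speed_fraction ** 3

# Running at ~87% speed (e.g. when prosumer heat lowers the required
# network flow) already cuts pumping electricity by roughly a third,
# the same order as the up-to-34% savings reported in the study.
saving = 1.0 - affinity_power_fraction(0.87)
print(f"{saving:.0%}")  # prints "34%"
```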
An Evolving Entropy Floor in the Intracluster Gas?
Non-gravitational processes, such as feedback from galaxies and their active
nuclei, are believed to have injected excess entropy into the intracluster gas,
and therefore to have modified the density profiles in galaxy clusters during
their formation. Here we study a simple model for this so-called preheating
scenario, and ask (i) whether it can simultaneously explain both global X-ray
scaling relations and number counts of galaxy clusters, and (ii) whether the
amount of entropy required evolves with redshift. We adopt a baseline entropy
profile that fits recent hydrodynamic simulations, modify the hydrostatic
equilibrium condition for the gas by including approx. 20% non-thermal pressure
support, and add an entropy floor K_0 that is allowed to vary with redshift. We
find that the observed luminosity-temperature (L-T) relations of low-redshift
(z=0.05) HIFLUGCS clusters and high-redshift (z=0.8) WARPS clusters are best
simultaneously reproduced with an evolving entropy floor of
K_0(z)=341(1+z)^{-0.83}h^{-1/3} keV cm^2. If we restrict our analysis to the
subset of bright (kT > 3 keV) clusters, we find that the evolving entropy floor
can mimic a self-similar evolution in the L-T scaling relation. This degeneracy
with self-similar evolution is, however, broken when (0.5 < kT < 3 keV)
clusters are also included. The approx. 60% entropy increase we find from z=0.8
to z=0.05 is roughly consistent with that expected if the heating is provided
by the evolving global quasar population. Using the cosmological parameters
from the WMAP 3-year data with sigma_8=0.76, our best-fit model underpredicts
the number counts of the X-ray galaxy clusters compared to those derived from
the 158 deg^2 ROSAT PSPC survey. Treating sigma_8 as a free parameter, we find
a best-fit value of sigma_8 = 0.80 +/- 0.02.
Comment: 14 emulateapj pages with 9 figures, submitted to Ap
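The "approx. 60% entropy increase" quoted above follows directly from the best-fit power law, since the h^(-1/3) factor cancels in the ratio. A quick consistency check:

```python
# Consistency check of the entropy-floor evolution quoted in the abstract,
# K_0(z) = 341 (1+z)^(-0.83) h^(-1/3) keV cm^2. The normalization and the
# Hubble factor cancel when comparing two redshifts, so only the power-law
# slope matters for the fractional increase from z = 0.8 to z = 0.05.

def entropy_floor_ratio(z_high: float = 0.8, z_low: float = 0.05,
                        slope: float = -0.83) -> float:
    """K_0(z_low) / K_0(z_high) for the power-law evolution above."""
    return ((1.0 + z_low) / (1.0 + z_high)) ** slope

ratio = entropy_floor_ratio()
print(f"increase: {ratio - 1.0:.0%}")  # ~56%, i.e. "approx. 60%" as stated
```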
Slow Spread of the Aggressive Invader, Microstegium vimineum (Japanese Stiltgrass)
Microstegium vimineum (Japanese stiltgrass) is a non-native weed whose rapid invasion threatens native diversity and regeneration in forests. Using data from a 4-year experiment tracking new invasions in different habitats, we developed a spatial model of patch growth, using maximum likelihood techniques to estimate dispersal and population growth parameters. The patches expanded surprisingly slowly: in the final year, the majority of new seedlings were still within 1 m of the original patch. The influence of habitat was not as strong as anticipated, although patches created in roadside and wet meadow habitats tended to expand more rapidly and had greater reproductive ratios. The long-term projections of the patch growth model suggest much slower spread than has typically been observed for M. vimineum. The small scale of natural dispersal suggests that human-mediated dispersal, likely influenced by forest road management, is responsible for the rapid spread of this invasive species.
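The finding that most seedlings stay within 1 m of the parent patch is what a short-tailed dispersal kernel predicts. The sketch below is purely illustrative: it assumes a simple exponential distance kernel with a hypothetical mean dispersal distance, not the paper's maximum-likelihood estimates.

```python
import math

# Illustrative sketch, not the fitted model: fraction of seeds landing
# within a given radius under an exponential dispersal-distance kernel.
# The 0.5 m mean dispersal distance is a hypothetical placeholder.

def fraction_within(radius_m: float, mean_dist_m: float = 0.5) -> float:
    """P(dispersal distance <= radius) for an exponential kernel."""
    return 1.0 - math.exp(-radius_m / mean_dist_m)

# With a short-tailed kernel like this, the large majority of seeds fall
# within 1 m of the source patch, matching the slow spread observed.
print(f"{fraction_within(1.0):.0%} of seeds land within 1 m")  # ~86%
```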
The social value of a QALY: raising the bar or barring the raise?
Background: Since the inception of the National Institute for Health and Clinical Excellence (NICE) in England,
there have been questions about the empirical basis for the cost-per-QALY threshold used by NICE and whether
QALYs gained by different beneficiaries of health care should be weighted equally. The Social Value of a QALY
(SVQ) project, reported in this paper, was commissioned to address these two questions. The results of SVQ were
released during a time of considerable debate about the NICE threshold, and authors with differing perspectives
have drawn on the SVQ results to support their cases. As these discussions continue, and given the selective use of
results by those involved, it is important, therefore, not only to present a summary overview of SVQ, but also for
those who conducted the research to contribute to the debate as to its implications for NICE.
Discussion: The issue of the threshold was addressed in two ways: first, by combining, via a set of models, the
current UK Value of a Prevented Fatality (used in transport policy) with data on fatality age, life expectancy and
age-related quality of life; and, second, via a survey designed to test the feasibility of combining respondents'
answers to willingness-to-pay and health-state utility questions to arrive at values of a QALY. Modelling resulted in
values of £10,000-£70,000 per QALY. Via survey research, most methods of aggregating the data resulted in values
of a QALY of £18,000-£40,000, although others resulted in implausibly high values. An additional survey, addressing
the issue of weighting QALYs, used two methods, one indicating that QALYs should not be weighted and the
other that greater weight could be given to QALYs gained by some groups.
Summary: Although we conducted only a feasibility study and a modelling exercise, neither presents compelling
evidence for moving the NICE threshold up or down. Some preliminary evidence indicates it could be
moved up for some types of QALY and down for others. While many members of the public appear open to
the possibility of using somewhat different QALY weights for different groups of beneficiaries, we do not yet have
any secure evidence base for introducing such a system.
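The modelling route described above, dividing a Value of a Prevented Fatality (VPF) by the discounted QALYs the average prevented fatality would have lost, can be sketched as below. Every number here (VPF, years lost, quality weight, discount rate) is a hypothetical placeholder, not an input used by the SVQ project.

```python
# Hedged sketch of the VPF-based modelling route: value per QALY equals
# the VPF divided by the discounted quality-adjusted life years lost.
# All inputs are illustrative placeholders, not the SVQ project's figures.

def value_per_qaly(vpf: float, years_lost: int,
                   quality_weight: float = 0.8,
                   discount_rate: float = 0.035) -> float:
    """VPF divided by the discounted QALYs lost by a prevented fatality."""
    discounted_qalys = sum(
        quality_weight / (1.0 + discount_rate) ** t
        for t in range(years_lost)
    )
    return vpf / discounted_qalys

# With a hypothetical 1.5m VPF and 40 remaining life years, this gives a
# figure in the tens of thousands per QALY; the result is highly sensitive
# to the inputs, which is one reason the project's models span a wide range.
print(round(value_per_qaly(1_500_000, 40)))
```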
Towards More Accurate Molecular Dynamics Calculation of Thermal Conductivity. Case Study: GaN Bulk Crystals
Significant differences exist among literature for thermal conductivity of
various systems computed using molecular dynamics simulation. In some cases,
unphysical results, for example, negative thermal conductivity, have been
found. Using GaN as an example case and the direct non-equilibrium method,
extensive molecular dynamics simulations and Monte Carlo analysis of the
results have been carried out to quantify the uncertainty level of the
molecular dynamics methods and to identify the conditions that can yield
sufficiently accurate calculations of thermal conductivity. We found that the
errors of the calculations are mainly due to the statistical thermal
fluctuations. Extrapolating results to the limit of an infinite-size system
tends to magnify the errors and occasionally leads to unphysical results. The
error in bulk estimates can be reduced by performing longer time averages using
properly selected systems over a range of sample lengths. If the errors in the
conductivity estimates associated with each of the sample lengths are kept
below a certain threshold, the likelihood of obtaining unphysical bulk values
becomes insignificant. Using a Monte Carlo approach developed here, we have
determined the probability distributions for the bulk thermal conductivities
obtained using the direct method. We also have observed a nonlinear effect that
can become a source of significant errors. For the extremely accurate results
presented here, we predict a [0001] GaN thermal conductivity of 185 W m^-1 K^-1 at 300 K, 102 W m^-1 K^-1 at 500 K, and 74 W m^-1 K^-1
at 800 K. Using the insights obtained in this work, we have achieved a
corresponding error level (standard deviation) for the bulk (infinite sample
length) GaN thermal conductivity of less than 10, 5, and 15 W m^-1 K^-1 at 300 K, 500 K, and 800 K, respectively.
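The size extrapolation the abstract scrutinizes can be sketched in a few lines. In the direct method, finite-sample conductivities κ(L) are commonly fitted as 1/κ = a + b/L and the bulk value is taken as 1/a; because the bulk answer lives in the intercept, small statistical errors in the individual points are magnified in the extrapolated value. The data below are made up for illustration only.

```python
# Sketch of the direct-method finite-size extrapolation: fit 1/kappa
# versus 1/L by least squares and return 1/intercept as the bulk value.
# Sample lengths and conductivities are hypothetical illustration data.

def bulk_kappa(lengths_nm, kappas):
    """Extrapolated bulk conductivity from a 1/kappa vs 1/L linear fit."""
    xs = [1.0 / L for L in lengths_nm]   # 1/L
    ys = [1.0 / k for k in kappas]       # 1/kappa
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx          # a = 1 / kappa_bulk
    return 1.0 / intercept

lengths = [20, 40, 80, 160]              # sample lengths (hypothetical)
kappas = [40.0, 66.7, 100.0, 133.3]      # finite-size values (hypothetical)
print(round(bulk_kappa(lengths, kappas)))  # extrapolates to ~200
```

Because the intercept is the small difference between fitted quantities, perturbing any one κ(L) point shifts the extrapolated bulk value disproportionately, and a noisy intercept near zero can even yield a negative (unphysical) conductivity, which is the failure mode the paper analyzes.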
The neural basis of hot and cold cognition in depressed patients, unaffected relatives, and low-risk healthy controls: An fMRI investigation
BACKGROUND: Modern cognitive neuropsychological models of depression posit that negatively biased emotional ("hot") processing confers risk for depression, while preserved executive function ("cold") cognition promotes resilience. METHODS: We compared neural responses during hot and cold cognitive tasks in 99 individuals: those at familial risk for depression (N = 30 unaffected first-degree relatives of depressed individuals) and those currently experiencing a major depressive episode (N = 39 unmedicated depressed patients) with low-risk healthy controls (N = 30). Primary analyses assessed neural activation on two functional magnetic resonance imaging tasks previously associated with depression: dorsolateral prefrontal cortex (DLPFC) responsivity during the n-back working memory task; and amygdala and subgenual anterior cingulate cortex (sgACC) responsivity during incidental emotional face processing. RESULTS: Depressed patients exhibited significantly attenuated working memory-related DLPFC activation, compared to low-risk controls and unaffected relatives; unaffected relatives did not differ from low-risk controls. We did not observe a complementary pattern during emotion processing. However, we found preliminary support that greater DLPFC activation was associated with lower amygdala response during emotion processing. LIMITATIONS: These findings require confirmation in a longitudinal study to observe each individual's risk of developing depression; without this, we cannot identify the true risk level of the first-degree relative or low-risk control group. CONCLUSIONS: These findings have implications for understanding the neural mechanisms of risk and resilience in depression: they are consistent with the suggestion that preserved executive function might confer resilience to developing depression in first-degree relatives of depressed patients.