Pulsed radiolysis of model aromatic polymers and epoxy based matrix materials
Models of the primary processes leading to deactivation of energy deposited by a pulse of high-energy electrons were derived for epoxy matrix materials and poly(1-vinyl naphthalene). The basic conclusion is that recombination of the initially formed charged states is complete within 1 nanosecond, and subsequent degradation chemistry is controlled by the reactivity of these excited states. Excited states in both systems form complexes with ground-state molecules. These excimers or exciplexes have their characteristic emissive and absorptive properties and may decay to form separated pairs of ground-state molecules, cross over to the triplet manifold, or emit fluorescence. ESR studies and chemical analyses subsequent to pulse radiolysis were performed in order to estimate bond cleavage probabilities and net reaction rates. The energy deactivation models proposed to interpret these data have led to the development of radiation stabilization criteria for these systems.
The effects of energetic proton bombardment on polymeric materials: Experimental studies and degradation models
This report describes 3 MeV proton bombardment experiments on several polymeric materials of interest to NASA, carried out on the Tandem Van de Graaff accelerator at the California Institute of Technology's Kellogg Radiation Laboratory. Model aromatic and aliphatic polymers such as poly(1-vinyl naphthalene) and poly(methyl methacrylate), as well as polymers for near-term space applications such as Kapton, epoxy, and polysulfone, have been included in this study. Chemical and physical characterization of the damage products has been carried out in order to develop a model of the interaction of these polymers with the incident proton beam. The proton bombardment methodology developed at the Jet Propulsion Laboratory and reported here is part of an ongoing study on the effects of space radiation on polymeric materials. The report is intended to provide an overview of the mechanistic, as well as the technical and experimental, issues involved in such work rather than to serve as an exhaustive description of all the results.
The Thermal Environment of the Fiber Glass Dome for the New Solar Telescope at Big Bear Solar Observatory
The New Solar Telescope (NST) is a 1.6-meter off-axis Gregory-type telescope with an equatorial mount and an open optical support structure. To mitigate temperature fluctuations along the exposed optical path, the effects of local and dome-related seeing have to be minimized. To accomplish this, NST will be housed in a 5/8-sphere fiberglass dome outfitted with 14 active vents evenly spaced around its perimeter. The vents house louvers that open and close independently of one another to regulate and direct the passage of air through the dome. In January 2006, 16 thermal probes were installed throughout the dome and the temperature distribution was measured. The measurements confirmed the existence of a strong thermal gradient, on the order of 5 degrees Celsius, inside the dome. In December 2006, a second set of temperature measurements was made using different louver configurations. In this study, we present the results of these measurements along with their integration into the thermal control system (ThCS) and the overall telescope control system (TCS).
Comment: 12 pages, 11 figures, submitted to SPIE Optics+Photonics, San Diego, U.S.A., 26-30 August 2007, Conference: Solar Physics and Space Weather Instrumentation II, Proceedings of SPIE Volume 6689, Paper #2
Virtual Supersymmetric Corrections in e^+e^- Annihilation
Depending on their masses, supersymmetric particles can affect various measurements in Z decay. Among these are the total width (or the consequently extracted value of ), enhancement or suppression of various flavors, and the left-right and forward-backward asymmetries. The latter depend on squark mass splittings and are, therefore, a possible test of the supergravity-related predictions. We calculate leading-order corrections for these quantities, considering in particular the case of a light photino and gluino, where the SUSY effects are enhanced. In this limit the effect on  is appreciable, the effect on  is small, and the effect on the asymmetries is extremely small.
Comment: 11 pages, LaTeX, 3 figures, revised, a reference added
Testing the Universality of the Stellar IMF with Chandra and HST
The stellar initial mass function (IMF), which is often assumed to be universal across unresolved stellar populations, has recently been suggested to be "bottom-heavy" for massive ellipticals. In these galaxies, the prevalence of gravity-sensitive absorption lines (e.g. Na I and Ca II) in their near-IR spectra implies an excess of low-mass ( ) stars over that expected from the canonical IMF observed in low-mass ellipticals. A direct extrapolation of such a bottom-heavy IMF to high stellar masses ( ) would lead to a corresponding deficit of neutron stars and black holes, and therefore of low-mass X-ray binaries (LMXBs), per unit near-IR luminosity in these galaxies. Peacock et al. (2014) searched for evidence of this trend and found that the observed number of LMXBs per unit -band luminosity ( ) was nearly constant. We extend this work using new and archival Chandra X-ray Observatory (Chandra) and Hubble Space Telescope (HST) observations of seven low-mass ellipticals where  is expected to be the largest, and compare these data with a variety of IMF models to test which are consistent with the observed . We reproduce the result of Peacock et al. (2014), strengthening the constraint that the slope of the IMF at  must be consistent with a Kroupa-like IMF. We construct an IMF model that is a linear combination of a Milky Way-like IMF and a broken power-law IMF, with a steep slope ( ) for stars < 0.5  (as suggested by near-IR indices) and that flattens out ( ) for stars > 0.5 , and discuss its wider ramifications and limitations.
Comment: Accepted for publication in ApJ; 7 pages, 2 figures, 1 table
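As a schematic only (the numerical slope values are not recoverable from this abstract, so the exponents are left symbolic), the broken power-law component described above has the general form

```latex
% Schematic broken power-law IMF: steep below the break, flatter above.
% \alpha_1 > \alpha_2 by construction; the break is placed at 0.5 M_\odot
% as stated in the abstract. Slope values here are placeholders.
\xi(m) \equiv \frac{dN}{dm} \propto
\begin{cases}
  m^{-\alpha_1}, & m < 0.5\,M_\odot \quad \text{(steep; suggested by near-IR indices)}\\[4pt]
  m^{-\alpha_2}, & m \ge 0.5\,M_\odot \quad \text{(flatter; allowed by the LMXB counts)}
\end{cases}
```

with continuity imposed at the break mass so that the two segments join smoothly.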
The Woods-Saxon Potential in the Dirac Equation
The two-component approach to the one-dimensional Dirac equation is applied to the Woods-Saxon potential. The scattering and bound-state solutions are derived, and the conditions for a transmission resonance (when the transmission coefficient is unity) and for supercriticality (when the particle bound state is at E=-m) are then obtained. The square-potential limit is discussed. The recent result that a finite-range symmetric potential barrier will have a zero-momentum transmission resonance when the corresponding well supports a half-bound state at E=-m is demonstrated.
Comment: 8 pages, 4 figures. Submitted to JPhys
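For orientation, one conventional form of the one-dimensional Woods-Saxon well is sketched below; the depth V_0, half-width a, and surface-diffuseness parameter lambda are the standard symbols, and the exact conventions of this particular paper are not given in the abstract.

```latex
% One common symmetric 1D Woods-Saxon well (a sketch, not necessarily
% the exact parameterization used in the paper): depth V_0 > 0,
% half-width a, surface thickness \lambda. As \lambda \to 0 this
% reduces to the square well, the limit discussed in the abstract.
V(x) = -\,\frac{V_0}{1 + e^{(|x| - a)/\lambda}},
\qquad V_0 > 0,\; a > 0,\; \lambda > 0 .
```

Flipping the overall sign gives the corresponding barrier, for which the zero-momentum transmission resonance condition is stated above.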
Agronomic responses of corn to stand reduction at vegetative growth stages
Yield loss charts for hail associated with stand reduction assume that remaining plants lose the ability to compensate for lost plants by mid-vegetative growth. According to the current standards, yield losses and stand losses after V8 (in the leaf collar system) and throughout the remaining vegetative stages are 1:1.
We conducted field experiments from 2006 to 2009 at twelve site-years in Illinois, Iowa, and Ohio to determine responses of corn to stand reduction at the fifth, eighth, eleventh, and fifteenth leaf collar stages (V5, V8, V11, and V15, respectively). We also wanted to know whether these responses varied between uniform and random patterns of stand reduction with differences in within-row interplant spacing.
When compared to a control of 36,000 plants per acre, grain yield decreased linearly as stand reduction increased from 16.7% to 50% (Table 3), but was not affected by the pattern of stand reduction. The rate of yield loss was greatest when stand reduction occurred at V11 or V15, and least when it occurred at V5. With 50% stand loss, yield was 83% and 69% of the control when stand loss occurred at V5 and V15, respectively. With 16.7% stand loss at V5, V8, or V11, yield averaged 96% of the control. Per-plant grain yield increased when stand loss occurred earlier and was more severe. With 50% stand loss at V11 or V15, per-plant grain yield increased by 37 to 46% compared to the control. Corn retains the ability to compensate for lost plants through the late vegetative stages, indicating that current standards for assessing the effect of stand loss in corn should be reevaluated.
Why does the Engel method work? Food demand, economies of size and household survey methods
Estimates of household size economies are needed for the analysis of poverty and inequality. This paper shows that Engel estimates of size economies are large when household expenditures are obtained by respondent recall but small when expenditures are obtained by daily recording in diaries. Expenditure estimates from recall surveys appear to have measurement errors correlated with household size. As well as demonstrating the fragility of Engel estimates of size economies, these results help resolve a puzzle raised by Deaton and Paxson (1998) about differences between rich and poor countries in the effect of household size on food demand.
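One common textbook form of the Engel-method regression behind such estimates (a sketch in the spirit of Deaton and Paxson, not necessarily the exact specification of this paper) is

```latex
% Engel-method food-share regression (schematic):
% w_{f,h} = food budget share of household h,
% x_h     = total household expenditure,
% n_h     = household size.
w_{f,h} \;=\; \alpha \;+\; \beta \,\ln\!\left(\frac{x_h}{n_h}\right)
          \;+\; \gamma \,\ln n_h \;+\; \varepsilon_h
```

Size economies are read off the coefficient gamma: holding per-capita expenditure fixed, a negative gamma says larger households behave as if richer, and the paper's point is that measurement error in x_h correlated with n_h contaminates exactly this coefficient in recall surveys.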
Estimating the Prevalence of Shared Accommodation across the UK from Big Data
This paper introduces a new means of measuring the proportion of shared accommodation within UK neighbourhoods using linked administrative and consumer data. An address-level multi-person household indicator (MHI) was produced for individual years spanning 1997 to 2016. Crucially, this new indicator enables fine-grained spatial analysis of trends in house-sharing outside of decennial census years. This abstract discusses the purpose and derivation methodology of the MHI, before illustrating how it can be used and outlining some future research and policy applications.