Changes in the measured image separation of the gravitational lens system, PKS 1830-211
We present eight epochs of 43 GHz, dual-polarisation VLBA observations of the
gravitational lens system PKS 1830-211, made over fourteen weeks. A bright,
compact "core" and a faint extended "jet" are clearly seen in maps of both
lensed images at all eight epochs.
The relative separation of the radio centroids of the cores (as measured on
the sky) changes by up to 87 micro arcsec between subsequent epochs.
A comparison with the previous 43 GHz VLBA observations (Garrett et al. 1997),
made 8 months earlier, shows even larger deviations in the separation, of up
to 201 micro arcsec. The measured changes are most likely produced by changes in
the brightness distribution of the background source, enhanced by the
magnification of the lens. A relative magnification matrix that is applicable
on the milliarcsecond scale has been determined by relating two vectors (the
"core-jet" separations and the offsets of the polarised and total intensity
emission) in the two lensed images. The determinant of this matrix,
-1.13 +/-0.61, is in good agreement with the measured flux density ratio of
the two images. The matrix predicts that the 10 mas long jet, which is clearly
seen in previous 15 and 8.4 GHz VLBA observations (Garrett et al. 1997,
Guirado et al. 1999), should correspond to a 4 mas long jet trailing to the
south-east of the SW image. The clear non-detection of this trailing jet is
strong evidence for sub-structure in the lens and may require more realistic
lens models to be invoked, e.g. Nair & Garrett (2000).
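A minimal sketch of how such a relative magnification matrix can be recovered from two measured vector pairs, assuming each image-B vector is the matrix times the corresponding image-A vector. All vector values below are hypothetical illustrations, not the measured separations:

```python
# Sketch: recover a 2x2 relative magnification matrix M mapping
# milliarcsecond-scale vectors in lensed image A onto image B, i.e. b_i = M a_i,
# from two vector pairs (e.g. the core-jet separation and the offset between
# polarised and total-intensity emission). Numbers here are hypothetical.

def solve_relative_magnification(a1, a2, b1, b2):
    """Solve M from M a1 = b1 and M a2 = b2 (2x2 case, by hand)."""
    # Columns of A are the image-A vectors; M = B A^{-1}.
    det_a = a1[0] * a2[1] - a2[0] * a1[1]
    if det_a == 0:
        raise ValueError("image-A vectors are parallel; M is not determined")
    inv_a = [[a2[1] / det_a, -a2[0] / det_a],
             [-a1[1] / det_a, a1[0] / det_a]]
    b_cols = [[b1[0], b2[0]], [b1[1], b2[1]]]
    return [[sum(b_cols[i][k] * inv_a[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

# Hypothetical core-jet and polarisation-offset vectors (mas) in each image.
a1, a2 = (2.0, 1.0), (0.5, -1.0)       # image A
b1, b2 = (-2.2, 1.0), (-0.55, -1.0)    # image B
M = solve_relative_magnification(a1, a2, b1, b2)
det_m = M[0][0] * M[1][1] - M[0][1] * M[1][0]
# |det M| should track the flux density ratio of the two images; a negative
# determinant reflects opposite image parities (the abstract reports
# det = -1.13 +/- 0.61 for the real system).
```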
Hamiltonian formalism and the Garrett-Munk spectrum of internal waves in the ocean
Wave turbulence formalism for long internal waves in a stratified fluid is
developed, based on a natural Hamiltonian description. A kinetic equation
appropriate for the description of spectral energy transfer is derived, and its
self-similar stationary solution corresponding to a direct cascade of energy
toward the short scales is found. This solution is very close to the high
wavenumber limit of the Garrett-Munk spectrum of long internal waves in the
ocean. In fact, a small modification of the Garrett-Munk formalism yields a
spectrum consistent with the one predicted by wave turbulence.
Deep carbon storage potential of buried floodplain soils
Soils account for the largest terrestrial pool of carbon and have the potential for even greater quantities of carbon sequestration. Typical soil carbon (C) stocks used in global carbon models account only for the upper 1 meter of soil. Previously unaccounted-for deep carbon pools (>1 m) were generally considered to provide a negligible input to total C contents and to represent less dynamic C pools. Here we assess deep soil C pools associated with an alluvial floodplain ecosystem transitioning from agricultural production to restoration of native vegetation. We analyzed the soil organic carbon (SOC) concentrations of 87 surface soil samples (0-15 cm) and 23 subsurface boreholes (0-3 m). We evaluated the quantitative importance of the burial process in the sequestration of subsurface C and found that our subsurface soils (0-3 m) contained considerably more C than typical 0-1 m C stocks. This deep, unaccounted soil C could have considerable implications for global C accounting. We compared differences in surface soil C related to vegetation and land use history and determined that flooding restoration could promote greater C accumulation in surface soils. We conclude that deep floodplain soils may store substantial quantities of C and that floodplain restoration should promote active C sequestration.
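The depth-integrated comparison behind this argument can be sketched with the standard layer-by-layer stock calculation; the profile values below are hypothetical, not the study's measurements:

```python
# Sketch: depth-integrated soil organic carbon (SOC) stock.
# Per layer, stock (t C/ha) = SOC (%) * bulk density (g/cm^3) * thickness (cm),
# since 1 % C * 1 g/cm^3 * 1 cm over one hectare is 1 t of carbon.
# All profile values below are hypothetical illustrations.

def soc_stock_t_ha(layers):
    """layers: iterable of (soc_percent, bulk_density_g_cm3, thickness_cm)."""
    return sum(soc * bd * thick for soc, bd, thick in layers)

# Hypothetical 0-3 m borehole profile: six 50 cm layers, SOC declining with depth.
profile = [(1.2, 1.3, 50), (0.8, 1.4, 50), (0.6, 1.4, 50),
           (0.5, 1.5, 50), (0.4, 1.5, 50), (0.3, 1.5, 50)]
stock_0_1m = soc_stock_t_ha(profile[:2])  # conventional 0-1 m accounting depth
stock_0_3m = soc_stock_t_ha(profile)      # includes the deeper, buried pools
# Comparing the two totals quantifies the C missed by 0-1 m accounting.
```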
New Modeling of the Lensing Galaxy and Cluster of Q0957+561: Implications for the Global Value of the Hubble Constant
The gravitational lens 0957+561 is modeled utilizing recent observations of
the galaxy and the cluster as well as previous VLBI radio data which have been
re-analyzed recently. The galaxy is modeled by a power-law elliptical mass
density with a small core while the cluster is modeled by a non-singular
power-law sphere, as indicated by recent observations. Using all of the
currently available data, the best-fit model has a reduced chi-squared of
approximately 6,
where the chi-squared value is dominated by a small portion of the
observational constraints used; this value of the reduced chi-squared is
similar to that of the recent FGSE best-fit model by Barkana et al. However,
the derived value of the Hubble constant is significantly different from the
value derived from the FGSE model. We find that the value of the Hubble
constant is given by H_0 = 69 +18/-12 (1-K) and 74 +18/-17 (1-K) km/s/Mpc with
and without a constraint on the cluster's mass, respectively, where K is the
convergence of the cluster at the position of the galaxy and the range for each
value is defined by Delta chi-squared = reduced chi-squared. Presently, the
best achievable fit for this system is not as good as for PG 1115+080, which
also has recently been used to constrain the Hubble constant, and the
degeneracy is large. Possibilities for improving the fit and reducing the
degeneracy are discussed. (ApJ, in press, Nov. 1st issue.)
Predictive time model of an Anglia Autoflow mechanical chicken catching system
In this project, a predictive time model was developed for an Anglia Autoflow mechanical chicken catching system. At the completion of poultry growout, hand labor is currently used to collect the birds from the house, although some integrators are beginning to incorporate mechanical catching equipment. Several regression models were investigated with the objective of predicting the time taken to catch the chickens. A regression model relating distance to total time (the sum of packing time, catching time, movement to catching and movement to packing) provided the best performance. The model was based on data collected from poultry farms on the Delmarva Peninsula during a six-month period. Statistical Analysis System (SAS) and NeuroShell Easy Predictor were used to build the regression and neural network models, respectively. Model adequacy was established by both visual inspection and statistical techniques. The models were validated with experimental results not incorporated into the initial model.
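A distance-to-total-time regression of the kind described can be sketched with ordinary least squares; the observations below are invented for illustration, as the abstract does not give the Delmarva data:

```python
# Sketch: ordinary least squares fit of total catching-cycle time against
# travel distance, as in the best-performing model described above.
# The observations are hypothetical, not the study's data.

def ols_fit(x, y):
    """Return (slope, intercept) minimising the squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (distance in m, total time in min) observations.
distance = [50, 100, 150, 200, 250]
total_time = [12.0, 18.5, 25.0, 31.5, 38.0]
slope, intercept = ols_fit(distance, total_time)
predicted = slope * 300 + intercept  # predicted total time for a 300 m haul
```

In practice, model adequacy would then be checked against held-out observations, as the abstract's validation step describes.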
Values of H_0 from Models of the Gravitational Lens 0957+561
The lensed double QSO 0957+561 has a well-measured time delay and hence is
useful for a global determination of H0. Uncertainty in the mass distribution
of the lens is the largest source of uncertainty in the derived H0. We
investigate the range of H0 produced by a set of lens models intended to mimic
the full range of astrophysically plausible mass distributions, using as
constraints the numerous multiply-imaged sources which have been detected. We
obtain the first adequate fit to all the observations, but only if we include
effects from the galaxy cluster beyond a constant local magnification and
shear. Both the lens galaxy and the surrounding cluster must depart from
circular symmetry as well.
Lens models which are consistent with observations to 95% CL indicate
H0 = 104^{+31}_{-23} (1 - kappa_30) km/s/Mpc. Previous weak lensing
measurements constrain the mean mass density within 30" of G1 to be
kappa_30 = 0.26 +/- 0.16 (95% CL), implying H0 = 77^{+29}_{-24} km/s/Mpc
(95% CL). The best-fitting models span the range 65--80 km/s/Mpc. Further
observations will shrink the confidence interval for both the mass model
and kappa_30.
The range of H0 allowed by the full gamut of our lens models is substantially
larger than that implied by limiting consideration to simple power law density
profiles. We therefore caution against use of simple isothermal or power-law
mass models in the derivation of H0 from other time-delay systems. High-S/N
imaging of multiple or extended lensed features will greatly reduce the H0
uncertainties when fitting complex models to time-delay lenses. (Also available
at: http://www.astro.lsa.umich.edu:80/users/philf/www/papers/list.htm)
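The central value quoted in this abstract follows from simple arithmetic on its own numbers, an instance of the mass-sheet degeneracy scaling:

```python
# The (1 - kappa) scaling quoted in the abstract: with the weak-lensing mean
# convergence kappa_30 = 0.26, the unconstrained central value of
# 104 km/s/Mpc rescales to the quoted 77 km/s/Mpc.

def h0_rescaled(h0_unconstrained, kappa):
    """Apply the mass-sheet degeneracy scaling H0 -> H0 * (1 - kappa)."""
    return h0_unconstrained * (1.0 - kappa)

central = h0_rescaled(104.0, 0.26)  # ~76.96, i.e. the quoted 77 km/s/Mpc
```

The uncertainty on kappa_30 propagates directly into the H0 interval, which is why the abstract stresses shrinking the confidence interval on both.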
A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software
Context: Today's safety-critical systems are increasingly reliant on
software, which is responsible for most of their critical functions. Many
different safety analysis techniques have been developed to identify system
hazards; FTA and FMEA are the most commonly used by safety analysts.
Recently, STPA has been proposed with the goal of better coping with complex
systems, including software. Objective: This research aimed to compare these
three safety analysis techniques quantitatively with regard to their
effectiveness, applicability, understandability, ease of use and efficiency
in identifying software safety requirements at the system level. Method: We
conducted a controlled experiment with 21 master's and bachelor's students
applying these three techniques to three safety-critical systems: train door
control, anti-lock braking, and traffic collision avoidance. Results: The results
showed no statistically significant difference between the three techniques
in terms of applicability, understandability and ease of use, but a
significant difference in terms of effectiveness and efficiency.
Conclusion: We conclude that STPA seems to be an effective method to identify
software safety requirements at the system level. In particular, STPA
identifies a broader range of software safety requirements than the
traditional techniques FTA and FMEA, but it requires more time to carry out
for safety analysts with little or no prior experience. (In Proceedings of
the 19th International Conference on Evaluation and Assessment in Software
Engineering (EASE '15), ACM.)
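The kind of quantitative comparison reported here (several techniques, per-analyst scores, a significance test) can be sketched with a one-way ANOVA F statistic; the scores below are invented, not the experiment's data:

```python
# Sketch: one-way ANOVA F statistic for comparing per-analyst effectiveness
# scores across three safety-analysis techniques. The scores are hypothetical;
# in practice the p-value of F against a chosen alpha (e.g. 0.05) decides
# whether a difference is "statistically significant".

def anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of score lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical counts of correctly identified safety requirements per analyst.
fta = [1.0, 2.0, 3.0]
fmea = [2.0, 3.0, 4.0]
stpa = [3.0, 4.0, 5.0]
f_stat = anova_f([fta, fmea, stpa])
```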
The development of an engineering computer graphics laboratory
Hardware and software systems developed to further research and education in interactive computer graphics are described, along with several ongoing application-oriented projects, educational graphics programs, and graduate research projects. The software system consists of a FORTRAN IV subroutine package, used in conjunction with a PDP 11/40 minicomputer as the primary computation processor and an Imlac PDS-1 as an intelligent display processor. The package comprises a comprehensive set of graphics routines for dynamic, structured two-dimensional display manipulation, and numerous routines to handle a variety of input devices at the Imlac.