34,539 research outputs found
Code coverage of adaptive random testing
Random testing is a basic software testing technique that can be used both to assess software reliability and to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective, that of code coverage. As shown in various investigations, higher code coverage not only brings a higher failure-detection capability but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage and fault-based coverage. Our results show that adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. These experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.
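The abstract does not state which adaptive random testing (ART) variant was used, so the following is only an illustrative sketch of one well-known variant, fixed-size-candidate-set ART (FSCS-ART), in Python; the function names, the one-dimensional numeric input domain, and the toy failure region are my own choices. The idea is that each new test case is the random candidate lying farthest (by distance to its nearest already-executed neighbour) from the tests run so far, which spreads test cases evenly and tends to hit contiguous failure regions sooner than pure random testing.

import random

def fscs_art(run_test, lower, upper, budget, k=10):
    """Fixed-size-candidate-set ART sketch for a 1-D numeric input domain.

    run_test(x) returns True if input x triggers a failure.
    Returns the first failure-causing input, or None if the budget runs out.
    """
    executed = []
    for _ in range(budget):
        if not executed:
            # The very first test case is chosen purely at random.
            candidate = random.uniform(lower, upper)
        else:
            # Draw k random candidates and keep the one whose nearest
            # already-executed test case is farthest away (maximin rule).
            candidates = [random.uniform(lower, upper) for _ in range(k)]
            candidate = max(
                candidates,
                key=lambda c: min(abs(c - e) for e in executed),
            )
        executed.append(candidate)
        if run_test(candidate):
            return candidate
    return None

# Toy usage: a hypothetical program that fails on a narrow input region.
if __name__ == "__main__":
    first_failure = fscs_art(lambda x: 0.42 <= x <= 0.43, 0.0, 1.0, budget=1000)
    print("first failure-causing input:", first_failure)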
20 cm VLA Radio-Continuum Study of M31 - Images and Point Source Catalogues
We present a series of new high-sensitivity and high-resolution
radio-continuum images of M31 at \lambda=20 cm (\nu=1.4 GHz). These new images
were produced by merging archived 20 cm radio-continuum observations from the
Very Large Array (VLA) telescope. The images presented here reach a sensitivity of rms = 60 \mu Jy and feature high angular resolution (<10"). A complete sample of discrete radio sources has been catalogued and analysed across 17 individual VLA projects. We identified a total of 864 unique discrete radio sources across the field of M31. One of the most prominent regions in M31 is the ring feature, for which we estimate a total integrated flux of 706 mJy at \lambda=20 cm. We compare the detected sources to those listed in Gelfand et al. (2004) at \lambda=92 cm and find 118 sources in common to both surveys. The majority (61%) of these sources exhibit a spectral index of \alpha < -0.6, indicating that their emission is predominantly non-thermal in nature, as is more typical of background objects.
Comment: 28 pages, 25 figures, accepted for publication in the Serbian Astronomical Journal
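For readers unfamiliar with the convention implied above (the two-point estimate and the labels S_{20}, S_{92} are my own, but the sign convention matches the abstract): the spectral index \alpha relates flux density S_\nu to frequency \nu, and with measurements at 20 cm and 92 cm it can be estimated as

    S_\nu \propto \nu^{\alpha},   \alpha = \log(S_{20}/S_{92}) / \log(\nu_{20}/\nu_{92}),   with \nu_{20} \approx 1.4 GHz and \nu_{92} \approx 0.33 GHz.

Steep negative values (\alpha < -0.6) point to non-thermal synchrotron emission, typical of background radio galaxies and AGN, whereas thermal free-free emission from HII regions has a much flatter spectrum (\alpha \approx -0.1).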
Influence of flow confinement on the drag force on a static cylinder
The influence of confinement on the drag force on a static cylinder in a viscous flow inside a rectangular slit has been investigated from experimental measurements and numerical simulations. At low enough Reynolds numbers, the drag force varies linearly with the mean velocity and the viscosity, allowing for the precise determination of drag coefficients for a mean flow parallel and perpendicular, respectively, to the cylinder length. In the parallel configuration, the variation of the drag coefficient with the diameter of the cylinder normalized by the slit aperture is close to that for a 2D flow invariant in the direction of the cylinder axis and does not diverge in the limit of small normalized diameters. The variation of this coefficient with the distance from the midplane of the model reflects the parabolic Poiseuille profile between the plates for small normalized diameters, while it remains almost constant for larger ones. In the perpendicular configuration, the value of the drag coefficient is close to that corresponding to a 2D system only if the normalized diameter is large enough and/or if the clearance between the ends of the cylinder and the side walls is very small: in the latter case, the coefficient diverges as the clearance vanishes, due to the blockage of the flow. In other cases, the side flow between the ends of the cylinder and the side walls plays an important part in reducing the drag: a full 3D description of the flow is needed to account for these effects.
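As a guide to the quantities involved (the normalization below is one common choice for low-Reynolds-number flows and uses my own labels, not necessarily the paper's): because the force is linear in the mean velocity U and the viscosity \mu, a dimensionless drag coefficient can be defined for each orientation, and the confinement enters through the cylinder diameter d normalized by the slit aperture H,

    F_{\parallel,\perp} = C_{\parallel,\perp} \mu U L,   \beta = d/H,

where L is the cylinder length and C_{\parallel}, C_{\perp} are the coefficients discussed above for flow parallel and perpendicular to the cylinder.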
The Hubble Constant determined through an inverse distance ladder including quasar time delays and Type Ia supernovae
Context. The precise determination of the present-day expansion rate of the Universe, expressed through the Hubble constant H_0, is one of the most pressing challenges in modern cosmology. Assuming flat \Lambda CDM, the H_0 inference at high redshift using cosmic-microwave-background data from Planck disagrees at the 4.4\sigma level with measurements based on the local distance ladder made up of parallaxes, Cepheids and Type Ia supernovae (SNe Ia), a discrepancy often referred to as the "Hubble tension". Independent, cosmological-model-insensitive ways to infer H_0 are of critical importance. Aims. We apply an inverse-distance-ladder approach, combining strong-lensing time-delay-distance measurements with SN Ia data. By themselves, SNe Ia are merely good relative distance indicators, but by anchoring them to strong gravitational lenses one can obtain an H_0 measurement that is relatively insensitive to other cosmological parameters. Methods. A cosmological parameter estimate is performed for different cosmological background models, both for strong-lensing data alone and for the combined lensing + SNe Ia data sets. Results. The cosmological-model dependence of strong-lensing H_0 measurements is significantly mitigated through the inverse distance ladder. In combination with SN Ia data, the inferred H_0 consistently lies around 73-74 km s^{-1} Mpc^{-1}, regardless of the assumed cosmological background model. Our results agree well with those from the local distance ladder, but there is a >2\sigma tension with Planck results, and a ~1.5\sigma discrepancy with results from an inverse distance ladder including Planck, Baryon Acoustic Oscillations and SNe Ia. Future strong-lensing distance measurements will reduce the uncertainties in H_0 from our inverse distance ladder.
Comment: 5 pages, 3 figures, A&A Letters, accepted version
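For context, the anchoring works through the standard strong-lensing time-delay relation (standard notation, not necessarily the paper's): a measured delay between lensed images i and j fixes an absolute distance that scales inversely with H_0,

    \Delta t_{ij} = (D_{\Delta t}/c) \Delta\phi_{ij},   D_{\Delta t} \equiv (1+z_d) D_d D_s / D_{ds} \propto 1/H_0,

where \Delta\phi_{ij} is the Fermat-potential difference from the lens model and D_d, D_s, D_{ds} are angular-diameter distances to the deflector, to the source, and between them. Because all angular-diameter distances carry an overall factor of c/H_0, the measured D_{\Delta t} calibrates the relative SN Ia distance scale and yields H_0 with only weak dependence on the rest of the background model.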
Multi-frequency observations of a superbubble in the LMC: The case of LHA 120-N 70
We present a detailed study of new Australia Telescope Compact Array (ATCA)
and XMM-Newton observations of LHA 120-N 70 (hereafter N 70), a spherically
shaped object in the Large Magellanic Cloud (LMC) classified as a superbubble
(SB). Both archival and new observations were used to produce high quality
radio-continuum, X-ray and optical images. The estimated radio spectral index of N 70 indicates that while a supernova or supernovae have occurred in the region at some time in the distant past, N 70 is not the remnant of a single specific supernova. N 70 exhibits limited polarisation, with a maximum fractional polarisation of 9% in a small area of the north-west limb. We estimate N 70 to have a diameter of 104 pc. The morphology of N 70 in X-rays closely follows that in radio and optical, with most X-ray emission confined within the bright shell seen at longer wavelengths. Purely thermal models adequately fit the soft X-ray spectrum, which lacks harder emission (above 1 keV). We also examine the pressure output of N 70; the values for the hot (P_X) and warm (P_HII) phases are consistent with those of other studied HII regions. However, the dust-processed radiation pressure (P_IR) is significantly smaller than in any other object studied in Lopez et al. (2013). N 70 is a very complex region in which multiple factors are likely to have contributed to both the origin and evolution of the entire region.
Comment: 21 pages, 8 figures, accepted for publication in A
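For scale (my own arithmetic, adopting a commonly used LMC distance of roughly 50 kpc, which is not a figure quoted in the abstract), a 104 pc diameter corresponds to an angular size of about

    \theta \approx 104 pc / 50 000 pc \approx 2.1 \times 10^{-3} rad \approx 7 arcmin on the sky.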
Relativistic Modification of the Gamow Factor
In processes involving Coulomb-type initial- and final-state interactions,
the Gamow factor has been traditionally used to take into account these
additional interactions. The Gamow factor needs to be modified when the
magnitude of the effective coupling constant increases or when the velocity
increases. For the production of a pair of particles under their mutual
Coulomb-type interaction, we obtain the modification of the Gamow factor in
terms of the overlap of the Feynman amplitude with the relativistic wave
function of the two particles. As a first example, we study the modification of
the Gamow factor for the production of two bosons. The modification is
substantial when the coupling constant is large.
Comment: 13 pages, in LaTeX
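For reference, a commonly quoted nonrelativistic form of the Gamow factor for an attractive Coulomb-type final-state interaction between two particles with relative velocity v and effective coupling \alpha_{eff} is (a textbook convention, not necessarily the exact normalization used in the paper):

    G(v) = (2\pi\alpha_{eff}/v) / (1 - e^{-2\pi\alpha_{eff}/v}).

G \to 1 as \alpha_{eff}/v \to 0, while G grows roughly as 2\pi\alpha_{eff}/v for strong coupling or small relative velocity; the abstract notes that the modification of this factor becomes substantial precisely when the coupling constant is large.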
Disentangling the Imaginary-Time Formalism at Finite Temperature
We rewrite the imaginary-time formalism of finite-temperature field theory in a form in which none of the graphs used in calculating physical processes contain loops. The production from a heat bath of a particle that is itself not thermalized, or the decay and absorption of such a particle in the bath, is expressed entirely in terms of a sum of particle interaction processes. These processes are very general in meaning: they can be straightforward interactions or the more subtle and less well-known purely interference processes that have no counterpart in the vacuum.
Comment: 14 pages, RevTeX style, 20 embedded EPS figures, added discussion of the connection with the real-time formalism + reference
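As background on the formalism being rewritten (standard textbook definitions, not results of this paper): in the imaginary-time formalism fields live on a compact imaginary-time interval of length 1/T, so propagators carry discrete Matsubara frequencies and every loop integral becomes a frequency sum,

    \omega_n = 2\pi n T (bosons),   \omega_n = (2n+1)\pi T (fermions),   n \in \mathbb{Z},
    \int d^4k/(2\pi)^4 \to T \sum_n \int d^3k/(2\pi)^3.

These frequency-summed loop integrals are the loops that the reformulation described in the abstract avoids.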
- …