Improved timed-mating, non-invasive method using fewer unproven female rats with pregnancy validation via early body mass increases
For studies requiring accurate conception timing, reliable and efficient methods of detecting oestrus reduce time and costs whilst improving welfare. Standard methods use vaginal cytology to stage the cycle, and breeders are paired up using approximately five proven females with proven males to achieve at least one conception on a specific day. We describe an alternative fast, consistent, non-invasive method of timed-mating, using detection of lordosis behaviour in Wistar and Lister-Hooded rats, that used unproven females with high success rates. Rats under reverse lighting had body masses recorded pre-mating and on day (d) 3-4, d8, d10 and d18 of pregnancy. Using only the presence of the oestrus dance to time-mate females for 24 hrs, 89% of Wistar and 88% of Lister-Hooded rats successfully conceived. We did not observe behavioural oestrus in Sprague-Dawleys without males present. Significant body mass increases following mating distinguished pregnant from non-pregnant rats as early as d4 of pregnancy (10% ± 1.0 increase cf. 3% ± 1.2). The pattern of increases throughout gestation was similar for all pregnant rats until late pregnancy, when increases were smaller for primi- and multiparous rats (32% ± 2.5; 25% ± 2.4), whereas nulliparous rats had the highest gains (38% ± 1.5). This method is a distinct refinement of the previous common timed-mating practice, as disturbance of the females was minimised, only the required number of nulli-, primi- or multiparous rats were mated, and body mass increases validated pregnancy status. This new breeding-management method is now established practice for two strains of rat and has resulted in a reduction in animal use.
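A minimal Python sketch of the kind of body-mass check the abstract describes; the 6.5% day-4 cut-off is a hypothetical midpoint between the reported pregnant (10% ± 1.0) and non-pregnant (3% ± 1.2) gains, not a value taken from the paper.

    # Minimal sketch: flag likely pregnancies from the early body-mass gain.
    # The 6.5% day-4 cut-off is a hypothetical midpoint between the reported
    # pregnant (10% +/- 1.0) and non-pregnant (3% +/- 1.2) gains, not a value
    # from the paper.

    def percent_gain(pre_mating_mass_g: float, day4_mass_g: float) -> float:
        """Percent body-mass increase relative to the pre-mating mass."""
        return 100.0 * (day4_mass_g - pre_mating_mass_g) / pre_mating_mass_g

    def likely_pregnant(pre_mating_mass_g: float, day4_mass_g: float,
                        threshold_pct: float = 6.5) -> bool:
        """Classify a female as likely pregnant if her day-4 gain exceeds the cut-off."""
        return percent_gain(pre_mating_mass_g, day4_mass_g) >= threshold_pct

    # Example: a 250 g female weighing 276 g on day 4 shows a ~10.4% gain.
    print(likely_pregnant(250.0, 276.0))  # True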
Determination of Pericardial Adipose Tissue Increases the Prognostic Accuracy of Coronary Artery Calcification for Future Cardiovascular Events
Objectives: Pericardial adipose tissue (PAT) is associated with coronary artery plaque accumulation and the incidence of coronary heart disease. We evaluated the possible incremental prognostic value of PAT for future cardiovascular events. Methods: 145 patients (94 males, age 60 ± 10 years) with stable coronary artery disease underwent coronary artery calcification (CAC) scanning in a multislice CT scanner, and the volume of pericardial fat was measured. Mean observation time was 5.4 years. Results: 34 patients experienced a severe cardiac event. They had a significantly higher CAC score (1,708 ± 2,269 vs. 538 ± 1,150; p < […]). The relative risk of an event was […] for scores > 400, 3.5 (1.9-5.4; p = 0.007) for scores > 800 and 5.9 (3.7-7.8; p = 0.005) for scores > 1,600. When additionally a PAT volume > 200 cm³ was determined, there was a significant increase in the event rate and relative risk. We calculated a relative risk of 2.9 (1.9-4.2; p = 0.01) for scores > 400, 4.0 (2.1-5.0; p = 0.006) for scores > 800 and 7.1 (4.1-10.2; p = 0.005) for scores > 1,600. Conclusions: The additional determination of PAT increases the predictive power of CAC for future cardiovascular events. PAT might therefore be used as a further parameter for risk stratification. Copyright © 2012 S. Karger AG, Basel.
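For context, a quick sketch of the relative-risk arithmetic behind figures like those above; the event counts in the example are invented for illustration and are not the study's data.

    # Illustrative relative-risk arithmetic with a log-normal 95% CI.
    # The event counts below are invented for demonstration only.
    import math

    def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
        """Relative risk and approximate 95% confidence interval."""
        r1 = events_exposed / n_exposed
        r0 = events_unexposed / n_unexposed
        rr = r1 / r0
        se = math.sqrt(1 / events_exposed - 1 / n_exposed
                       + 1 / events_unexposed - 1 / n_unexposed)
        return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

    # Hypothetical example: 20/60 events among patients with CAC > 400 and
    # PAT > 200 cm^3 versus 10/85 events among the remaining patients.
    print(relative_risk(20, 60, 10, 85))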
Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope
We present a new measurement of the kinematic Sunyaev-Zeldovich effect using
data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation
Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area,
we evaluate the mean pairwise baryon momentum associated with the positions of
50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A
non-zero signal arises from the large-scale motions of halos containing the
sample galaxies. The data fits an analytical signal model well, with the
optical depth to microwave photon scattering as a free parameter determining
the overall signal amplitude. We estimate the covariance matrix of the mean
pairwise momentum as a function of galaxy separation, using microwave sky
simulations, jackknife evaluation, and bootstrap estimates. The most
conservative simulation-based errors give signal-to-noise estimates between 3.6
and 4.1 for varying galaxy luminosity cuts. We discuss how the other error
determinations can lead to higher signal-to-noise values, and consider the
impact of several possible systematic errors. Estimates of the optical depth
from the average thermal Sunyaev-Zeldovich signal at the sample galaxy
positions are broadly consistent with those obtained from the mean pairwise
momentum signal.
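As a rough illustration of the pairwise statistic being measured, here is a minimal implementation of the standard mean pairwise kSZ estimator (the geometric weighting of Ferreira et al. 1999, as used by Hand et al. 2012); it is a sketch of the general technique, not this paper's pipeline, and the filtered aperture-photometry temperatures are assumed to be prepared elsewhere.

    # Sketch of the standard mean pairwise kSZ estimator (Ferreira et al. 1999;
    # Hand et al. 2012). Inputs are assumed to be comoving galaxy positions and
    # filtered, mean-subtracted aperture temperatures at the galaxy locations.
    import numpy as np

    def pairwise_ksz(positions, temperatures, r_edges):
        """Mean pairwise kSZ amplitude in bins of comoving pair separation.

        positions:    (N, 3) comoving coordinates [Mpc]
        temperatures: (N,)   CMB temperature decrements at galaxy positions [uK]
        r_edges:      (B+1,) separation bin edges [Mpc]
        """
        pos = np.asarray(positions, dtype=float)
        T = np.asarray(temperatures, dtype=float)
        num = np.zeros(len(r_edges) - 1)
        den = np.zeros_like(num)
        rhat = pos / np.linalg.norm(pos, axis=1, keepdims=True)  # line-of-sight unit vectors
        for i in range(len(T)):
            dvec = pos[i] - pos[i + 1:]                          # pair separation vectors
            r = np.linalg.norm(dvec, axis=1)
            # Geometric weight c_ij = rhat_ij . (rhat_i + rhat_j) / 2
            c = np.einsum("kj,kj->k", dvec, 0.5 * (rhat[i] + rhat[i + 1:])) / r
            dT = T[i] - T[i + 1:]
            b = np.digitize(r, r_edges) - 1
            ok = (b >= 0) & (b < len(num))
            np.add.at(num, b[ok], dT[ok] * c[ok])
            np.add.at(den, b[ok], c[ok] ** 2)
        return -num / np.where(den > 0, den, np.nan)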
Time-Integrated Neutrino Source Searches with 10 Years of IceCube Data.
This Letter presents the results from pointlike neutrino source searches using ten years of IceCube data collected between April 6, 2008 and July 10, 2018. We evaluate the significance of an astrophysical signal from a pointlike source looking for an excess of clustered neutrino events with energies typically above ∼1 TeV among the background of atmospheric muons and neutrinos. We perform a full-sky scan, a search within a selected source catalog, a catalog population study, and three stacked Galactic catalog searches. The most significant point in the northern hemisphere from scanning the sky is coincident with the Seyfert II galaxy NGC 1068, which was included in the source catalog search. The excess at the coordinates of NGC 1068 is inconsistent with background expectations at the level of 2.9σ after accounting for statistical trials from the entire catalog. The combination of this result along with excesses observed at the coordinates of three other sources, including TXS 0506+056, suggests that, collectively, correlations with sources in the northern catalog are inconsistent with background at 3.3σ significance. The southern catalog is consistent with background. These results, all based on searches for a cumulative neutrino signal integrated over the 10 years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.
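A small sketch of the kind of trial-factor bookkeeping implied by the post-trial significances quoted above; the catalog size and the assumption of independent trials are placeholders, and in practice such corrections are usually derived from scrambled pseudo-experiments rather than an analytic formula.

    # Toy trial-factor bookkeeping: convert a pre-trial significance to a
    # post-trial one assuming n independent catalog trials. Real analyses
    # usually derive this correction from scrambled pseudo-experiments.
    import math

    def sigma_to_p(sigma: float) -> float:
        """One-sided Gaussian tail probability for a significance in sigmas."""
        return 0.5 * math.erfc(sigma / math.sqrt(2.0))

    def p_to_sigma(p: float) -> float:
        """Invert the one-sided tail probability by bisection."""
        lo, hi = 0.0, 10.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if sigma_to_p(mid) > p else (lo, mid)
        return 0.5 * (lo + hi)

    def post_trial_p(p_pre: float, n_trials: int) -> float:
        """Chance that at least one of n independent trials fluctuates to p_pre."""
        return 1.0 - (1.0 - p_pre) ** n_trials

    # Hypothetical example: a 4-sigma pre-trial excess searched for in a
    # 110-source catalog is reduced to roughly 2.7 sigma post-trial.
    print(p_to_sigma(post_trial_p(sigma_to_p(4.0), 110)))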
Combined sensitivity to the neutrino mass ordering with JUNO, the IceCube Upgrade, and PINGU
The ordering of the neutrino mass eigenstates is one of the fundamental open questions in neutrino physics. While current-generation neutrino oscillation experiments are able to produce moderate indications on this ordering, upcoming experiments of the next generation aim to provide conclusive evidence. In this paper we study the combined performance of the two future multi-purpose neutrino oscillation experiments JUNO and the IceCube Upgrade, which employ two very distinct and complementary routes toward the neutrino mass ordering. The approach pursued by the 20 kt medium-baseline reactor neutrino experiment JUNO consists of a careful investigation of the energy spectrum of oscillated ν̄e produced by ten nuclear reactor cores. The IceCube Upgrade, on the other hand, which consists of seven additional densely instrumented strings deployed in the center of IceCube DeepCore, will observe large numbers of atmospheric neutrinos that have undergone oscillations affected by Earth matter. In a joint fit with both approaches, tension occurs between their preferred mass-squared differences Δm²₃₁ = m²₃ - m²₁ within the wrong mass ordering. In the case of JUNO and the IceCube Upgrade, this makes it possible to exclude the wrong ordering at >5σ on a timescale of 3-7 years - even under circumstances that are unfavorable to the experiments' individual sensitivities. For PINGU, a 26-string detector array designed as a potential low-energy extension to IceCube, the inverted ordering could be excluded within 1.5 years (3 years for the normal ordering) in a joint analysis.
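A toy numerical sketch of why the Δm²₃₁ tension helps: each experiment's wrong-ordering Δχ² profile prefers a different Δm²₃₁, so a joint fit cannot reach both minima at once. All numbers below are placeholders chosen for illustration, not the sensitivities reported in the paper.

    # Toy Δχ² profiles in Δm²_31 for the wrong mass ordering: the two experiments
    # prefer different values, so a joint fit cannot reach both minima at once and
    # the combined rejection exceeds the naive sum. All numbers are placeholders.
    import numpy as np

    def profile(dm31, best_fit, sigma, offset):
        """Parabolic toy Δχ² profile with a wrong-ordering floor 'offset'."""
        return offset + ((dm31 - best_fit) / sigma) ** 2

    dm31 = np.linspace(2.2e-3, 2.8e-3, 2001)  # eV^2, scan over the wrong-ordering fit

    chi2_juno = profile(dm31, best_fit=2.48e-3, sigma=0.02e-3, offset=4.0)
    chi2_icu  = profile(dm31, best_fit=2.42e-3, sigma=0.06e-3, offset=2.0)

    print(chi2_juno.min(), chi2_icu.min())   # each alone reaches its floor (~4.0, ~2.0)
    print((chi2_juno + chi2_icu).min())      # joint minimum is larger than 4.0 + 2.0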
A process pattern model for tackling and improving big data quality
Data seldom create value by themselves. They need to be linked and combined from multiple sources, which often come with variable data quality. The task of improving data quality is a recurring challenge. In this paper, we use a case study of a large telecom company to develop a generic process pattern model for improving data quality. The process pattern model is defined as a proven series of activities, aimed at improving the data quality given a certain context, a particular objective, and a specific set of initial conditions. Four different patterns are derived to deal with the variations in data quality of datasets. Instead of having to devise a way to improve the quality of big data for each situation, the process model provides data users with generic patterns, which can be used as a reference model to improve big data quality.
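To make the idea of a reusable process pattern concrete, here is a small sketch of one such pattern (apply declarative quality rules, repair what can be repaired, quarantine the rest, re-check); the rule and field names are hypothetical and are not taken from the paper or its telecom case study.

    # Sketch of a reusable data-quality process pattern: apply declarative rules,
    # repair what can be repaired, quarantine the rest, and re-check. The rule and
    # field names are hypothetical, not taken from the paper's telecom case study.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class QualityRule:
        name: str
        check: Callable[[Dict], bool]    # True if the record passes the rule
        repair: Callable[[Dict], Dict]   # attempt to fix a failing record

    def apply_pattern(records: List[Dict], rules: List[QualityRule]):
        """Run each rule over the dataset; keep repaired records, quarantine failures."""
        clean, quarantined = [], []
        for rec in records:
            for rule in rules:
                if not rule.check(rec):
                    rec = rule.repair(rec)
            (clean if all(r.check(rec) for r in rules) else quarantined).append(rec)
        return clean, quarantined

    # Hypothetical usage: normalise phone numbers before linking customer datasets.
    rules = [QualityRule(
        name="msisdn_digits_only",
        check=lambda r: r.get("msisdn", "").isdigit(),
        repair=lambda r: {**r, "msisdn": "".join(ch for ch in r.get("msisdn", "") if ch.isdigit())},
    )]
    clean, bad = apply_pattern([{"msisdn": "+31 6 1234 5678"}, {"msisdn": ""}], rules)
    print(len(clean), len(bad))  # 1 1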
Efficient propagation of systematic uncertainties from calibration to analysis with the SnowStorm method in IceCube
Efficient treatment of systematic uncertainties that depend on a large number of nuisance parameters is a persistent difficulty in particle physics and astrophysics experiments. Where low-level effects are not amenable to simple parameterization or re-weighting, analyses often rely on discrete simulation sets to quantify the effects of nuisance parameters on key analysis observables. Such methods may become computationally untenable for analyses requiring high-statistics Monte Carlo with a large number of nuisance degrees of freedom, especially in cases where these degrees of freedom parameterize the shape of a continuous distribution. In this paper we present a method for treating systematic uncertainties in a computationally efficient and comprehensive manner using a single simulation set with multiple and continuously varied nuisance parameters. This method is demonstrated for the case of the depth-dependent effective dust distribution within the IceCube Neutrino Telescope.
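A conceptual sketch of the core idea: a single simulation set in which each event carries its own continuously varied nuisance-parameter value, with the per-bin response recovered by regression. This is a simplified illustration under toy assumptions, not the SnowStorm implementation.

    # Conceptual toy: every simulated event carries its own randomly drawn nuisance
    # value; the per-bin response is then recovered by a linear regression against
    # that value. Simplified illustration only, not the SnowStorm implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    n_events = 200_000
    nuisance = rng.uniform(0.9, 1.1, n_events)                        # continuously varied parameter
    observable = rng.normal(loc=nuisance, scale=0.3, size=n_events)   # toy dependence on it

    bins = np.linspace(0.0, 2.0, 21)
    n_bins = len(bins) - 1
    gradients = np.empty(n_bins)   # d(counts_k) / d(nuisance)
    baseline = np.empty(n_bins)    # expected counts at the nominal value 1.0

    for k in range(n_bins):
        in_bin = ((observable >= bins[k]) & (observable < bins[k + 1])).astype(float)
        slope, intercept = np.polyfit(nuisance, in_bin, 1)   # per-event bin occupancy vs. nuisance
        gradients[k] = slope * n_events
        baseline[k] = (intercept + slope * 1.0) * n_events

    # Predict the histogram for a shifted nuisance value without new simulation.
    predicted = baseline + gradients * (1.05 - 1.0)
    print(predicted[:5])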
Thermodynamics of deformed AdS model with a positive/negative quadratic correction in graviton-dilaton system
By solving the Einstein equations for a graviton coupled to a real scalar dilaton field, we establish a general framework for self-consistently solving the black-hole geometric background for any given phenomenological holographic model. In this framework, we solve the black-hole background, the corresponding dilaton field and the dilaton potential for the deformed AdS model with a
positive/negative quadratic correction. We systematically investigate the
thermodynamical properties of the deformed AdS model with a positive and
negative quadratic correction, respectively, and compare with lattice QCD on
the results of the equation of state, the heavy quark potential, the Polyakov
loop and the spatial Wilson loop. We find that the bulk thermodynamical
properties are not sensitive to the sign of the quadratic correction, and the
results of both deformed holographic QCD models agree well with lattice QCD
results for pure SU(3) gauge theory. However, the results from the loop operators favor a positive quadratic correction, in good agreement with lattice QCD. In particular, the result from the Polyakov loop excludes the model with a negative quadratic correction in the warp factor of […].
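The quadratic deformation referred to above typically enters through the warp factor of the metric; the expression in the abstract did not survive extraction, so the LaTeX block below shows only a generic ansatz of the kind used in such graviton-dilaton models, which may differ from the paper's exact convention.

    % Generic deformed-AdS ansatz with a quadratic warp-factor correction;
    % the sign of c distinguishes the positive/negative cases and c -> 0
    % recovers pure AdS. The paper's exact convention may differ.
    ds^2 = \frac{L^2 e^{A(z)}}{z^2}\left( -f(z)\,dt^2 + d\vec{x}^{\,2} + \frac{dz^2}{f(z)} \right),
    \qquad A(z) = \pm c\, z^2 .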
Design and performance of the first IceAct demonstrator at the South Pole
In this paper we describe the first results of IceAct, a compact imaging air-Cherenkov telescope operating in coincidence with the IceCube Neutrino Observatory (IceCube) at the geographic South Pole. An array of IceAct telescopes (referred to as the IceAct project) is under consideration as part of the IceCube-Gen2 extension to IceCube. Surface detectors in general will be a powerful tool in IceCube-Gen2 for distinguishing astrophysical neutrinos from the dominant backgrounds of cosmic-ray induced atmospheric muons and neutrinos: the IceTop array is already in place as part of IceCube, but has a high energy threshold. Although the duty cycle will be lower for the IceAct telescopes than the present IceTop tanks, the IceAct telescopes may prove to be more effective at lowering the detection threshold for air showers. Additionally, small imaging air-Cherenkov telescopes in combination with IceTop, the deep IceCube detector or other future detector systems might improve measurements of the composition of the cosmic ray energy spectrum. In this paper we present measurements of a first 7-pixel imaging air-Cherenkov telescope demonstrator, proving the capability of this technology to measure air showers at the South Pole in coincidence with IceTop and the deep IceCube detector.
Search for sources of astrophysical neutrinos using seven years of IceCube cascade events
Low-background searches for astrophysical neutrino sources anywhere in the sky can be performed using cascade events induced by neutrinos of all flavors interacting in IceCube with energies as low as ∼1 TeV. Previously we showed that, even with just two years of data, the resulting sensitivity to sources in the southern sky is competitive with IceCube and ANTARES analyses using muon tracks induced by charged-current muon neutrino interactions - especially if the neutrino emission follows a soft energy spectrum or originates from an extended angular region. Here, we extend that work by adding five more years of data, significantly improving the cascade angular resolution, and including tests for point-like or diffuse Galactic emission to which this data set is particularly well suited. For many of the signal candidates considered, this analysis is the most sensitive of any experiment to date. No significant clustering was observed, and thus many of the resulting constraints are the most stringent to date. In this paper we will describe the improvements introduced in this analysis and discuss our results in the context of other recent work in neutrino astronomy.
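For readers unfamiliar with how such searches are built, here is a compact sketch of the standard unbinned signal-plus-background likelihood used in IceCube-style point-source analyses; the per-event signal and background PDF values fed in below are toy numbers, not this analysis's cascade PDFs.

    # Sketch of the standard unbinned point-source likelihood (a per-event mixture
    # of signal and background PDFs) used in IceCube-style searches. The PDF values
    # fed in below are toy numbers, not this analysis's cascade PDFs.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def test_statistic(S, B):
        """TS = 2 * log-likelihood ratio of the best-fit n_s versus background only.

        S, B: per-event signal and background PDF values (space x energy), shape (N,).
        """
        S, B = np.asarray(S, float), np.asarray(B, float)
        N = len(S)

        def neg_logl(ns):
            return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

        res = minimize_scalar(neg_logl, bounds=(0.0, N), method="bounded")
        return 2.0 * (neg_logl(0.0) - res.fun), res.x   # (TS, fitted n_s)

    # Toy usage with random PDF values for 1000 events.
    rng = np.random.default_rng(1)
    print(test_statistic(rng.exponential(1.0, 1000), np.full(1000, 1.0)))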
