Using New Selection Tools
The goal of most beef production systems is to increase, or at least maintain, profitability. Producers can attempt to increase profitability in a variety of ways, including reducing feed costs, changing their marketing program, or changing the performance of their herd through genetic improvement. Focusing on this latter option, there are two primary genetic tools available: selection and mating. Selection refers to choosing breeding animals, while mating determines which females are mated to which bulls (for example, through crossbreeding systems). This paper focuses on the former: selecting the appropriate animals for a production system with the goal of improving profitability. The best tool available for making selection decisions is expected progeny differences (EPD). Over the years, the number of EPD available to guide producers in making selection decisions has grown from 5 to, in most cases, over 15. Simply put, the amount of information that the breeder must sift through to make a good selection decision has become overwhelming. The producer must determine which EPD have the greatest influence on their income and expenses, and by how much, a daunting task. Historically this task has depended on the "intuition" and experience of the breeder. For instance, breeders know that selection for heavier weaning weight will increase the weight of calves sold at weaning, but that blind selection for weaning weight will also increase calving difficulty and, if replacements are kept, will likely increase cow size and feed costs. Breeders have been performing a balancing act with little concrete information on how important each of these traits is to their profitability. Fortunately, several tools have recently become available to ease the process of combining the costs and revenues of beef production with EPD, allowing selection decisions that produce more profitable progeny.
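One common way such tools combine costs and revenues with EPD is an economic selection index: each trait's EPD is weighted by its estimated dollar value to the operation. A minimal sketch, assuming purely illustrative trait names, EPD values, and dollar weights (none are from the paper):

```python
# Hypothetical economic selection index: EPDs weighted by dollar values.
# All trait names, EPD values, and weights below are illustrative only.

def selection_index(epds, economic_weights):
    """Rank value of an animal as a weighted sum of its EPDs."""
    return sum(epds[trait] * economic_weights[trait]
               for trait in economic_weights)

# Assumed per-unit economic values ($ per unit of EPD); real values
# depend on a specific production system's costs and revenues.
weights = {"weaning_weight": 0.90, "calving_ease": 4.50, "mature_weight": -0.35}

bull_a = {"weaning_weight": 55.0, "calving_ease": 8.0, "mature_weight": 40.0}
bull_b = {"weaning_weight": 70.0, "calving_ease": 2.0, "mature_weight": 85.0}

print(selection_index(bull_a, weights))
print(selection_index(bull_b, weights))
```

Under these invented weights, bull A ranks higher despite a lower weaning-weight EPD, because the index penalizes his rival's larger mature weight: exactly the balancing act the text describes.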
CT imaging techniques for two-phase and three-phase in-situ saturation measurements
The aim of this research is to use the SUPRI 3D steam injection laboratory model to establish a reliable method for three-phase in-situ saturation measurements, and thereafter to investigate the mechanism of steamflood at residual oil saturation. Demiral et al. designed and constructed a three-dimensional laboratory model that can be used to measure temperature, pressure, and heat loss data. The model is also designed so that its construction materials are not a limiting factor for CT scanning. We have used this model for our study. We saturated the model with mineral oil and waterflooded it to residual oil saturation. Steamflood was then carried out, during which a leak appeared at the bottom of the model. Despite this problem, the saturations obtained from the CT scanner, using the two-phase and three-phase saturation equations, were compared with the saturations obtained from material balance. The errors thus obtained were compared with those predicted by an error analysis of the saturation equations. This report gives details of the experimental procedures, the data acquisition and data processing computer programs, and the analysis of a steamflood experiment carried out at residual oil saturation.
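Two-phase saturation from CT numbers is commonly computed as a linear interpolation between the CT numbers of the fully oil-saturated and fully water-saturated core. A minimal sketch under that assumption (the report's exact equations may differ, and all CT numbers are illustrative):

```python
# Sketch of the standard two-phase CT saturation equation (an assumption:
# the report's exact form may differ). Water saturation in a voxel is
# interpolated between the fully oil- and fully water-saturated CT numbers.

def water_saturation(ct_exp, ct_oil_sat, ct_water_sat):
    """S_w from a voxel CT number, assuming linear mixing of the two phases."""
    return (ct_exp - ct_oil_sat) / (ct_water_sat - ct_oil_sat)

# Illustrative CT numbers for a single voxel:
print(water_saturation(ct_exp=1150.0, ct_oil_sat=1100.0, ct_water_sat=1300.0))
# 0.25
```

The three-phase case needs a second, independent measurement (e.g. a second scan energy or a doped phase) to close the system, since one CT number cannot separate three unknown saturations.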
Optimized energy calculation in lattice systems with long-range interactions
We discuss an efficient approach to the calculation of the internal energy in numerical simulations of spin systems with long-range interactions. Although, since the introduction of the Luijten-Blöte algorithm, Monte Carlo simulations of these systems no longer pose a fundamental problem, the energy calculation is still an O(N^2) problem for systems of size N. We show how this can be reduced to an O(N log N) problem, with a break-even point that is reached already for very small systems. This allows the study of a variety of physical aspects of these systems that were until now hardly accessible. In particular, we combine the optimized energy calculation with histogram interpolation methods to investigate the specific heat of the Ising model and the first-order regime of the three-state Potts model with long-range interactions.

Comment: 10 pages, including 8 EPS figures. To appear in Phys. Rev. E. Also available as a PDF file at http://www.cond-mat.physik.uni-mainz.de/~luijten/erikpubs.htm
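The O(N^2) to O(N log N) reduction rests on the fact that, for translation-invariant couplings on a periodic lattice, the local fields are a circular convolution of the coupling vector with the spin configuration and can therefore be evaluated with an FFT. A minimal one-dimensional sketch of that idea (not the authors' code; the coupling form and parameters are illustrative):

```python
import numpy as np

# Sketch: total interaction energy of a periodic 1D spin chain with
# long-range couplings J(r) ~ r^{-(1+sigma)} (illustrative choice).
# The naive double loop is O(N^2); circular convolution via FFT is O(N log N).

rng = np.random.default_rng(0)
N, sigma = 256, 0.5
s = rng.choice([-1.0, 1.0], size=N)

# Coupling vector J[d] for lattice distance d (minimum image, periodic).
d = np.arange(N)
dist = np.minimum(d, N - d).astype(float)
J = np.zeros(N)
J[1:] = dist[1:] ** (-(1.0 + sigma))

def energy_naive(s, J):
    """O(N^2): sum the coupling over every unordered pair of spins."""
    N = len(s)
    E = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            E -= J[(i - j) % N] * s[i] * s[j]
    return E

def energy_fft(s, J):
    """O(N log N): local fields h_i = sum_j J[(i-j) mod N] s_j via FFT."""
    h = np.fft.ifft(np.fft.fft(J) * np.fft.fft(s)).real
    return -0.5 * float(s @ h)   # factor 1/2 undoes double counting of pairs

print(abs(energy_naive(s, J) - energy_fft(s, J)) < 1e-8)  # True
```

The break-even point is small because the FFT replaces N inner sums of length N with three length-N transforms, so the crossover is set only by constant factors.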
On the Metric Dimension of Cartesian Products of Graphs
A set S of vertices in a graph G resolves G if every vertex is uniquely determined by its vector of distances to the vertices in S. The metric dimension of G is the minimum cardinality of a resolving set of G. This paper studies the metric dimension of Cartesian products G*H. We prove that the metric dimension of G*G is tied in a strong sense to the minimum order of a so-called doubly resolving set in G. Using bounds on the order of doubly resolving sets, we establish bounds on G*H for many examples of G and H. One of our main results is a family of graphs G with bounded metric dimension for which the metric dimension of G*G is unbounded.
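The definitions above are easy to state in code. The following sketch (an illustration of the definitions, not code from the paper) checks whether a vertex set resolves a connected graph and brute-forces the metric dimension of a small example:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, src):
    """Shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def resolves(adj, S):
    """S resolves G iff the distance vectors to S are pairwise distinct."""
    tables = [bfs_distances(adj, s) for s in S]
    vectors = {tuple(t[v] for t in tables) for v in adj}
    return len(vectors) == len(adj)

def metric_dimension(adj):
    """Minimum cardinality of a resolving set (brute force; connected G)."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            if resolves(adj, S):
                return k

# Path P4: a single endpoint already resolves it (distances 0,1,2,3),
# so its metric dimension is 1.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(P4))  # 1
```

Doubly resolving sets strengthen this condition, which is why they control the harder product case G*G.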
Hierarchical search strategy for the detection of gravitational waves from coalescing binaries: Extension to post-Newtonian wave forms
The detection of gravitational waves from coalescing compact binaries would be a computationally intensive process if a single bank of template wave forms (i.e., a one-step search) is used. In an earlier paper we presented a detection strategy, called a two-step search, that utilizes a hierarchy of template banks. It was shown that in the simple case of a family of Newtonian signals, an on-line two-step search was about 8 times faster than an on-line one-step search (for initial LIGO). In this paper we extend the two-step search to the more realistic case of zero-spin 1.5 post-Newtonian wave forms. We also present formulas for detection and false alarm probabilities which take statistical correlations into account. We find that for the case of a 1.5 post-Newtonian family of templates and signals, an on-line two-step search requires about 1/21 the computing power that would be required for the corresponding on-line one-step search. This reduction is achieved when signals having strength S = 10.34 are required to be detected with a probability of 0.95, at an average of one false event per year, and the noise power spectral density used is that of advanced LIGO. For initial LIGO, the reduction achieved in computing power is about 1/27 for S = 9.98 and the same probabilities for detection and false alarm as above.

Comment: 30 page RevTeX file and 17 figures (postscript). Submitted to PRD Feb 21, 199
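The principle behind a two-step search can be illustrated with a toy one-parameter example: a coarse template bank with a lowered first-stage threshold flags candidate regions, and only those regions are followed up with the fine bank. The sketch below is a deliberate simplification (the match function, thresholds, and grid sizes are invented for illustration and bear no relation to the actual wave-form banks):

```python
import numpy as np

# Toy two-step hierarchical search over one template parameter.
# Stage 1: evaluate a coarse bank with a low threshold to flag candidates.
# Stage 2: evaluate the fine bank only near flagged coarse templates.

def match(theta, theta_signal, width=0.05):
    # Stand-in "ambiguity function": match decays with parameter mismatch.
    return np.exp(-((theta - theta_signal) / width) ** 2)

theta_signal = 0.6180
fine = np.linspace(0.0, 1.0, 10_001)    # one-step search: 10001 evaluations
coarse = np.linspace(0.0, 1.0, 101)     # stage 1: 101 evaluations

flagged = coarse[match(coarse, theta_signal) > 0.2]
evals = len(coarse)
best = 0.0
for c in flagged:                       # stage 2: refine near each flag
    local = fine[np.abs(fine - c) <= 0.005]
    evals += len(local)
    best = max(best, float(match(local, theta_signal).max()))

print(evals < len(fine))  # True: far fewer evaluations than one-step search
print(best > 0.99)        # True: the signal is still recovered
```

The real analysis is harder precisely because the lowered first-stage threshold raises the false alarm rate, which is why the paper's correlated detection and false alarm probability formulas are needed to tune the two thresholds jointly.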
Toward an Unsteady Aerodynamic ROM for Multiple Mach Regimes
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/97065/1/AIAA2012-1708.pd
Errors in recall of age at first sex
Aims: To measure the degree and direction of errors in recall of age at first sex. Method: Participants were initially recruited in 1994–1995 (Wave I), with 3 subsequent follow-ups: in 1996 (Wave II); 2001–2002 (Wave III); and 2007–2008 (Wave IV). Each participant's error in recall of age at first sex was estimated by the paired difference between the responses given at Wave I and Wave IV (recalled age at first sex obtained at Wave IV minus the age at first sex obtained at Wave I). Results: The mean recalled age at first sex at Wave IV was slightly higher than the age at first sex reported at Wave I (by less than 1 year). The errors in recalled age at first sex tended to increase in participants who had their first sex younger or older than average, and the recalled age tended to be biased towards the mean (i.e., participants who had first sex younger than average were more likely to recall an age at first sex older than originally reported, and vice versa). Conclusions: In this U.S. population-based sample, the average recall error for age at first sex was small. However, the accuracy of recalled information varied significantly among population subgroups.
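The paired-difference estimate from the Method section, and the regression-toward-the-mean pattern in the Results, can be shown on a hypothetical mini-sample (all ages below are invented for illustration, not study data):

```python
import statistics

# Hypothetical mini-example of the study's paired-difference recall error:
# error = (age recalled at Wave IV) - (age reported at Wave I).
wave1 = [14, 15, 16, 17, 18, 20]   # illustrative ages at first sex, Wave I
wave4 = [15, 16, 16, 17, 18, 19]   # illustrative recalled ages, Wave IV

errors = [w4 - w1 for w1, w4 in zip(wave1, wave4)]
print(statistics.mean(errors))     # small positive mean recall error

# Regression toward the mean: early initiators recall older ages,
# late initiators recall younger ones.
mean_w1 = statistics.mean(wave1)
early = [e for w1, e in zip(wave1, errors) if w1 < mean_w1]
late = [e for w1, e in zip(wave1, errors) if w1 > mean_w1]
print(statistics.mean(early) > statistics.mean(late))  # True
```

A small mean error can thus coexist with substantial subgroup bias, which is the paper's central caution.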
On the Tractability of (k, i)-Coloring
In an undirected graph, a proper (k, i)-coloring is an assignment of a set of k colors to each vertex such that any two adjacent vertices have at most i common colors. The (k, i)-coloring problem is to compute the minimum number of colors required for a proper (k, i)-coloring. This is a generalization of the classic graph coloring problem. Majumdar et al. [CALDAM 2017] studied this problem and showed that the decision version of the (k, i)-coloring problem is fixed parameter tractable (FPT) with tree-width as the parameter. They asked if there exists an FPT algorithm with the size of the feedback vertex set (FVS) as the parameter, without using tree-width machinery. We answer this in the positive by giving a parameterized algorithm with the size of the FVS as the parameter. We also give a faster and simpler exact algorithm for (k, k-1)-coloring, and make progress on the NP-completeness of specific cases of (k, i)-coloring.
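The definition of a proper (k, i)-coloring can be checked directly by brute force. The sketch below (an illustration of the definition, not the paper's FPT algorithm) computes the minimum number of colors for a small example:

```python
from itertools import combinations, product

# A proper (k, i)-coloring assigns a set of k colors to each vertex such
# that adjacent vertices share at most i colors. Brute-force the minimum
# total number of colors (exponential; for tiny illustrative graphs only).

def is_proper(adj, coloring, i):
    return all(len(coloring[u] & coloring[v]) <= i
               for u in adj for v in adj[u] if u < v)

def min_colors(adj, k, i):
    verts = list(adj)
    n_colors = k                      # need at least k colors per vertex
    while True:
        palettes = list(combinations(range(n_colors), k))
        for choice in product(palettes, repeat=len(verts)):
            coloring = {v: set(c) for v, c in zip(verts, choice)}
            if is_proper(adj, coloring, i):
                return n_colors
        n_colors += 1

# Triangle K3 with (k, i) = (2, 1): three colors suffice,
# e.g. {0,1}, {1,2}, {0,2}, each adjacent pair sharing exactly one color.
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(min_colors(K3, k=2, i=1))  # 3
```

Setting (k, i) = (1, 0) recovers classic proper coloring, which is why the problem generalizes it.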
Schooling and Poor Children in 19th-Century America
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/68138/2/10.1177_000276429203500307.pd
Direct Gene Transfer for the Understanding and Treatment of Human Disease
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/75762/1/j.1749-6632.1994.tb21709.x.pd