
    The economics of feeding bulls and steers fed corn silage and corn grain diets

    A study comparing weaned spring-born bull and steer calves was conducted at the Western Iowa Research Center near Castana over a three-year period. The three trials commenced with the placement of cattle on test in November 1982, October 1983 and October 1984. The first trial utilized 80 calves from Simbrah bulls mated to crossbred Charolais, Angus and Hereford cows. The second and third trials consisted of 87 and 90 Angus calves, respectively. Parameters evaluated in this study were total production costs, total feed dry matter consumption, feed cost by period, carcass and live values, and total and net returns. The composition of the chuck and estimated percentages of boneless round, loin, rib and chuck were evaluated. Six to eight head of cattle were allotted to 12 pens by sex and weight. All steers were implanted with a growth stimulant at the start of each trial. Two dietary treatments were imposed within sex: one diet consisted of whole-plant corn silage, and the second consisted of shelled corn grain and whole-plant corn silage. All cattle were weighed at 28-day intervals until removed from test at 16 to 17 months of age. Average lot final weights were 1150 lb in trial one, 1100 lb in trial two and 1130 lb in trial three. Bulls had a greater total cost of production in trial one (P < .05). The total dry matter consumed per head was significantly lower for bulls than steers in trials two and three (P < .05). Feed costs per pound of gain were lower for bulls than steers during the entire test period of the first trial (P < .002). Net dollar returns were not significantly different for bulls and steers in any of the trials. Bulls in trial three produced chucks with the highest percent of muscle (P < .0004) and the lowest percent of fat (P < .0004). Bulls in trials one and two produced an estimated 60 lb (P < .0001) and 30 lb (P < .006) more retail product, respectively, per carcass than steers fed under like conditions. When trials two and three were combined, bulls produced 27 lb (P < .008) more retail product than steers.
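    As a worked illustration of two of the economic measures compared above, a minimal Python sketch follows; the figures are hypothetical and are not data from the trials.

        # Illustrative only: hypothetical figures, not values from the trials.
        def feed_cost_per_lb_gain(total_feed_cost, final_wt_lb, initial_wt_lb):
            """Feed cost ($) divided by total live-weight gain (lb)."""
            return total_feed_cost / (final_wt_lb - initial_wt_lb)

        def net_return(gross_value, total_production_cost):
            """Net dollar return per head: sale value minus production cost."""
            return gross_value - total_production_cost

        # Example: an animal entering at 500 lb and finishing at 1150 lb
        print(feed_cost_per_lb_gain(325.0, 1150, 500))  # 0.50 $/lb of gain
        print(net_return(805.0, 760.0))                 # 45.0 $/head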

    The indecomposable objects in the center of Deligne's category Rep(S_t)

    We classify the indecomposable objects in the monoidal center of Deligne's interpolation category Rep(S_t) by viewing Rep(S_t) as a model-theoretic limit in rank and characteristic. We further prove that the center of Rep(S_t) is semisimple if and only if t is not a non-negative integer. In addition, we identify the associated graded Grothendieck ring of this monoidal center with that of the graded sum of the centers of representation categories of finite symmetric groups with an induction product. We prove analogous statements for the abelian envelope.
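    In symbols, the semisimplicity criterion stated above reads as follows (this is only a LaTeX restatement of the abstract's claim, with \mathcal{Z} denoting the monoidal center):

        \mathcal{Z}\bigl(\mathrm{Rep}(S_t)\bigr)\ \text{is semisimple} \iff t \notin \mathbb{Z}_{\ge 0}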

    Node coarsening calculi for program slicing

    Several approaches to reverse- and re-engineering are based upon program slicing. Unfortunately, for large systems, such as those which typically form the subject of reverse engineering activities, the space and time requirements of slicing can be a barrier to successful application. Faced with this problem, several authors have found it helpful to merge control flow graph (CFG) nodes, thereby improving the space and time requirements of standard slicing algorithms. The node-merging process essentially creates a 'coarser' version of the original CFG. The paper introduces a theory for defining control flow graph node coarsening calculi. The theory formalizes properties of interest when coarsening is used as a precursor to program slicing. The theory is illustrated with a case study of a coarsening calculus, which is proved to have the desired properties of sharpness and consistency.
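    A minimal Python sketch of one common instance of node coarsening (merging maximal straight-line chains of CFG nodes into single coarse nodes); this is an illustration only, not the calculus defined in the paper.

        # Illustrative CFG coarsening: merge a node with its successor when
        # the edge is the node's only out-edge and the successor's only
        # in-edge. `succ` must list every node as a key.
        from collections import defaultdict

        def coarsen(succ):
            """succ: dict mapping node -> list of successor nodes.
            Returns a map from coarse node -> list of merged members."""
            pred = defaultdict(list)
            for n, ss in succ.items():
                for s in ss:
                    pred[s].append(n)
            merged = {n: [n] for n in succ}   # coarse node -> members
            rep = {n: n for n in succ}        # member -> its coarse node
            for n in list(succ):
                ss = succ[n]
                if len(ss) == 1 and len(pred[ss[0]]) == 1 and ss[0] != n:
                    a, b = rep[n], ss[0]
                    merged[a].extend(merged.pop(b))
                    rep[b] = a
            return merged

        # Example: chain 1 -> 2 -> 3 with a branch at 3
        g = {1: [2], 2: [3], 3: [4, 5], 4: [], 5: []}
        print(coarsen(g))   # {1: [1, 2, 3], 4: [4], 5: [5]}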

    On the computational complexity of dynamic slicing problems for program schemas

    Given a program, a quotient can be obtained from it by deleting zero or more statements. The field of program slicing is concerned with computing a quotient of a program that preserves part of the behaviour of the original program. All program slicing algorithms take account of the structural properties of a program, such as control dependence and data dependence, rather than the semantics of its functions and predicates, and thus work, in effect, with program schemas. The dynamic slicing criterion of Korel and Laski requires only that program behaviour is preserved in cases where the original program follows a particular path, and that the slice/quotient follows this path. In this paper we formalise Korel and Laski's definition of a dynamic slice as applied to linear schemas, and also formulate a less restrictive definition in which the path through the original program need not be preserved by the slice. The less restrictive definition has the benefit of leading to smaller slices. For both definitions, we compute complexity bounds for the problems of establishing whether a given slice of a linear schema is a dynamic slice and whether a linear schema has a non-trivial dynamic slice, and prove that the latter problem is NP-hard in both cases. We also give an example to prove that minimal dynamic slices (whether or not they preserve the original path) need not be unique. This work was partly supported by the Engineering and Physical Sciences Research Council, UK, under grant EP/E002919/1.
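    A minimal sketch of the quotient notion (hypothetical representation, not the authors' linear-schema formalism): a program as a list of statements, a quotient as the sublist retained by a set of indices, and a naive check that the quotient reproduces the original's final value of a chosen variable on one input.

        # Illustrative sketch only: programs as lists of Python statements.
        def run(stmts, env):
            """Execute statements in order over a variable environment."""
            env = dict(env)
            for s in stmts:
                exec(s, {}, env)
            return env

        def quotient(stmts, keep):
            """Delete every statement whose index is not in `keep`."""
            return [s for i, s in enumerate(stmts) if i in keep]

        prog = ["x = a + 1", "y = 2 * a", "z = x + 3"]
        inp = {"a": 5}
        slice_ = quotient(prog, {0, 2})   # drop the statement defining y
        # Behaviour-preservation check for z, in the spirit of dynamic
        # slicing, for the single path taken on this input:
        print(run(prog, inp)["z"] == run(slice_, inp)["z"])   # True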

    Electronic Health Record Availability and Anxiety Treatment in Office-Based Practices

    Objective: This study compared the probability of receiving anxiety treatment during physician visits to primary care practices with and without an electronic health record (EHR). Methods: The 2007–2010 National Ambulatory Medical Care Survey was used to identify visits for anxiety (N=290). The outcome was receipt of anxiety treatment. The independent variable was the presence of a fully functioning EHR. Logistic regression was used to conduct the analysis. Results: Patients who were seen in practices with a fully functioning EHR had lower odds of being offered antianxiety medication (adjusted odds ratio [AOR]=.37, 95% confidence interval [CI]=.15–.90, p=.028), mental health counseling (AOR=.43, CI=.18–1.04, p=.061), and any anxiety treatment (AOR=.40, CI=.15–1.05, p=.062) compared with patients at practices without a fully functioning EHR. Conclusions: EHRs may have a negative impact on the delivery of care for anxiety during primary care visits. Future studies should monitor the impact of EHRs on delivery and quality of care.
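    A minimal Python sketch of the kind of model described above, using statsmodels; the column names and data are hypothetical, and this is not the authors' NAMCS analysis code (a real analysis would also adjust for covariates).

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per anxiety visit: `treated` = any anxiety treatment
        # offered, `ehr` = fully functioning EHR present (hypothetical data).
        visits = pd.DataFrame({
            "treated": [1, 0, 1, 0, 1, 0, 1, 1, 0, 0],
            "ehr":     [0, 1, 0, 1, 1, 0, 0, 1, 0, 1],
        })
        model = smf.logit("treated ~ ehr", data=visits).fit(disp=0)
        print(model.params["ehr"])           # log-odds for the EHR indicator
        print(np.exp(model.params["ehr"]))   # odds ratio (~0.44 here)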

    Plautus’ Epidicus 1–305: introduction, text, translation, commentary

    Plautus’ Epidicus focuses on the titular cunning slave, who is on stage for all but 15 of these first 305 lines, and his attempts to rescue himself from certain punishment, after his initial scheme almost collapses with news of a change of heart from his young master. He brings about a dizzying number of deceptions to extricate himself, eventually winning his freedom at the expense of virtually every other character. This commentary aims to recuperate what has all too often been seen as one of Plautus’ minor works, by demonstrating how several alleged incongruities have been misinterpreted, how Epidicus controls and shapes the many plots and plans of this breakneck play, and how the compactness of this drama makes for a unique and compelling comedy. The focus is on the performance and dramatic value of Epidicus, bringing in approaches which have developed since George Duckworth’s 1940 commentary, the last in English. The format enables both line-by-line discussion of the text and approaches to sections and scenes as a whole. Linguistic, metrical and textual discussion are brought to the play anew, building on recent research not merely to explain what the quirks of Plautus’ language, metre and text are, but what they do. In all, this thesis aims to fulfil Malcolm Willcock’s desideratum for ‘any general view that this is actually a well conceived, witty and enjoyable play’ – and indeed to provide not merely a general view of its worth, but a detailed and thorough approach to its excellence.

    Campus Vol 1 N 3

    Todd, George. The Valentine. Prose. 2. Fanslow, Ellen. Campus Canines. Prose. 3. Anonymous. Spring Glimpses on Campus. Picture. 4. Anonymous. Casual Corners. Prose. 6. Welch, Vera. Death is Not Sad. Poem. 7. Welch, Vera. I Do Not Love You. Poem. 7. Taylor, Louis. Clouds. Poem. 7. Dancy, Betty Jane. Pray Tell Me M' Lord. Poem. 7. Dancy, Betty Jane. Really Our Friendship is Perfect. Poem. 7. Dancy, Betty Jane. You Say You Love Me For My Faith. Poem. 7. B.Z. The Mountain. Poem. 7. Findeisen, Robert. Do You Want to Be A Doctor? Cartoon. 8. Thomas, John C. Campus Kaleidoscope. Prose. 10. Harman, Betty. Sororities: The Way We See 'Em. Prose. 12. Marshall, James. Dischargee. Poem. 14. Marshall, James. Suggestions to Dali. Poem. 14. Marshall, James. Sonnet Modern, In G Minor. Poem. 14. Marshall, James. The Request. Poem. 15. Dancy, Betty Jane. My Mother Always Bade Me Beware. Poem. 15. Anonymous. On the Cuff. Prose. 16.

    An integrated search-based approach for automatic testing from extended finite state machine (EFSM) models

    The extended finite state machine (EFSM) is a modelling approach that has been used to represent a wide range of systems. When testing from an EFSM, it is normal to use a test criterion such as transition coverage. Such test criteria are often expressed in terms of transition paths (TPs) through an EFSM. Despite the popularity of EFSMs, testing from an EFSM is difficult for two main reasons: path feasibility and path input sequence generation. The path feasibility problem concerns generating paths that are feasible, whereas the path input sequence generation problem is to find an input sequence that can traverse a feasible path. While search-based approaches have been used in test automation, there has been relatively little work that uses them when testing from an EFSM. In this paper, we propose an integrated search-based approach to automate testing from an EFSM. The approach has two phases: the aim of the first phase is to produce a feasible TP (FTP), while the second phase searches for an input sequence to trigger this TP. The first phase uses a Genetic Algorithm whose fitness function is a TP feasibility metric based on dataflow dependence. The second phase uses a Genetic Algorithm whose fitness function is based on a combination of a branch distance function and approach level. Experimental results using five EFSMs found the first phase to be effective in generating FTPs, with a success rate of approximately 96.6%. Furthermore, the proposed input sequence generator could trigger all the generated feasible TPs (success rate = 100%). The results derived from the experiment demonstrate that the proposed approach is effective in automating testing from an EFSM.
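    A minimal Python sketch of the second-phase fitness idea, using the standard search-based-testing formulation (approach level plus normalised branch distance); this illustrates the general technique, not the authors' exact implementation.

        # Standard search-based testing fitness: lower is better, 0 means
        # the target branch was taken. Illustrative, not the paper's code.
        K = 1.0  # small positive constant added when the predicate fails

        def branch_distance_ge(a, b):
            """Distance to satisfying the predicate a >= b (0 if satisfied)."""
            return 0.0 if a >= b else (b - a) + K

        def normalise(d):
            """Map a distance in [0, inf) into [0, 1)."""
            return d / (d + 1.0)

        def fitness(approach_level, a, b):
            """approach_level: control nodes between the path taken and the
            target branch; add the normalised distance at the divergence."""
            return approach_level + normalise(branch_distance_ge(a, b))

        # Example: two control nodes from the target, predicate x >= 10, x = 7
        print(fitness(2, a=7, b=10))   # 2 + 4.0 / 5.0 = 2.8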

    A trajectory-based strict semantics for program slicing

    We define a program semantics that is preserved by dependence-based slicing algorithms. It is a natural extension, to non-terminating programs, of the semantics introduced by Weiser (which only considered terminating ones) and, as such, is an accurate characterisation of the semantic relationship between a program and the slice produced by these algorithms. Unlike other approaches, apart from Weiser’s original one, it is based on strict standard semantics, which models the ‘normal’ execution of programs on a von Neumann machine and thus has the advantage of being intuitive. This is essential since one of the main applications of slicing is program comprehension. Although our semantics handles non-termination, it is defined wholly in terms of finite trajectories, without having to resort to complex, counter-intuitive, non-standard models of computation. As well as being simpler, unlike other approaches to this problem, our semantics is substitutive. Substitutivity is an important property because it greatly enhances the ability to reason about correctness of meaning-preserving program transformations such as slicing.
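    A minimal Python sketch of the trajectory idea (hypothetical representation, not the paper's formal definitions): a finite trajectory as the sequence of (statement index, state) pairs produced by an execution, projected onto the lines and variable of a slicing criterion.

        # Illustrative only: straight-line programs as lists of statements.
        def trajectory(stmts, env, max_steps=1000):
            """Run the statements, recording the state after each one;
            max_steps keeps the recorded trajectory finite."""
            traj, env = [], dict(env)
            for i, s in enumerate(stmts[:max_steps]):
                exec(s, {}, env)
                traj.append((i, dict(env)))
            return traj

        def project(traj, lines, var):
            """Keep only criterion lines, and only the variable of interest."""
            return [(i, state[var]) for i, state in traj if i in lines]

        prog = ["x = 1", "y = x + 1", "x = y * 2"]
        t = trajectory(prog, {})
        print(project(t, {0, 2}, "x"))   # [(0, 1), (2, 4)]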