
    A Comparative Study of Learning Curve Models and Factors in Defense Cost Estimating Based on Program Integration, Assembly, and Checkout

    The purpose of this research was to investigate the flattening effect at the tail end of learning curves by identifying a more accurate learning curve model. The learning curve models accepted by the DOD are Wright’s original learning curve theory and Crawford’s Unit Theory, formulated in 1936 and 1944, respectively. This analysis compares these conventional models with contemporary learning curve models to determine whether the current DOD methodology is outdated. The results are inconclusive as to whether there is a more accurate model. The contemporary models are the DeJong and S-Curve models; both include an incompressibility factor, the percentage of the process that is automated. Including models that incorporate automation was important because technology and machinery play a larger role in production. Wright’s model appears to be the most accurate unless incompressibility is very low. A trend emerged across all models: Wright’s curve was accurate early in production, while the contemporary models were more accurate later in production. Future research should aim to find a heuristic for when each model is most accurate, or conduct comparative studies that include more models.
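
    As a rough illustration of the models named above (not an excerpt from the study), the snippet below writes Wright's, Crawford's, DeJong's, and the S-Curve models as simple cost functions; all parameter values are hypothetical.

```python
import numpy as np

def wright_cum_avg(x, a, b):
    """Wright (1936): cumulative-average cost of the first x units, y = a * x**b."""
    return a * x**b

def crawford_unit(x, a, b):
    """Crawford (1944) unit theory: cost of unit number x, y = a * x**b."""
    return a * x**b

def dejong(x, a, b, M):
    """DeJong: unit cost with incompressibility factor M (automated share that does not improve)."""
    return a * (M + (1 - M) * x**b)

def s_curve(x, a, b, M, B):
    """S-Curve: DeJong incompressibility combined with the Stanford-B prior-experience offset B."""
    return a * (M + (1 - M) * (x + B)**b)

# Hypothetical parameters: first-unit cost a, 80% learning slope, 20% incompressibility.
a, slope, M, B = 100.0, 0.80, 0.20, 4.0
b = np.log(slope) / np.log(2)          # learning exponent
units = np.array([1, 10, 50, 100, 500])
for name, y in [("Wright (cum. avg.)", wright_cum_avg(units, a, b)),
                ("Crawford (unit)", crawford_unit(units, a, b)),
                ("DeJong", dejong(units, a, b, M)),
                ("S-Curve", s_curve(units, a, b, M, B))]:
    print(name, np.round(y, 1))
```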

    The Impact of Learning Curve Model Selection and Criteria for Cost Estimation Accuracy in the DoD

    The first part of this manuscript examines the impact of configuration changes on the learning curve when they are implemented during production. This research studies the impact on the learning curve slope when production is continuous but a configuration change occurs. The analysis found that the learning curve slope after a configuration change differs from the stable learning curve slope before the change, and that the newly configured units were statistically different from previous units. This supports estimating the new configuration with a new learning curve equation. The research also found that the post-configuration slope is always steeper than the stable learning slope. Secondly, this research investigates the flattening effect at the tail end of production. The analysis compares conventional and contemporary learning curve models to determine whether there is a more accurate learning model; the results are inconclusive. Examining models that incorporate automation was important because technology and machinery play a larger role in production. The conventional models appear to be the most accurate, although a trend emerged across all models: the conventional curve was accurate early in production, while the contemporary models were more accurate later in production.
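
    The slope comparison described above can be sketched as a log-log regression fitted separately before and after a configuration change. The unit costs below are purely illustrative placeholders, not data from the manuscript.

```python
import numpy as np

def fit_learning_slope(unit_numbers, unit_costs):
    """Fit log(cost) = log(a) + b*log(x) by least squares; return (a, slope as a percentage)."""
    b, log_a = np.polyfit(np.log(unit_numbers), np.log(unit_costs), 1)
    return np.exp(log_a), 2**b * 100

# Purely illustrative unit costs (not data from the study): 85% slope before the change,
# a steeper 78% slope after it, with unit numbering restarted at the change point.
units = np.arange(1, 41)
pre_change = units <= 20
costs = np.where(pre_change,
                 100 * units**(np.log(0.85) / np.log(2)),
                 115 * (units - 20)**(np.log(0.78) / np.log(2)))

a_pre, slope_pre = fit_learning_slope(units[pre_change], costs[pre_change])
a_post, slope_post = fit_learning_slope(units[~pre_change] - 20, costs[~pre_change])
print(f"pre-change slope: {slope_pre:.1f}%, post-change slope: {slope_post:.1f}%")
```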

    Polarization Requirements for Ensemble Implementations of Quantum Algorithms with a Single Bit Output

    We compare the failure probabilities of ensemble implementations of quantum algorithms that use pseudo-pure initial states, quantified by their polarization, with those of competing classical probabilistic algorithms. Specifically, we consider a class of algorithms that require only one bit to output the solution to a problem. For large ensemble sizes, we present a general scheme for determining a critical polarization beneath which the quantum algorithm fails with greater probability than its classical competitor. We apply this to the Deutsch-Jozsa algorithm and show that the critical polarization is 86.6%.
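
    The abstract does not state the failure-probability expressions, so the sketch below only illustrates the general idea of a critical polarization: given some model of the ensemble algorithm's failure probability as a function of polarization, find where it crosses the classical competitor's failure probability. The failure model and the classical baseline here are placeholders, not the paper's results.

```python
from scipy.optimize import brentq

# Placeholder failure models (assumptions for illustration only; the paper derives its own):
# the ensemble quantum algorithm is taken to fail less often as polarization p -> 1, while
# the classical probabilistic competitor has a fixed failure probability.
def quantum_failure(p, floor=0.02):
    return floor + (1 - floor) * (1 - p)    # decreases with polarization p

CLASSICAL_FAILURE = 0.25                     # hypothetical one-query classical competitor

# Critical polarization: the point where the two failure probabilities coincide.
p_crit = brentq(lambda p: quantum_failure(p) - CLASSICAL_FAILURE, 0.0, 1.0)
print(f"critical polarization under this toy model: {p_crit:.3f}")
```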

    Examining the dynamics of decision making when designing curriculum in partnership with students: How should we proceed?

    [This paper is part of the Focused Collection on Curriculum Development: Theory into Design.] Common models of curricular development in physics education research (PER) have typically involved a hierarchical relationship between researchers and students, where researchers lead the design and testing of curriculum for students. We draw from work in students as partners and related fields in order to codesign curriculum in partnership with students. Such work has the potential to disrupt typical hierarchical relationships and interactions between students and faculty by involving students in the process of making curricular decisions. We invited undergraduate students to participate in a partnership to codesign a set of curricular materials for topics in quantum mechanics that students often struggle with. Four undergraduate students, one PER graduate student, and one PER faculty member met for a series of codesign meetings. We collected videotapes of the meetings, written artifacts, and meeting reflections. This paper presents a fine-grained analysis of one interaction in which researchers attempted to create space for students to contribute to decision making about how the collaboration should proceed. Through analyzing the complex dynamics of how participants negotiated decision-making space, including characterizing the types of decisions that were made, we describe how access to those decisions was opened up or cut off, and how those decisions contested or reaffirmed participants' roles. Working towards partnership is a complex and messy process: attempts to open up space for some forms of decision making closed off access to other forms of decision making. In some ways, the interactions between the participants also reified the traditional student and faculty roles that the partnership had intended to disrupt. Through closely analyzing these dynamics, we aim to self-critically reflect on the challenges and tensions that emerge in codesign partnerships. We discuss our own areas for growth and speak to implications for more responsible partnerships.

    Climatic effects of the Chicxulub impact ejecta

    Examining the short- and long-term effects of the Chicxulub impact is critical for understanding how life developed on Earth. While the aftermath of the initial impact would have produced harmful levels of radiation, sufficient to eradicate a large portion of terrestrial life, this process does not explain the concurrent marine extinction. Following the primary impact, a large quantity of smaller spherules would de-orbit and re-enter the Earth's atmosphere, dispersed nearly uniformly across the planet. This secondary wave of debris would re-enter at high velocities, altering the chemical composition of the atmosphere. Furthermore, the combined surface area of the spherules would be much larger than that of the original asteroid, resulting in considerably more potential reactions. For this reason, a new method was developed for predicting the total amount of toxic species produced by the spherule re-entry phase of the Chicxulub event. Using non-equilibrium properties obtained from direct simulation Monte Carlo (DSMC) methods coupled with spherule trajectory integration, the most likely cause of the observed marine extinction was determined.
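
    As a loose illustration of the spherule trajectory integration mentioned above (not the paper's DSMC-coupled method), the snippet below integrates a single spherule's drag-only re-entry through an exponential atmosphere; every parameter value is an assumption chosen for illustration.

```python
import numpy as np

# Simplified, drag-only re-entry of a single spherule in an exponential atmosphere.
# All values below are rough assumptions, not values from the study.
RHO0, H = 1.225, 7000.0            # sea-level density (kg/m^3) and scale height (m)
CD = 1.0                           # assumed drag coefficient for a sphere at high speed
radius, rho_s = 250e-6, 2500.0     # assumed spherule radius (m) and density (kg/m^3)
mass = rho_s * 4 / 3 * np.pi * radius**3
area = np.pi * radius**2

def atmosphere_density(h):
    return RHO0 * np.exp(-h / H)

# State: altitude (m) and downward speed (m/s); simple forward-Euler integration.
h, v, dt = 120e3, 8000.0, 1e-2
while h > 0 and v > 100.0:
    drag = 0.5 * atmosphere_density(h) * v**2 * CD * area
    v += (9.81 - drag / mass) * dt   # gravity accelerates, drag decelerates
    h -= v * dt
print(f"spherule decelerates below 100 m/s near {h / 1000:.1f} km altitude")
```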

    Effects of increasing soybean hulls in finishing diets with wet or modified distillers grains plus solubles on performance and carcass characteristics of beef steers

    Two experiments evaluated the effects of feeding soybean hulls (SBH) in finishing diets containing distillers grains plus solubles on performance and carcass characteristics. Dietary concentrations of SBH were 0, 12.5, 25, and 37.5% of diet DM. In Exp. 1, 167 crossbred yearling steers (395 ± 22 kg of BW) were fed for 117 d in a randomized block design in which pelleted SBH replaced dry-rolled corn. All diets contained 25% modified distillers grains plus solubles, 15% corn silage, and 5% liquid supplement. As SBH concentration increased, DMI decreased linearly (P = 0.04). Gain and G:F decreased linearly (P < 0.01) in response to increasing concentrations of SBH, which decreased the relative energy value from 91 to 79% of corn. Hot carcass weight decreased linearly (P < 0.01) by 24 kg as SBH increased. In Exp. 2, a randomized block design used 160 backgrounded steer calves (363 ± 16 kg of BW) in a 138-d finishing study with 0, 12.5, 25, or 37.5% SBH in meal form. Basal ingredients consisted of a 1:1 ratio of high-moisture corn and dry-rolled corn, 40% wet distillers grains plus solubles, 8% sorghum silage, and 4% dry meal supplement. There was a tendency (P = 0.12) for a quadratic increase in ADG and G:F as dietary SBH increased, with numerically greatest ADG and G:F at 12.5% SBH. Feeding 12.5 to 25% SBH with 40% wet distillers grains plus solubles (Exp. 2) had little effect on performance but decreased ADG and G:F in diets with 25% modified distillers grains plus solubles (Exp. 1).
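
    As a sketch of how the linear and quadratic dose responses reported above can be tested, the snippet below regresses ADG on centered linear and quadratic SBH terms; the response values are placeholders, not data from either experiment.

```python
import numpy as np
import statsmodels.api as sm

# SBH inclusion levels (% of diet DM) and purely illustrative pen-mean ADG values (kg/d);
# these numbers are placeholders, not results from Exp. 1 or Exp. 2.
sbh = np.repeat([0.0, 12.5, 25.0, 37.5], 4)
adg = np.array([1.80, 1.78, 1.82, 1.79,
                1.76, 1.77, 1.74, 1.75,
                1.70, 1.72, 1.69, 1.71,
                1.62, 1.64, 1.61, 1.63])

# Centered linear and quadratic terms to reduce collinearity between them.
x = sbh - sbh.mean()
X = sm.add_constant(np.column_stack([x, x**2]))
fit = sm.OLS(adg, X).fit()
print(fit.summary(xname=["intercept", "SBH_linear", "SBH_quadratic"]))
```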

    The uncertainties on the EFT coupling limits for direct dark matter detection experiments stemming from uncertainties of target properties

    Direct detection experiments are still one of the most promising ways to unravel the nature of dark matter. To fully understand how well these experiments constrain dark matter interactions with Standard Model particles, all the uncertainties affecting the calculations must be known. This is especially critical now because direct detection experiments have recently moved from placing limits only on the two elementary spin-independent and spin-dependent operators to the complete set of possible operators coupling dark matter and nuclei in non-relativistic effective theory. In this work, we estimate the effect of nuclear configuration-interaction uncertainties on the exclusion bounds for one of the existing xenon-based experiments for all fifteen operators. We find that for operator number 13 the ±1σ uncertainty on the coupling between dark matter and the nucleon can reach more than 50% for dark matter masses between 10 and 1000 GeV. In addition, we discuss how quantum computers could help to reduce this uncertainty.
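
    As a rough illustration of how a nuclear-response uncertainty can propagate into a coupling limit (the scaling and every number below are assumptions, not results from the paper): if the expected rate scales as the squared coupling times a nuclear response factor, the excluded coupling scales as the inverse square root of that factor, so shifting the factor within its ±1σ band shifts the limit.

```python
import numpy as np

# Toy propagation of a nuclear-response uncertainty into a coupling limit.
# Assumed scaling: expected count R = c**2 * S * exposure, so the upper limit on c
# goes as sqrt(R_limit / (S * exposure)). S and its uncertainty are placeholders.
R_limit = 2.3                         # Poisson 90% CL upper limit with zero observed events
exposure = 1.0e6                      # arbitrary units absorbing exposure and efficiency
S_central, S_sigma = 4.0e-3, 1.5e-3   # hypothetical nuclear response factor and 1-sigma error

def coupling_limit(S):
    return np.sqrt(R_limit / (S * exposure))

c_central = coupling_limit(S_central)
c_low, c_high = coupling_limit(S_central + S_sigma), coupling_limit(S_central - S_sigma)
print(f"limit: {c_central:.3e}  "
      f"(+{(c_high - c_central) / c_central:.0%} / -{(c_central - c_low) / c_central:.0%})")
```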