
    Development of Experimental and Finite Element Models to Show Size Effects in the Forming of Thin Sheet Metals

    An experimental method was developed to demonstrate size effects in the forming of thin sheet metals, and a finite element model was developed to predict the effects observed in the experiment. A universal testing machine (UTM) was used to form aluminum and copper sheets of varying thicknesses (less than 1 mm) into a hemispherical dome. The punch and die were fabricated from a UV-curing resin using stereolithography, an additive manufacturing technology. The experimental and numerical results were in agreement. Geometric size effects were significant for both materials and increased as the sheet thickness decreased. The demonstration presents an inexpensive method of testing small-scale size effects in forming processes that can easily be adapted to produce different shapes and clearances.

    Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification

    Quantum Key Distribution (QKD) is a revolutionary security technology that exploits the laws of quantum mechanics to achieve information-theoretically secure key exchange. QKD is suitable for applications that require high security, such as those found in certain commercial, governmental, and military domains. Because QKD is a new technology, there is a need for a robust quantum communication modeling and simulation framework to support the analysis of QKD systems. This dissertation presents conceptual modeling of QKD system components using the Discrete Event System Specification (DEVS) formalism to ensure that the component models are provably composable and exhibit temporal behavior independent of the simulation environment. These attributes enable users to assemble and simulate any collection of compatible components to represent QKD system architectures. The developed models demonstrate closure under coupling and exhibit behavior suitable for the intended analytic purpose, thus improving the validity of the simulation. This research contributes to the validity of QKD simulation, increasing developer and user confidence in the correctness of the models and providing a composable, canonical basis for performance analysis efforts. The research supports the efficient modeling, simulation, and analysis of QKD systems when evaluating existing systems or developing next-generation QKD cryptographic systems.
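    To make the DEVS formalism mentioned above concrete, the sketch below shows a minimal atomic model in Python with the four DEVS functions (time advance, output, internal transition, external transition). The class name, the "pulsed photon source" behavior, and the message format are illustrative assumptions for this listing, not the dissertation's actual QKD component models.

```python
class PulsedPhotonSource:
    """Minimal DEVS atomic model: emits one 'pulse' output every `period` time units.
    Illustrative sketch only; not the dissertation's actual QKD component models."""

    def __init__(self, period=1.0):
        self.period = period
        self.phase = "emitting"        # discrete state
        self.sigma = period            # time remaining until the next internal event

    def time_advance(self):
        # ta(s): how long the model remains in its current state
        return self.sigma

    def output(self):
        # lambda(s): output produced immediately before an internal transition
        return {"port": "out", "value": "pulse"}

    def internal_transition(self):
        # delta_int(s): state change after the output has been produced
        self.sigma = self.period       # schedule the next pulse

    def external_transition(self, elapsed, message):
        # delta_ext(s, e, x): react to an input event, e.g. a "halt" command
        if message.get("value") == "halt":
            self.phase = "idle"
            self.sigma = float("inf")  # passivate: no further internal events
        else:
            self.sigma -= elapsed      # keep the remainder of the current schedule
```

    Because each component's timing is carried entirely by these functions, a DEVS coordinator can couple many such models and the composite behaves identically under any conforming simulator, which is the composability and closure-under-coupling property the abstract emphasizes.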

    The pressure moments for two rigid spheres in low-Reynolds-number flow

    The pressure moment of a rigid particle is defined to be the trace of the first moment of the surface stress acting on the particle. A Faxén law for the pressure moment of one spherical particle in a general low-Reynolds-number flow is found in terms of the ambient pressure, and the pressure moments of two rigid spheres immersed in a linear ambient flow are calculated using multipole expansions and lubrication theory. The results are expressed in terms of resistance functions, following the practice established in other interaction studies. The osmotic pressure in a dilute colloidal suspension at small Péclet number is then calculated, to second order in particle volume fraction, using these resistance functions. In a second application of the pressure moment, the suspension or particle-phase pressure, used in two-phase flow modeling, is calculated using Stokesian dynamics, and results for the suspension pressure for a sheared cubic lattice are reported.
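    To make the opening definition explicit, the pressure moment S of a particle with surface S_p can be written as the trace of the first moment of the surface stress. This is only a transcription of the abstract's definition; any extra normalization factor (some authors include -1/3 so that S reduces to an effective particle pressure) should be checked against the paper's own notation.

\[
S \;=\; \operatorname{tr}\!\left(\oint_{S_p} \mathbf{x}\,(\boldsymbol{\sigma}\cdot\mathbf{n})\,\mathrm{d}S\right)
\;=\; \oint_{S_p} x_i\,\sigma_{ij}\,n_j\,\mathrm{d}S,
\]

    where \(\boldsymbol{\sigma}\) is the fluid stress tensor, \(\mathbf{n}\) the outward unit normal on the particle surface \(S_p\), and \(\mathbf{x}\) is measured from the particle centre. The Faxén law referred to in the abstract expresses this scalar for a single sphere in terms of the ambient pressure.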

    Communication and Monetary Policy

    One role of monetary policy is to coordinate expectations in the economy, and greater transparency of monetary policy may lead to greater coordination. But if transparent…
    Keywords: Communication, Monetary policy, Transparency, Common knowledge

    Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction

    With the unprecedented photometric precision of the Kepler spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends, and other instrumental signatures, obscure astrophysical signals. The Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem for standard cotrending methods such as SYSREM and TFA. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian maximum a posteriori (MAP) approach in which a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflict simple least-squares (LS) fitting. A numerical and empirical approach is taken in which the Bayesian prior PDFs are generated from fits to the light curve distributions themselves.
    Comment: 43 pages, 21 figures. Submitted for publication in PASP. See also the companion paper "Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves" by Martin C. Stumpe et al.
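    A bare-bones sketch of the fitting step described above is given below: cotrending basis vectors derived from quiet, highly correlated stars, a Gaussian prior on the fit coefficients centred on robust-fit values, and a MAP solution that balances prior and likelihood. The SVD-based basis, the diagonal prior, and all variable names are simplifying assumptions for illustration, not the actual Kepler PDC implementation.

```python
import numpy as np

def map_cotrend(target_flux, quiet_star_fluxes, prior_mean, prior_var, noise_var):
    """Sketch of MAP removal of common-mode systematics from one light curve.

    target_flux       : (n_cadences,) light curve to correct
    quiet_star_fluxes : (n_quiet, n_cadences) light curves of quiet, correlated stars
    prior_mean        : (n_basis,) coefficients taken from robust fits to similar stars
    prior_var         : (n_basis,) prior variances on those coefficients
    noise_var         : scalar per-cadence noise variance of target_flux
    """
    n_basis = prior_mean.size

    # Cotrending basis vectors: leading right singular vectors of the quiet-star ensemble.
    centered = quiet_star_fluxes - quiet_star_fluxes.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    B = vt[:n_basis]                              # (n_basis, n_cadences)

    # Gaussian likelihood + Gaussian prior => the MAP estimate is a regularized solve:
    #   (B B^T / noise_var + diag(1/prior_var)) c = B y / noise_var + prior_mean / prior_var
    A = B @ B.T / noise_var + np.diag(1.0 / prior_var)
    b = B @ target_flux / noise_var + prior_mean / prior_var
    coeffs = np.linalg.solve(A, b)

    systematics = coeffs @ B                      # fitted common-mode trend
    return target_flux - systematics, coeffs
```

    Relative to an unconstrained least-squares fit, the prior term keeps the coefficients near values seen in similar stars, which is how this kind of MAP fit limits overfitting of intrinsic stellar variability.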

    Effects of the functional Gpc-B1 allele on soft durum wheat grain, milling, flour, dough, and breadmaking quality

    Background and objectives: Utilization of durum wheat (Triticum turgidum subsp. durum) can be enhanced by increasing grain and flour protein content. One strategy to increase protein content is to introduce the functional Gpc-B1 allele from wild emmer (Triticum turgidum subsp. dicoccoides). Findings: Introduction of the functional Gpc-B1 allele into soft kernel durum increased grain and flour protein by 17 g/kg, increased dough strength as evidenced by SDS sedimentation volume and Mixograph dough mixing parameters, and increased straight-dough pan bread volume. When grown under arid conditions, high-protein (151 g/kg) samples had decreased loaf volumes indicative of inelastic doughs. The functional Gpc-B1 allele was associated with decreased test weight, a small increase in SKCS hardness, and a modest increase in flour ash; otherwise, milling performance was not affected. Conclusions: Introgression of the functional Gpc-B1 allele from dicoccoides into durum wheat can improve dough strength and breadmaking quality. The effect tends to be consistent across environments, but overall, Gpc-B1 made only a modest improvement in durum wheat breadmaking quality. Further studies with concomitant selection at other loci are needed to determine the effects of Gpc-B1 in elite germplasm. Significance and novelty: Durum wheat production and consumption will increase as bread quality improves. The functional Gpc-B1 allele contributed to improved breadmaking quality. The present report is the first to examine the effect of this allele on breadmaking in durum wheat.

    Relationship between urinary energy and urinary nitrogen or carbon excretion in lactating Jersey cows

    Measurement of urinary energy (UE) excretion is essential to determine metabolizable energy (ME) supply. Our objectives were to evaluate the accuracy of using urinary N (UN) or C (UC) to estimate UE and ultimately to improve the accuracy of estimating ME. Individual animal data (n = 433) were used from 11 studies with Jersey cows at the University of Nebraska–Lincoln, where samples were analyzed after drying (n = 299) or on an as-is basis (n = 134). Dried samples resulted in greater estimated error variance than as-is samples, so only as-is samples were used for the final models. The as-is data set included the following ranges (min to max): dry matter intake (11.6–24.6 kg/d), N intake (282–642 g/d), UE excretion (1,390–3,160 kcal/d), UN excretion (85–220 g/d, or 20.6–59.5% of N intake), and UC excretion (130–273 g/d). The National Research Council dairy model did not accurately predict ME as dietary crude protein (CP; range of 14.9–19.1%) varied, as indicated by a bias in the residuals between observed and predicted ME with increasing dietary CP. The relationship between UE (kcal/d) and UN (g/d) excretion was linear with an intercept of 880 ± 140 kcal. Because an intercept of 880 kcal/d is biologically unlikely, the intercept was forced through 0, resulting in linear and quadratic relationships. The regressions of UE (kcal/d) on UN (g/d) excretion were UE = 14.6 ± 0.32 × UN and UE = 20.9 ± 1.0 × UN − 0.0357 ± 0.0056 × UN². In the quadratic regression, UE increased with UN excretion, but at a diminishing rate. As UC increased, UE increased both linearly and quadratically. However, error variance was greater for the regression with UC than with UN as the explanatory variable (8.42 vs. 7.42% of mean UE). Using the quadratic regression between UN and UE excretion to predict ME resulted in a slope bias in ME predictions as dietary CP increased. The linear regression between UE and UN excretion removed the slope bias between predicted ME and CP, and thus may be more appropriate for predicting UE across a wider range of dietary CP. Using equations to predict UE from UN should improve our ability to predict diet ME in Jersey cows compared with calculating ME directly from digestible energy.
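    As a quick numerical illustration of the two regressions quoted above, the snippet below evaluates both at a mid-range urinary N excretion; the coefficients come from the abstract, while the example value UN = 150 g/d is an arbitrary point inside the reported 85–220 g/d range.

```python
# Reported UE-UN regressions (coefficients from the abstract; UE in kcal/d, UN in g/d).
def ue_linear(un):
    return 14.6 * un

def ue_quadratic(un):
    return 20.9 * un - 0.0357 * un ** 2

un = 150.0                      # g/d, arbitrary illustrative value within 85-220 g/d
print(ue_linear(un))            # 2190.0 kcal/d
print(ue_quadratic(un))         # 2331.75 kcal/d
```

    Both predictions fall inside the observed UE range (1,390–3,160 kcal/d); the quadratic form predicts more energy per gram of N at this point but flattens as UN increases, which is the diminishing-rate behavior described above.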

    Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves

    Kepler provides light curves of 156,000 stars with unprecedented precision. However, the raw data as they come from the spacecraft contain significant systematic and stochastic errors. These errors, which include discontinuities, systematic trends, and outliers, obscure the astrophysical signals in the light curves. Correcting these errors is the task of the Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline. The original version of PDC did not meet the extremely high performance requirements for the detection of minuscule planet transits or for highly accurate analysis of stellar activity and rotation. One particular deficiency was that astrophysical features were often removed as a side effect of removing errors. In this paper we introduce the completely new and significantly improved version of PDC implemented in Kepler SOC 8.0. This new PDC version, which utilizes a Bayesian approach to the removal of systematics, reliably corrects errors in the light curves while preserving planet transits and other astrophysically interesting signals. We describe the architecture and algorithms of this new PDC module, show typical errors encountered in Kepler data, and illustrate the corrections using real light curve examples.
    Comment: Submitted to PASP. See also the companion paper "Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction" by Jeff C. Smith et al.
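    The error types named above (step discontinuities and outliers) are easy to visualize with a toy light curve, shown below. The flagging used here is a generic moving-median test with arbitrary thresholds, included only to illustrate what such errors look like; it is not the PDC module's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: flat flux plus noise, with an injected step discontinuity and one outlier.
flux = 1.0 + 5e-4 * rng.standard_normal(1000)
flux[600:] += 0.003            # step discontinuity (e.g. a sudden sensitivity offset)
flux[250] += 0.01              # single-cadence outlier

# Generic robust flagging (illustrative only; not the Kepler PDC algorithm).
window = 25
trend = np.array([np.median(flux[max(0, i - window):i + window + 1]) for i in range(flux.size)])
resid = flux - trend
mad = np.median(np.abs(resid - np.median(resid)))
flags = np.abs(resid) > 5 * 1.4826 * mad   # roughly a 5-sigma cut under a normal assumption

print("flagged cadences:", np.flatnonzero(flags))
```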