
    TWO STUDIES EVALUATING INPUT USE IN SOYBEAN AND COTTON PRODUCTION

    Farmers are price takers for both inputs and outputs. Therefore, when input prices rise, as they have for many inputs used in agricultural production, optimal production practices may change. Two separate studies of the impacts of agricultural technology on input use in crop production were undertaken in this thesis. The first study evaluated economically optimal plant population considering seeding rate, maturity group, row spacing, and input-output prices in soybean production in the rolling uplands region of the upper Midsouthern United States. Data from field experiments at the University of Tennessee Research and Education Center at Milan, Tennessee during 2005, 2006, and 2007 were used to model yield response to plant population density (PPD). Given that farmers must make their planting decisions based on expected weather, the original models were weighted by year based on the Ångström weather index. Evaluation of the weighted average response functions found that maturity group IV soybean cultivars planted in 38 cm rows at seeding rates necessary to achieve a final PPD of 115,000 plants ha⁻¹ would maximize farmers' returns to soybean production. The second study evaluated factors influencing cotton farmers' decisions to adopt information technologies for variable-rate input application and their subsequent perceptions of directional changes in the overall use of fertilizer in cotton. Data from the Cotton Incorporated 2009 Southern Precision Farming Survey were evaluated using probit models with sample selection, given the sequential nature of the adoption decision and farmer perceptions of directional changes in fertilizer use. Results suggest that cotton farmers in the sample who rented more of their cotton area and used picker harvest technology were more likely to perceive that overall fertilizer use declined with the use of the selected information technologies and variable-rate technology (VRT). This and other key findings of this research have implications for a wide range of audiences, from University Extension to policy makers, given the economic and environmental impacts.
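
    A minimal sketch (in Python) of the kind of calculation the first study describes: choosing the plant population density where the marginal value of additional yield equals the marginal seed cost. The quadratic response coefficients, prices, and emergence rate below are illustrative assumptions, not estimates from the thesis.

    def optimal_ppd(b1, b2, soybean_price, seed_cost, emergence_rate):
        """Economically optimal PPD (plants/ha) for an assumed quadratic yield response.

        Yield is assumed to follow y(ppd) = b0 + b1*ppd + b2*ppd**2 with b2 < 0, and the
        seeding rate needed to reach a final stand of ppd plants/ha is ppd / emergence_rate.
        Setting d/d(ppd) [soybean_price*y(ppd) - seed_cost*ppd/emergence_rate] = 0 gives:
        """
        return (seed_cost / (emergence_rate * soybean_price) - b1) / (2 * b2)

    if __name__ == "__main__":
        ppd_star = optimal_ppd(
            b1=6.0e-3,            # kg of yield gained per extra plant/ha (assumed)
            b2=-2.0e-8,           # curvature of the yield response (assumed)
            soybean_price=0.40,   # $/kg of soybean (assumed)
            seed_cost=0.0005,     # $/seed (assumed)
            emergence_rate=0.85,  # fraction of planted seeds reaching final stand (assumed)
        )
        print(f"economically optimal final stand: {ppd_star:,.0f} plants/ha")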

    Essays on the economic value of genetic testing in beef cattle production

    Recent advancements in genomic technology have made genetic marker panels for a variety of economically relevant beef cattle traits commercially available. Although independent validations have found that many of these markers are correlated with the traits they are designed to predict, economists have considered few of these markers and their value to producers. The objective of this dissertation is to contribute to the understanding of the economic value of these markers. The first essay estimates the value of using information from genetic marker panels characterizing seven economically relevant traits for management and selection of feedlot cattle. The values of using genetic information to sort cattle by optimal days-on-feed are less than $1/head for each of the traits evaluated, and the values associated with using genetic information to select cattle for placement are as much as $38/head. Therefore, it would not be profitable at the current cost of testing (about $40/head) to sort cattle by optimal days-on-feed, but it could be profitable to use the genetic tests for breeding cattle selection. The second essay examines the potential to increase the value of genetic information by improving fed cattle marketing decisions. The value of using genetic information to selectively market cattle ranges from $1-13/head depending on how a producer currently markets their cattle and the grid structure. Although these values are generally higher than those reported in previous research, they are still not enough to offset the current cost of genetic testing. The third essay evaluates the potential for reducing the overall cost of genetic testing by assuming that, instead of testing each individual animal, a random sample of animals could be tested to measure the genetic potential of the group. Using a fully Bayesian approach, we determine that an optimal sample size of 10 out of 100 animals generated returns from sampling of nearly $10/head. Although sensitivity analysis suggests that these values will vary depending on the particular pen of cattle, results indicate that random sampling has the potential to provide a context in which the benefits of genetic testing outweigh the costs.
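
    A toy sketch (in Python) of the sampling idea in the third essay, under stated assumptions that are not the dissertation's actual model: a conjugate normal-normal Bayesian update of a pen's mean genetic merit from a random sample of tested animals, together with the per-head cost of testing only that sample. The prior parameters, within-pen spread, and the roughly $40 per-test cost are illustrative placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed prior on the pen's mean genetic-merit score and within-pen spread.
    mu0, tau0 = 0.0, 1.0            # prior mean and prior std dev of the pen mean (assumed)
    sigma = 2.0                     # assumed std dev of individual scores within a pen
    pen_size, n_sampled = 100, 10   # pen of 100 animals, test a random sample of 10
    cost_per_test = 40.0            # roughly the per-animal test cost quoted above

    # Simulate a pen and draw the tested sample.
    true_mean = rng.normal(mu0, tau0)
    scores = rng.normal(true_mean, sigma, size=pen_size)
    sample = rng.choice(scores, size=n_sampled, replace=False)

    # Conjugate normal-normal posterior for the pen mean.
    post_var = 1.0 / (1.0 / tau0**2 + n_sampled / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + sample.sum() / sigma**2)

    # Testing cost spread over the whole pen.
    cost_per_head = cost_per_test * n_sampled / pen_size

    print(f"posterior pen mean: {post_mean:.2f} +/- {post_var**0.5:.2f} "
          f"(true mean {true_mean:.2f}); testing cost ~ ${cost_per_head:.2f}/head")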

    On the consistency of neutron-star radius measurements from thermonuclear bursts

    The radius of neutron stars can in principle be measured via the normalisation of a blackbody fitted to the X-ray spectrum during thermonuclear (type-I) X-ray bursts, although few previous studies have addressed the reliability of such measurements. Here we examine the apparent radius in a homogeneous sample of long, mixed H/He bursts from the low-mass X-ray binaries GS 1826-24 and KS 1731-260. The measured blackbody normalisation (proportional to the emitting area) in these bursts is constant over a period of up to 60 s in the burst tail, even though the flux decreased by 60-75% and the blackbody temperature by 30-40%. The typical rms variation in the mean normalisation from burst to burst was 3-5%, although a variation of 17% was found between bursts observed from GS 1826-24 in two epochs. A comparison of the time-resolved spectroscopic measurements during bursts from the two epochs shows that the normalisation evolves consistently through the burst rise and peak, but subsequently increases further in the earlier-epoch bursts. The elevated normalisation values may arise from a change in the anisotropy of the burst emission, or alternatively from variations in the spectral correction factor, f_c, of order 10%. Since burst samples observed from systems other than GS 1826-24 are more heterogeneous, we expect that systematic uncertainties of at least 10% are likely to apply generally to measurements of neutron-star radii, unless the effects described here can be corrected for. Comment: 9 pages, 6 figures; accepted by Ap
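
    As a quick orientation to why the blackbody normalisation traces the emitting area, and why the spectral correction factor f_c matters, a sketch of the standard relations is given below, assuming a spherical emitter at distance d and ignoring anisotropy and gravitational-redshift corrections:

    \[
      K \equiv \left(\frac{R_{\mathrm{bb}}}{d}\right)^{2}, \qquad
      T_{\mathrm{bb}} = f_c\, T_{\mathrm{eff}}, \qquad
      R = f_c^{2}\, d\, \sqrt{K},
    \]

    so at fixed R and d the fitted normalisation scales as K \propto f_c^{-4}, while the inferred radius scales only as R \propto \sqrt{K}; this is why modest changes in f_c or in the emission anisotropy can noticeably shift the measured normalisation.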

    Cell-specific discrimination of desmosterol and desmosterol mimetics confers selective regulation of LXR and SREBP in macrophages.

    Activation of liver X receptors (LXRs) with synthetic agonists promotes reverse cholesterol transport and protects against atherosclerosis in mouse models. Most synthetic LXR agonists also cause marked hypertriglyceridemia by inducing the expression of sterol regulatory element-binding protein (SREBP)-1c and downstream genes that drive fatty acid biosynthesis. Recent studies demonstrated that desmosterol, an intermediate in the cholesterol biosynthetic pathway that suppresses SREBP processing by binding to SCAP, also binds and activates LXRs and is the most abundant LXR ligand in macrophage foam cells. Here we explore the potential of increasing endogenous desmosterol production or mimicking its activity as a means of inducing LXR activity while simultaneously suppressing SREBP1c-induced hypertriglyceridemia. Unexpectedly, while desmosterol strongly activated LXR target genes and suppressed SREBP pathways in mouse and human macrophages, it had almost no activity in mouse or human hepatocytes in vitro. We further demonstrate that sterol-based selective modulators of LXRs have biochemical and transcriptional properties predicted of desmosterol mimetics and selectively regulate LXR function in macrophages in vitro and in vivo. These studies thereby reveal cell-specific discrimination of endogenous and synthetic regulators of LXRs and SREBPs, providing a molecular basis for dissociation of LXR functions in macrophages from those in the liver that lead to hypertriglyceridemia.

    Mitochondrial Release of Caspase-2 and -9 during the Apoptotic Process

    The barrier function of mitochondrial membranes is perturbed early during the apoptotic process. Here we show that the mitochondria contain a caspase-like enzymatic activity cleaving the caspase substrate Z-VAD.afc, in addition to three biological activities previously suggested to participate in the apoptotic process: (a) cytochrome c; (b) an apoptosis-inducing factor (AIF) which causes isolated nuclei to undergo apoptosis in vitro; and (c) a DNase activity. All of these factors, which are biochemically distinct, are released upon opening of the permeability transition (PT) pore in a coordinate, Bcl-2–inhibitable fashion. Caspase inhibitors fully neutralize the Z-VAD.afc–cleaving activity, have a limited effect on the AIF activity, and have no effect at all on the DNase activities. Purification of proteins reacting with the biotinylated caspase substrate Z-VAD, immunodetection, and immunodepletion experiments reveal the presence of procaspase-2 and -9 in mitochondria. Upon induction of PT pore opening, these procaspases are released from purified mitochondria and become activated. Similarly, upon induction of apoptosis, both procaspases redistribute from the mitochondrion to the cytosol and are processed to generate enzymatically active caspases. This redistribution is inhibited by Bcl-2. Recombinant caspase-2 and -9 suffice to provoke full-blown apoptosis upon microinjection into cells. Altogether, these data suggest that caspase-2 and -9 zymogens are essentially localized in mitochondria and that the disruption of the outer mitochondrial membrane occurring early during apoptosis may be critical for their subcellular redistribution and activation.

    AI is a viable alternative to high throughput screening: a 318-target study

    High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, high-quality X-ray crystal structures, or manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.

    Multidifferential study of identified charged hadron distributions in $Z$-tagged jets in proton-proton collisions at $\sqrt{s}=13$ TeV

    Jet fragmentation functions are measured for the first time in proton-proton collisions for charged pions, kaons, and protons within jets recoiling against a $Z$ boson. The charged-hadron distributions are studied longitudinally and transversely to the jet direction for jets with transverse momentum $20 < p_{\textrm{T}} < 100$ GeV and in the pseudorapidity range $2.5 < \eta < 4$. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb$^{-1}$. Triple differential distributions as a function of the hadron longitudinal momentum fraction, hadron transverse momentum, and jet transverse momentum are also measured for the first time. This helps constrain transverse-momentum-dependent fragmentation functions. Differences in the shapes and magnitudes of the measured distributions for the different hadron species provide insights into the hadronization process for jets predominantly initiated by light quarks. Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages)
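
    For orientation, the conventional definitions of the hadron longitudinal momentum fraction z and the hadron momentum transverse to the jet axis, j_T, are assumed here (the abstract does not spell them out):

    \[
      z \equiv \frac{\vec{p}_{h} \cdot \vec{p}_{\mathrm{jet}}}{\lvert \vec{p}_{\mathrm{jet}} \rvert^{2}}, \qquad
      j_{\mathrm{T}} \equiv \frac{\lvert \vec{p}_{h} \times \vec{p}_{\mathrm{jet}} \rvert}{\lvert \vec{p}_{\mathrm{jet}} \rvert},
    \]

    where \vec{p}_{h} is the hadron momentum and \vec{p}_{\mathrm{jet}} the jet momentum.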

    Study of the $B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-}$ decay

    The decay $B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-}$ is studied in proton-proton collisions at a center-of-mass energy of $\sqrt{s}=13$ TeV using data corresponding to an integrated luminosity of 5 $\mathrm{fb}^{-1}$ collected by the LHCb experiment. In the $\Lambda_{c}^{+} K^{-}$ system, the $\Xi_{c}(2930)^{0}$ state observed at the BaBar and Belle experiments is resolved into two narrower states, $\Xi_{c}(2923)^{0}$ and $\Xi_{c}(2939)^{0}$, whose masses and widths are measured to be $m(\Xi_{c}(2923)^{0}) = 2924.5 \pm 0.4 \pm 1.1 \,\mathrm{MeV}$, $m(\Xi_{c}(2939)^{0}) = 2938.5 \pm 0.9 \pm 2.3 \,\mathrm{MeV}$, $\Gamma(\Xi_{c}(2923)^{0}) = 4.8 \pm 0.9 \pm 1.5 \,\mathrm{MeV}$, $\Gamma(\Xi_{c}(2939)^{0}) = 11.0 \pm 1.9 \pm 7.5 \,\mathrm{MeV}$, where the first uncertainties are statistical and the second systematic. The results are consistent with a previous LHCb measurement using a prompt $\Lambda_{c}^{+} K^{-}$ sample. Evidence of a new $\Xi_{c}(2880)^{0}$ state is found with a local significance of $3.8\,\sigma$, whose mass and width are measured to be $2881.8 \pm 3.1 \pm 8.5\,\mathrm{MeV}$ and $12.4 \pm 5.3 \pm 5.8\,\mathrm{MeV}$, respectively. In addition, evidence of a new decay mode $\Xi_{c}(2790)^{0} \to \Lambda_{c}^{+} K^{-}$ is found with a significance of $3.7\,\sigma$. The relative branching fraction of $B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-}$ with respect to the $B^{-} \to D^{+} D^{-} K^{-}$ decay is measured to be $2.36 \pm 0.11 \pm 0.22 \pm 0.25$, where the first uncertainty is statistical, the second systematic and the third originates from the branching fractions of charm hadron decays. Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages)
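
    Where a single total uncertainty is wanted for a quoted value, a simple quadrature combination of the statistical and systematic parts (assuming they are uncorrelated) gives, for example,

    \[
      \sigma_{\mathrm{tot}}\bigl(m(\Xi_{c}(2923)^{0})\bigr) = \sqrt{0.4^{2} + 1.1^{2}}\,\mathrm{MeV} \approx 1.2\,\mathrm{MeV}.
    \]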

    Measurement of the ratios of branching fractions $\mathcal{R}(D^{*})$ and $\mathcal{R}(D^{0})$

    The ratios of branching fractions $\mathcal{R}(D^{*})\equiv\mathcal{B}(\bar{B}\to D^{*}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(\bar{B}\to D^{*}\mu^{-}\bar{\nu}_{\mu})$ and $\mathcal{R}(D^{0})\equiv\mathcal{B}(B^{-}\to D^{0}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(B^{-}\to D^{0}\mu^{-}\bar{\nu}_{\mu})$ are measured, assuming isospin symmetry, using a sample of proton-proton collision data corresponding to 3.0 fb$^{-1}$ of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the decay mode $\tau^{-}\to\mu^{-}\nu_{\tau}\bar{\nu}_{\mu}$. The measured values are $\mathcal{R}(D^{*})=0.281\pm0.018\pm0.024$ and $\mathcal{R}(D^{0})=0.441\pm0.060\pm0.066$, where the first uncertainty is statistical and the second is systematic. The correlation between these measurements is $\rho=-0.43$. Results are consistent with the current average of these quantities and are at a combined 1.9 standard deviations from the predictions based on lepton flavor universality in the Standard Model. Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages)
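
    A small sketch (in Python) of how two correlated ratio measurements can be turned into a combined deviation from predictions, in the spirit of the "combined 1.9 standard deviations" statement above. The Standard-Model central values in sm are illustrative placeholders (not taken from this paper), prediction uncertainties are neglected, and the conversion from a two-degree-of-freedom chi-square to a Gaussian-equivalent significance follows one common convention; none of this reproduces the LHCb combination.

    import numpy as np
    from scipy import stats

    # Measured central values and total (stat and syst combined in quadrature) uncertainties.
    meas = np.array([0.281, 0.441])                    # R(D*), R(D0)
    sig = np.array([np.hypot(0.018, 0.024),
                    np.hypot(0.060, 0.066)])
    rho = -0.43                                        # measured correlation

    # Assumed SM central values (illustrative placeholders only).
    sm = np.array([0.254, 0.298])

    # Covariance matrix of the two measurements.
    cov = np.array([[sig[0]**2, rho * sig[0] * sig[1]],
                    [rho * sig[0] * sig[1], sig[1]**2]])

    d = meas - sm
    chi2_val = d @ np.linalg.solve(cov, d)             # chi-square, 2 degrees of freedom
    p_value = stats.chi2.sf(chi2_val, df=2)
    z_equiv = stats.norm.isf(p_value)                  # one-sided Gaussian equivalent

    print(f"chi2 = {chi2_val:.2f}, p = {p_value:.3f}, ~{z_equiv:.1f} sigma")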