
    Role of dietary concentrates on the venison quality of Sika deer (Cervus nippon)

    The aim of this study was to evaluate the effect of feed concentrate level on the carcass characteristics and meat quality of sika deer (Cervus nippon). A total of 16 sika deer (mean bodyweight 30 kg) were randomly assigned to one of two treatments: Treatment 1 (T1), fed concentrate at 1.5% of total bodyweight; and Treatment 2 (T2), fed concentrate ad libitum. Both groups had free access to roughage (hay) and water. Each group was fed concentrate twice daily (at 09h00 and 16h00) for eight months. The fat concentration of venison from deer in T2 was significantly greater than that of T1. However, cooking fat loss, shear force, and pH did not differ significantly between the two groups. The water-binding capacity of venison from deer in T1 was significantly greater (by 2.83%) than that of T2. The colour parameters a* (redness) and b* (yellowness) were significantly greater for venison from T2 than from T1. Likewise, the cholesterol concentration of venison from deer in T2 was significantly greater than for deer in T1; under both treatments, however, it was lower than that of meat from other livestock species. In conclusion, the results of this study provide a baseline for estimating fodder cost standards for producing sika deer venison, and would aid commercial deer farmers in developing optimal management strategies for venison production.
    Keywords: carcass composition, concentrate feeding, meat quality
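    Each contrast above is a two-sample comparison between independent groups of eight animals. A minimal sketch of that kind of test, with invented water-binding-capacity values standing in for the study's actual measurements:

```python
# Hypothetical T1-vs-T2 comparison in the style of the study's
# contrasts; the values below are invented placeholders, not data
# from the paper.
from scipy import stats

t1_wbc = [58.1, 57.4, 59.0, 58.6, 57.9, 58.3, 58.8, 57.6]  # T1, n = 8
t2_wbc = [55.3, 54.9, 56.0, 55.6, 55.1, 55.8, 54.7, 55.4]  # T2, n = 8

t_stat, p_value = stats.ttest_ind(t1_wbc, t2_wbc, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```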

    Radiative and Collisional Energy Loss, and Photon-Tagged Jets at RHIC

    The suppression of single jets at high transverse momenta in a quark-gluon plasma is studied at RHIC energies, including the additional information provided by a photon tag. The energy loss of hard jets traversing the medium is evaluated in the AMY formalism, consistently taking into account the contributions from radiative events and from elastic collisions at leading order in the coupling. The strongly interacting medium in these collisions is modelled with (3+1)-dimensional ideal relativistic hydrodynamics. Putting these ingredients together with a complete set of photon-production processes, we present a calculation of the nuclear modification of single jets and photon-tagged jets at RHIC.
    Comment: 4 pages, 4 figures, contributed to the 3rd International Conference on Hard and Electro-Magnetic Probes of High-Energy Nuclear Collisions (Hard Probes 2008), typos corrected, published version
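    The nuclear modification referred to here is conventionally quantified by the ratio of the per-collision yield in nucleus-nucleus events to the yield in proton-proton events (standard definition, not spelled out in the abstract):

```latex
R_{AA}(p_T) = \frac{\left. dN/dp_T \right|_{AA}}
                   {\langle N_{\mathrm{coll}} \rangle \, \left. dN/dp_T \right|_{pp}}
```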

    Impact of analytic provenance in genome analysis

    Many computational methods are available for the assembly and annotation of newly sequenced microbial genomes. However, when new genomes are reported in the literature, there is frequently very little critical analysis of the choices made during the sequence assembly and gene annotation stages. These choices have a direct impact on the biologically relevant products of a genomic analysis: for instance, the identification of common and differentiating regions among genomes in a comparison, or of enriched gene functional categories in a specific strain. Here, we examine the outcomes of different assembly and analysis steps in typical workflows in a comparison among strains of Vibrio vulnificus.
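    One concrete way such provenance shows up is in the gene sets that different workflows call. A minimal sketch of that comparison, assuming hypothetical one-gene-ID-per-line output files (the file names and format are illustrative, not the paper's pipeline):

```python
# Compare gene annotations produced by two assembly/annotation
# workflows; file names and format are hypothetical.
def load_genes(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

genes_a = load_genes("workflow_a_genes.txt")
genes_b = load_genes("workflow_b_genes.txt")

print("shared annotations:", len(genes_a & genes_b))
print("only in workflow A:", len(genes_a - genes_b))
print("only in workflow B:", len(genes_b - genes_a))
```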

    Time and Amplitude of Afterpulse Measured with a Large Size Photomultiplier Tube

    We have studied the afterpulsing of a hemispherical photomultiplier tube for an upcoming reactor neutrino experiment. The timing, amplitude, and rate of afterpulses for a 10-inch photomultiplier tube were measured with a 400 MHz FADC in a time window of up to 16 μs after the initial signal generated by an LED light pulse. The time-amplitude correlation of the afterpulses shows several distinctive groups. We describe the dependence of the afterpulses on the applied high voltage and on the amplitude of the main light pulse. The present data could shed light on the general mechanism of afterpulsing.
    Comment: 11 figures
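    A 400 MHz FADC digitizes one sample every 2.5 ns, so the 16 μs window corresponds to 6400 samples after the main pulse. A minimal sketch of afterpulse finding in such a record (the threshold-crossing logic and units are assumptions for illustration, not the experiment's actual analysis):

```python
import numpy as np

SAMPLE_NS = 2.5                    # 400 MHz FADC -> 2.5 ns per sample
WINDOW = int(16_000 / SAMPLE_NS)   # 16 us search window = 6400 samples

def find_afterpulses(waveform, main_idx, threshold):
    """Return (time_after_main_us, amplitude) for each threshold crossing.

    `waveform` is a baseline-subtracted FADC trace (numpy array) with
    the main pulse at index `main_idx`; values are illustrative only.
    """
    window = waveform[main_idx + 1 : main_idx + 1 + WINDOW]
    rising = np.flatnonzero((window[1:] >= threshold) & (window[:-1] < threshold))
    return [((i + 2) * SAMPLE_NS / 1000.0, float(window[i + 1])) for i in rising]
```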

    Ab initio study of charge doping effect on 1D polymerization of C60

    We study the interplay between charge doping and intermolecular distance in the polymerization of C60 fullerene chains by means of density functional theory (DFT)-based first-principles calculations. The potential-energy-surface analysis shows that both the equilibrium intermolecular distance of the unpolymerized system and the polymerization energy barrier are inversely proportional to the electron doping of the system. We analyze the origin of this charge-induced polymerization effect by studying the behavior of the system's wavefunctions around the Fermi level and the structural modifications of the molecules as a function of two variables: the distance between the centers of the molecules and the number of electrons added to the system.
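    The analysis amounts to mapping a two-variable potential energy surface over intermolecular distance and added charge. A schematic sketch of such a scan, where the toy energy function merely mimics the reported trend (a barrier shrinking with electron doping) and stands in for a real DFT total-energy calculation:

```python
import numpy as np

def total_energy(d, n_extra):
    # Toy double-well placeholder, NOT a DFT result: the barrier
    # between the polymerized (small d) and unpolymerized (large d)
    # wells shrinks as electrons are added, mimicking the trend.
    return (d - 9.0) ** 2 * (d - 9.6) ** 2 * (1.0 - 0.25 * n_extra)

distances = np.linspace(9.0, 9.6, 61)   # center-to-center distance (illustrative)
for n_extra in (0, 1, 2, 3):            # electrons added to the chain
    curve = [total_energy(d, n_extra) for d in distances]
    barrier = max(curve) - curve[-1]    # barrier seen from the unpolymerized side
    print(f"n_extra = {n_extra}: barrier ~ {barrier:.4f} (arbitrary units)")
```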

    Multiplicity Distributions in Canonical and Microcanonical Statistical Ensembles

    The aim of this paper is to introduce a new technique for the calculation of observables, in particular multiplicity distributions, in various statistical ensembles at finite volume. The method is based on Fourier analysis of the grand canonical partition function. A Taylor expansion of the generating function is used to separate contributions to the partition function according to their power in the volume. We employ Laplace's asymptotic expansion to show that any equilibrium distribution of multiplicity, charge, energy, etc. tends to a multivariate normal distribution in the thermodynamic limit. A Gram-Charlier expansion additionally allows for the calculation of finite-volume corrections. Analytical formulas are presented for the inclusion of resonance decays and finite-acceptance effects directly into the system partition function. This paper consolidates and extends previously published results of our ongoing investigation into the properties of statistical ensembles.
    Comment: 53 pages, 7 figures
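    The Fourier step referred to here is the standard projection of a canonical partition function at fixed conserved charge Q out of the grand canonical one, with the fugacity evaluated on the unit circle (standard identity; the notation is ours, not the paper's):

```latex
Z(Q, T, V) = \frac{1}{2\pi} \int_{-\pi}^{\pi} d\phi \,
             e^{-i Q \phi} \, Z_{\mathrm{GC}}\!\left(T, V, \lambda = e^{i\phi}\right)
```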

    Matter-Antimatter Asymmetry in the Large Hadron Collider

    The matter-antimatter asymmetry is one of the greatest challenges in modern physics. The universe, including this paper and even its reader, seems to be built of ordinary matter only. Theoretically, the well-known Sakharov conditions remain the solid framework explaining how matter became dominant over antimatter as the universe cooled and expanded. On the other hand, the standard model of elementary particles apparently rules out at least two of these conditions. In this work, we introduce a systematic study of the antiparticle-to-particle ratios measured in various NN and AA collisions over the last three decades. The available experimental facilities have evidently become able to perform nuclear collisions in which the antiparticle-to-particle ratios rise from ~0% at AGS to ~100% at LHC. Assuming that the final state of hadronization in nuclear collisions takes place along the freeze-out line, which is defined by a constant entropy density, various antiparticle-to-particle ratios are studied within the framework of the hadron resonance gas (HRG) model. Implementing a modified phase space and distribution function in the grand canonical ensemble and taking the experimental acceptance into account, the HRG model reproduces very well the antiparticle-to-particle ratios over the whole range of center-of-mass energies. Furthermore, the antiproton-to-proton ratios measured by ALICE in pp collisions are also very well described by the HRG model. It is reasonable to conclude that the LHC heavy-ion program will produce the same particle ratios as the pp program, implying that the dynamics and evolution of the system do not depend on the initial conditions. The ratios of bosons and baryons get very close to unity, indicating that the matter-antimatter asymmetry nearly vanishes at LHC.
    Comment: 9 pages, 5 eps-figures, revtex4 style
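    In the Boltzmann approximation of the HRG model the antiproton-to-proton ratio takes the familiar form below (standard relation, not reproduced in the abstract); it makes the trend explicit, since the baryochemical potential \mu_B drops toward zero with increasing collision energy, driving the ratio to unity at LHC:

```latex
\frac{\bar{p}}{p} \simeq \exp\!\left(-\frac{2\,\mu_B}{T}\right)
```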

    Comparing benefits from many possible computed tomography lung cancer screening programs: Extrapolating from the National Lung Screening Trial using comparative modeling

    Background: The National Lung Screening Trial (NLST) demonstrated that in current and former smokers aged 55 to 74 years, with at least 30 pack-years of cigarette smoking history, who had quit smoking no more than 15 years ago, 3 annual computed tomography (CT) screens reduced lung cancer-specific mortality by 20% relative to 3 annual chest X-ray screens. We compared the benefits achievable with 576 lung cancer screening programs that varied CT screen number and frequency, ages of screening, and smoking-based eligibility. Methods and Findings: We used five independent microsimulation models, with lung cancer natural history parameters previously calibrated to the NLST, to simulate life histories of the US cohort born in 1950 under all 576 programs. 'Efficient' (within-model) programs were those that prevented the greatest number of lung cancer deaths, compared with no screening, for a given number of CT screens. Among the 120 'consensus efficient' programs (those identified as efficient across models), the average starting age was 55 years, the stopping age was 80 or 85 years, the average minimum smoking history was 27 pack-years, and the maximum years since quitting was 20. Among consensus efficient programs, 11% to 40% of the cohort was screened, and 153 to 846 lung cancer deaths were averted per 100,000 people. In all models, annual screening based on the age and smoking eligibility of the NLST was not efficient; continuing screening to age 80 or 85 years was more efficient. Conclusions: Consensus results from five models identified a set of efficient screening programs that include annual CT lung cancer screening using criteria like NLST eligibility but extended to older ages. Guidelines for screening should also consider the harms of screening and individual patient characteristics.
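    The efficiency criterion here is a frontier computation: a program is kept only if no alternative averts more deaths while using no more screens. A minimal sketch of that selection on invented placeholder numbers (not outputs of the five models):

```python
# Each tuple: (program name, CT screens per 100,000, deaths averted per 100,000).
# All numbers are invented placeholders, not results from the paper.
programs = [
    ("A", 250_000, 300),
    ("B", 250_000, 410),
    ("C", 400_000, 520),
    ("D", 400_000, 480),
]

def efficient(progs):
    """Keep programs not dominated by one with <= screens and > deaths averted."""
    return [p for p in progs
            if not any(q[1] <= p[1] and q[2] > p[2] for q in progs)]

for name, screens, averted in efficient(programs):
    print(f"{name}: {screens} screens -> {averted} deaths averted")
```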

    Polymer-stable magnesium nanocomposites prepared by laser ablation for efficient hydrogen storage

    Hydrogen is a promising alternative energy carrier that can potentially facilitate the transition from fossil fuels to clean energy sources because of its prominent advantages: high energy density (142 MJ/kg), a great variety of potential sources (for example water, biomass, and organic matter), and low environmental impact (water is the sole combustion product). However, because of its low density, the efficient storage of hydrogen is still an intensely investigated issue. Various solid media have been considered in that respect, among which magnesium hydride stands out as a candidate offering distinct advantages. Recent theoretical work indicates that MgH2 becomes less thermodynamically stable as the particle diameter decreases below 2 nm. Our DFT (density functional theory) modeling studies have shown that the smallest enthalpy change, corresponding to a film of 2 unit-cell thickness (1.6 Å Mg / 3.0 Å MgH2), is 57.7 kJ/mol Mg. This enthalpy change is over 10 kJ/mol Mg smaller than that of the bulk. It is important to note that the range of enthalpy change for systems suitable for mobile storage applications is 15 to 24 kJ/mol H at 298 K. The key to the development of air-stable Mg nanocrystals is the use of PMMA (polymethylmethacrylate) as an encapsulation agent. In our work we use laser ablation, a non-electrochemical method, to produce well-dispersed nanoparticles without any long-range aggregation. The observed improved hydrogenation characteristics of the polymer-stabilized Mg nanoparticles are associated with the preparation procedure; in any case, polymer laser ablation is a new approach to the production of air-protected and inexpensive Mg nanoparticles.
    Comment: Hydrogen Storage, Mg Nanoparticles, Polymer Matrix Composites, Laser Ablation, to appear in International Journal of Hydrogen Energy, 201
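    The practical weight of those enthalpy figures follows from the van 't Hoff relation governing metal-hydride equilibrium (standard thermodynamics, not derived in the abstract): a smaller desorption enthalpy |ΔH| raises the equilibrium hydrogen pressure at a given temperature toward values usable near ambient conditions.

```latex
\ln\!\left(\frac{P_{\mathrm{eq}}}{P_0}\right) = -\frac{\Delta H}{R\,T} + \frac{\Delta S}{R}
```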