Evaluation of white spot syndrome virus variable DNA loci as molecular markers of virus spread at intermediate spatiotemporal scales
Variable genomic loci have been employed in a number of molecular epidemiology studies of white spot syndrome virus (WSSV), but it is unknown which loci are suitable molecular markers for determining WSSV spread on different spatiotemporal scales. Although previous work suggests that multiple introductions of WSSV occurred in central Vietnam, it is largely uncertain how WSSV was introduced and subsequently spread. Here, we evaluate five variable WSSV DNA loci as markers of virus spread on an intermediate (i.e. regional) scale, and develop a detailed and statistically supported model for the spread of WSSV. The genotypes of 17 WSSV isolates from along the coast of Vietnam – nine of which were newly characterized in this study – were analysed to obtain sufficient samples on an intermediate scale and to allow statistical analysis. Only the ORF23/24 variable region is an appropriate marker on this scale, as geographically proximate isolates show similar deletion sizes. The ORF14/15 variable region and variable-number tandem repeat (VNTR) loci are not useful as markers on this scale. ORF14/15 may be suitable for studying larger spatiotemporal scales, whereas VNTR loci are probably suitable for smaller scales. For ORF23/24, there is a clear pattern in the spatial distribution of WSSV: the smallest genomic deletions are found in central Vietnam, and larger deletions are found in the south and the north. WSSV genomic deletions tend to increase over time with virus spread in cultured shrimp, and our data are therefore congruent with the hypothesis that WSSV was introduced in central Vietnam and then radiated outwards.
Photoneutron logging system for direct uranium ore-grade determination
A prototype photoneutron probe for direct uranium assay in exploratory boreholes has been built and field tested. An approximately 10-Ci ¹²⁴Sb gamma-ray source together with a beryllium converter is used to produce neutrons that diffuse into the surrounding formation and cause fissions in any ²³⁵U present. The fission neutrons that return to the probe are energy analyzed and counted by a high-pressure helium detector, thus indicating the concentration of uranium. The response of the probe was measured in concrete models at the US Department of Energy (Grand Junction, Colorado) calibration facility and found to be approximately 35 counts/s for a 1% U₃O₈ concentration in an 11.4-cm-diam (4.5 in.) water-filled borehole. The response is linear up to a concentration of at least 0.25% by weight U₃O₈. Effects resulting from changes in formation density, porosity, and neutron absorber content were also quantified, as well as the tool response as a function of borehole diameter and fluid. A logging vehicle was outfitted, and the photoneutron-based logging system was field tested at an exploration site near Canon City, Colorado. Logging data obtained in several open holes at this site are presented and compared to core chemical analyses and results obtained in the same holes using other logging methods. In about 1 month of field testing, the photoneutron-based uranium exploration system has proved to be simple to use and very reliable. 22 figures, 12 tables.
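The calibration quoted above implies a simple linear grade conversion. The sketch below (Python; function and constant names are illustrative, not from the instrument's software) applies the reported sensitivity of about 35 counts/s per 1 wt% U₃O₈, valid only within the stated linear range.

```python
# Linear grade estimate from the reported probe sensitivity:
# ~35 counts/s per 1 wt% U3O8 in an 11.4-cm water-filled borehole.
# The response is stated to be linear up to at least 0.25 wt%,
# so estimates above that range should be treated with caution.

SENSITIVITY_CPS_PER_WT_PCT = 35.0  # counts/s per 1 wt% U3O8 (reported)
LINEAR_LIMIT_WT_PCT = 0.25         # upper edge of the verified linear range

def estimate_grade(count_rate_cps: float) -> float:
    """Convert a fission-neutron count rate to an apparent U3O8 grade (wt%)."""
    if count_rate_cps < 0:
        raise ValueError("count rate must be non-negative")
    return count_rate_cps / SENSITIVITY_CPS_PER_WT_PCT
```

In practice the abstract notes that formation density, porosity, neutron absorbers, and borehole geometry all perturb the response, so a field tool would correct the count rate before applying the linear calibration.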
High pressure diamond-like liquid carbon
We report density-functional based molecular dynamics simulations that show that, with increasing pressure, liquid carbon undergoes a gradual transformation from a liquid with local three-fold coordination to a 'diamond-like' liquid. We demonstrate that this unusual structural change is well reproduced by an empirical bond order potential with isotropic long range interactions, supplemented by torsional terms. In contrast, state-of-the-art short-range bond-order potentials do not reproduce this diamond structure. This suggests that a correct description of long-range interactions is crucial for a unified description of the solid and liquid phases of carbon. Comment: 4 pages, 5 figures
Developing a tool to assess trainees during crisis management training for major risks
Often based on simulation exercises, crisis management training helps prepare decision-makers to manage crises better. However, this training has definite limits in terms of trainee assessment and the feedback given during the debriefing phase. This paper presents a method for better organising the assessment of trainees involved in real-time crisis management training exercises and for giving them feedback during the debriefing phase. The approach presented is based on creating a typology of training objectives in order to organise the assessment. The assessment includes expected-outcome techniques as well as the human and organisational factors that can be observed within a group. The assessment tools developed were then tested within crisis management exercises completed with trainees. Beyond the basic results, these tools helped redefine the basic roles played by observers and trainers during training exercises.
A global perspective of food market integration: A Review
Markets are important determinants of food availability and accessibility. The extent to which they make food available, accessible and keep prices stable depends on whether or not they are integrated. If markets are well integrated, it is assumed that market forces are working properly. Considering the importance of market integration in the food sector, a lot of research has been done to test integration of food markets. This paper analyses the state-of-the-art research on food market integration, classifies it and poses questions that future research in this field can focus on. A total of 65 published articles on food market integration from all over the world were reviewed. All the reviewed papers were published between 1990 and 2014 in high-quality journals. The search for literature was based on the keyword descriptor "food or commodity market integration/price transmission/price volatility" for selected databases and websites for the period from 1990 to 2014. We searched the databases for keywords in titles, abstracts, keyword lists and full text. The search produced thousands of papers. We then reviewed the full text of the papers, subject to relevance, in order to select the ones related to this study. Based on relevance and consideration of the time period for this study, we finally obtained 65 articles. We then classified the articles based on year of publication, country and source of study, data sampled, methodology adopted and findings and conclusions of the articles. Findings show that the majority of research has concentrated relatively more on identifying the degree of linkages among the markets but not on its implications. The paper also identified the following factors as very important in increasing/decreasing the degree of market integration: physical infrastructure, market institutions, information, competition, market power, trade, social capital, public/government intervention and export restrictions/bans.
Based on these findings the paper recommends future research on food market integration to address these questions. How does the quality of physical infrastructure/roads affect the speed of adjustment of markets in case of a shock? How has the popularity of mobile phone use among farming communities affected the degree of food market integration? How do trust and networking among farmers and traders influence price transmission and market integration? What is the effect of export restrictions on price volatility and price transmission in food markets?
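As a toy illustration of the price-transmission analyses the review surveys, the sketch below (Python; synthetic data, and a deliberately simplified stand-in for a proper cointegration test such as Engle-Granger with ADF statistics) regresses one market's price on another's and provides a crude residual diagnostic.

```python
import statistics

def ols_slope_intercept(x, y):
    """Ordinary least squares fit y = a + b*x; b is the price-transmission slope."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def lag1_autocorr(r):
    """Lag-1 autocorrelation of residuals: values near 1 hint at non-stationary
    residuals, i.e. weak evidence of integration (a crude proxy, not an ADF test)."""
    m = statistics.fmean(r)
    num = sum((r[i] - m) * (r[i - 1] - m) for i in range(1, len(r)))
    den = sum((ri - m) ** 2 for ri in r)
    return num / den

# Synthetic example: market B tracks market A with slope 0.9 and offset 2
pa = [10.0, 11.0, 12.0, 11.5, 13.0, 12.5]
pb = [2 + 0.9 * p for p in pa]
a, b = ols_slope_intercept(pa, pb)
```

A real study in this literature would work with log prices, test the residuals for stationarity, and often fit an error-correction model to separate the speed of adjustment from the long-run transmission elasticity.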
Measurement of the branching fraction
The branching fraction is measured in a data sample corresponding to 0.41 of integrated luminosity collected with the LHCb detector at the LHC. This channel is sensitive to the penguin contributions affecting the sin2 measurement. The time-integrated branching fraction is measured; this is the most precise measurement to date.
Model-independent search for CP violation in D0→K−K+π−π+ and D0→π−π+π+π− decays
A search for CP violation in the phase-space structures of D0 and D̄0 decays to the final states K−K+π−π+ and π−π+π+π− is presented. The search is carried out with a data set corresponding to an integrated luminosity of 1.0 fb−1 collected in 2011 by the LHCb experiment in pp collisions at a centre-of-mass energy of 7 TeV. For the K−K+π−π+ final state, the four-body phase space is divided into 32 bins, each bin with approximately 1800 decays. The p-value under the hypothesis of no CP violation is 9.1%, and in no bin is a CP asymmetry greater than 6.5% observed. The phase space of the π−π+π+π− final state is partitioned into 128 bins, each bin with approximately 2500 decays. The p-value under the hypothesis of no CP violation is 41%, and in no bin is a CP asymmetry greater than 5.5% observed. All results are consistent with the hypothesis of no CP violation at the current sensitivity.
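The binned procedure described above is the standard model-independent comparison of local D0 and D̄0 phase-space densities. A minimal sketch (Python; the per-bin significance below follows the commonly used S_CP definition, which is assumed here to match the paper's convention):

```python
import math

def s_cp(n, nbar, alpha):
    """Per-bin significance of the D0 vs D0bar yield difference:
    S_CP = (N - alpha*Nbar) / sqrt(N + alpha^2 * Nbar),
    where alpha normalises away any global production/detection asymmetry."""
    return (n - alpha * nbar) / math.sqrt(n + alpha * alpha * nbar)

def binned_cp_test(bins_d0, bins_d0bar):
    """Return per-bin significances, the chi2-like sum over bins, and ndf."""
    alpha = sum(bins_d0) / sum(bins_d0bar)  # global normalisation factor
    s = [s_cp(n, nb, alpha) for n, nb in zip(bins_d0, bins_d0bar)]
    return s, sum(si * si for si in s), len(s) - 1
```

In the absence of CP violation the chi2-like sum approximately follows a chi-squared distribution with nbins−1 degrees of freedom, from which p-values such as the quoted 9.1% and 41% can be derived (the paper's actual p-values may also use permutation methods).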
Measurement of the CP-violating phase φs in Bs→J/ψπ+π− decays
Measurement of the mixing-induced CP-violating phase φs in Bs decays is of prime importance in probing new physics. Here 7421 ± 105 signal events from the dominantly CP-odd final state J/ψπ+π− are selected in 1 fb−1 of pp collision data collected at √s = 7 TeV with the LHCb detector. A time-dependent fit to the data yields a value of φs = −0.019^{+0.173+0.004}_{−0.174−0.003} rad, consistent with the Standard Model expectation. No evidence of direct CP violation is found. Comment: 15 pages, 10 figures; minor revisions on May 23, 201
Search for the lepton-flavor-violating decays Bs0→e±μ∓ and B0→e±μ∓
A search for the lepton-flavor-violating decays Bs0→e±μ∓ and B0→e±μ∓ is performed with a data sample, corresponding to an integrated luminosity of 1.0 fb-1 of pp collisions at √s=7 TeV, collected by the LHCb experiment. The observed number of Bs0→e±μ∓ and B0→e±μ∓ candidates is consistent with background expectations. Upper limits on the branching fractions of both decays are determined at 95% C.L., and the corresponding lower bounds on the mass MLQ of a leptoquark coupling to these modes, MLQ(Bs0→e±μ∓)>101 TeV/c2 and MLQ(B0→e±μ∓)>126 TeV/c2, are a factor of 2 higher than the previous bounds.
Absolute luminosity measurements with the LHCb detector at the LHC
Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. Using data taken in 2010, LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer scan" method a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute luminosity determination is reached. The techniques used to transport the absolute luminosity calibration to the full 2010 data-taking period are presented. Comment: 48 pages, 19 figures. Results unchanged, improved clarity of Tables 6, 9 and 10 and corresponding explanation in the text.
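The van der Meer method mentioned above extracts the convolved beam widths Σx, Σy from the rate curve recorded while scanning the beam separation; with those widths and the bunch populations, the single-crossing luminosity follows. A hedged sketch of the idea (Python; purely Gaussian beams assumed, names illustrative, ignoring crossing angles and the beam-imaging refinements the paper describes):

```python
import math

def convolved_width(separations, rates):
    """Van der Meer estimate of the convolved beam width Sigma:
    Sigma = (integral of R(delta) d delta) / (sqrt(2*pi) * R_peak),
    using a trapezoidal integral of the measured scan curve."""
    area = 0.0
    for i in range(1, len(separations)):
        area += 0.5 * (rates[i] + rates[i - 1]) * \
                (separations[i] - separations[i - 1])
    return area / (math.sqrt(2.0 * math.pi) * max(rates))

def luminosity_per_crossing(n1, n2, sigma_x, sigma_y):
    """Head-on single-crossing luminosity for Gaussian beams with bunch
    populations N1, N2: L = N1*N2 / (2*pi*Sigma_x*Sigma_y)."""
    return n1 * n2 / (2.0 * math.pi * sigma_x * sigma_y)
```

Scanning in x and y separately gives Σx and Σy; combined with the bunch-population measurement (a dominant systematic in such calibrations), this fixes the absolute luminosity scale.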