56 research outputs found

    Principle of Balance and the Sea Content of the Proton

    In this study, the proton is taken as an ensemble of quark-gluon Fock states. Using the principle of balance, under which every Fock state should be balanced with all of its nearby Fock states (denoted as the balance model), instead of the principle of detailed balance, under which any two nearby Fock states should be balanced with each other (denoted as the detailed balance model), we obtain the probabilities of finding every Fock state of the proton. The balance model can be taken as a revised version of the detailed balance model, and it gives an excellent description of the light-flavor sea asymmetry (i.e., \bar{u}\not= \bar{d}) without any parameter. When g\Leftrightarrow gg sub-processes are not considered, the balance model and the detailed balance model give the same results. When g\Leftrightarrow gg sub-processes are considered, the results of the two models differ by about 10 percent. We also calculate the strange content of the proton using the balance model under the equal-probability assumption. Comment: 32 latex pages, 4 ps figures, to appear in PR
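
    To make the distinction between the two principles concrete, the sketch below contrasts them on a toy three-state system in Python. The transition rates are invented for illustration; they are not the Fock-state transition rates of the paper, whose states are the proton's quark-gluon Fock states coupled by the q\Leftrightarrow qg, g\Leftrightarrow q\bar{q} and (optionally) g\Leftrightarrow gg sub-processes.

```python
import numpy as np

# Toy 3-state system with hypothetical transition rates W[i, j] for i -> j.
# These numbers are illustrative only, not the paper's Fock-state rates.
W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

# Principle of balance (global balance): each state's total inflow equals
# its total outflow. Solve pi^T Q = 0 with sum(pi) = 1.
Q = W - np.diag(W.sum(axis=1))        # rate matrix, diagonal = -total outflow
A = np.vstack([Q.T, np.ones(3)])      # append the normalisation condition
b = np.append(np.zeros(3), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary probabilities:", pi)

# The principle of detailed balance would additionally demand the pairwise
# equality pi_i * W[i, j] == pi_j * W[j, i]; here the global fluxes balance
# while the pairwise ones do not, which is the extra freedom the balance
# model allows relative to the detailed balance model.
for i in range(3):
    for j in range(i + 1, 3):
        print(f"flux {i}->{j}: {pi[i] * W[i, j]:.3f}   flux {j}->{i}: {pi[j] * W[j, i]:.3f}")
```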

    Simulation methods in the healthcare systems

    Healthcare systems can be considered large-scale complex systems. They need to be well managed in order to create the desired value for their stakeholders: the patients, the medical staff and the industrial companies working for healthcare. Many simulation methods coming from other sectors have already proved their added value for healthcare. However, based on our experience in the French health sector (Jean et al. 2012), we found that these methods are not widely used in comparison with other areas such as manufacturing and logistics. This paper presents a literature review of healthcare issues and of the major simulation methods used to address them. This work is designed to suggest how solutions may be created more systematically by using complementary methods to resolve a common issue. We believe that this first work can help health workers, decision-makers and researchers at any level of responsibility to better understand the available simulation approaches.
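
    As a minimal example of one of the methods such reviews cover, the snippet below sketches a discrete-event simulation of an emergency-department queue using the simpy library. The arrival rate, consultation time and staffing level are assumed values chosen only to make the example run; they do not come from the paper or from French health-sector data.

```python
import random
import simpy

ARRIVAL_MEAN = 10.0   # mean minutes between patient arrivals (assumed)
CONSULT_MEAN = 18.0   # mean consultation length in minutes (assumed)
N_DOCTORS = 2         # doctors on shift (assumed)
SIM_TIME = 8 * 60     # one 8-hour shift, in minutes

waits = []

def patient(env, doctors):
    """A patient arrives, queues for a doctor, is seen, and leaves."""
    arrival = env.now
    with doctors.request() as req:
        yield req
        waits.append(env.now - arrival)
        yield env.timeout(random.expovariate(1.0 / CONSULT_MEAN))

def arrivals(env, doctors):
    """Generate patients with exponential inter-arrival times."""
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(patient(env, doctors))

random.seed(42)
env = simpy.Environment()
doctors = simpy.Resource(env, capacity=N_DOCTORS)
env.process(arrivals(env, doctors))
env.run(until=SIM_TIME)
print(f"{len(waits)} patients seen, mean wait {sum(waits) / len(waits):.1f} min")
```

    A model of this kind lets a decision-maker test staffing levels or patient pathways before changing anything in the real department, which is the sort of added value the review argues simulation can bring to healthcare.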

    The High Redshift Integrated Sachs-Wolfe Effect

    In this paper we rely on the quasar (QSO) catalog of the Sloan Digital Sky Survey Data Release Six (SDSS DR6), containing about one million photometrically selected QSOs, to compute the Integrated Sachs-Wolfe (ISW) effect at high redshift, aiming to constrain the expansion rate, and thus the behaviour of dark energy, at those epochs. This unique sample significantly extends previous catalogs to higher redshifts while retaining high efficiency in the selection algorithm. We compute the auto-correlation function (ACF) of the QSO number density, from which we extract the bias and the stellar contamination. We then calculate the cross-correlation function (CCF) between the QSO number density and Cosmic Microwave Background (CMB) temperature fluctuations in different subsamples: at high (z>1.5) and low (z<1.5) redshift, and for two different QSO selections, in a conservative and in a more speculative analysis. We find overall evidence for a cross-correlation different from zero at the 2.7\sigma level, while this evidence drops to 1.5\sigma at z>1.5. We focus on the capability of the ISW to constrain the behaviour of the dark energy component at high redshift, in both \LambdaCDM and Early Dark Energy cosmologies, where the dark energy is substantially unconstrained by observations. At present, the inclusion of the ISW data yields only a modest improvement over the constraints obtained from other cosmological datasets. We study the capabilities of future high-redshift QSO surveys and find that the ISW signal can improve the constraints on the most important cosmological parameters derived from Planck CMB data, including the high-redshift dark energy abundance, by a factor \sim 1.5. Comment: 20 pages, 18 figures, and 7 tables
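
    For readers unfamiliar with the estimator, one common route to the CCF is to measure the cross power spectrum of the two maps on the sphere and transform it with Legendre polynomials. The sketch below assumes HEALPix-format maps, a shared binary mask, and the healpy and scipy libraries; it only illustrates the transform and is not the paper's pipeline, which also handles the stellar contamination and the QSO bias extracted from the ACF.

```python
import numpy as np
import healpy as hp
from scipy.special import eval_legendre

def cross_correlation(qso_overdensity, cmb_temp, mask, theta_deg):
    """w(theta) = sum_l (2l+1)/(4 pi) C_l^{qT} P_l(cos theta).

    qso_overdensity, cmb_temp : HEALPix maps at the same nside
    mask                      : binary HEALPix map of the common sky area
    theta_deg                 : array of angular separations in degrees
    """
    fsky = mask.mean()
    # Pseudo cross power spectrum, roughly corrected for the sky cut.
    cl_cross = hp.anafast(qso_overdensity * mask, cmb_temp * mask) / fsky
    ell = np.arange(cl_cross.size)
    return np.array([
        np.sum((2 * ell + 1) / (4 * np.pi) * cl_cross * eval_legendre(ell, ct))
        for ct in np.cos(np.radians(theta_deg))
    ])
```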

    Exploring morphological correlations among H2CO, 12CO, MSX and continuum mappings

    There are relatively few H2CO mappings of large-area giant molecular clouds (GMCs). H2CO absorption lines are good tracers of low-temperature molecular clouds towards star formation regions. The aim of this study was therefore to identify H2CO distributions in ambient molecular clouds. We investigated morphological relations among 6-cm continuum brightness temperature (CBT) data and H2CO (1_{11}-1_{10}; Nanshan 25-m radio telescope), 12CO (1--0; 1.2-m CfA telescope) and Midcourse Space Experiment (MSX) data, and considered the impact of background components on foreground clouds. We report simultaneous 6-cm H2CO absorption line and H110\alpha radio recombination line observations and present several large-area mappings at 4.8 GHz toward the W49 (50'\times50'), W3 (70'\times90'), DR21/W75 (60'\times90') and NGC2024/NGC2023 (50'\times100') GMCs. By superimposing the H2CO and 12CO contours onto the MSX color map, we can compare their correlations. The resolutions of the H2CO, 12CO and MSX data are about 10', 8' and 18.3", respectively. Comparison of the H2CO and 12CO contours, the 8.28-\mu m MSX colorscale and the CBT data reveals strong morphological correlation over the large areas, although there are some discrepancies between 12CO and H2CO peaks in small areas. The NGC2024/NGC2023 GMC is a large area of HII regions with a high CBT, but an H2CO cloud to the north can possibly be detected against the cosmic microwave background. A statistical diagram shows that 85.21% of the H2CO absorption lines fall in the intensity range from -1.0 to 0 Jy and in the \Delta V range from 1.206 to 5 km/s. Comment: 18 pages, 22 figures, 5 tables. Accepted for publication in Astrophysics and Space Science
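
    A practical detail behind such morphological comparisons is that maps with very different beams (about 10', 8' and 18.3" here) must first be brought to a common resolution. The snippet below is a generic sketch of that step using astropy, with placeholder arrays and an assumed pixel scale rather than the actual observations.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

def smooth_to_common_beam(image, pix_arcmin, native_fwhm_arcmin, target_fwhm_arcmin):
    """Smooth an image from its native beam to a coarser target beam.

    The smoothing kernel FWHM is the quadrature difference of the two beams,
    converted from arcminutes to pixels.
    """
    fwhm = np.sqrt(target_fwhm_arcmin**2 - native_fwhm_arcmin**2)
    sigma_pix = fwhm / (2.3548 * pix_arcmin)       # FWHM -> Gaussian sigma
    return convolve(image, Gaussian2DKernel(sigma_pix), boundary="extend")

# Illustrative use with placeholder data: degrade an 18.3"-resolution
# MSX-like image to the ~10' H2CO beam, then correlate pixel by pixel.
rng = np.random.default_rng(0)
msx_like = rng.normal(size=(200, 200))
h2co_like = rng.normal(size=(200, 200))
msx_smoothed = smooth_to_common_beam(msx_like, pix_arcmin=0.5,
                                     native_fwhm_arcmin=18.3 / 60.0,
                                     target_fwhm_arcmin=10.0)
r = np.corrcoef(msx_smoothed.ravel(), h2co_like.ravel())[0, 1]
print(f"Pearson correlation of the two maps: {r:.3f}")
```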

    Constraining Primordial Non-Gaussianity with High-Redshift Probes

    We present an analysis of the constraints on the amplitude of primordial non-Gaussianity of local type described by the dimensionless parameter f_{\rm NL}. These constraints are set by the auto-correlation functions (ACFs) of two large scale structure probes, the radio sources from the NRAO VLA Sky Survey (NVSS) and the quasar catalogue of the Sloan Digital Sky Survey Release Six (SDSS DR6 QSOs), as well as by their cross-correlation functions (CCFs) with the cosmic microwave background (CMB) temperature map (Integrated Sachs-Wolfe effect). Several systematic effects that may affect the observational estimates of the ACFs and of the CCFs are investigated and conservatively accounted for. Our approach exploits the large-scale scale-dependence of the non-Gaussian halo bias. The derived constraints on f_{\rm NL} coming from the NVSS CCF and from the QSO ACF and CCF are weaker than those previously obtained from the NVSS ACF, but still consistent with them. Finally, we obtain the constraints f_{\rm NL}=53\pm25 (1\,\sigma) and f_{\rm NL}=58\pm24 (1\,\sigma) from NVSS data and SDSS DR6 QSO data, respectively. Comment: 16 pages, 8 figures, 1 table, Accepted for publication on JCA
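
    The signal exploited here is the scale-dependent shift that local-type non-Gaussianity induces in the halo bias on large scales. In one common convention (normalisation factors differ between papers and may differ from the one used in this work), \Delta b(k,z) = 3 f_{\rm NL} (b-1) \delta_c \Omega_m H_0^2 / [c^2 k^2 T(k) D(z)], with D(z) normalised to the scale factor during matter domination. The sketch below simply evaluates that expression for assumed inputs.

```python
import numpy as np

C_KM_S = 299792.458   # speed of light [km/s]
H0 = 70.0             # Hubble constant [km/s/Mpc] (assumed)
OMEGA_M = 0.3         # matter density parameter (assumed)
DELTA_C = 1.686       # spherical-collapse threshold

def delta_bias(k, fnl, b_gauss, transfer, growth):
    """Scale-dependent bias correction Delta b(k, z) for local-type f_NL.

    k        : wavenumber [1/Mpc]
    fnl      : local non-Gaussianity amplitude
    b_gauss  : Gaussian large-scale bias of the tracer
    transfer : matter transfer function T(k), -> 1 at low k
    growth   : growth factor D(z), normalised to a in matter domination
    """
    return (3.0 * fnl * (b_gauss - 1.0) * DELTA_C * OMEGA_M * H0**2
            / (C_KM_S**2 * k**2 * transfer * growth))

# The correction scales as 1/k^2, so it matters most on the largest scales
# probed by the ACFs and CCFs.
for k in (0.001, 0.01, 0.1):
    print(k, delta_bias(k, fnl=50.0, b_gauss=2.0, transfer=1.0, growth=0.5))
```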

    Narrow width of a glueball decay into two mesons

    The widths for a glueball decaying into two pions or two kaons are analyzed in the pQCD framework. Our results show that the glueball ground state has a small branching ratio for the two-meson decay mode, around 10^{-2}. The predicted values are consistent with the data for \xi\to\pi\pi, KK if the \xi particle is a 2^{++} glueball. The applicability of pQCD to the glueball decay and a comparison with \chi_{cJ} decay are also discussed. Comment: 12 pages, revtex, 2 ps figures

    Evolution of the electronic structure with size in II-VI semiconductor nanocrystals

    In order to provide a quantitatively accurate description of the band-gap variation with size in various II-VI semiconductor nanocrystals, we make use of the recently reported tight-binding parametrization of the corresponding bulk systems. Using the same tight-binding scheme and parameters, we calculate the electronic structure of II-VI nanocrystals in real space, with diameters ranging between 5 and 80 {\AA}. A comparison with available experimental results from the literature shows excellent agreement over the entire range of sizes. Comment: 17 pages, 4 figures, accepted in Phys. Rev.
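
    The computation behind such results is a direct real-space diagonalisation of a tight-binding Hamiltonian built for each nanocrystal. The toy below is not the paper's II-VI parametrization; it is a one-dimensional two-band chain with invented hoppings, included only to show how a gap obtained from exact diagonalisation widens as the system shrinks (quantum confinement) and approaches the bulk value for large sizes.

```python
import numpy as np

T_STRONG, T_WEAK = 1.5, 0.5   # alternating hoppings, arbitrary units (assumed)

def gap(n_cells):
    """HOMO-LUMO gap of a finite dimerized chain with 2 * n_cells sites."""
    n_sites = 2 * n_cells
    h = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        t = T_STRONG if i % 2 == 0 else T_WEAK   # strong bond inside each cell
        h[i, i + 1] = h[i + 1, i] = -t
    levels = np.linalg.eigvalsh(h)
    # Half filling: gap between the highest occupied and lowest empty level.
    return levels[n_cells] - levels[n_cells - 1]

for n in (3, 5, 10, 20, 50, 100):
    print(f"{2 * n:4d} sites   gap = {gap(n):.4f}")
print("bulk limit gap =", 2 * abs(T_STRONG - T_WEAK))
```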

    Methods for biogeochemical studies of sea ice: The state of the art, caveats, and recommendations

    Over the past two decades, with recognition that the ocean’s sea-ice cover is neither insensitive to climate change nor a barrier to light and matter, research in sea-ice biogeochemistry has accelerated significantly, bringing together a multi-disciplinary community from a variety of fields. This disciplinary diversity has contributed a wide range of methodological techniques and approaches to sea-ice studies, complicating comparisons of the results and the development of conceptual and numerical models to describe the important biogeochemical processes occurring in sea ice. Almost all chemical elements, compounds, and biogeochemical processes relevant to Earth system science are measured in sea ice, with published methods available for determining biomass, pigments, net community production, primary production, bacterial activity, macronutrients, numerous natural and anthropogenic organic compounds, trace elements, reactive and inert gases, sulfur species, the carbon dioxide system parameters, stable isotopes, and water-ice-atmosphere fluxes of gases, liquids, and solids. For most of these measurements, multiple sampling and processing techniques are available, but to date there has been little intercomparison or intercalibration between methods. In addition, researchers collect different types of ancillary data and document their samples differently, further confounding comparisons between studies. These problems are compounded by the heterogeneity of sea ice, in which even adjacent cores can have dramatically different biogeochemical compositions. We recommend that, in future investigations, researchers design their programs based on nested sampling patterns, collect a core suite of ancillary measurements, and employ a standard approach for sample identification and documentation. In addition, intercalibration exercises are most critically needed for measurements of biomass, primary production, nutrients, dissolved and particulate organic matter (including exopolymers), the CO2 system, air-ice gas fluxes, and aerosol production. We also encourage the development of in situ probes robust enough for long-term deployment in sea ice, particularly for biological parameters, the CO2 system, and other gases

    Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel

    A major use of the 1000 Genomes Project (1000GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First, the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000GP haplotype reference set for use by the human genetics community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants. © 2014 Macmillan Publishers Limited. All rights reserved.
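
    Genotype discordance, the validation metric quoted above, is simply the fraction of non-missing sites at which an imputed genotype disagrees with an independently measured one. A minimal sketch with made-up arrays (not the 1000GP data) is:

```python
import numpy as np

def genotype_discordance(imputed, validation):
    """Fraction of non-missing genotype calls (coded 0/1/2 alt-allele copies)
    that differ between an imputed call set and a validation call set."""
    imputed = np.asarray(imputed)
    validation = np.asarray(validation)
    observed = validation >= 0            # treat negative codes as missing
    return np.mean(imputed[observed] != validation[observed])

# Toy example: 10 variants, one mismatch and one missing validation call.
imputed    = np.array([0, 1, 2, 1, 0, 2, 1, 0, 1, 2])
validation = np.array([0, 1, 2, 2, 0, 2, 1, 0, -1, 2])
print(genotype_discordance(imputed, validation))   # 1 of 9 calls -> 0.111...
```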