480 research outputs found

    Feasibility study and performance evaluation of a GPFS based storage area accessible simultaneously through ownCloud and StoRM

    Get PDF
    In this work, we demonstrate the feasibility of a GPFS-based storage area accessible through both ownCloud and StoRM, a solution that allows Virtual Organizations (VOs) with a storage manager service to access their files directly from a personal computer. Furthermore, in order to study its performance under load, we set up two load-balanced ownCloud web servers and a StoRM front-end, back-end, and GridFTP server. A Python script was also developed to simulate users performing several file transfers with both ownCloud and StoRM. We employed the average file transfer time and the efficiency as figures of merit, measured as a function of the number of parallel running scripts. We observed that the average time increases almost linearly with the number of parallel running scripts, independently of the software used.
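    A minimal sketch of the kind of load-generation script described above, assuming a hypothetical ownCloud WebDAV endpoint; the URL, credentials, file size, and client counts are placeholders rather than the authors' actual testbed:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

# Placeholder endpoint and credentials -- not the paper's actual setup.
OWNCLOUD_URL = "https://owncloud.example.org/remote.php/webdav/bench/test.dat"
AUTH = ("user", "password")
PAYLOAD = b"\0" * (10 * 1024 * 1024)  # 10 MiB dummy file

def transfer_once() -> float:
    """Upload one file over WebDAV and return the elapsed wall time."""
    start = time.monotonic()
    requests.put(OWNCLOUD_URL, data=PAYLOAD, auth=AUTH, timeout=300)
    return time.monotonic() - start

def run_load_test(n_parallel: int, transfers_per_client: int = 10) -> float:
    """Run n_parallel simulated users, each performing several transfers,
    and report the average transfer time (the paper's figure of merit)."""
    def client(_):
        return [transfer_once() for _ in range(transfers_per_client)]

    with ThreadPoolExecutor(max_workers=n_parallel) as pool:
        times = [t for batch in pool.map(client, range(n_parallel)) for t in batch]
    return statistics.mean(times)

if __name__ == "__main__":
    # Sweep the number of parallel clients, as in the paper's measurement.
    for n in (1, 2, 4, 8, 16):
        print(n, run_load_test(n))
```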

    Improved Cloud resource allocation: how INDIGO-Datacloud is overcoming the current limitations in Cloud schedulers

    Get PDF
    Work presented at the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP2016), 10–14 October 2016, San Francisco. Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or will simply be queued in order of arrival (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where resources are under-utilized. The INDIGO-DataCloud project has identified these strategies as too simplistic for accommodating scientific workloads efficiently, leading to an underutilization of resources, an undesirable situation in scientific data centers. In this work, we present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolution. The authors acknowledge the support of the INDIGO-DataCloud project (grant number 653549), funded by the European Commission's Horizon 2020 Framework Programme. Peer reviewed.
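    To make the static-partitioning problem concrete, the following toy model (an illustration under assumed semantics, not INDIGO code) contrasts a hard per-project quota, where requests fail despite idle capacity elsewhere, with an opportunistic policy that lets a project borrow idle slots as preemptible instances:

```python
CAPACITY = {"projA": 4, "projB": 4}  # illustrative static quotas

def schedule_static(requests_):
    """First-come, first-served with hard quotas: a request fails
    (OpenStack-like) once its project's quota is exhausted."""
    used = {p: 0 for p in CAPACITY}
    outcome = []
    for project in requests_:
        if used[project] < CAPACITY[project]:
            used[project] += 1
            outcome.append((project, "running"))
        else:
            outcome.append((project, "failed"))
    return outcome

def schedule_opportunistic(requests_):
    """Let a project borrow idle slots beyond its quota; borrowed
    instances are marked preemptible so the owner can reclaim them."""
    total = sum(CAPACITY.values())
    used = {p: 0 for p in CAPACITY}
    outcome = []
    for project in requests_:
        if sum(used.values()) >= total:
            outcome.append((project, "queued"))       # no idle slots at all
        elif used[project] < CAPACITY[project]:
            used[project] += 1
            outcome.append((project, "running"))
        else:
            used[project] += 1
            outcome.append((project, "running-preemptible"))
    return outcome

if __name__ == "__main__":
    reqs = ["projA"] * 6 + ["projB"] * 2
    print(schedule_static(reqs))         # two projA requests fail despite idle projB slots
    print(schedule_opportunistic(reqs))  # projA borrows projB's idle slots instead
```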

    Enteral Multiple Micronutrient Supplementation in Preterm and Low Birth Weight Infants: A Systematic Review and Meta-analysis

    Get PDF
    OBJECTIVES: To assess the effects of supplementation with 3 or more micronutrients (multiple micronutrients; MMN), compared to no MMN, in human milk-fed preterm and low birth weight (LBW) infants. RESULTS: Data on a subgroup of 414 preterm or LBW infants from 2 randomized controlled trials (4 reports) were included. The certainty of evidence ranged from low to very low. For growth outcomes in the MMN group compared to the non-MMN group, there was a small increase in weight-for-age (2 trials, 383 participants) and height-for-age z-scores (2 trials, 372 participants); a small decrease in wasting (2 trials, 398 participants); a small increase in stunting (2 trials, 399 participants); and an increase in underweight (2 trials, 396 participants). For neurodevelopmental outcomes at 78 weeks, we found small increases in Bayley Scales of Infant Development, Version III (BSID-III) scores (cognition, receptive language, expressive language, fine motor, gross motor) in the MMN group compared to the non-MMN group (1 trial, 27 participants). There were no studies examining dose or timing of supplementation. CONCLUSIONS: Evidence is insufficient to determine whether enteral MMN supplementation in preterm or LBW infants who are fed mother's own milk is associated with benefit or harm. More trials are needed to generate evidence on mortality, morbidity, growth, and neurodevelopment.
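    For readers unfamiliar with how such per-trial results are combined, the sketch below shows generic inverse-variance (fixed-effect) pooling of mean differences; the numbers are illustrative placeholders, not data from the included trials:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of per-trial effect
    estimates (e.g. mean differences in weight-for-age z-score)."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci95

# Illustrative inputs only -- two hypothetical trials.
print(pool_fixed_effect(estimates=[0.15, 0.08], std_errors=[0.10, 0.12]))
```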

    Multidifferential study of identified charged hadron distributions in $Z$-tagged jets in proton-proton collisions at $\sqrt{s} = 13$ TeV

    Full text link
    Jet fragmentation functions are measured for the first time in proton-proton collisions for charged pions, kaons, and protons within jets recoiling against a $Z$ boson. The charged-hadron distributions are studied longitudinally and transversely to the jet direction for jets with transverse momentum $20 < p_{\mathrm{T}} < 100$ GeV and in the pseudorapidity range $2.5 < \eta < 4$. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb$^{-1}$. Triple differential distributions as a function of the hadron longitudinal momentum fraction, hadron transverse momentum, and jet transverse momentum are also measured for the first time. This helps constrain transverse-momentum-dependent fragmentation functions. Differences in the shapes and magnitudes of the measured distributions for the different hadron species provide insights into the hadronization process for jets predominantly initiated by light quarks. Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages).
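    As a concrete illustration of the observables involved, the longitudinal momentum fraction z and the momentum transverse to the jet axis j_T can be computed from three-momenta using their standard definitions; this is a generic sketch, not code from the analysis:

```python
import numpy as np

def fragmentation_variables(p_hadron: np.ndarray, p_jet: np.ndarray):
    """Standard definitions: z = (p_h . p_jet) / |p_jet|^2 and
    j_T = |p_h x p_jet| / |p_jet| for hadron and jet three-momenta."""
    jet2 = np.dot(p_jet, p_jet)
    z = np.dot(p_hadron, p_jet) / jet2
    j_t = np.linalg.norm(np.cross(p_hadron, p_jet)) / np.sqrt(jet2)
    return z, j_t

# Toy three-momenta in GeV (illustrative values only).
print(fragmentation_variables(np.array([5.0, 1.0, 40.0]),
                              np.array([20.0, 2.0, 150.0])))
```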

    Study of the $B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-}$ decay

    Full text link
    The decay $B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-}$ is studied in proton-proton collisions at a center-of-mass energy of $\sqrt{s}=13$ TeV using data corresponding to an integrated luminosity of 5 $\mathrm{fb}^{-1}$ collected by the LHCb experiment. In the $\Lambda_{c}^{+} K^{-}$ system, the $\Xi_{c}(2930)^{0}$ state observed at the BaBar and Belle experiments is resolved into two narrower states, $\Xi_{c}(2923)^{0}$ and $\Xi_{c}(2939)^{0}$, whose masses and widths are measured to be $m(\Xi_{c}(2923)^{0}) = 2924.5 \pm 0.4 \pm 1.1\,\mathrm{MeV}$, $m(\Xi_{c}(2939)^{0}) = 2938.5 \pm 0.9 \pm 2.3\,\mathrm{MeV}$, $\Gamma(\Xi_{c}(2923)^{0}) = 4.8 \pm 0.9 \pm 1.5\,\mathrm{MeV}$, and $\Gamma(\Xi_{c}(2939)^{0}) = 11.0 \pm 1.9 \pm 7.5\,\mathrm{MeV}$, where the first uncertainties are statistical and the second systematic. The results are consistent with a previous LHCb measurement using a prompt $\Lambda_{c}^{+} K^{-}$ sample. Evidence of a new $\Xi_{c}(2880)^{0}$ state is found with a local significance of $3.8\,\sigma$; its mass and width are measured to be $2881.8 \pm 3.1 \pm 8.5\,\mathrm{MeV}$ and $12.4 \pm 5.3 \pm 5.8\,\mathrm{MeV}$, respectively. In addition, evidence of a new decay mode $\Xi_{c}(2790)^{0} \to \Lambda_{c}^{+} K^{-}$ is found with a significance of $3.7\,\sigma$. The relative branching fraction of $B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-}$ with respect to the $B^{-} \to D^{+} D^{-} K^{-}$ decay is measured to be $2.36 \pm 0.11 \pm 0.22 \pm 0.25$, where the first uncertainty is statistical, the second systematic, and the third originates from the branching fractions of charm hadron decays. Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages).
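    The quoted local significances are conventionally derived from a likelihood-ratio test between fits with and without the candidate state; a minimal sketch of that conversion under Wilks' theorem (an assumed generic procedure, not the analysis code):

```python
import math

def local_significance(nll_null: float, nll_alt: float) -> float:
    """Convert the change in negative log-likelihood between fits
    without (null) and with (alt) the candidate state into a
    significance in sigma, via sqrt(2 * Delta ln L) per Wilks' theorem."""
    return math.sqrt(2.0 * (nll_null - nll_alt))

# Illustrative: a -ln L improvement of 7.22 corresponds to about 3.8 sigma.
print(local_significance(nll_null=100.0, nll_alt=92.78))
```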

    Measurement of the ratios of branching fractions $\mathcal{R}(D^{*})$ and $\mathcal{R}(D^{0})$

    Full text link
    The ratios of branching fractions $\mathcal{R}(D^{*}) \equiv \mathcal{B}(\bar{B}\to D^{*}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(\bar{B}\to D^{*}\mu^{-}\bar{\nu}_{\mu})$ and $\mathcal{R}(D^{0}) \equiv \mathcal{B}(B^{-}\to D^{0}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(B^{-}\to D^{0}\mu^{-}\bar{\nu}_{\mu})$ are measured, assuming isospin symmetry, using a sample of proton-proton collision data corresponding to 3.0 fb$^{-1}$ of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the decay mode $\tau^{-}\to\mu^{-}\nu_{\tau}\bar{\nu}_{\mu}$. The measured values are $\mathcal{R}(D^{*})=0.281\pm0.018\pm0.024$ and $\mathcal{R}(D^{0})=0.441\pm0.060\pm0.066$, where the first uncertainty is statistical and the second is systematic. The correlation between these measurements is $\rho=-0.43$. Results are consistent with the current average of these quantities and are at a combined 1.9 standard deviations from the predictions based on lepton flavor universality in the Standard Model. Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages).
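    The combined 1.9 standard deviations can be reproduced with a chi-squared test using the full covariance of the two measurements; in the sketch below, statistical and systematic uncertainties are combined in quadrature and the Standard Model predictions are entered as approximate reference values (an assumption; consult HFLAV for current numbers):

```python
import numpy as np
from scipy import stats

# Measured central values and total uncertainties (stat (+) syst in quadrature).
meas = np.array([0.281, 0.441])                              # R(D*), R(D0)
sig = np.array([np.hypot(0.018, 0.024), np.hypot(0.060, 0.066)])
rho = -0.43                                                  # quoted correlation

# Approximate SM predictions -- illustrative reference values only.
sm = np.array([0.254, 0.298])
sm_sig = np.array([0.005, 0.004])

# Covariance: correlated measurement errors plus uncorrelated SM uncertainties.
cov = np.diag(sig**2 + sm_sig**2)
cov[0, 1] = cov[1, 0] = rho * sig[0] * sig[1]

delta = meas - sm
chi2 = delta @ np.linalg.inv(cov) @ delta
p = stats.chi2.sf(chi2, df=2)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, "
      f"significance = {stats.norm.isf(p / 2):.1f} sigma")   # ~1.9 sigma
```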

    INDIGO-DataCloud: A data and computing platform to facilitate seamless access to e-infrastructures

    Get PDF
    This paper describes the achievements of the H2020 project INDIGO-DataCloud. The project has provided e-infrastructures with tools, applications, and cloud framework enhancements to manage the demanding requirements of scientific communities, either locally or through enhanced interfaces. The middleware developed makes it possible to federate hybrid resources and to easily write, port, and run scientific applications in the cloud. In particular, we have extended existing PaaS (Platform as a Service) solutions, allowing public and private e-infrastructures, including those provided by EGI, EUDAT, and Helix Nebula, to integrate their existing services and make them available through AAI services compliant with GEANT interfederation policies, thus guaranteeing transparency and trust in the provisioning of such services. Our middleware facilitates the execution of applications using containers on Cloud and Grid based infrastructures, as well as on HPC clusters. Our developments are freely downloadable as open source components, and are already being integrated into many scientific applications.

    A game-theoretical analysis of Grid job scheduling

    No full text
    The computational Grid is a well-established platform that promises to provide a vast range of heterogeneous resources for high performance computing. To grasp the full advantage of Grid systems, efficient and effective resource management and job scheduling are key requirements. Particularly in resource management and job scheduling, conflicts may arise because Grid resources are usually owned by different organizations/sites, which have different and often contradictory goals. For instance, some sites prefer to execute their own local jobs before Grid jobs, in order to minimize the completion time of the former. Nevertheless, for a Grid to work properly, sites should be given incentives to collaborate in the faster execution of Grid jobs: each site that accepts to execute a job gets an incentive amounting to the length of the job. A crucial objective is to analyze potential scenarios where selfish or cooperative behaviors of organizations heavily impact global Grid efficiency. In this thesis, we study the job scheduling problem in computational Grids and analyze it using game-theoretic approaches. We consider a hierarchical job scheduling model that is formulated as a sequential non-cooperative job scheduling game among Grid sites, which may have selfish concerns. We exploit the concept of Nash equilibrium, a situation in which no player can gain any profit by unilaterally changing its strategy. In the general case, in which there are local jobs besides Grid jobs, we conjecture that the selfish strategy profile, in which every site chooses to execute a local job if there is any in its queue and otherwise bids for executing a Grid job by offering its earliest estimated response time, is a Nash equilibrium in the presence of heavy load. The earliest estimated response time is an estimate of the time interval between the job submission and the beginning of the job execution. Next, we restrict ourselves to the special sub-case in which there are no local jobs. We give a formal proof that the strategy profile in which every site bids its earliest estimated response time upon arrival of a new Grid job is a Nash equilibrium. In both cases we make two main assumptions about our model. First, we require that every incoming job (either Grid or local) needs the same amount of time to be executed. Second, we assume that the Grid is heavily loaded: there must be enough incoming jobs that no site that is free and willing to execute a job remains inactive. We also investigate the above two strategies after relaxing the heavy-load condition. For the general case, in the absence of the heavy-load condition, we have, interestingly, found a counter-example showing that the selfish strategy profile, in which a player prefers to execute a local job over a Grid job, is not a Nash equilibrium. One would expect that executing a local job at a given site s forces the other sites to accept the execution of Grid jobs, which in turn decreases the probability that those sites can accept future Grid jobs, with a consequent advantage for site s; the counter-example contradicts this intuition. We corroborated and complemented our theoretical results by performing an experimental analysis based on two kinds of approaches: simulation and exhaustive search. A minimal simulation of the bidding mechanism is sketched below.
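    The sketch simulates the no-local-jobs bidding game under the thesis' stated assumptions (unit-length jobs, heavy load); the data structures and the fixed strategy are illustrative, not the thesis' experimental code:

```python
def simulate_grid(n_sites: int, n_jobs: int):
    """Each Grid job is auctioned: every site bids its earliest estimated
    response time (here: the time at which it next becomes free), the
    lowest bidder wins and receives an incentive equal to the job length
    (unit length, per the thesis' assumption)."""
    free_at = [0.0] * n_sites        # time at which each site becomes free
    incentives = [0.0] * n_sites
    clock = 0.0
    for _ in range(n_jobs):          # heavy load: a job is always available
        bids = [(max(free_at[s], clock), s) for s in range(n_sites)]
        response, winner = min(bids)  # lowest estimated response time wins
        free_at[winner] = response + 1.0   # unit-length job
        incentives[winner] += 1.0          # incentive = job length
        clock = response
    return incentives

# Under this strategy profile the jobs (and incentives) spread evenly.
print(simulate_grid(n_sites=4, n_jobs=20))  # -> [5.0, 5.0, 5.0, 5.0]
```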

    CBPR 2.0 - A Community-Based Participatory Research Approach to Program Scaling in Maternal Mental Health: Responding to Community Risk Factors for Maternal Depression While Replicating New Haven MOMS to Bridgeport and New York City

    No full text
    “Community-Based Participatory Research 2.0” refers to a methodological framework describing the process of scaling the New Haven MOMS Partnership, an existing participatory intervention, to new sites in Bridgeport, Connecticut and New York City. Following a community-based participatory approach to the process of program differentiation within implementation science, informal team canvassing identified the core components of the program: neighborhood hub sites, community mental health workers, and the utilization of incentives. Community needs assessments were prepared, distributed, and analyzed to identify flexible program components. The assessments were tailored to specific communities and included the Center for Epidemiologic Studies Depression Scale (CES-D), the Oslo-3 perceived social support scale, the Chicago Neighborhood Social Cohesion Questions (CNQ), and multiple demographic and site-specific questions. For respondents in both Bridgeport (n = 135) and New York City (n = 173), the burden of depression among parenting women was significant: 38.1% of respondents in Bridgeport and 57.1% in New York City were at risk for major depression, and overall symptoms were more severe in New York. In both sites, women reported an unmet desire for mental healthcare as well as specific barriers to accessing traditional treatment, and may thus benefit from the core components of the MOMS Partnership intervention. The first flexible program component was the content and distribution of the goals and needs assessment itself. Goals and needs assessment data also identified the types of incentives the program should use, given reported basic needs in the community that were significantly associated with the burden of maternal depression, including diaper need and food insecurity; this indicates that the Diaper Bank may be an essential partner and that diapers and food stamps may be useful incentives for participants. Program timings and languages were also identified in each needs assessment and will be addressed in the ultimate program design. The methodology of CBPR 2.0 thus asserts that an appropriate community-partnered and evidence-based approach to scaling interventions involves program differentiation, identifying both core and flexible components, with community responsiveness as an essential element of replicating a program with fidelity.
