480 research outputs found
Feasibility study and performance evaluation of a GPFS based storage area accessible simultaneously through ownCloud and StoRM
In this work, we demonstrate the feasibility of a GPFS-based storage area accessible through both ownCloud and StoRM, a solution that allows Virtual Organizations (VOs) with a storage manager service to access their files directly from a personal computer. Furthermore, in order to study its performance under load, we set up two load-balanced ownCloud web servers and a StoRM front-end, back-end, and GridFTP server. A Python script was also developed to simulate users performing several file transfers with both ownCloud and StoRM. We employed the average file transfer time and efficiency as figures of merit, measured as a function of the number of parallel running scripts. We observed that the average time increases almost linearly with the number of parallel scripts, independently of the software used.
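A minimal sketch of such a load-generation client is shown below (illustrative only: the endpoint URL, credentials, file size, and the use of HTTP PUT against ownCloud's WebDAV interface are assumptions, not the authors' actual script):

```python
# Sketch of a load-generation client: N parallel workers each perform several
# uploads and report their average transfer time. All endpoint details are
# hypothetical placeholders.
import time
import concurrent.futures
import requests

ENDPOINT = "https://owncloud.example.org/remote.php/webdav/test.bin"  # hypothetical
PAYLOAD = b"\0" * (10 * 1024 * 1024)  # 10 MiB dummy file
TRANSFERS_PER_CLIENT = 20

def run_client(auth):
    """Perform several uploads and return the average transfer time."""
    times = []
    for _ in range(TRANSFERS_PER_CLIENT):
        start = time.monotonic()
        requests.put(ENDPOINT, data=PAYLOAD, auth=auth)
        times.append(time.monotonic() - start)
    return sum(times) / len(times)

def measure(n_parallel, auth=("user", "password")):
    """Average transfer time as a function of the number of parallel clients."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_parallel) as pool:
        results = list(pool.map(lambda _: run_client(auth), range(n_parallel)))
    return sum(results) / len(results)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(n, measure(n))
```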
Improved Cloud resource allocation: how INDIGO-Datacloud is overcoming the current limitations in Cloud schedulers
Work presented at: 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP2016), 10–14 October 2016, San Francisco.
Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing fair usage and partitioning of the resources among users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources available (e.g. OpenStack) or it will be trivially queued in order of arrival (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. The INDIGO-DataCloud project has identified these strategies as too simplistic for accommodating scientific workloads efficiently, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolution. The authors want to acknowledge the support of the INDIGO-DataCloud project (grant number 653549), funded by the European Commission's Horizon 2020 Framework Programme.
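The static-quota limitation described above can be captured in a toy model (a sketch for illustration; project names and quota numbers are invented, and real OpenStack scheduling is more involved):

```python
# Toy model of immediate, first-come-first-served allocation with static
# per-project quotas: a request fails once the project's quota is exhausted,
# even if resources allocated to other projects sit completely idle.
quotas = {"project_a": 4, "project_b": 4}   # static partitioning
in_use = {"project_a": 0, "project_b": 0}

def request_instance(project):
    """Immediate allocation: succeed or fail now, no queueing, no preemption."""
    if in_use[project] >= quotas[project]:
        return False  # fails despite idle capacity under project_b
    in_use[project] += 1
    return True

# project_a exhausts its quota while project_b remains idle:
for _ in range(5):
    print(request_instance("project_a"))  # the fifth request fails
```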
Enteral Multiple Micronutrient Supplementation in Preterm and Low Birth Weight Infants: A Systematic Review and Meta-analysis
OBJECTIVES
To assess effects of supplementation with 3 or more micronutrients (multiple micronutrients; MMN) compared to no MMN in human milk-fed preterm and low birth weight (LBW) infants.
RESULTS
Data on a subgroup of 414 preterm or LBW infants from 2 randomized controlled trials (4 reports) were included. The certainty of evidence ranged from low to very low. For growth outcomes in the MMN group compared to the non-MMN group, there was a small increase in weight-for-age (2 trials, 383 participants) and height-for-age z-scores (2 trials, 372 participants); a small decrease in wasting (2 trials, 398 participants); a small increase in stunting (2 trials, 399 participants); and an increase in underweight (2 trials, 396 participants). For neurodevelopment outcomes at 78 weeks, we found small increases in Bayley Scales of Infant Development, Version III (BSID-III) scores (cognition, receptive language, expressive language, fine motor, gross motor) in the MMN group compared to the non-MMN group (1 trial, 27 participants). There were no studies examining dose or timing of supplementation.
CONCLUSIONS
Evidence is insufficient to determine whether enteral MMN supplementation in preterm or LBW infants who are fed mother's own milk is associated with benefit or harm. More trials are needed to generate evidence on mortality, morbidity, growth, and neurodevelopment.
Multidifferential study of identified charged hadron distributions in Z-tagged jets in proton-proton collisions at √s = 13 TeV
Jet fragmentation functions are measured for the first time in proton-proton collisions for charged pions, kaons, and protons within jets recoiling against a Z boson. The charged-hadron distributions are studied longitudinally and transversely to the jet direction for jets with transverse momentum above 20 GeV in the pseudorapidity range 2.5 < η < 4. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb⁻¹. Triple-differential distributions as a function of the hadron longitudinal momentum fraction, hadron transverse momentum, and jet transverse momentum are also measured for the first time. This helps constrain transverse-momentum-dependent fragmentation functions. Differences in the shapes and magnitudes of the measured distributions for the different hadron species provide insights into the hadronization process for jets predominantly initiated by light quarks.
Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages)
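For reference, the two variables behind the triple-differential distributions above are conventionally defined as follows (standard jet-fragmentation definitions, assumed here since the abstract's formulas did not survive extraction):

```latex
% z:   longitudinal momentum fraction of the hadron along the jet axis.
% j_T: hadron momentum component transverse to the jet axis.
\[
  z = \frac{\vec{p}_{h}\cdot\vec{p}_{\mathrm{jet}}}{|\vec{p}_{\mathrm{jet}}|^{2}},
  \qquad
  j_{T} = \frac{|\vec{p}_{h}\times\vec{p}_{\mathrm{jet}}|}{|\vec{p}_{\mathrm{jet}}|}.
\]
```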
Study of the decay
The decay is studied in proton-proton collisions at a center-of-mass energy of 13 TeV using data corresponding to an integrated luminosity of 5 fb⁻¹ collected by the LHCb experiment. In the system, the state observed at the BaBar and Belle experiments is resolved into two narrower states, whose masses and widths are measured; the first uncertainties are statistical and the second systematic. The results are consistent with a previous LHCb measurement using a prompt sample. Evidence of a new state is found, and its mass and width are measured. In addition, evidence of a new decay mode is found. The relative branching fraction of this mode with respect to the studied decay is measured, where the first uncertainty is statistical, the second systematic, and the third originates from the branching fractions of charm hadron decays.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages)
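The structure of the quoted uncertainties follows from how such relative branching fractions are typically extracted; schematically (standard practice, assumed here, as the abstract does not give the formula):

```latex
% Relative branching fraction from efficiency-corrected signal yields.
% N: fitted yields; epsilon: total selection efficiencies. The third
% uncertainty quoted in the abstract enters through the known branching
% fractions of the intermediate charm-hadron decays.
\[
  \frac{\mathcal{B}(\text{signal})}{\mathcal{B}(\text{reference})}
  = \frac{N_{\text{signal}}}{N_{\text{reference}}}
    \times \frac{\varepsilon_{\text{reference}}}{\varepsilon_{\text{signal}}}
    \times \prod_i \frac{\mathcal{B}_i^{\text{ref.\ chain}}}{\mathcal{B}_i^{\text{sig.\ chain}}}.
\]
```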
Measurement of the ratios of branching fractions R(D*) and R(D0)
The ratios of branching fractions R(D*) and R(D0) are measured, assuming isospin symmetry, using a sample of proton-proton collision data corresponding to 3.0 fb⁻¹ of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the muonic decay mode τ⁻ → μ⁻ν̄μντ. The measured values are
R(D*) = 0.281 ± 0.018 ± 0.024 and R(D0) = 0.441 ± 0.060 ± 0.066, where the first uncertainty is statistical and the second is systematic. The correlation between these measurements is −0.43. The results are consistent with the current average of these quantities and are at a combined 1.9 standard deviations from the predictions based on lepton flavor universality in the Standard Model.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages)
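For reference, these ratios have the standard definitions below (the conventional form for this class of measurements; the explicit formulas were lost in extraction):

```latex
% Lepton-flavor-universality ratios: the tau mode normalized to the muon mode.
\[
  \mathcal{R}(D^{*}) = \frac{\mathcal{B}(B \to D^{*}\tau\nu_{\tau})}
                            {\mathcal{B}(B \to D^{*}\mu\nu_{\mu})},
  \qquad
  \mathcal{R}(D^{0}) = \frac{\mathcal{B}(B \to D^{0}\tau\nu_{\tau})}
                            {\mathcal{B}(B \to D^{0}\mu\nu_{\mu})}.
\]
```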
INDIGO-DataCloud: A data and computing platform to facilitate seamless access to e-infrastructures
This paper describes the achievements of the H2020 project INDIGO-DataCloud. The project has provided e-infrastructures with tools, applications and cloud framework enhancements to manage the demanding requirements of scientific communities, either locally or through enhanced interfaces. The middleware developed allows hybrid resources to be federated and makes it easy to write, port and run scientific applications in the cloud. In particular, we have extended existing PaaS (Platform as a Service) solutions, allowing public and private e-infrastructures, including those provided by EGI, EUDAT, and Helix Nebula, to integrate their existing services and make them available through AAI services compliant with GEANT interfederation policies, thus guaranteeing transparency and trust in the provisioning of such services. Our middleware facilitates the execution of applications using containers on Cloud and Grid based infrastructures, as well as on HPC clusters. Our developments are freely downloadable as open source components, and are already being integrated into many scientific applications.
A game-theoretical analysis of Grid job scheduling
Computational Grid is a well-established platform that
gives an assurance to provide a vast range of heterogeneous
resources for high performance computing. To grasp the full advantage of Grid systems, efficient and effective resource management and Grid job scheduling are key requirements. Particularly in resourcemanagement and job scheduling, conflictsmay arise as Grid resources are usually owned by different organizations/sites,which have different and often contradictory goals. For instance some site prefers first to execute its own local jobs over the Grid jobs, in order to minimize the completion time of the former jobs. Nevertheless, for a Grid to properly work, sites should be incentivated to collaborate for the faster execution of Grid jobs. Each site who accepts to execute the job, gets an incentive amounting to the length of the job. A crucial objective is to analyze potential scenarios where selfish or
cooperative behaviors of organizations impact heavily
on global Grid efficiency. In this thesis, we study the
job scheduling problem in computational Grid and analyze
it using game theoretic approaches. We consider a hierarchical job scheduling model that is formulated as a sequential non-cooperative job scheduling game among Grid sites, which may have selfish concerns. We exploit the concept of Nash equilibrium, a situation in which no player can gain any profit by unilaterally changing its strategy.
In the general case, in which there are local jobs besides Grid jobs, we conjecture that the selfish strategy profile, in which every site chooses to execute a local job if there is any in its queue and otherwise bids for executing a Grid job by offering its earliest estimated response time, is a Nash equilibrium in the presence of heavy load. The earliest estimated response time is an estimate of the time interval between the job submission and the beginning of the job execution. Next, we restrict to a special sub-case in which there are no local jobs. We give a formal proof that the strategy profile where every site bids its earliest estimated response time upon arrival of a new Grid job is a Nash equilibrium. In both cases we make two main assumptions about our model. First, we require that every incoming job (either Grid or local) needs the same amount of time to be executed. Second, we assume that the Grid is heavily loaded; namely, there must be enough incoming jobs that no site that is free and willing to execute a job remains inactive. We also investigate the above two strategies after relaxing the heavy-load condition. For the general case, in the absence of the heavy-load condition, we have interestingly found a counter-example showing that the selfish strategy profile, in which a player prefers to execute a local job over a Grid job, is not a Nash equilibrium. This is counterintuitive: one would expect that executing a local job at a given site s forces the other sites to accept the execution of Grid jobs, which in turn decreases the probability that those sites can accept future Grid jobs, to the consequent advantage of site s. The counter-example contradicts this expectation. We correlated and complemented our theoretical results by performing an experimental analysis based on two kinds of approaches: simulation and exhaustive search.
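The bidding mechanism analyzed in the thesis can be illustrated with a small simulation (a sketch under the stated assumptions of unit-length jobs; class and function names and the tie-breaking rule are invented for illustration):

```python
# Sketch of the Grid-bidding game under the thesis's assumptions: unit-length
# jobs, and each site bids its earliest estimated response time, i.e. the time
# until its queue drains.

class Site:
    def __init__(self, name):
        self.name = name
        self.local_queue = 0   # pending local jobs (executed first, selfishly)
        self.grid_queue = 0    # Grid jobs this site has accepted

    def bid(self):
        """Earliest estimated response time: with unit-length jobs, simply
        the number of jobs already queued at this site."""
        return self.local_queue + self.grid_queue

def assign_grid_job(sites):
    """Each incoming Grid job goes to the lowest bidder, which receives an
    incentive equal to the job length (here, one unit)."""
    winner = min(sites, key=lambda s: s.bid())
    winner.grid_queue += 1
    return winner

sites = [Site("A"), Site("B"), Site("C")]
sites[0].local_queue = 2  # site A selfishly keeps local work in front
for _ in range(4):
    winner = assign_grid_job(sites)
    print(f"Grid job assigned to site {winner.name}")
```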
Cbpr 2.0 - A Community-Based Participatory Research Approach To Program Scaling In Maternal Mental Health: Responding To Community Risk Factors For Maternal Depression While Replicating New Haven Moms To Bridgeport And New York City
"Community-Based Participatory Research 2.0" refers to a methodologic framework describing the process of scaling the New Haven MOMS Partnership, an existing participatory intervention, to new sites in Bridgeport, Connecticut and New York City.
Following a community-based participatory approach to the process of program differentiation within implementation science, informal team canvassing identified core components of the program: neighborhood hub sites, community mental health workers, and utilization of incentives. Community needs assessments were prepared, distributed, and analyzed to identify flexible program components. Assessments were tailored to specific communities and included the Center for Epidemiologic Studies-Depression Scale (CES-D), the Oslo-3 perceived social support scale, the Chicago Neighborhood Social Cohesion Questions (CNQ), and multiple demographic and site-specific questions.
For respondents in both Bridgeport (n = 135) and New York City (n = 173), the burden of depression among parenting women was significant: 38.1% of respondents in Bridgeport and 57.1% in New York City were at risk for major depression, and overall symptoms were more severe in New York. In both sites, women reported an unmet desire for mental healthcare as well as specific barriers to accessing traditional treatment, and may thus benefit from the core components of the MOMS Partnership intervention. The first flexible program component was the content and distribution of the goals and needs assessment itself. The assessment data also identified which incentives the program should use: the basic needs significantly associated with the burden of maternal depression included diaper need and food insecurity, indicating that the Diaper Bank may be an essential partner and that diapers and food stamps may be useful incentives for participants. Program timings and languages were also identified in each needs assessment and will be addressed in the ultimate program design.
The methodology of CBPR 2.0 thus asserts that an appropriate community-partnered and evidence-based approach to scaling interventions involves program differentiation: identifying core components as well as flexible components, where community responsiveness should be seen as essential to replicating a program with fidelity.