33 research outputs found

    Grid site testing for ATLAS with HammerCloud

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goal of enabling virtual organisation (VO) and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been continuously improved to support the addition of new test workflows. These new workflows include, for example, tests of the ATLAS nightly build system, the ATLAS Monte Carlo production system, the XRootD federation (FAX) and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.
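
    The abstract above describes an architecture of lightweight tests submitted across many sites. As a hedged illustration only, the Python sketch below models that idea; the class, field and function names are invented for the example and are not the actual HammerCloud API.

    # Illustrative sketch: a HammerCloud-style functional test that is periodically
    # resubmitted to every site. All names here are hypothetical, not the real
    # HammerCloud code or API.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class FunctionalTest:
        name: str             # e.g. "nightly-build-check" (invented example)
        payload: str          # lightweight job script executed at the site
        period_minutes: int   # how often the test is resubmitted

    def submit_to_all_sites(test: FunctionalTest, sites: List[str],
                            submit: Callable[[str, str], str]) -> Dict[str, str]:
        """Submit one lightweight test job per site; return site -> job id."""
        return {site: submit(site, test.payload) for site in sites}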

    Effects of alirocumab on types of myocardial infarction: insights from the ODYSSEY OUTCOMES trial

    Aims: The third Universal Definition of Myocardial Infarction (MI) Task Force classified MIs into five types: Type 1, spontaneous; Type 2, related to oxygen supply/demand imbalance; Type 3, fatal without ascertainment of cardiac biomarkers; Type 4, related to percutaneous coronary intervention; and Type 5, related to coronary artery bypass surgery. Low-density lipoprotein cholesterol (LDL-C) reduction with statins and proprotein convertase subtilisin–kexin Type 9 (PCSK9) inhibitors reduces risk of MI, but less is known about effects on types of MI. ODYSSEY OUTCOMES compared the PCSK9 inhibitor alirocumab with placebo in 18 924 patients with recent acute coronary syndrome (ACS) and elevated LDL-C (≥1.8 mmol/L) despite intensive statin therapy. In a pre-specified analysis, we assessed the effects of alirocumab on types of MI. Methods and results: Median follow-up was 2.8 years. Myocardial infarction types were prospectively adjudicated and classified. Of 1860 total MIs, 1223 (65.8%) were adjudicated as Type 1, 386 (20.8%) as Type 2, and 244 (13.1%) as Type 4. Few events were Type 3 (n = 2) or Type 5 (n = 5). Alirocumab reduced first MIs [hazard ratio (HR) 0.85, 95% confidence interval (CI) 0.77–0.95; P = 0.003], with reductions in both Type 1 (HR 0.87, 95% CI 0.77–0.99; P = 0.032) and Type 2 (HR 0.77, 95% CI 0.61–0.97; P = 0.025), but not Type 4 MI. Conclusion: After ACS, alirocumab added to intensive statin therapy had a favourable impact on Type 1 and Type 2 MIs. The data indicate for the first time that a lipid-lowering therapy can attenuate the risk of Type 2 MI. Low-density lipoprotein cholesterol reduction below levels achievable with statins is an effective preventive strategy for both MI types. For the complete list of authors see http://dx.doi.org/10.1093/eurheartj/ehz299

    Effect of alirocumab on mortality after acute coronary syndromes. An analysis of the ODYSSEY OUTCOMES randomized clinical trial

    Background: Previous trials of PCSK9 (proprotein convertase subtilisin-kexin type 9) inhibitors demonstrated reductions in major adverse cardiovascular events, but not death. We assessed the effects of alirocumab on death after index acute coronary syndrome. Methods: ODYSSEY OUTCOMES (Evaluation of Cardiovascular Outcomes After an Acute Coronary Syndrome During Treatment With Alirocumab) was a double-blind, randomized comparison of alirocumab or placebo in 18 924 patients who had an ACS 1 to 12 months previously and elevated atherogenic lipoproteins despite intensive statin therapy. Alirocumab dose was blindly titrated to target achieved low-density lipoprotein cholesterol (LDL-C) between 25 and 50 mg/dL. We examined the effects of treatment on all-cause death and its components, cardiovascular and noncardiovascular death, with log-rank testing. Joint semiparametric models tested associations between nonfatal cardiovascular events and cardiovascular or noncardiovascular death. Results: Median follow-up was 2.8 years. Death occurred in 334 (3.5%) and 392 (4.1%) patients, respectively, in the alirocumab and placebo groups (hazard ratio [HR], 0.85; 95% CI, 0.73 to 0.98; P=0.03, nominal P value). This resulted from nonsignificantly fewer cardiovascular (240 [2.5%] vs 271 [2.9%]; HR, 0.88; 95% CI, 0.74 to 1.05; P=0.15) and noncardiovascular (94 [1.0%] vs 121 [1.3%]; HR, 0.77; 95% CI, 0.59 to 1.01; P=0.06) deaths with alirocumab. In a prespecified analysis of 8242 patients eligible for ≥3 years follow-up, alirocumab reduced death (HR, 0.78; 95% CI, 0.65 to 0.94; P=0.01). Patients with nonfatal cardiovascular events were at increased risk for cardiovascular and noncardiovascular deaths (P<0.0001 for the associations). Alirocumab reduced total nonfatal cardiovascular events (P<0.001) and thereby may have attenuated the number of cardiovascular and noncardiovascular deaths. A post hoc analysis found that, compared to patients with lower LDL-C, patients with baseline LDL-C ≥100 mg/dL (2.59 mmol/L) had a greater absolute risk of death and a larger mortality benefit from alirocumab (HR, 0.71; 95% CI, 0.56 to 0.90; P interaction=0.007). In the alirocumab group, all-cause death declined with achieved LDL-C at 4 months of treatment, to a level of approximately 30 mg/dL (adjusted P=0.017 for linear trend). Conclusions: Alirocumab added to intensive statin therapy has the potential to reduce death after acute coronary syndrome, particularly if treatment is maintained for ≥3 years, if baseline LDL-C is ≥100 mg/dL, or if achieved LDL-C is low. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT01663402
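
    As a hedged illustration of the survival-analysis machinery named in the abstract (log-rank testing and Cox hazard ratios), the sketch below uses the open-source Python lifelines package on an invented toy data set; the numbers and column names are not the trial data.

    # Minimal sketch of a log-rank test and a Cox hazard ratio, the kind of
    # analysis named in the abstract. The data frame is an invented toy example,
    # not the ODYSSEY OUTCOMES data.
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import logrank_test

    df = pd.DataFrame({
        "years": [2.1, 2.8, 1.5, 2.6, 0.9, 2.8, 1.2, 2.7],  # follow-up time
        "death": [0, 0, 1, 1, 1, 0, 1, 0],                   # 1 = event observed
        "alirocumab": [1, 1, 1, 1, 0, 0, 0, 0],              # 1 = treatment arm
    })

    # Log-rank comparison of the two arms
    a, b = df[df.alirocumab == 1], df[df.alirocumab == 0]
    lr = logrank_test(a.years, b.years,
                      event_observed_A=a.death, event_observed_B=b.death)
    print("log-rank p =", lr.p_value)

    # Cox model: exp(coef) for 'alirocumab' estimates the hazard ratio
    cph = CoxPHFitter().fit(df, duration_col="years", event_col="death")
    print(cph.summary[["exp(coef)", "p"]])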

    Improving ATLAS grid site reliability with functional tests using HammerCloud

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability to successfully execute user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user and production jobs from being sent to problematic sites.
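
    As a hypothetical sketch of the automatic exclusion policy described above, the function below flags a site as excluded when its recent functional tests fall below a success threshold; the threshold and names are invented for illustration and are not the actual PanDA or HammerCloud logic.

    # Hypothetical sketch of an auto-exclusion policy: a site whose recent functional
    # tests fall below a success threshold is excluded from brokerage until tests
    # pass again. Threshold and names are invented for illustration.
    def evaluate_site(recent_results: list, min_success_rate: float = 0.8) -> str:
        """Return 'online' or 'excluded' based on the fraction of passing tests."""
        if not recent_results:          # no recent test results: treat as failing
            return "excluded"
        success_rate = sum(recent_results) / len(recent_results)
        return "online" if success_rate >= min_success_rate else "excluded"

    # Example: 2 failures in the last 10 tests keeps the site online at an 80% threshold
    print(evaluate_site([True] * 8 + [False] * 2))     # -> online
    print(evaluate_site([True, False, False, False]))  # -> excluded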

    Grid Site Testing for ATLAS with HammerCloud

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goal of enabling VO and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been continuously improved to support the addition of new test workflows. These new workflows include, for example, tests of the ATLAS nightly build system, the ATLAS MC production system, the XRootD federation (FAX) and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.

    Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience of running HammerCloud at full scale for more than three years and the solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments' grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look at future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
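
    The abstract mentions VO plugins developed by ATLAS, CMS and LHCb. As a hedged sketch of what such a plugin boundary could look like, the Python below defines an abstract interface that each experiment would implement; the class and method names are invented and are not the actual HammerCloud plugin API.

    # Hypothetical sketch of a VO plugin interface: each experiment provides its own
    # job submission and status polling, while the core service schedules tests and
    # stores results. Names are invented for illustration.
    from abc import ABC, abstractmethod

    class VOPlugin(ABC):
        @abstractmethod
        def submit_test(self, site: str, template: str) -> str:
            """Submit a test job to a site and return a job identifier."""

        @abstractmethod
        def poll_status(self, job_id: str) -> str:
            """Return 'running', 'completed' or 'failed' for a submitted job."""

    class LHCbPlugin(VOPlugin):
        def submit_test(self, site: str, template: str) -> str:
            # A real plugin would go through the experiment's submission system
            # (e.g. DIRAC for LHCb); this is only a stub.
            return f"lhcb-{site}-0001"

        def poll_status(self, job_id: str) -> str:
            return "completed"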

    Next Generation PanDA Pilot for ATLAS and Other Experiments

    The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins such that a new PanDA Pilot user only has to implement a set of prototyped methods in the plug-in classes, and provide a script that configures and runs the experiment-specific payload. We will give an overview of the Next Generation PanDA Pilot system and will present major features and recent improvements including live user payload debugging, data access via the federated XRootD system, stage-out to alternative storage elements, support for the new ATLAS DDM system (Rucio), and an improved integration with glExec, as well as a description of the experiment-specific plug-in classes. The performance of the pilot system in processing LHC data on the OSG, LCG and NorduGrid infrastructures used by ATLAS will also be presented. We will describe plans for future development on the time scale of the next few years.
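
    As a hedged illustration of the plug-in idea described above, the sketch below shows an experiment-specific class overriding a small set of prototyped methods while a generic pilot would handle job retrieval, monitoring and stage-out; the method names are invented and do not reflect the actual PanDA Pilot interface.

    # Illustrative sketch only: an experiment plug-in that overrides prototyped
    # methods to configure and validate its payload. Names are hypothetical and
    # are not the real PanDA Pilot classes.
    class ExperimentPlugin:
        def get_payload_command(self, job: dict) -> str:
            """Return the shell command that configures and runs the payload."""
            raise NotImplementedError

        def validate_output(self, job: dict) -> bool:
            """Run experiment-specific checks on the produced output files."""
            raise NotImplementedError

    class MyExperimentPlugin(ExperimentPlugin):
        def get_payload_command(self, job: dict) -> str:
            return f"./run_payload.sh {job.get('payload_args', '')}"

        def validate_output(self, job: dict) -> bool:
            return len(job.get("output_files", [])) > 0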

    ATLAS Cloud Computing R&D project

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate various cloud resources transparently into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss what ATLAS has learned during the collaboration with leading commercial and academic cloud providers.
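
    As a hedged sketch of the elasticity idea described above, the function below decides how many cloud virtual machines to start or stop for a PanDA-style queue based on its backlog of pending jobs; the parameters and names are invented and are not the actual ATLAS cloud integration.

    # Hypothetical sketch: scale cloud worker VMs with the queue backlog. The real
    # integration goes through the PanDA workload management system; this is only
    # an illustration of the scaling decision.
    def scale_queue(pending_jobs: int, running_vms: int,
                    jobs_per_vm: int = 8, max_vms: int = 100) -> int:
        """Return how many VMs to add (positive) or remove (negative)."""
        wanted = min(max_vms, -(-pending_jobs // jobs_per_vm))  # ceiling division
        return wanted - running_vms

    # Example: 50 pending jobs and 2 running VMs -> start 5 more 8-slot VMs
    print(scale_queue(pending_jobs=50, running_vms=2))  # -> 5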

    ATLAS Cloud R&D

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate various cloud resources transparently into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss what ATLAS has learned during the collaboration with leading commercial and academic cloud providers.