Next generation farms at Fermilab
The current generation of UNIX farms at Fermilab is rapidly approaching the end of its useful life. The workstations were purchased in 1991-1992 and represented the most cost-effective computing available at that time. New workstations are being acquired to upgrade the UNIX farms in order to provide large amounts of computing for reconstruction of data collected during the 1996-1997 fixed-target run, as well as simulation computing for CMS, the Auger project, accelerator calculations, and other projects that require massive amounts of CPU. 4 refs., 1 fig., 2 tabs
FermiGrid - experience and future plans
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computing resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and supports movement of work (jobs, data) between the Campus Grid and national grids such as the Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE, and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab-resident CPUs, near-petabyte (including CMS) online disk pools, and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection, and data movement services. Access to these services is via standard and well-supported grid interfaces. We report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems
FermiGrid
As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities
High-sensitivity troponin I as a predictor of left ventricular dysfunction during treatment of breast cancer with cardiotoxic anticancer agents in patients with predominantly low and moderate risk of cardiotoxicity
Aim. To study the significance of monitoring high-sensitivity troponin I (hs-cTnI) for predicting anthracycline-induced left ventricular (LV) dysfunction during treatment of breast cancer in patients with moderate and low risk of cardiotoxicity (CT). Material and methods. The study involved 49 patients with breast cancer aged 50±10 years who underwent neoadjuvant or adjuvant chemotherapy, which included doxorubicin at a course dose of 60 mg/m2 and an average cumulative dose of 251±60 mg/m2. The level of hs-cTnI was determined by an ultrasensitive method before the start of chemotherapy, after each course of anthracyclines, and in 18 patients before the administration of anthracyclines. An hs-cTnI level >0.017 ng/ml was considered elevated. Echocardiography was performed before the start of chemotherapy, after the end of anthracycline therapy, and every 3 months for 12 months thereafter. CT was defined as a decrease in LV ejection fraction (EF) by ≥10% to <53%. Results. CT risk before chemotherapy was considered low or moderate in 96% of patients. An increase in hs-cTnI was detected at least once in 56.8% of patients: before chemotherapy in 13.5%; after the 1st and 2nd courses of anthracycline therapy in 13.9%; and after the 3rd, 4th, 5th, and 6th courses in 44%, 62%, 71%, and 66% of patients, respectively. The levels of hs-cTnI before and after administration of anthracyclines did not differ significantly. LV dysfunction developed in 16.3% of patients. The prognostic characteristics of an increase in hs-cTnI at any time during chemotherapy for a decrease in LV EF were as follows: sensitivity 87.5%, specificity 50%, positive predictive value 28%, negative predictive value 94.7%. The closest relationship was noted between CT and the hs-cTnI value before the start of chemotherapy (β=0.45, p=0.005) and after the 3rd course of anthracycline therapy (β=0.56, p=0.002). Conclusion.
An increase in hs-cTnI level before and during anthracycline therapy in patients with a low risk of cardiotoxicity has prognostic value for the development of left ventricular dysfunction. Hs-cTnI assessment should be performed before the start of therapy, and then from the 3rd course of anthracycline therapy onward in all patients, regardless of the risk of cardiotoxicity
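The predictive values above follow from Bayes' rule applied to the reported sensitivity, specificity, and event rate. A back-of-envelope cross-check (a sketch, not the authors' calculation; the published PPV/NPV were derived from the actual patient counts, so exact agreement is not expected):

```shell
# Back-of-envelope PPV/NPV from the reported sensitivity (87.5%),
# specificity (50%) and LV-dysfunction rate (16.3%).
awk 'BEGIN {
  sens = 0.875; spec = 0.50; prev = 0.163
  ppv = sens*prev / (sens*prev + (1-spec)*(1-prev))
  npv = spec*(1-prev) / ((1-sens)*prev + spec*(1-prev))
  printf "PPV = %.1f%%, NPV = %.1f%%\n", 100*ppv, 100*npv
}'
```

This yields roughly 25% and 95%, close to the reported 28% and 94.7%; the small gap comes from rounding the inputs.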
Group Report 4: Iron dynamics in terrestrial ecosystems in the Amur River basin
Our focus is to understand the spatial and temporal patterns and processes of biogeochemical iron transport from terrestrial ecosystems to rivers, with special attention to human impacts such as forest fire, land-use change, and agricultural activities. Field monitoring of iron dynamics, covering stream water, soil water, and groundwater, has been conducted from the upstream to the downstream basin of the Amur river in northeastern China and far-eastern Russia, in close collaboration with Chinese and Russian scientists and institutes. We found that the major source of dissolved iron from the terrestrial system to the river was natural wetland, which largely covers the middle to lower region of the Amur river basin. In the upstream forested basin, dissolved iron in soil was mainly transported with dissolved organic carbon (DOC) rather than as Fe(II) and Fe(III). The swamp forest and riparian zone near the stream channel were important source areas of iron because of the wet and anaerobic conditions, which increase the DOC and dissolved iron concentrations in soil and groundwater. Forest fire, one of the major human disturbances in the upstream mountains, changes the quantity and quality of the organic matter in the surface soil, resulting in a decrease of iron transport from the burned forest to the stream. The downstream areas with gentle topography are largely covered by natural wetland, especially surrounding the middle and lower parts of the Amur river. The spatial distribution of iron concentration in stream water indicated that stream water in the lower-elevation, gently sloping watersheds contained much higher iron concentrations than in the upper, steep basins. DOC was an important carrier of dissolved iron in soil water and stream water in these lower wetlands, as in the upstream region. The land-use change from wetland to farmland (paddy field and cropland) caused significant changes in soil chemistry, redox potential (Eh), and soil water quality.
Drainage under the crop production system increases Eh (indicating a change from anaerobic to aerobic conditions in the soil), resulting in a decrease of dissolved iron in soil water following the land-use change. The development of the irrigation system has significantly lowered the groundwater table over the last several decades, possibly contributing to the decrease in iron concentration in river water in the Sanjiang plain. Irrigation with groundwater high in dissolved iron resulted in the accumulation of amorphous iron oxides in the surface soil of the paddy fields, where the iron was retained and not mobile. Our results indicate that the natural anaerobic environment in wetlands is important for iron mobilization from the terrestrial system to the Amur river, and that human impacts such as forest fire and land reclamation tend to render this iron immobile, mainly through oxidation at the ground surface
Measurement of the mass difference m(D_s^+) − m(D^+) at CDF II
We present a measurement of the mass difference m(D_s^+) − m(D^+), where both the D_s^+ and D^+ are reconstructed in the φπ^+ decay channel. This measurement uses 11.6 pb^−1 of data collected by CDF II using the new displaced-track trigger. The mass difference is found to be m(D_s^+) − m(D^+) = 99.41 ± 0.38 (stat) ± 0.21 (syst) MeV/c²
Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data
A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker-node local disks and the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites.
Files are published in a central place and soon afterwards are available on demand throughout the grid, cached locally at each site behind a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements
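The alien-cache scheme described above is driven entirely by CVMFS client configuration. A minimal sketch of what such a client configuration might look like (the repository name, cache path, and server URL below are illustrative placeholders, not the actual deployment values):

```shell
# /etc/cvmfs/config.d/aux.example.org.conf  (hypothetical repository)
# Point the CVMFS client cache at a shared high-bandwidth filesystem
# (e.g. an HDFS FUSE or Lustre mount) instead of worker-node local disk.
CVMFS_ALIEN_CACHE=/mnt/shared-fs/cvmfs-cache
CVMFS_SHARED_CACHE=no        # the alien cache requires the shared cache to be off
CVMFS_QUOTA_LIMIT=-1         # cache cleanup is delegated to the shared filesystem
# Fetch from a high-bandwidth data server acting as Stratum 1,
# bypassing the site squids used for the software distribution case.
CVMFS_HTTP_PROXY=DIRECT
CVMFS_SERVER_URL=http://data-server.example.org:8000/cvmfs/@fqrn@
```

With this in place, no new client software is needed on the worker nodes; the standard CVMFS client simply writes its cache into the shared filesystem that all nodes at the site mount.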
High Throughput WAN Data Transfer with Hadoop-based Storage
The Hadoop distributed file system (HDFS) has become more popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments, which must manage, share, and process datasets at the petabyte scale in a highly distributed grid computing environment. In this paper, we present our experience with high-throughput WAN data transfer using an HDFS-based Storage Element. Two protocols, GridFTP and Fast Data Transfer (FDT), are used to characterize the network performance of WAN data transfer
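At these dataset sizes, sustained aggregate throughput rather than single-stream rate is the figure of merit. A back-of-envelope sketch (the dataset size and WAN rate below are illustrative assumptions, not measurements from the paper):

```shell
# How long does it take to move a 1 PB dataset at a sustained
# aggregate WAN throughput of 20 Gbit/s? (both numbers assumed)
awk 'BEGIN {
  bytes = 1e15                 # 1 PB
  gbps  = 20                   # assumed sustained aggregate throughput
  secs  = bytes * 8 / (gbps * 1e9)
  printf "%.1f days\n", secs / 86400
}'
```

Even at such a rate a petabyte takes several days to move, which is why tools like GridFTP and FDT are tuned for many parallel streams and sustained aggregate throughput rather than single-stream performance.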
VOMS/VOMRS utilization patterns and convergence plan
The Grid community uses two well-established registration services that allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grids for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing the membership information of a subject within a VO. VOMS allows users to be partitioned into groups and assigned roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionality present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports management of multiple grid certificates, as well as handling of users' requests for group and role assignments and membership status. VOMRS is capable of interfacing to local systems holding personnel information (e.g. the CERN Human Resources Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of this information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US ATLAS, and other stakeholders worldwide. In this paper, we analyze the features in use by major experiments and the use cases for registration addressed by a mature single solution
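The attributes VOMS issues are Fully Qualified Attribute Names (FQANs) embedded in the user's proxy certificate. A sketch of how a user would obtain and inspect such attributes (the VO name "examplevo", the group "/examplevo/analysis", and the role "production" are made-up placeholders):

```shell
# Request a VOMS proxy carrying a group membership and a role.
voms-proxy-init -voms examplevo:/examplevo/analysis/Role=production

# Inspect the FQANs the VOMS server embedded in the proxy; grid
# services use these strings to drive authorization decisions, e.g.:
#   /examplevo/analysis/Role=production/Capability=NULL
voms-proxy-info -fqan
```

This attribute issuance step is independent of how the user was registered, which is what makes a VOMRS-to-VOMS-Admin convergence on the registration side transparent to running jobs.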