
    Abnormally high content of free glucosamine residues identified in a preparation of commercially available porcine intestinal heparan sulfate

    Heparan sulfate (HS) polysaccharides are ubiquitous in animal tissues as components of proteoglycans, and they participate in many important biological processes. HS carbohydrate chains are complex and can contain rare structural components such as N-unsubstituted glucosamine (GlcN). Commercially available HS preparations have been invaluable in many types of research activities. In the course of preparing microarrays to include probes derived from HS oligosaccharides, we found an unusually high content of GlcN residues in a recently purchased batch of porcine intestinal mucosal HS. Composition and sequence analysis by mass spectrometry of the oligosaccharides obtained after heparin lyase III digestion of the polysaccharide indicated two and three GlcN in the tetrasaccharide and hexasaccharide fractions, respectively. ¹H NMR of the intact polysaccharide showed that this unusual batch differed strikingly from other HS preparations obtained from bovine kidney and porcine intestine. The very high content of GlcN (30%) and low content of GlcNAc (4.2%) determined by disaccharide composition analysis indicated that N-deacetylation and/or N-desulfation may have taken place. HS is widely used by the scientific community to investigate HS structures and activities. Great care has to be taken in drawing conclusions about structural features of HS and the specificity of HS-protein interactions when commercial HS is used without further analysis. Pending the availability of a validated commercial HS reference preparation, our data may be useful to members of the scientific community who have used the present preparation in their studies.

    The alpha-dependence of transition frequencies for some ions of Ti, Mn, Na, C, and O, and the search for variation of the fine structure constant

    We use the relativistic Hartree-Fock method, many-body perturbation theory, and the configuration-interaction method to calculate the dependence of atomic transition frequencies on the fine structure constant, alpha. The results of these calculations will be used in the search for variation of the fine structure constant in quasar absorption spectra. Comment: 4 pages, 5 tables
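    As a brief illustration of how such calculated sensitivities are typically applied (a sketch of the standard parametrization used in this line of work, not a formula quoted from the abstract), each transition frequency is expanded around its laboratory value in a relativistic correction parameter:

        \omega = \omega_0 + q\,x, \qquad x = \left(\frac{\alpha}{\alpha_0}\right)^{2} - 1,

    where \omega_0 is the laboratory frequency, \alpha_0 the present-day value of the fine structure constant, and q the sensitivity coefficient obtained from the atomic-structure calculations; comparing frequencies measured in quasar absorption spectra against this relation then constrains \Delta\alpha/\alpha.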

    An Upper Limit on Omega_matter Using Lensed Arcs

    We use current observations on the number statistics of gravitationally lensed optical arcs towards galaxy clusters to derive an upper limit on the cosmological mass density of the Universe. The gravitational lensing statistics due to foreground clusters combine properties of both cluster evolution, which is sensitive to the matter density, and volume change, which is sensitive to the cosmological constant. The uncertainties associated with the predicted number of lensing events, however, currently do not allow one to distinguish between flat and open cosmological models with and without a cosmological constant. Still, after accounting for known errors, and assuming that clusters in general have dark matter core radii of order ~35 h^-1 kpc, we find that the cosmological mass density, Omega_m, is less than 0.56 at the 95% confidence level. Such a dark matter core radius is consistent with cluster potentials determined recently by detailed numerical inversions of strong and weak lensing imaging data. If no core radius is present, the upper limit on Omega_m increases to 0.62 (95% confidence level). The estimated upper limit on Omega_m is consistent with various cosmological probes that suggest a low matter density for the Universe. Comment: 6 pages, 3 figures. Accepted version (ApJ, in press)
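    Schematically, the statistic combines the two sensitivities mentioned above (a generic sketch of the arc-counting integral, with the mass function and arc cross-section left abstract rather than taken from the paper):

        N_{\rm arcs} \simeq \int dz\,\frac{dV}{dz}(z;\Omega_m,\Omega_\Lambda) \int dM\,\frac{dn}{dM}(M,z;\Omega_m)\,\sigma_{\rm arc}(M,z),

    where dV/dz is the comoving volume element (driven largely by the cosmological constant), dn/dM the evolving cluster mass function (driven largely by Omega_m), and \sigma_{\rm arc} the cross-section for producing detectable arcs, which depends on assumptions such as the dark matter core radius.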

    Physics Analysis Expert PAX: First Applications

    PAX (Physics Analysis Expert) is a novel, C++ based toolkit designed to assist teams in particle physics data analysis. At the core of PAX are event interpretation containers, holding relevant information about, and possible interpretations of, a physics event. Providing this new level of abstraction beyond the results of the detector reconstruction programs, PAX facilitates the buildup and use of modern analysis factories. Class structure and user command syntax of PAX are set up to support expert teams as well as newcomers in preparing for the challenges expected to arise in the data analysis at future hadron colliders. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 7 pages, LaTeX, 10 eps figures. PSN THLT00
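    To make the idea of an event interpretation container concrete, here is a minimal sketch (a language-agnostic illustration written in Python; the class and method names are hypothetical and do not reproduce PAX's actual C++ interface): an event holds its reconstructed objects together with several named, possibly competing interpretations of them.

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    """A reconstructed physics object (placeholder type for the sketch)."""
    kind: str      # e.g. "muon", "jet"
    pt: float      # transverse momentum
    eta: float
    phi: float

@dataclass
class Interpretation:
    """One possible reading of the event, e.g. a ttbar or W+jets hypothesis."""
    name: str
    particles: list = field(default_factory=list)
    chi2: float = float("inf")   # quality measure of this hypothesis

@dataclass
class EventContainer:
    """Holds detector-reconstruction output plus all candidate interpretations."""
    reconstructed: list = field(default_factory=list)
    interpretations: dict = field(default_factory=dict)

    def add_interpretation(self, interp: Interpretation) -> None:
        self.interpretations[interp.name] = interp

    def best_interpretation(self) -> Interpretation:
        # Choose the hypothesis with the smallest chi2 among the candidates.
        return min(self.interpretations.values(), key=lambda i: i.chi2)
```

    An analysis "factory" in this picture is then simply code that fills such containers for every event and selects among the stored interpretations downstream.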

    Dynamic extensions of batch systems with cloud resources

    Compute clusters use Portable Batch Systems (PBS) to distribute workload among individual cluster machines. To extend standard batch systems to Cloud infrastructures, a new service monitors the number of queued jobs and keeps track of the price of available resources. This meta-scheduler dynamically adapts the number of Cloud worker nodes according to the requirement profile. Two different worker node topologies are presented and tested on the Amazon EC2 Cloud service.
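    A minimal sketch of such a meta-scheduler loop, assuming hypothetical helper functions for the batch-system queue, the cloud price, and node management (none of these names come from the paper):

```python
import time

# Placeholder helpers: a real deployment would back these with the batch
# system's CLI/API and the cloud provider's pricing and instance APIs.
def queued_jobs() -> int: return 0
def running_cloud_nodes() -> int: return 0
def current_price_per_node_hour() -> float: return 0.05
def start_cloud_node() -> None: pass
def stop_cloud_node() -> None: pass

JOBS_PER_NODE = 4      # queued jobs one additional worker node should absorb
MAX_PRICE = 0.10       # highest acceptable price per node-hour
MAX_NODES = 50         # cap on dynamically added cloud workers

def rescale_once() -> None:
    """Adapt the number of cloud worker nodes to the current queue length."""
    wanted = min(MAX_NODES, queued_jobs() // JOBS_PER_NODE)
    running = running_cloud_nodes()
    if wanted > running and current_price_per_node_hour() <= MAX_PRICE:
        for _ in range(wanted - running):
            start_cloud_node()
    elif wanted < running:
        for _ in range(running - wanted):
            stop_cloud_node()

if __name__ == "__main__":
    while True:        # the monitoring service re-evaluates periodically
        rescale_once()
        time.sleep(60)
```

    In this sketch the price check gates only scale-up decisions; scale-down happens whenever the queue shrinks, which is one plausible policy among several.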

    Challenges of the LHC Computing Grid by the CMS experiment

    This document summarises the status of the existing grid infrastructure and functionality for the high-energy physics experiment CMS and the expertise in operation attained during the so-called “Computing, Software and Analysis Challenge” performed in 2006 (CSA06). This report is especially focused on the role of the participating computing centres in Germany located at Karlsruhe, Hamburg and Aachen.

    High performance data analysis via coordinated caches

    With the second run period of the LHC, high energy physics collaborations will have to face increasing computing infrastructure needs. Opportunistic resources are expected to absorb many computationally expensive tasks, such as Monte Carlo event simulation. This leaves dedicated HEP infrastructure with an increased load of analysis tasks that in turn will need to process an increased volume of data. In addition to storage capacities, a key factor for future computing infrastructure is therefore the input bandwidth available per core. Modern data analysis infrastructure relies on one of two paradigms: data is either kept on dedicated storage and accessed via the network, or distributed over all compute nodes and accessed locally. Dedicated storage allows data volume to grow independently of processing capacities, whereas local access allows processing capacities to scale linearly. However, with the growing data volume and processing requirements, HEP will require both of these features. To enable adequate user analyses in the future, the KIT CMS group is merging both paradigms: popular data is spread over a local disk layer on the compute nodes, while any data remains available from an arbitrarily sized background storage. This concept is implemented as a pool of distributed caches, which are loosely coordinated by a central service. A Tier 3 prototype cluster is currently being set up for performant user analyses of both local and remote data.
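    As a rough illustration of the "pool of distributed caches, loosely coordinated by a central service" idea (all names and the simple popularity heuristic below are assumptions made for the sketch, not the KIT implementation):

```python
from collections import Counter

class CacheCoordinator:
    """Central service deciding which files the node-local disk caches should hold.

    Popular files are pinned on the worker nodes' local disk layer; everything
    else is read on demand from the arbitrarily sized background storage.
    """

    def __init__(self, cache_slots_per_node: int, popularity_threshold: int = 3):
        self.slots = cache_slots_per_node
        self.threshold = popularity_threshold
        self.access_counts = Counter()   # file -> number of recent accesses
        self.node_caches = {}            # node -> set of files cached there

    def record_access(self, filename: str) -> None:
        """Jobs (or the storage layer) report file accesses to the coordinator."""
        self.access_counts[filename] += 1

    def placement_for(self, node: str) -> set:
        """Return the set of files a given node should keep on its local disks."""
        popular = [f for f, n in self.access_counts.most_common() if n >= self.threshold]
        placement = set(popular[: self.slots])
        self.node_caches[node] = placement
        return placement

    def locate(self, node: str, filename: str) -> str:
        """Resolve where a job running on `node` should read a file from."""
        if filename in self.node_caches.get(node, set()):
            return f"local:/cache/{filename}"
        return f"remote://background-storage/{filename}"
```

    The coordination is deliberately loose: nodes periodically ask for their placement and can always fall back to the background storage, so a stale or unavailable coordinator degrades performance but not correctness.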