    Extending acoustic in‐line pipe rheometry and friction factor modelling to low‐Reynolds‐number, non‐Newtonian slurries

    The rheology of non‐Newtonian slurries is measured in a recirculating pipe loop using an acoustic velocimetry‐pressure drop technique at very low flow rates and variable solids loadings. The technique avoids (a) settling at low solids concentration, a shortcoming of bench rheometry, by using a vertical test section, and (b) physical sampling, providing greater safety. The speed of sound in the suspensions is also modelled. In‐line and off‐line data are used to assess the suitability of several non‐Newtonian models for describing the observed flow behaviour. Measured and predicted values of the friction factor are compared, with the Herschel‐Bulkley Extended model of Madlener et al. (2009) found to be superior. The dependence of yield stress and viscosity on solids loading and particle size is investigated, showing that complexities arising from aggregation and its effect on the particle size distribution require more interpretation than the choice of rheological or friction‐factor model.
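
    The abstract does not reproduce the model equations; as a point of reference, the Herschel‐Bulkley Extended (HBE) model of Madlener et al. (2009) is commonly written as tau = tau_0 + K*gamma_dot^n + eta_inf*gamma_dot, combining a yield stress, a power‐law term, and a high‐shear Newtonian contribution. The short Python sketch below illustrates that assumed form; the parameter values are placeholders, not results from the paper.

        # Minimal sketch of the Herschel-Bulkley Extended (HBE) constitutive model,
        # here assumed in the commonly cited form
        #     tau(gamma_dot) = tau_0 + K * gamma_dot**n + eta_inf * gamma_dot.
        # All parameter values are illustrative placeholders, not data from the paper.
        import numpy as np

        def hbe_shear_stress(gamma_dot, tau_0, K, n, eta_inf):
            """Shear stress (Pa) predicted by the HBE model at shear rate gamma_dot (1/s)."""
            gamma_dot = np.asarray(gamma_dot, dtype=float)
            return tau_0 + K * gamma_dot**n + eta_inf * gamma_dot

        def hbe_apparent_viscosity(gamma_dot, tau_0, K, n, eta_inf):
            """Apparent viscosity (Pa*s), i.e. tau / gamma_dot, guarded against zero shear rate."""
            gamma_dot = np.asarray(gamma_dot, dtype=float)
            tau = hbe_shear_stress(gamma_dot, tau_0, K, n, eta_inf)
            return tau / np.maximum(gamma_dot, 1e-12)

        if __name__ == "__main__":
            # Illustrative parameters for a yield-stress slurry (placeholders only).
            params = dict(tau_0=2.0, K=0.5, n=0.6, eta_inf=0.01)
            rates = np.logspace(-1, 3, 5)  # shear rates from 0.1 to 1000 1/s
            for g, mu in zip(rates, hbe_apparent_viscosity(rates, **params)):
                print(f"gamma_dot = {g:8.2f} 1/s -> apparent viscosity = {mu:7.3f} Pa*s")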

    Derivation-bounded groups

    For some problems which are defined by combinatorial properties, good complexity bounds cannot be found because the combinatorial point of view restricts the set of solution algorithms. In this paper we present a phenomenon of this type involving the classical word problem for finitely presented groups. A presentation of a group is called En-derivation-bounded (En-d.b.) if a function k ∈ En exists which bounds the lengths of the derivations of the words defining the unit element. For En-d.b. presentations a purely combinatorial En-algorithm for solving the word problem exists. It is proved that the property of being En-d.b. is an invariant of finite presentations, but that the degree of complexity of the purely combinatorial algorithm may be as far as possible from the degree of complexity of the word problem itself.
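
    As background for the purely combinatorial algorithm mentioned above: if a presentation is derivation-bounded, a word w defining the unit element admits a derivation of length at most k(|w|), so a bounded search over elementary derivation steps decides the word problem. The Python sketch below illustrates this idea with a simplified move set (insertion or deletion of a relator, with immediate free reduction); the example presentation and the bounds passed in are illustrative assumptions, not taken from the paper.

        # Bounded-derivation word-problem search (illustrative sketch).
        # Generators are lower-case letters; the matching upper-case letter is the inverse.
        from collections import deque

        def invert(word):
            """Formal inverse of a word."""
            return "".join(c.swapcase() for c in reversed(word))

        def free_reduce(word):
            """Cancel adjacent inverse pairs such as 'aA' or 'Bb' until none remain."""
            stack = []
            for c in word:
                if stack and stack[-1] == c.swapcase():
                    stack.pop()
                else:
                    stack.append(c)
            return "".join(stack)

        def neighbours(word, relators):
            """Words reachable in one elementary step: insert or delete a relator
            (or its inverse) at any position, then freely reduce."""
            pieces = list(relators) + [invert(r) for r in relators]
            out = set()
            for i in range(len(word) + 1):
                for r in pieces:
                    out.add(free_reduce(word[:i] + r + word[i:]))       # insertion
            for r in pieces:
                j = word.find(r)
                while j != -1:
                    out.add(free_reduce(word[:j] + word[j + len(r):]))  # deletion
                    j = word.find(r, j + 1)
            return out

        def is_trivial(word, relators, max_steps):
            """True if `word` derives to the empty word within max_steps elementary
            steps; None if the bound is reached without deciding."""
            start = free_reduce(word)
            if start == "":
                return True
            seen = {start}
            frontier = deque([(start, 0)])
            while frontier:
                current, depth = frontier.popleft()
                if depth == max_steps:
                    continue
                for nxt in neighbours(current, relators):
                    if nxt == "":
                        return True
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))
            return None

        if __name__ == "__main__":
            relators = ["abAB"]                                  # presentation <a, b | abAB> of Z x Z
            print(is_trivial("abAB", relators, max_steps=3))     # True: one relator deletion
            print(is_trivial("aabAAB", relators, max_steps=6))   # True: a^2 b a^-2 b^-1 is trivial here
            print(is_trivial("ab", relators, max_steps=3))       # None: not derivable within the bound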

    Extending the distributed computing infrastructure of the CMS experiment with HPC resources

    Particle accelerators are an important tool for studying the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, among them the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments. Recently the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware they use, HPC installations can differ widely in their setup. In order to integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. For processing, access to primary data and metadata as well as access to the software is required. At Grid sites all of this is achieved via a number of services provided by each center. At HPC sites, however, many of these capabilities cannot be provided as easily and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in processing campaigns.