340 research outputs found
A Collimation Experiment with Protons at 120 GeV
We present the preliminary results of a two-stage collimation experiment performed with a 120 GeV coasting proton beam in the SPS at CERN.
Considerations on bubble fragmentation models
In this paper we describe the restrictions that the probability density function (p.d.f.) of the size of particles resulting from the rupture of a drop or bubble must satisfy. Using conservation of volume, we show that when a particle of diameter D₀ breaks into exactly two fragments of sizes D and D₂ = (D₀³ − D³)^(1/3), the resulting p.d.f., f(D; D₀), must satisfy a symmetry relation, D₂² f(D; D₀) = D² f(D₂; D₀), which does not depend on the nature of the underlying fragmentation process. In general, for an arbitrary number of resulting particles, m(D₀), we determine that the daughter p.d.f. must satisfy the conservation-of-volume condition m(D₀) ∫₀^{D₀} (D/D₀)³ f(D; D₀) dD = 1. A detailed analysis of some contemporary fragmentation models shows that they may fail to satisfy this conservation-of-volume condition if they are not adequately formulated. Furthermore, we also analyse several models proposed in the literature, based on different principles, for the breakup frequency of drops or bubbles, g(ϵ, D₀). Although most of the models are formulated in terms of the particle size D₀ and the dissipation rate of turbulent kinetic energy ϵ, and apparently provide different results, we show here that they are nearly identical when expressed in dimensionless form in terms of the Weber number, g*(We_t) = g(ϵ, D₀) D₀^(2/3) ϵ^(−1/3), with We_t ∼ ρ ϵ^(2/3) D₀^(5/3)/σ, where ρ is the density of the continuous phase and σ the surface tension.
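To make these constraints concrete, the sketch below numerically checks both conditions for a hypothetical daughter distribution, uniform in volume fraction, i.e. f(D; D₀) = 3D²/D₀³; this p.d.f. is chosen here purely for illustration and is not one of the models analysed in the paper.

```python
import numpy as np

# Hypothetical daughter p.d.f., used only for illustration: the daughter
# volume fraction v = (D/D0)^3 is uniform on [0, 1], which gives
# f(D; D0) = 3 D^2 / D0^3. This is not one of the models from the paper.
def f(D, D0):
    return 3.0 * D**2 / D0**3

D0 = 1.0e-3                              # parent diameter [m], arbitrary
D = np.linspace(1e-9, D0, 200_001)       # daughter sizes
D2 = (D0**3 - D**3) ** (1.0 / 3.0)       # complementary fragment size

# Binary symmetry relation: D2^2 f(D; D0) = D^2 f(D2; D0)
mismatch = np.max(np.abs(D2**2 * f(D, D0) - D**2 * f(D2, D0)))
print("max symmetry mismatch:", mismatch)            # ~0 (round-off)

# Volume conservation for m(D0) = 2 fragments:
#   m(D0) * integral_0^{D0} (D/D0)^3 f(D; D0) dD = 1
integrand = (D / D0) ** 3 * f(D, D0)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(D))
print("m * integral =", 2.0 * integral)              # -> 1.0
```

Both printed values confirm the constraints for this toy p.d.f.: the symmetry mismatch vanishes to round-off, and the volume-weighted integral times m(D₀) = 2 equals one.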
Cascade Simulations for the LHC Betatron Cleaning Insertion
A cascade calculation is performed in the IR7 betatron cleaning insertion of the LHC. It uses a detailed map of the primary losses and an accurate model of the straight section. One aim is to design a compact shielding that fits in the tight section of the tunnel. The same study serves to define the radiation hardness properties of the equipment to be installed in the section and to locate areas of low activity for the installation of sensitive equipment.
Fast Ramping Superconducting Magnet Design Issues for Future Injector Upgrades at CERN
An upgrade of the LHC injection chain, and especially of the sequence of the PS and SPS, up to an extraction energy of 1 TeV, is one of the steps considered to improve the performance of the whole accelerator complex. The magnets for this upgrade require a central magnetic field from 2 T (for a PS upgrade) to 4.5 T (for an SPS upgrade), for which superconducting magnets are a candidate. Due to the fast field sweep rate of the magnets (from about 1.5 T/s to 2.5 T/s), internal heating from eddy and persistent current effects (AC loss) must be minimized. In this paper we discuss a rationale for the design and optimization of fast-ramped superconducting accelerator magnets, specifically aimed at the LHC injectors. We introduce a design parameter, the product of bore field and field ramp rate, providing a measure of magnet performance, and we apply it to choose the design range for a technology demonstration magnet. We finally discuss the dependence of key design parameters on the bore field and the bore diameter, to provide an approximate scaling and guidelines for critical R&D.
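As a quick illustration of this figure of merit, the sketch below evaluates the product of bore field and ramp rate over the parameter ranges quoted above; the corner combinations simply bracket the design space, and no specific pairing of field with ramp rate is implied by the abstract.

```python
# Figure of merit for fast-ramped magnets: bore field times field ramp
# rate, in T^2/s. Ranges are those quoted above (2 T PS-type to 4.5 T
# SPS-type bore field; ~1.5 to 2.5 T/s sweep rate).
bore_fields = (2.0, 4.5)      # T
ramp_rates = (1.5, 2.5)       # T/s

for B in bore_fields:
    for B_dot in ramp_rates:
        print(f"B = {B:.1f} T, dB/dt = {B_dot:.1f} T/s "
              f"-> B*dB/dt = {B * B_dot:.2f} T^2/s")
```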
Dense CTD survey versus glider fleet sampling: comparing data assimilation performance in a regional ocean model west of Sardinia
The REP14-MED sea trial carried out off the west coast of Sardinia in June 2014 provided a rich set of observations from both ship-based conductivity–temperature–depth (CTD) probes and a fleet of underwater gliders. We present the results of several simulations assimilating data either from CTDs or from different subsets of glider data, including up to eight vehicles, in addition to satellite sea level anomalies, surface temperature and Argo profiles. The Western Mediterranean OPerational forecasting system (WMOP) regional ocean model is used with a local multi-model ensemble optimal interpolation scheme to recursively ingest both lower-resolution large-scale and dense local observations over the whole sea trial duration. Results show the capacity of the system to ingest both types of data, leading to improvements in the representation of all assimilated variables. These improvements persist during the 3-day periods separating two consecutive analyses. At the same time, the system presents some limitations in properly representing the smaller-scale structures, which are smoothed out by the model error covariances provided by the ensemble. An evaluation of the forecasts against independent measurements from shipborne CTDs and a towed ScanFish deployed at the end of the sea trial shows that the simulations assimilating the initial CTD data reduce the error by 39 % on average with respect to the simulation without data assimilation. In the glider-data-assimilative experiments, the forecast error is reduced as the number of vehicles increases. The simulation assimilating CTDs outperforms the simulations assimilating data from one to four gliders. A fleet of eight gliders provides performance similar to that of the 10 km spaced CTD initialization survey in these experiments, with an overall 40 % model error reduction with respect to the simulation without data assimilation when compared against independent campaign observations.
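For readers unfamiliar with the assimilation scheme mentioned above, a minimal toy version of an ensemble optimal interpolation (EnOI) analysis step is sketched below. This is the generic textbook update, xa = xb + K(y − H xb) with the gain K built from a static ensemble, not the actual WMOP implementation; all sizes and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 50, 30, 5                    # state size, ensemble size, obs count

X = rng.normal(size=(n, m))            # static ensemble of model states
xb = X.mean(axis=1)                    # background (first-guess) state

# Ensemble anomalies; the static background covariance is B = A @ A.T
A = (X - xb[:, None]) / np.sqrt(m - 1)

# Observation operator: observe 5 randomly chosen state components
H = np.zeros((p, n))
H[np.arange(p), rng.choice(n, p, replace=False)] = 1.0
R = 0.1 * np.eye(p)                    # observation error covariance
y = H @ xb + rng.normal(scale=0.3, size=p)   # synthetic observations

# EnOI analysis step: xa = xb + B H^T (H B H^T + R)^{-1} (y - H xb),
# computed through the anomalies so B is never formed explicitly.
HA = H @ A
K = A @ HA.T @ np.linalg.inv(HA @ HA.T + R)
xa = xb + K @ (y - H @ xb)

print("background misfit:", np.linalg.norm(y - H @ xb))
print("analysis misfit:  ", np.linalg.norm(y - H @ xa))
```

The analysis misfit printed at the end should be smaller than the background misfit, mirroring in miniature the forecast error reductions reported in the abstract.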
- …