41 research outputs found
Upgrading the beam telescopes at the DESY II Test Beam Facility
The DESY II Test Beam Facility is a key infrastructure for modern high-energy-physics detector development, providing particles with a small momentum spread in a range from 1 to 6 GeV to user groups, e.g. from the LHC experiments and Belle II, as well as to generic detector R&D. Beam telescopes are provided in all three test beam areas as a precise tracking reference without time stamping, with triggered readout and a readout time of >115 µs. If the highest available rates are used, multiple particles traverse the telescopes within one readout frame, creating ambiguities that cannot be resolved without additional timing layers. Several upgrades are currently being investigated and tested. First, a fast monolithic pixel sensor, the TelePix, is proposed to overcome this limitation by providing precise track timing and triggering on a region of interest. The TelePix is a 180 nm HV-CMOS sensor that has been developed jointly by DESY, KIT and the University of Heidelberg and designed at KIT. In this publication, its performance evaluation is presented: two amplifier designs are compared, and a hit-detection efficiency above 99.9%, combined with a time resolution below 4 ns at negligible pixel noise rates, is determined. The digital hit output for region-of-interest triggering is also evaluated and shows a short absolute delay with respect to a traditional trigger scintillator as well as an excellent time resolution. Second, a fast LGAD plane has been proposed to provide a time resolution of a few tens of ps, which is foreseen to drastically improve the timing performance of the telescope. Time resolutions below 70 ps have been determined in collaboration with the University of California, Santa Barbara.
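As a rough illustration of the pile-up problem described in this abstract, the short Python sketch below estimates how often more than one particle falls into a single readout frame under Poisson statistics; the frame length follows the quoted readout time, while the particle rate is an assumed example value, not a measured one.

```python
import math

# Poisson estimate of pile-up within one telescope readout frame.
# The frame length matches the ~115 us readout time quoted above;
# the particle rate is an assumed example value.
frame_length_s = 115e-6   # readout time per frame (from the abstract)
rate_hz = 20e3            # assumed average particle rate (illustrative)

mu = rate_hz * frame_length_s                    # mean particles per frame
p_ambiguous = 1.0 - math.exp(-mu) * (1.0 + mu)   # P(>= 2 particles per frame)

print(f"mean particles per frame: {mu:.2f}")
print(f"P(ambiguous frame):       {p_ambiguous:.1%}")
```

At this assumed rate roughly two in three frames would contain more than one track, which is why an additional timing layer such as the TelePix is needed to resolve the ambiguity.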
Kinematic Effects in Radiative Quarkonia Decays
Non-relativistic QCD (NRQCD) predicts colour octet contributions to be
significant not only in many production processes of heavy quarkonia but also
in their radiative decays. We investigate the photon energy distributions in
these processes in the endpoint region. There the velocity expansion of NRQCD
breaks down, which requires resumming an infinite class of colour octet
operators into so-called shape functions. We model these non-perturbative
functions by the emission of a soft gluon cluster in the initial state. We
find that the spectrum in the endpoint region is poorly understood if the
values of the colour octet matrix elements are taken to be as large as
indicated by NRQCD scaling rules. Therefore, the endpoint region should not
be taken into account in a fit of the strong coupling constant at the scale
of the heavy quark mass.
Comment: LaTeX, 17 pages, 5 figures. The complete paper is also available via
the www at http://www-ttp.physik.uni-karlsruhe.de/Preprints
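For orientation, the shape-function resummation sketched in this abstract is usually written as a convolution of the partonic photon spectrum with a non-perturbative function of a light-cone momentum; the schematic form below is a generic illustration, and its normalization and arguments are assumptions rather than details taken from the paper.

```latex
% Schematic endpoint factorization: the partonic spectrum is smeared by
% a non-perturbative shape function f(k_+) of the light-cone momentum
% k_+ carried by the soft colour-octet degrees of freedom. Illustrative
% form only; normalization conventions vary.
\begin{equation}
  \frac{d\Gamma}{dE_\gamma}
  = \int dk_+ \, f(k_+) \,
    \frac{d\Gamma^{\text{part}}}{dE_\gamma}\!\left(E_\gamma - \tfrac{k_+}{2}\right),
  \qquad
  \int dk_+ \, f(k_+) \propto \langle \mathcal{O}_8 \rangle .
\end{equation}
```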
Exploring the Partonic Structure of Hadrons through the Drell-Yan Process
The Drell-Yan process is a standard tool for probing the partonic structure
of hadrons. Since the process proceeds through a quark-antiquark annihilation,
Drell-Yan scattering possesses a unique ability to selectively probe sea
distributions. This review examines the application of Drell-Yan scattering to
elucidating the flavor asymmetry of the nucleon's sea and nuclear modifications
to the sea quark distributions in unpolarized scattering. Polarized beams and
targets add an exciting new dimension to Drell-Yan scattering. In particular,
the two initial-state hadrons give Drell-Yan sensitivity to chirally-odd
transversity distributions.
Comment: 23 pages, 9 figures, to appear in J. Phys. G, resubmission corrects
typographical error
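For reference, the sensitivity argument in this abstract rests on the standard leading-order parton-model expression for the Drell-Yan cross section, in which each flavour enters through the product of a quark density in one hadron and an antiquark density in the other:

```latex
% Leading-order Drell-Yan cross section in the parton model; x_1 and
% x_2 are the parton momentum fractions and M is the dilepton mass
% (M^2 = x_1 x_2 s).
\begin{equation}
  \frac{d^2\sigma}{dx_1\,dx_2}
  = \frac{4\pi\alpha^2}{9M^2}
    \sum_q e_q^2
    \left[ q(x_1)\,\bar{q}(x_2) + \bar{q}(x_1)\,q(x_2) \right].
\end{equation}
```

Because the antiquark density must come from one of the two hadrons, a judicious choice of kinematics (large x1 with small x2, say) isolates the sea distributions of one hadron, which is the selectivity the review exploits.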
NLO Production and Decay of Quarkonium
We present a calculation of next-to-leading-order (NLO) QCD corrections to
total hadronic production cross sections and to light-hadron-decay rates of
heavy quarkonium states. Both colour-singlet and colour-octet contributions are
included. We discuss in detail the use of covariant projectors in dimensional
regularization, the structure of soft-gluon emission and the overall finiteness
of radiative corrections. We compare our approach with the
NLO version of the threshold-expansion technique recently introduced by
Braaten and Chen. Most of the results presented here are new. Others
represent the first independent reevaluation of calculations already known in
the literature; in these cases a comparison with previous findings is reported.
Comment: 65 pages, LaTeX, epsfig, 8 figures. Typos corrected and improvements
made to the text. Version to appear in Nucl. Phys.
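For orientation, the covariant spin projectors referred to in this abstract are commonly written in the form below for a heavy quark-antiquark pair of total momentum P, relative momentum q and heavy-quark mass m; conventions and normalizations vary across the literature, so this is one standard choice shown for illustration, not necessarily the exact one used in the paper.

```latex
% One common normalization of the covariant spin projectors
% (spin singlet and spin triplet); requires \usepackage{slashed}.
\begin{align}
  \Pi_0 &= \frac{1}{\sqrt{8m^3}}
    \left(\frac{\slashed{P}}{2} - \slashed{q} - m\right) \gamma_5
    \left(\frac{\slashed{P}}{2} + \slashed{q} + m\right), \\
  \Pi_1^\alpha &= \frac{1}{\sqrt{8m^3}}
    \left(\frac{\slashed{P}}{2} - \slashed{q} - m\right) \gamma^\alpha
    \left(\frac{\slashed{P}}{2} + \slashed{q} + m\right).
\end{align}
```

In dimensional regularization these projectors are combined with d-dimensional polarization and colour sums, which is what makes the treatment of soft-gluon emission and the finiteness checks mentioned above tractable.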
Swarm Learning for decentralized and confidential clinical machine learning
Fast and reliable detection of patients with severe and heterogeneous illnesses is a major goal of precision medicine[1,2]. Patients with leukaemia can be identified using machine learning on the basis of their blood transcriptomes[3]. However, there is an increasing divide between what is technically possible and what is allowed, because of privacy legislation[4,5]. Here, to facilitate the integration of any medical data from any data owner worldwide without violating privacy laws, we introduce Swarm Learning, a decentralized machine-learning approach that unites edge computing, blockchain-based peer-to-peer networking and coordination while maintaining confidentiality without the need for a central coordinator, thereby going beyond federated learning. To illustrate the feasibility of using Swarm Learning to develop disease classifiers using distributed data, we chose four use cases of heterogeneous diseases (COVID-19, tuberculosis, leukaemia and lung pathologies). With more than 16,400 blood transcriptomes derived from 127 clinical studies with non-uniform distributions of cases and controls and substantial study biases, as well as more than 95,000 chest X-ray images, we show that Swarm Learning classifiers outperform those developed at individual sites. In addition, Swarm Learning completely fulfils local confidentiality regulations by design. We believe that this approach will notably accelerate the introduction of precision medicine.
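To make the coordinator-free training loop concrete, the toy Python sketch below has three sites train locally and merge parameters peer-to-peer, so raw data never leaves a site; the model, data and merge schedule are invented for illustration, and this is not the Swarm Learning library or its API.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One logistic-regression gradient step on a site's private data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * (X.T @ (p - y)) / len(y)

# Three hospitals with private (X, y); the data stays local throughout.
sites = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(3)]
weights = [np.zeros(5) for _ in sites]

for _ in range(50):
    # Local training phase on each site's own data.
    weights = [local_sgd_step(w, X, y) for w, (X, y) in zip(weights, sites)]
    # Peer-to-peer merge: only model parameters are exchanged and averaged.
    merged = np.mean(weights, axis=0)
    weights = [merged.copy() for _ in sites]

print("shared model weights:", np.round(merged, 3))
```

In the real system the merge step is coordinated over a blockchain-based peer-to-peer network rather than by the fixed plain average shown here, but the key property is the same: parameters travel, patient data does not.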
The study of atmospheric ice-nucleating particles via microfluidically generated droplets
Ice-nucleating particles (INPs) play a significant role in the climate and hydrological cycle by triggering ice formation in supercooled clouds, thereby causing precipitation and affecting cloud lifetimes and their radiative properties. However, despite their importance, INPs often comprise only 1 in 10³ to 10⁶ ambient particles, making it difficult to ascertain and predict their type, source, and concentration. The typical techniques for quantifying INP concentrations tend to be highly labour-intensive, suffer from poor time resolution, or are limited in sensitivity to low concentrations. Here, we present the application of microfluidic devices to the study of atmospheric INPs via the simple and rapid production of monodisperse droplets and their subsequent freezing on a cold stage. This device offers the potential for the testing of INP concentrations in aqueous samples with high sensitivity and high counting statistics. Various INPs were tested for validation of the platform, including mineral dust and biological species, with results compared to literature values. We also describe a methodology for sampling atmospheric aerosol in a manner that minimises sampling biases and which is compatible with the microfluidic device. We present results for INP concentrations in air sampled during two field campaigns: (1) from a rural location in the UK and (2) during the UK's annual Bonfire Night festival. These initial results will provide a route for deployment of the microfluidic platform for the study and quantification of INPs in upcoming field campaigns around the globe, while providing a benchmark for future lab-on-a-chip-based INP studies.
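The standard analysis behind such droplet-freezing assays converts the fraction of droplets frozen at each temperature into a cumulative INP concentration using Vali's (1971) equation, n(T) = -ln(1 - f(T)) / V; the sketch below assumes illustrative droplet volumes and counts, not values from the paper.

```python
import numpy as np

# Cumulative INP concentration from droplet-freezing data via
# Vali's equation: n(T) = -ln(1 - f(T)) / V_droplet.
# Droplet volume and counts are illustrative example values.
droplet_volume_L = 1e-9                        # assumed ~1 nL droplets
n_droplets = 500                               # assumed droplets per run

temps_C = np.array([-12.0, -15.0, -18.0, -21.0])
n_frozen = np.array([5, 40, 200, 450])         # cumulative frozen counts

frozen_fraction = n_frozen / n_droplets
n_inp_per_L = -np.log(1.0 - frozen_fraction) / droplet_volume_L

for T, n in zip(temps_C, n_inp_per_L):
    print(f"T = {T:6.1f} C : n(T) = {n:.2e} INP per litre of sample")
```

Monodisperse microfluidic droplets make the droplet volume V well defined, which is what gives this analysis its sensitivity and counting statistics.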
Determination of nutrient salts by automatic methods both in seawater and brackish water: the phosphate blank
9 pages, 2 tables, 2 figures.
The main inconvenience in determining nutrients in seawater by automatic methods is simply solved:
the preparation of a suitable blank which corrects for the effect of the refractive index change on the
recorded signal. Two procedures are proposed, one physical (a simple equation to estimate the effect) and
the other chemical (removal of the dissolved phosphorus with ferric hydroxide).
Support for this work came from CICYT (project MAR88-0245) and the Conselleria de Pesca de la Xunta de Galicia.
Peer reviewed
Hawaii, East Micronesia and Samoa: my second South Seas voyage (1897-1899) to study the atolls and their inhabitants
by Augustin Krämer