13 research outputs found
A specific flagellum beating mode for inducing fusion in mammalian fertilization and kinetics of sperm internalization
The salient phases of fertilization are gamete adhesion, membrane fusion, and internalization of the spermatozoon into the oocyte, but the precise timeline and the molecular, membrane, and cellular mechanisms underlying these highly dynamic events are far from established. The high motility of spermatozoa and the unpredictable location of sperm/egg fusion dramatically hinder the use of real-time optical imaging techniques that would directly provide the dynamics of these cell events. Using an approach based on microfluidics technology, the sperm/egg interaction zone was imaged with an optimal front view, and the timeline of the fertilization events was established with unparalleled temporal accuracy, from the onset of gamete contact to full sperm DNA decondensation. It reveals that a key element of the adhesion phase in initiating fusion is the oscillatory motion of the sperm head on the oocyte plasma membrane, generated by a specific flagellum-beating mode. It also shows that the incorporation of the sperm head is a two-step process that includes simultaneous diving, tilt, and plasma-membrane degradation of the sperm head into the oocyte, and subsequent DNA decondensation.
A cyclotron trap for antiprotonic atom x-ray spectroscopy in gaseous targets
Quantum electrodynamics (QED) is a foundation of modern physics, whose detailed study is one of the frontiers for Beyond Standard Model searches. In this domain, new physics may appear as minute differences between theory and experiment, accessible with extremely high precision QED tests. While extensive studies have been performed for light systems (hydrogen, antihydrogen, muonic hydrogen, etc.), few high-precision measurements exist for high-Z systems with extremely high Coulomb fields. Decades of work have sought to study this strong-field QED regime via spectroscopy of highly charged ions (HCI) [1, 2]. These systems, however, are plagued by high theoretical uncertainty in the transition energies due to poorly known nuclear radii, which clouds high-order QED contributions. To circumvent such issues, novel studies have been proposed involving exotic atoms, in which the orbiting electrons are replaced with a more massive particle, such as a negatively charged muon [3] or an antiproton [4]. In these exotic systems, a special class of Rydberg transitions can be found where QED effects are large while nuclear uncertainties are negligible, making them prime candidates for high-precision strong-field QED tests [5]. The PAX experiment is a new effort to study bound-state QED (BSQED) in antiprotonic systems, in which a range of gaseous elements will be subjected to a low-energy antiproton beam from CERN's Extra Low Energy Antiproton (ELENA) ring, leading to antiproton capture in circular Rydberg states, followed by a cascade of Auger and radiative transitions that expels the remaining electrons and results in hydrogen-like antiprotonic systems [4]. To ensure this, it is necessary to mitigate electron refilling by using a low-pressure gaseous cell to slow down and capture the antiprotons. A novel cyclotron trap is currently being designed, with tools such as COMSOL® and GEANT4, to be implemented in PAX.
Composed of two iron-core coils, the trap generates magnetic fields of 0.5 T in the interaction region, able to trap the incoming 100 keV antiproton beam, degraded to 10 keV, and slow it down further until capture occurs; the subsequent cascade of x-ray transitions is measured with a state-of-the-art Transition Edge Sensor (TES) detector [6]. Aside from the trap itself, the simulation incorporates realistic particle scattering and deceleration, as well as the charged-particle optics necessary to control a highly dispersive beam.
[1] Indelicato, P. Journal of Physics B: Atomic, Molecular and Optical Physics 52, 232001 (Nov. 2019). https://dx.doi.org/10.1088/1361-6455/ab42c9
[2] Loetzsch, R. et al. Nature 625, 673–678 (Jan. 2024). ISSN: 0028-0836
[3] Okumura, T. et al. Phys. Rev. Lett. 130, 173001 (Apr. 2023). https://link.aps.org/doi/10.1103/PhysRevLett.130.173001
[4] Gotta, D., Rashid, K., Fricke, B., Indelicato, P. & Simons, L. M. The European Physical Journal D 47 (Feb. 2008). http://dx.doi.org/10.1140/epjd/e2008-00025-3
[5] Paul, N., Bian, G., Azuma, T., Okada, S. & Indelicato, P. Phys. Rev. Lett. 126, 173001 (Apr. 2021). https://link.aps.org/doi/10.1103/PhysRevLett.126.173001
[6] Yan, D. et al. IEEE Transactions on Applied Superconductivity 31, 1–5 (2021)
Graph Neural Network-Based Track Finding in the LHCb Vertex Detector
The next decade will see an order of magnitude increase in data collected by high-energy physics experiments, driven by the High-Luminosity LHC (HL-LHC). The reconstruction of charged particle trajectories (tracks) has always been a critical part of offline data processing pipelines. The complexity of HL-LHC data will, however, increasingly mandate track finding in all stages of an experiment's real-time processing. This paper presents a GNN-based track-finding pipeline tailored for the Run 3 LHCb experiment's vertex detector and benchmarks its physics performance and computational cost against existing classical algorithms on GPU architectures. A novelty of our work compared to existing GNN tracking pipelines is batched execution, in which the GPU evaluates the pipeline on hundreds of events in parallel. We evaluate the impact of neural-network quantisation on physics and computational performance, and comment on the outlook for GNN tracking algorithms for other parts of the LHCb track-finding pipeline.
Graph Neural Network-Based Pipeline for Track Finding in the Velo at LHCb
https://indico.cern.ch/event/1252748/contributions/5521484/
Over the next decade, increases in instantaneous luminosity and detector granularity will amplify the amount of data that has to be analysed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. The reconstruction of charged particle tracks, which has always been a crucial element of offline data processing pipelines, must increasingly be deployed from the very first stages of the real-time processing to enable experiments to achieve their physics goals. Graph Neural Networks (GNNs) have received a great deal of attention in the community because their computational complexity scales nearly linearly with the number of hits in the detector, unlike conventional algorithms, which often scale quadratically or worse. This paper presents ETX4VELO, a GNN-based track-finding pipeline tailored for the Run 3 LHCb experiment's Vertex Locator, in the context of LHCb's fully GPU-based first-level trigger system, Allen. Currently implemented in Python, ETX4VELO offers the ability to reconstruct tracks with shared hits using a novel triplet-based method. When benchmarked against the traditional track-finding algorithm in Allen, this GNN-based approach not only matches but occasionally surpasses its physics performance. In particular, the fraction of fake tracks is reduced from over 2% to below 1%, and the efficiency to reconstruct electrons is improved. While achieving comparable physics performance is a milestone, the immediate priority remains implementing ETX4VELO in Allen in order to determine and optimise its throughput, to meet the demands of this high-rate environment.
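For readers unfamiliar with graph-based track finding, the general scheme the two abstracts above refer to can be sketched as follows. This is an illustrative toy only, not the ETX4VELO pipeline: hits on consecutive detector planes are linked into candidate edges, a simple distance-based score stands in for the learned edge classifier, and accepted edges are chained into track candidates. All function names and parameters here are invented for illustration.

```python
# Illustrative toy of graph-based track finding (NOT the ETX4VELO pipeline):
# build candidate edges between hits on adjacent planes, score them with a
# stand-in for a learned edge classifier, then chain accepted edges into tracks.
import numpy as np

def build_edges(hits, max_dist=1.0):
    """Candidate edges between hits on adjacent planes; hit = (plane, x, y)."""
    edges = []
    for i, (pi, xi, yi) in enumerate(hits):
        for j, (pj, xj, yj) in enumerate(hits):
            if pj == pi + 1 and np.hypot(xj - xi, yj - yi) <= max_dist:
                edges.append((i, j))
    return edges

def score_edge(hits, edge):
    """Stand-in for the learned classifier: closer hit pairs score higher."""
    (_, xi, yi), (_, xj, yj) = hits[edge[0]], hits[edge[1]]
    return 1.0 / (1.0 + np.hypot(xj - xi, yj - yi))

def find_tracks(hits, threshold=0.5):
    """Chain accepted edges into track candidates (lists of hit indices)."""
    accepted = [e for e in build_edges(hits) if score_edge(hits, e) > threshold]
    nxt = dict(accepted)  # toy simplification: keep one outgoing edge per hit
    starts = set(nxt) - set(nxt.values())  # hits with no incoming edge
    tracks = []
    for s in starts:
        track = [s]
        while track[-1] in nxt:
            track.append(nxt[track[-1]])
        tracks.append(track)
    return sorted(tracks)

hits = [(0, 0.0, 0.0), (1, 0.1, 0.0), (2, 0.2, 0.0),   # one straight track
        (0, 5.0, 5.0), (1, 5.1, 5.0), (2, 5.2, 5.0)]   # a second track
print(find_tracks(hits))  # -> [[0, 1, 2], [3, 4, 5]]
```

In a real GNN pipeline the edge score comes from a trained network operating on learned hit embeddings, and the edge-building step uses spatial indexing rather than the quadratic loop shown here; the near-linear scaling claimed in the abstract depends on that restriction of candidate edges.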
Stability of C<sub>12</sub>E<sub><i>j</i></sub> Bilayers Probed with Adhesive Droplets
The stability of model surfactant bilayers from the poly(ethylene glycol) mono-<i>n</i>-dodecyl ether (C<sub>12</sub>E<sub><i>j</i></sub>) family was probed. The surfactant bilayers were formed by the adhesion of emulsion droplets. We generated C<sub>12</sub>E<sub><i>j</i></sub> bilayers by forming water-in-oil (w/o) emulsions with saline water droplets, covered by the surfactant, in a silicone and octane oil mixture. Using microfluidics, we studied the stability of those bilayers. C<sub>12</sub>E<sub>1</sub> allowed only short-lived bilayers, whereas C<sub>12</sub>E<sub>2</sub> bilayers were stable over a wide range of oil mixtures. At high C<sub>12</sub>E<sub>2</sub> concentration, a two-phase region appeared in the phase diagram: bilayers formed by the adhesion of two water droplets, and Janus-like particles consisting of adhering aqueous and amphiphilic droplets. C<sub>12</sub>E<sub>8</sub> and C<sub>12</sub>E<sub>25</sub> did not mediate bilayer formation and caused phase inversion leading to o/w emulsions. With the intermediate C<sub>12</sub>E<sub>4</sub> and C<sub>12</sub>E<sub>5</sub> surfactants, both w/o and o/w emulsions were unstable. We provide the titration of the C<sub>12</sub>E<sub>2</sub> bilayer with C<sub>12</sub>E<sub>4</sub> and C<sub>12</sub>E<sub>5</sub> to study and predict their stability behavior.
XeLab: a test platform for xenon TPC instrumentation
Xenon double-phase TPCs have shown the best sensitivities for dark matter direct searches over a large parameter space. However, difficulties in the construction of large-scale TPCs have already arisen in the current detectors and will be even more challenging in the next-generation ones. Of critical importance are the construction of metre-scale electrodes with negligible sagging and high optical transparency, but also the control of instrumental backgrounds such as single-electron emission. XeLab is a system equipped with a small double-phase xenon TPC cooled with liquid nitrogen and a xenon recuperation module, primarily designed for the test of an innovative concept of floating electrodes, but it will also serve as a platform for instrumental development for xenon-based TPCs. We present the design and realisation of XeLab and the baseline of electrodes that we plan to test.
Production of antihydrogen atoms by 6 keV antiprotons through a positronium cloud
We report on the first production of an antihydrogen beam by charge exchange of 6.1 keV antiprotons with a cloud of positronium in the GBAR experiment at CERN. The 100 keV antiproton beam delivered by the AD/ELENA facility was further decelerated with a pulsed drift tube. A 9 MeV electron beam from a linear accelerator produced a low-energy positron beam. The positrons were accumulated in a set of two Penning–Malmberg traps. The positronium target cloud resulted from the conversion of the positrons extracted from the traps. The antiproton beam was steered onto this positronium cloud to produce the antiatoms. We observe an excess over background indicating antihydrogen production with a significance of 3–4 standard deviations.
The LHCb upgrade I
The LHCb upgrade represents a major change of the experiment. The detectors have been almost completely renewed to allow running at an instantaneous luminosity five times larger than that of the previous running periods. Readout of all detectors into an all-software trigger is central to the new design, facilitating the reconstruction of events at the maximum LHC interaction rate, and their selection in real time. The experiment's tracking system has been completely upgraded with a new pixel vertex detector, a silicon tracker upstream of the dipole magnet, and three scintillating fibre tracking stations downstream of the magnet. The whole photon detection system of the RICH detectors has been renewed, and the readout electronics of the calorimeter and muon systems have been fully overhauled. The first stage of the all-software trigger is implemented on a GPU farm. The output of the trigger provides a combination of fully reconstructed physics objects, such as tracks and vertices, ready for final analysis, and of entire events which need further offline reprocessing. This scheme required a complete revision of the computing model and a rewriting of the experiment's software.