
    Summary of working group 1: Electron beams from plasmas

    We briefly summarize the contributions presented in the Working Group 1 sessions, dedicated to electron beams from plasmas.

    Simulation Study of an LWFA-based Electron Injector for AWAKE Run 2

    The AWAKE experiment aims to demonstrate preservation of injected electron beam quality during acceleration in proton-driven plasma waves. The short bunch duration required to correctly load the wakefield is challenging to meet with the current electron injector system, given the space available to the beamline. An LWFA readily provides short-duration electron beams with sufficient charge from a compact design, and offers a scalable option for future electron acceleration experiments at AWAKE. Simulations of a shock-front injected LWFA demonstrate that a 43 TW laser system would be sufficient to produce the required charge over a range of energies beyond 100 MeV. LWFA beams typically have high peak current and large divergence on exiting their native plasmas, so optimisation of bunch parameters before injection into the proton-driven wakefields is required. Compact beam transport solutions are discussed. Comment: Paper submitted to NIMA proceedings for the 3rd European Advanced Accelerator Concepts Workshop; 4 pages, 3 figures, 1 table.
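
    For scale (standard background, not parameters quoted from this paper), plasma-based accelerating gradients are often compared to the cold, non-relativistic wave-breaking field, which depends only on the plasma electron density n_0:

    E_0 = \frac{m_e c\,\omega_p}{e} \simeq 96~\mathrm{GV/m}\,\sqrt{\frac{n_0}{10^{18}~\mathrm{cm^{-3}}}}, \qquad \omega_p = \sqrt{\frac{n_0 e^2}{\varepsilon_0 m_e}} .

    At densities of order 10^18 cm^-3 this already corresponds to gradients of order 100 GV/m, so energies beyond 100 MeV require only millimetre-scale acceleration lengths (ignoring dephasing and beam loading), which is why an LWFA injector can be so compact.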

    Tunable X-ray source by Thomson scattering during laser-wakefield acceleration

    We report results on all-optical Thomson scattering intercepting the acceleration process in a laser wakefield accelerator. We show that the pulse collision position can be detected using transverse shadowgraphy, which also facilitates alignment. Because the electron beam energy evolves inside the accelerator, the emitted spectrum changes with the scattering position. Such a configuration could be employed as an accelerator diagnostic as well as a reliable setup to generate X-rays with tunable energy.
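
    As a rough guide (standard Thomson-backscattering kinematics, not numbers from this experiment), a relativistic electron with Lorentz factor \gamma scattering a head-on laser photon of energy \hbar\omega_L emits on axis at up to about

    E_x \approx 4\gamma^2 \hbar\omega_L \qquad (a_0 \ll 1,\ \ 4\gamma\hbar\omega_L \ll m_e c^2),

    so for an 800 nm drive laser (\hbar\omega_L \approx 1.55 eV) an electron energy evolving from 50 MeV to 100 MeV (\gamma \approx 100 to 200) shifts the on-axis photon energy from roughly 60 keV to 240 keV. Scanning the collision position along the accelerator therefore scans the X-ray energy.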

    Multi-objective and multi-fidelity Bayesian optimization of laser-plasma acceleration

    Beam parameter optimization in accelerators involves multiple, sometimes competing objectives. Condensing these objectives into a single one unavoidably biases the result towards particular outcomes that do not necessarily represent the best possible outcome for the operator. A more versatile approach is multi-objective optimization, which establishes the trade-off curve, or Pareto front, between objectives. Here we present first results on multi-objective Bayesian optimization of a simulated laser-plasma accelerator. We find that multi-objective optimization performs as well as or better than its single-objective counterparts, and that it is more resilient to different statistical descriptions of the objectives. As a second major result of our paper, we significantly reduce the computational cost of the optimization by choosing the resolution and box size of the simulations dynamically. This is relevant because, even with the use of Bayesian statistics, performing such optimizations over a multi-dimensional search space may require hundreds or thousands of simulations. Our algorithm translates information gained from fast, low-resolution runs with lower fidelity to high-resolution data, thus requiring fewer simulations at the highest computational cost. The techniques demonstrated in this paper can be translated to many different use cases, both computational and experimental.
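
    To make the Pareto-front notion concrete, the sketch below filters a set of sampled objective vectors down to its non-dominated subset. The objective names (bunch charge, energy spread) are hypothetical placeholders, and this is plain NumPy rather than the Bayesian-optimization machinery used in the paper.

    import numpy as np

    def pareto_front(points):
        """Return the non-dominated subset of `points`.

        Convention: every column is an objective to be maximized. A point is
        dominated if another point is >= in all objectives and > in at least one.
        """
        pts = np.asarray(points, dtype=float)
        keep = np.ones(len(pts), dtype=bool)
        for i, p in enumerate(pts):
            dominated_by_other = np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
            keep[i] = not dominated_by_other.any()
        return pts[keep]

    # Toy example with two hypothetical beam objectives: maximize charge (pC)
    # and minimize energy spread (%), encoded here as maximizing its negative.
    rng = np.random.default_rng(0)
    charge = rng.uniform(10.0, 100.0, size=200)
    spread = rng.uniform(0.5, 5.0, size=200)
    front = pareto_front(np.column_stack([charge, -spread]))
    print(f"{len(front)} non-dominated points out of 200")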

    Leveraging trust for joint multi-objective and multi-fidelity optimization

    In the pursuit of efficient optimization of expensive-to-evaluate systems, this paper investigates a novel approach to Bayesian multi-objective and multi-fidelity (MOMF) optimization. Traditional optimization methods, while effective, often encounter prohibitively high costs in multi-dimensional optimizations of one or more objectives. Multi-fidelity approaches offer potential remedies by utilizing multiple, less costly information sources, such as low-resolution approximations in numerical simulations. However, integrating these two strategies presents a significant challenge. We propose the use of a trust metric to facilitate the joint optimization of multiple objectives and data sources. Our methodology introduces a modified multi-objective (MO) optimization policy that incorporates the trust gain per evaluation cost as one of the objectives of a Pareto optimization problem. This modification enables simultaneous MOMF optimization, which proves effective in establishing the Pareto set and front at a fraction of the cost. Two specific methods of MOMF optimization are presented and compared: a holistic approach that selects the input parameters and the fidelity parameter jointly, and a sequential approach for benchmarking. Through benchmarks on synthetic test functions, our approach is shown to yield significant cost reductions, up to an order of magnitude compared to pure MO optimization. Furthermore, we find that joint optimization of the trust and objective domains outperforms addressing them sequentially. We validate our findings with the specific use case of optimizing particle-in-cell simulations of laser-plasma acceleration, highlighting the practical potential of our method in the Pareto optimization of highly expensive black-box functions. Implementation of the methods in existing Bayesian optimization frameworks is straightforward, with immediate extensions possible, e.g. to batch optimization. Given their ability to handle various continuous or discrete fidelity dimensions, these techniques have wide-ranging applicability in tackling simulation challenges across scientific computing fields such as plasma physics and fluid dynamics.
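
    A minimal sketch of the trust-per-cost idea described above, assuming a surrogate model has already produced predicted objectives, a trust-gain estimate and an evaluation cost for each candidate; all names here are hypothetical and this is not the authors' implementation.

    import numpy as np

    def select_next(candidates, predicted_objs, trust_gain, cost):
        """Pick the next (input, fidelity) setting to evaluate.

        candidates     : (n, d) array of input parameters including a fidelity knob
        predicted_objs : (n, m) array of surrogate-predicted objectives (maximized)
        trust_gain     : (n,) hypothetical estimate of information/trust gained
        cost           : (n,) evaluation cost, e.g. core-hours of a PIC run

        The trust gain per unit cost is appended as an extra objective, so cheap,
        informative low-fidelity runs can compete with expensive high-fidelity ones.
        """
        aug = np.column_stack([predicted_objs, trust_gain / cost])
        keep = np.ones(len(aug), dtype=bool)
        for i, p in enumerate(aug):
            keep[i] = not np.any(np.all(aug >= p, axis=1) & np.any(aug > p, axis=1))
        idx = np.flatnonzero(keep)
        # simple tie-break among Pareto-optimal candidates: take the cheapest
        return candidates[idx[np.argmin(cost[idx])]]

    # Toy usage with made-up numbers (two physics inputs plus one fidelity knob).
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(50, 3))
    objs = rng.normal(size=(50, 2))
    trust = rng.uniform(0.1, 1.0, size=50)
    cost = 1.0 + 10.0 * X[:, -1]
    print(select_next(X, objs, trust, cost))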

    High-intensity laser generated neutrons

    At their focus, high-intensity lasers generate light intensities whose field strengths drive the rapid acceleration of many electrons and, via the resulting quasi-static fields, the acceleration of ions. Neutrons can then be produced through various nuclear reactions (e.g. fusion) of these ions. The goal of this thesis was, on the one hand, to optimize the neutron yield with a view to applications as a neutron source and, on the other hand, to draw conclusions about the distribution of the laser-accelerated ions from neutron spectroscopy. These conclusions can in turn be used to understand the acceleration mechanisms and thus to optimize the yield. In the course of this work, the generation of up to 10^7 neutrons per joule of laser energy was demonstrated, together with the scalability to even higher yields, so that with further development of average laser power an application as a source, e.g. for neutron radiography, can be expected within a few years. Furthermore, by comparing the experimental neutron spectra with 3-dimensional PIC and Monte Carlo calculations, the acceleration mechanisms in the laser focus itself and at the rear side of thin foil targets were investigated and understood. This allowed a direct comparison of these two mechanisms for the first time, which helped to settle the long-running discussion about their relative strength. Finally, to reach a neutron yield sufficient for spectroscopy, it was first necessary to commission the third amplifier stage of the ATLAS laser at the Max-Planck-Institut für Quantenoptik and to equip it with adaptive optics. This increased the neutron yield by two orders of magnitude. The adaptive optics system is the first of its kind for the simultaneous correction of large wavefront aberrations in the near and far field and is now in routine operation.
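
    For context (standard background rather than a result of this thesis), a commonly exploited reaction for laser-driven neutron production is deuteron-deuteron fusion,

    \mathrm{d} + \mathrm{d} \;\rightarrow\; {}^{3}\mathrm{He} + n, \qquad E_n \approx 2.45~\mathrm{MeV},

    whose nearly monoenergetic neutron line is also what makes neutron spectroscopy a useful diagnostic: Doppler shifts and broadening of the line encode the velocities of the reacting, laser-accelerated deuterons.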

    Factorization and Non-Factorization of In-Medium Four-Quark Condensates

    It is well established for the vacuum case that, in the limit of a large number of colors N_c, the four-quark condensates factorize into products of the two-quark condensate. It is shown that in the combined large-N_c and linear-density approximation the four-quark condensates do not factorize in a medium of pions (finite-temperature system) but do factorize in a medium of nucleons (nuclear system). Comment: 4 pages.
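
    Schematically (standard forms used in this context, not equations quoted from the paper), vacuum factorization in the large-N_c limit and the linear-density approximation for an in-medium condensate read

    \langle (\bar q q)^2 \rangle \;\xrightarrow{\;N_c \to \infty\;}\; \langle \bar q q \rangle^2, \qquad \langle \mathcal{O} \rangle_n \;\simeq\; \langle \mathcal{O} \rangle_0 + \frac{n}{2 m_N}\, \langle N | \mathcal{O} | N \rangle,

    with relativistically normalized nucleon states of mass m_N and density n; the statement of the paper is that combining these two limits preserves factorization for a nucleon medium but not for a pion gas.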

    Is there still any Tc mystery in lattice QCD? Results with physical masses in the continuum limit III

    The present paper concludes our investigations of the QCD cross-over transition temperatures with 2+1 staggered flavours and one-link stout improvement. We extend our previous two studies [Phys. Lett. B643 (2006) 46, JHEP 0906:088 (2009)] by choosing even finer lattices (N_t = 16) and we work again with physical quark masses. The new results on this broad cross-over are in complete agreement with our earlier ones. We compare our findings with the published results of the hotQCD collaboration. All these results are confronted with the predictions of the Hadron Resonance Gas model and Chiral Perturbation Theory for temperatures below the transition region. Our results can be reproduced by using the physical spectrum in these analytic calculations. The findings of the hotQCD collaboration can be recovered by using a distorted spectrum which takes into account lattice discretization artifacts and heavier-than-physical quark masses. This analysis provides a simple explanation for the observed discrepancy between the transition temperatures reported by our and the hotQCD collaborations. Comment: 25 pages, 10 figures, and 3 tables.
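
    For reference, the Hadron Resonance Gas comparison mentioned above is usually based on the free-gas pressure summed over the hadron spectrum; in a standard (schematic) form, not copied from the paper,

    \frac{p}{T^4} = \frac{1}{V T^3} \sum_i \ln Z_i, \qquad \ln Z_i = \mp \frac{V d_i}{2\pi^2} \int_0^\infty \! dk\, k^2 \ln\!\left(1 \mp e^{-(E_i(k)-\mu_i)/T}\right),

    with E_i(k) = \sqrt{k^2 + m_i^2}, degeneracies d_i, and the upper (lower) signs for mesons (baryons). Feeding this sum with the physical hadron spectrum versus a spectrum distorted by lattice artefacts and heavier-than-physical quark masses is what distinguishes the two sets of lattice results compared above.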