4 research outputs found

    Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning

    Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In the last decade, decision diagrams (DDs) have brought a new perspective on obtaining upper and lower bounds that can be significantly better than those of classical bounding mechanisms such as linear relaxations. It is well known that the quality of the bounds achieved with this flexible bounding method depends heavily on the variable ordering chosen for building the diagram, and finding an ordering that optimizes standard metrics is an NP-hard problem. In this paper, we propose an innovative and generic approach based on deep reinforcement learning for obtaining an ordering that tightens the bounds obtained with relaxed and restricted DDs. We apply the approach to both the Maximum Independent Set Problem and the Maximum Cut Problem. Experimental results on synthetic instances show that, by achieving tighter objective function bounds, the deep reinforcement learning approach generally outperforms ordering methods commonly used in the literature when the distribution of instances is known. To the best of the authors' knowledge, this is the first paper to apply machine learning to directly improve relaxation bounds obtained by general-purpose bounding mechanisms for combinatorial optimization problems. Comment: Accepted and presented at AAAI'1
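    To make the bounding mechanism concrete, here is a minimal sketch of a relaxed decision diagram computing an upper bound for the Maximum Independent Set Problem under a given variable ordering; the dict-of-sets graph encoding, the width limit, and the merge-the-worst-states heuristic are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: relaxed decision diagram bound for Maximum Independent Set.
# States are frozensets of still-eligible vertices; when a layer exceeds the
# width limit, the two lowest-value states are merged (set union), which
# relaxes the problem and can only over-estimate the optimum.
def relaxed_dd_bound(adjacency, ordering, max_width=16):
    # Each layer maps a state to the best objective value of any path reaching it.
    layer = {frozenset(adjacency): 0}
    for v in ordering:
        nxt = {}
        for state, value in layer.items():
            out = state - {v}                        # branch: exclude v
            nxt[out] = max(nxt.get(out, float("-inf")), value)
            if v in state:                           # branch: include v if eligible
                inc = state - {v} - adjacency[v]     # v's neighbours become ineligible
                nxt[inc] = max(nxt.get(inc, float("-inf")), value + 1)
        # Relaxation step: merge the lowest-value states until the layer fits.
        while len(nxt) > max_width:
            (s1, v1), (s2, v2) = sorted(nxt.items(), key=lambda kv: kv[1])[:2]
            del nxt[s1], nxt[s2]
            merged = s1 | s2                         # union keeps all completions feasible
            nxt[merged] = max(nxt.get(merged, float("-inf")), v1, v2)
        layer = nxt
    return max(layer.values())                       # upper bound on the optimum

# Example: a 5-cycle, whose maximum independent set has size 2.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(relaxed_dd_bound(adj, ordering=[0, 1, 2, 3, 4]))   # exact here: 2
```

    The variable ordering passed in is exactly the degree of freedom the paper's reinforcement learning agent chooses; different orderings change how much information the merge step destroys, and hence how tight the bound is.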

    Physics-aware modelling of an accelerated particle cloud

    Particle accelerator simulators, pivotal for accelerator optimization, are computationally heavy; surrogate, machine-learning-based models are therefore trained to facilitate accelerator fine-tuning. While these models are efficient, they do not allow simulating the beam at the level of individual particles. This paper adapts point cloud deep learning methods, developed for computer vision, to model particle beams.
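    As a rough illustration of the point cloud approach described above, the sketch below applies a PointNet-style shared per-particle encoder with permutation-invariant pooling to a cloud of 6D phase-space coordinates; the module name, layer sizes, and residual per-particle decoder are assumptions, not the paper's architecture.

```python
# Minimal PointNet-style sketch: map an input particle cloud to its predicted
# state after an accelerating section. Each particle carries 6 phase-space
# coordinates (x, x', y, y', z, dp/p); a shared per-particle encoder plus a
# symmetric (permutation-invariant) pooling gives a beam-level context that
# conditions the per-particle prediction.
import torch
import torch.nn as nn

class BeamPointNet(nn.Module):
    def __init__(self, dim: int = 6, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden + dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim))

    def forward(self, particles: torch.Tensor) -> torch.Tensor:
        # particles: (batch, n_particles, 6)
        feats = self.encoder(particles)                 # per-particle features
        context = feats.mean(dim=1, keepdim=True)       # beam-level summary
        context = context.expand(-1, particles.size(1), -1)
        # Predict each particle's change, conditioned on the whole beam
        # (this is where collective effects such as space charge can enter).
        return particles + self.decoder(torch.cat([particles, context], dim=-1))

beam_in = torch.randn(2, 1000, 6)        # two bunches of 1000 macro-particles
beam_out = BeamPointNet()(beam_in)       # same shape: predicted downstream cloud
```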

    Surrogate Model for Linear Accelerator: A fast Neural Network approximation of ThomX's simulator

    Accelerator physics simulators accurately predict the propagation of a beam in a particle accelerator, taking into account the particle interactions (space charge) inside the beam. A precise estimation of the space charge is required to understand the potential errors causing the difference between simulations and reality. Unfortunately, computing the space charge is expensive, requiring the simulation of tens of thousands of particles to obtain an accurate prediction. This paper presents a machine-learning-based approximation of the simulator output, known as a surrogate model. Such an inexpensive surrogate model can support multiple experiments in parallel, allowing wide exploration of the simulator control parameters. While the state of the art is limited to a few such parameters with restricted ranges, the proposed approach, LinacNet, scales up to one hundred parameters with wide domains. LinacNet represents the beam as a large particle cloud and estimates the particle behaviour with a dedicated neural network architecture that reflects the layout of a linac and its different physical regimes.
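    The sketch below illustrates one plausible reading of such a linac-shaped surrogate: a chain of per-section sub-networks, each conditioned on that section's control parameters. The class names, dimensions, and four-section split are hypothetical; this is not the published LinacNet code.

```python
# Hypothetical sketch of a surrogate that mirrors a linac's layout: one
# sub-network per physical section, each consuming the beam representation
# produced so far together with that section's control parameters
# (e.g. gun, buncher, cavity, quadrupole settings).
import torch
import torch.nn as nn

class SectionNet(nn.Module):
    def __init__(self, beam_dim: int, n_controls: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(beam_dim + n_controls, hidden), nn.ReLU(),
                                 nn.Linear(hidden, beam_dim))

    def forward(self, beam, controls):
        # Residual update: each section perturbs the incoming beam representation.
        return beam + self.net(torch.cat([beam, controls], dim=-1))

class LinacSurrogate(nn.Module):
    """Chain of section models; controls_per_section lists how many tunable
    parameters each physical section of the machine exposes."""
    def __init__(self, beam_dim: int, controls_per_section: list):
        super().__init__()
        self.sections = nn.ModuleList(SectionNet(beam_dim, c) for c in controls_per_section)

    def forward(self, beam, controls: list):
        for section, ctrl in zip(self.sections, controls):
            beam = section(beam, ctrl)
        return beam

# Hypothetical machine with four sections and ~100 control parameters in total.
model = LinacSurrogate(beam_dim=64, controls_per_section=[20, 30, 30, 20])
beam0 = torch.randn(8, 64)                                 # batch of encoded beams
settings = [torch.randn(8, c) for c in [20, 30, 30, 20]]   # per-section controls
beam_end = model(beam0, settings)
```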

    First Electron Beam of the ThomX Project

    The ThomX accelerator beam commissioning phase is now ongoing. The 50 MeV electron accelerator complex consists of a 50 MeV linear accelerator and a ring operated in pulsed mode. It is dedicated to the production of X-rays by Compton backscattering. The performance required of the beam at the interaction point is demanding in terms of emittance, charge, energy spread and transverse size. The choice of an undamped ring in pulsed mode also places stringent demands on the beam delivered by the linear accelerator. Commissioning therefore includes beam-based alignment and a simulation/experiment matching procedure to reach the X-ray beam requirements. We present the first 50 MeV electron beam obtained with ThomX and its characteristics.