6 research outputs found
Status of the low beta 0.07 cryomodules for SPIRAL2
The status of the low beta cryomodules for SPIRAL2, supplied by the Irfu institute of CEA Saclay, is reported in this paper. We summarise in three parts the RF tests performed on the cavities in a vertical cryostat, the RF power tests of the qualifying cryomodule performed in 2010, and the RF power tests performed in 2011 on the first cryomodule of the series.
Tests status of the SPIRAL 2 low beta cryomodules
TU5PFP041. The Spiral 2 project at GANIL aims at producing exotic ion beams for nuclear physics. The accelerator of the primary beam is a superconducting linac designed to provide 5 mA deuteron beams at 40 MeV. It will also allow accelerating stable ions of different Q/A values, ranging from protons to Q/A = 1/6 heavy ions. The accelerator should be commissioned by the end of 2011, with first beam in 2012. The first tests aiming to produce exotic beams are planned one year later. The superconducting LINAC consists of 12 low beta (0.07) quarter-wave (88 MHz) superconducting (SC) cavities and 24 beta (0.14) SC cavities integrated in their cryomodules. The status of the low beta cryomodules, supplied by the Irfu institute of CEA Saclay, is reported in this paper. The RF full power tests were performed on the qualifying cryomodule at the end of 2008 and the beginning of 2009, and the tests of the first series cavity in a vertical cryostat are in progress.
Tests of the low beta cavities and cryomodules for the SPIRAL 2 Linac
TUPPO003
Learning to Play No-Press Diplomacy with Best Response Policy Iteration
Recent advances in deep reinforcement learning (RL) have led to considerable progress in many 2-player zero-sum games, such as Go, Poker and StarCraft. The purely adversarial nature of such games allows for conceptually simple and principled application of RL methods. However, real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods, we successfully apply RL to Diplomacy: we show that our agents convincingly outperform the previous state-of-the-art, and game-theoretic equilibrium analysis shows that the new process yields consistent improvements.
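The abstract describes the method only at a high level. As a rough illustration of the general idea, iteratively best-responding to the average of past policies in the spirit of fictitious play, the Python sketch below runs the loop on a toy 2-action simultaneous-move matrix game. The payoff matrix, function names and single-population setup are illustrative assumptions for this sketch only; the paper's actual agents use RL-trained neural network policies and sampled best responses in Diplomacy, which this does not reproduce.

from collections import defaultdict

# Toy symmetric 2-action game: PAYOFF[own_action][opponent_action].
# Chosen arbitrarily for illustration; not from the paper.
PAYOFF = [[0.0, 3.0],
          [1.0, 2.0]]
ACTIONS = [0, 1]

def expected_payoff(action, opponent_policy):
    # Expected payoff of `action` against a mixed opponent policy {action: prob}.
    return sum(prob * PAYOFF[action][opp_a] for opp_a, prob in opponent_policy.items())

def average_policy(population):
    # Uniform mixture over all policies gathered so far (the fictitious-play average).
    mix = defaultdict(float)
    for policy in population:
        for a, p in policy.items():
            mix[a] += p / len(population)
    return dict(mix)

def best_response(opponent_policy):
    # Exact best response in this toy game; the paper approximates this step with RL.
    values = {a: expected_payoff(a, opponent_policy) for a in ACTIONS}
    best = max(values, key=values.get)
    return {a: (1.0 if a == best else 0.0) for a in ACTIONS}

def best_response_policy_iteration(iterations=20):
    # Start from a uniform policy, then repeatedly add a best response
    # to the current population average.
    population = [{a: 1.0 / len(ACTIONS) for a in ACTIONS}]
    for _ in range(iterations):
        population.append(best_response(average_policy(population)))
    return average_policy(population)

if __name__ == "__main__":
    print(best_response_policy_iteration())

Running the sketch prints the time-averaged policy, which in this anti-coordination-style toy game drifts toward a mixed strategy as the iterations alternate between the two pure best responses.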