
    Distributed Training and Optimization Of Neural Networks

    Deep learning models are yielding increasingly better performance thanks to multiple factors. To be successful, a model may have a large number of parameters or a complex architecture, and be trained on a large dataset. This leads to large requirements on computing resources and turnaround time, even more so when hyper-parameter optimization is performed (e.g. a search over model architectures). While this challenge goes beyond particle physics, we review the various ways to do the necessary computations in parallel, and put them in the context of high energy physics.
    Comment: 20 pages, 4 figures, 2 tables. Submitted for review. To appear in "Artificial Intelligence for Particle Physics", World Scientific Publishing.
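
    As an illustration of the kind of parallelism such a review covers, below is a minimal sketch of synchronous data-parallel training using PyTorch's torch.distributed: each worker holds a model replica, computes gradients on its own shard of data, and gradients are averaged across workers at every step. The model, data, and hyper-parameters here are placeholders and are not taken from the paper, which surveys several parallelization strategies rather than prescribing this one.

```python
# Minimal sketch of synchronous data-parallel training.
# Launch with, e.g.:  torchrun --nproc_per_node=4 data_parallel_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn

def main():
    dist.init_process_group(backend="gloo")      # use "nccl" on GPUs
    world = dist.get_world_size()

    # Placeholder model; every rank builds its own replica.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

    # Make sure all replicas start from identical parameters.
    for p in model.parameters():
        dist.broadcast(p.data, src=0)

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(100):
        # Each rank draws its own shard of (synthetic) data.
        x = torch.randn(16, 32)
        y = torch.randn(16, 1)

        opt.zero_grad()
        loss_fn(model(x), y).backward()

        # Synchronous gradient averaging: all-reduce the sum, then
        # divide by the number of workers.
        for p in model.parameters():
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world

        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

    Because the gradient averaging is synchronous, every replica applies the same update and the replicas stay identical; the cost is a communication barrier at each step, which is one of the trade-offs a survey of distributed training typically discusses.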