79,497 research outputs found

    Series of Hessian-Vector Products for Tractable Saddle-Free Newton Optimisation of Neural Networks

    Despite their popularity in the field of continuous optimisation, second-order quasi-Newton methods are challenging to apply in machine learning, as the Hessian matrix is intractably large. This computational burden is exacerbated by the need to address non-convexity, for instance by modifying the Hessian's eigenvalues as in Saddle-Free Newton methods. We propose an optimisation algorithm which addresses both of these concerns: to our knowledge, the first efficiently-scalable optimisation algorithm to asymptotically use the exact (eigenvalue-modified) inverse Hessian. Our method frames the problem as a series which principally square-roots and inverts the squared Hessian, then uses it to precondition a gradient vector, all without explicitly computing or eigendecomposing the Hessian. A truncation of this infinite series provides a new optimisation algorithm which is scalable and comparable to other first- and second-order optimisation methods in both runtime and optimisation performance. We demonstrate this in a variety of settings, including a ResNet-18 trained on CIFAR-10. Comment: 36 pages, 10 figures, 5 tables. Submitted to TMLR. First two authors' order randomised.
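
    The abstract describes preconditioning the gradient with the eigenvalue-modified inverse Hessian |H|^{-1} = (H^2)^{-1/2}, applied using only Hessian-vector products. The sketch below is a minimal illustration of that idea via a generic binomial series for the inverse square root, not the authors' exact series or truncation; the hvp callable and the spectral bound c are assumptions supplied by the caller.

```python
import numpy as np

def abs_inv_hessian_precondition(hvp, grad, c, num_terms=10):
    """Approximate |H|^{-1} grad = (H^2)^{-1/2} grad with a truncated series.

    hvp       : callable v -> H @ v (one Hessian-vector product, e.g. via autodiff)
    grad      : gradient vector to precondition
    c         : scalar with c >= lambda_max(H)^2, so H^2 / c has spectrum in (0, 1]
    num_terms : truncation point of the infinite series

    Uses (I - X)^{-1/2} = sum_k binom(2k, k) / 4^k * X^k with X = I - H^2 / c,
    so each extra term costs two Hessian-vector products and no explicit Hessian.
    """
    def apply_X(v):
        return v - hvp(hvp(v)) / c           # X v = v - H(Hv)/c, two HVPs

    term = grad.copy()                        # X^0 grad, coefficient a_0 = 1
    result = term.copy()
    coeff = 1.0
    for k in range(num_terms - 1):
        term = apply_X(term)                  # X^{k+1} grad
        coeff *= (2 * k + 1) / (2 * k + 2)    # a_{k+1} = a_k * (2k+1)/(2k+2)
        result += coeff * term
    return result / np.sqrt(c)                # |H|^{-1} g = c^{-1/2} (H^2/c)^{-1/2} g
```

    The returned vector would then replace the raw gradient in an otherwise ordinary descent step, mirroring the role of the preconditioned gradient in the abstract.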

    Optimisation of the SHiP Beam Dump Facility with generative surrogate models

    The SHiP experiment is a proposed fixed-target experiment at the CERN SPS to search for new particles. To operate optimally, the experiment should feature a zero-background environment. The residual muons flying from the target are one of the largest sources of background. To remove them from the detector acceptance, a dedicated muon shield magnet is introduced in the experiment. The shield should be optimised to deliver the best physics performance at the lowest cost. The optimisation procedure is very computationally costly and thus requires dedicated methods. This thesis comprises a detailed description of a new machine learning method for the optimisation, comparisons to existing techniques, and the application of the method to optimising the muon shield magnet. In addition, the set of technological and simulation problems affecting the optimisation is discussed in detail. Finally, the set of requirements for the muon shield prototype design and verification is presented.
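
    As a rough illustration of surrogate-assisted optimisation of an expensive simulation, the sketch below fits a cheap regression surrogate to a handful of simulator calls and uses it to screen candidate parameters before spending another simulation. The thesis itself uses generative surrogate models and a dedicated procedure; the toy objective and parameter ranges here are purely hypothetical stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(params):
    """Stand-in for the costly physics simulation (hypothetical objective:
    muon flux in acceptance plus a cost penalty on the magnet parameters)."""
    return np.sum(np.sin(3 * params) ** 2) + 0.1 * np.sum(params ** 2)

rng = np.random.default_rng(0)
dim, n_init, n_rounds, n_candidates = 4, 20, 10, 2000

# Initial design: a handful of expensive simulator calls.
X = rng.uniform(-2.0, 2.0, size=(n_init, dim))
y = np.array([expensive_simulation(x) for x in X])

for _ in range(n_rounds):
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    # Screen many cheap candidates with the surrogate, simulate only the best one.
    candidates = rng.uniform(-2.0, 2.0, size=(n_candidates, dim))
    best = candidates[np.argmin(surrogate.predict(candidates))]
    X = np.vstack([X, best])
    y = np.append(y, expensive_simulation(best))

print("best parameters:", X[np.argmin(y)], "objective:", y.min())
```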

    Text Augmentation: Inserting markup into natural language text with PPM Models

    This thesis describes a new optimisation and new heuristics for automatically marking up XML documents. These are implemented in CEM, using PPM models. CEM is significantly more general than previous systems, marking up large numbers of hierarchical tags, using n-gram models for large n and a variety of escape methods. Four corpora are discussed, including the bibliography corpus of 14,682 bibliographies laid out in seven standard styles using the BibTeX system and marked up in XML with every field from the original BibTeX. Other corpora include the ROCLING Chinese text segmentation corpus, the Computists' Communique corpus and the Reuters corpus. A detailed examination is presented of the methods of evaluating markup algorithms, including computational complexity measures and correctness measures from the fields of information retrieval, string processing, machine learning and information theory. A new taxonomy of markup complexities is established and the properties of each taxon are examined in relation to the complexity of marked-up documents. The performance of the new heuristics and optimisation is examined using the four corpora.
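
    A minimal sketch of the idea behind model-based markup, under the assumption that each tag gets its own character-level model and a text field is assigned the tag whose model encodes it most cheaply. A real PPM implementation as used in CEM would use higher-order contexts, proper escape methods, and a search over tag sequences; the two-tag toy data here is illustrative only.

```python
from collections import defaultdict
import math

class CharNGramModel:
    """Very small order-2 character model with a crude escape to order 0;
    a stand-in for a full PPM model with proper escape methods."""
    def __init__(self):
        self.ngram = defaultdict(lambda: defaultdict(int))
        self.unigram = defaultdict(int)
        self.total = 0

    def train(self, text):
        for i, ch in enumerate(text):
            ctx = text[max(0, i - 2):i]
            self.ngram[ctx][ch] += 1
            self.unigram[ch] += 1
            self.total += 1

    def log_prob(self, ctx, ch):
        counts = self.ngram.get(ctx)
        if counts and ch in counts:
            return math.log(counts[ch] / sum(counts.values()))
        # Crude "escape": back off to an add-one-smoothed order-0 model.
        return math.log((self.unigram.get(ch, 0) + 1) / (self.total + 256)) - math.log(2)

    def cross_entropy(self, text):
        return -sum(self.log_prob(text[max(0, i - 2):i], ch)
                    for i, ch in enumerate(text)) / max(len(text), 1)

# One model per tag; a field gets the tag whose model encodes it most cheaply.
models = {"author": CharNGramModel(), "title": CharNGramModel()}
models["author"].train("Knuth, D. E. and Lamport, L. and Bentley, J.")
models["title"].train("The Art of Computer Programming, Volume 1")

field = "Goldberg, D. E."
tag = min(models, key=lambda t: models[t].cross_entropy(field))
print(tag)  # expected to prefer "author" on this toy data
```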

    Inductive machine learning of optimal modular structures: Estimating solutions using support vector machines

    Structural optimization is usually handled by iterative methods requiring repeated samples of a physics-based model, but this process can be computationally demanding. Given a set of previously optimized structures of the same topology, this paper uses inductive learning to replace this optimization process entirely by deriving a function that directly maps any given load to an optimal geometry. A support vector machine is trained to determine the optimal geometry of individual modules of a space frame structure given a specified load condition. Structures produced by learning are compared against those found by a standard gradient descent optimization, both as individual modules and then as a composite structure. The primary motivation for this is speed, and results show the process is highly efficient for cases in which similar optimizations must be performed repeatedly. The function learned by the algorithm can approximate the result of optimization very closely after sufficient training, and has also been found effective at generalizing the underlying optima to produce structures that perform better than those found by standard iterative methods.
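
    A minimal sketch of the learning setup described above, assuming a stored dataset of load conditions paired with geometries returned by a conventional iterative optimiser (synthesised here as a toy function); one support vector regressor per geometry coordinate then maps a new load directly to a predicted optimal geometry.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a load condition (magnitude, direction angle)
# paired with the module geometry that a conventional iterative optimiser returned.
loads = rng.uniform([1.0, -0.5], [10.0, 0.5], size=(200, 2))
# Stand-in for the stored optimisation results (node offsets of one module).
optimal_geometry = np.column_stack([
    0.3 * np.sqrt(loads[:, 0]) * np.cos(loads[:, 1]),
    0.3 * np.sqrt(loads[:, 0]) * np.sin(loads[:, 1]),
    0.05 * loads[:, 0],
])

# One RBF support vector regressor per geometry coordinate.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(loads, optimal_geometry)

# Amortised "optimisation": a single prediction replaces a full iterative run.
new_load = np.array([[6.5, 0.2]])
print(model.predict(new_load))
```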