
    Digital Collections of Examples in Mathematical Sciences

    Some areas of Computer Algebra (notably Computational Group Theory and Computational Number Theory) have good databases of examples, typically of the form "all the X up to size n". But most of the other areas, especially on the polynomial side, lack such databases, despite the utility that shared example collections have demonstrated in the related fields of SAT and SMT solving. We claim that the field would be enhanced by such community-maintained databases, rather than each author hand-selecting a few examples, which are often too large or error-prone to print, and therefore difficult for subsequent authors to reproduce.

    Comment: Presented at the 8th European Congress of Mathematicians

    Machine-Assisted Proofs


    The formal verification of the ctm approach to forcing

    We discuss some highlights of our computer-verified proof of the construction, given a countable transitive set-model $M$ of $\mathit{ZFC}$, of generic extensions satisfying $\mathit{ZFC}+\neg\mathit{CH}$ and $\mathit{ZFC}+\mathit{CH}$. Moreover, let $\mathcal{R}$ be the set of instances of the Axiom of Replacement. We isolated a 21-element subset $\Omega\subseteq\mathcal{R}$ and defined $\mathcal{F}:\mathcal{R}\to\mathcal{R}$ such that for every $\Phi\subseteq\mathcal{R}$ and $M$-generic $G$, $M\models \mathit{ZC}\cup\mathcal{F}\text{``}\Phi\cup\Omega$ implies $M[G]\models \mathit{ZC}\cup\Phi\cup\{\neg\mathit{CH}\}$, where $\mathit{ZC}$ is Zermelo set theory with Choice. To achieve this, we worked in the proof assistant Isabelle, basing our development on the Isabelle/ZF library by L. Paulson and others.

    Comment: 20pp + 14pp in bibliography & appendices, 2 tables
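    For readability, the implication at the heart of the abstract can be set as a display, using the same symbols as above ($\Phi\subseteq\mathcal{R}$ arbitrary, $G$ an $M$-generic filter):

```latex
% The central preservation statement of the abstract, displayed:
\[
  M \models \mathit{ZC} \cup \mathcal{F}\text{``}\Phi \cup \Omega
  \quad\Longrightarrow\quad
  M[G] \models \mathit{ZC} \cup \Phi \cup \{\neg\mathit{CH}\}
\]
% for every \Phi \subseteq \mathcal{R} and every M-generic filter G,
% where \mathcal{F}``\Phi denotes the image of \Phi under \mathcal{F}.
```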

    Computational modelling and simulations in tourism: A primer

    The aim of this contribution is to briefly sketch and discuss the main issues that concern the activities of modelling and simulating complex phenomena and systems. The focus is on numerical and computational techniques. We discuss the validity of these methods and examine the different steps to be taken to ensure a correct, accurate and reliable implementation. The approach is essentially general and methodological in nature, regardless of specific techniques or tools.

    Framing Global Mathematics

    This open access book is about the shaping of international relations in mathematics over the last two hundred years. It focusses on institutions and organizations that were created to frame the international dimension of mathematical research. Today, striking evidence of globalized mathematics is provided by countless international meetings and the worldwide repository ArXiv. The text follows the sinuous path that was taken to reach this state, from the long nineteenth century, through the two world wars, to the present day. International cooperation in mathematics was well established by 1900, centered in Europe. The first International Mathematical Union, IMU, founded in 1920 and disbanded in 1932, reflected above all the trauma of WW I. Since 1950 the current IMU has played an increasing role in defining mathematical excellence, as is shown both in the historical narrative and by analyzing data about the International Congresses of Mathematicians. For each of the three periods discussed, interactions are explored between world politics, the advancement of scientific infrastructures, and the inner evolution of mathematics. Readers will thus take a new look at the place of mathematics in world culture, and at how international organizations can make a difference. Aimed at mathematicians, historians of science, scientists, and the scientifically inclined general public, the book will be valuable to anyone interested in the history of science on an international level.

    Advanced Statistical Learning Techniques for High-Dimensional Imaging Data

    With the rapid development of neuroimaging techniques, scientists are interested in identifying imaging biomarkers that are related to different subtypes or transitional stages of various cancers, neuropsychiatric diseases, and neurodegenerative diseases. Scalar-on-image models have demonstrated good performance in such tasks. However, due to the high dimensionality of imaging data, traditional methods may not work well in the estimation of such models. Some existing penalization methods may improve the performance but fail to take the complex spatial structure of the neuroimaging data into account. In the past decade, spatially regularized methods have become popular due to their good performance in terms of both estimation and prediction. Despite this progress, many challenges remain. In particular, most existing image classification methods focus on binary classification and consequently may underperform on the tasks of classifying diseases with multiple subtypes or transitional stages. Moreover, neuroimaging data usually present significant heterogeneity across subjects. As a result, existing methods for homogeneous data may fail. In this dissertation, we investigate several new statistical learning techniques and propose a Spatial Multi-category Angle based Classifier (SMAC), a Subject Variant Scalar-on-Image Regression (SVSIR) model and a Masking Convolutional Neural Network (MCNN) model to address the above issues. Extensive simulation studies and practical applications in neuroscience are presented to demonstrate the effectiveness of our proposed methods.
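    To make the "scalar-on-image regression with a spatial penalty" idea concrete, here is a minimal self-contained sketch. It is our own illustration, not the dissertation's SVSIR, SMAC, or MCNN methods: all sizes, penalty weights, and names below are assumptions chosen for demonstration. A scalar outcome is regressed on tiny flattened "images", with a ridge term plus a discrete-Laplacian term encouraging spatially smooth coefficients.

```python
# Illustrative sketch only (NOT the dissertation's methods): penalized
# scalar-on-image regression with a spatial smoothness penalty.
import numpy as np

rng = np.random.default_rng(0)
h = w = 8                       # tiny 8x8 "images"
p, n = h * w, 200               # p voxels, n subjects

# True coefficient image: a small active square region.
beta = np.zeros((h, w))
beta[2:5, 2:5] = 1.0
beta = beta.ravel()

X = rng.normal(size=(n, p))                    # subject images, flattened
y = X @ beta + rng.normal(scale=0.5, size=n)   # scalar outcomes

def laplacian(h, w):
    """Discrete graph Laplacian on the image grid (4-neighbourhood)."""
    L = np.zeros((h * w, h * w))
    for i in range(h):
        for j in range(w):
            k = i * w + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    L[k, k] += 1.0
                    L[k, ni * w + nj] -= 1.0
    return L

L = laplacian(h, w)
lam_ridge, lam_smooth = 1.0, 5.0               # assumed penalty weights
# Closed-form minimizer of ||y - Xb||^2 + lam_ridge||b||^2 + lam_smooth||Lb||^2
A = X.T @ X + lam_ridge * np.eye(p) + lam_smooth * L.T @ L
beta_hat = np.linalg.solve(A, X.T @ y)

err = np.linalg.norm(beta_hat - beta) / np.linalg.norm(beta)
print(f"relative estimation error: {err:.2f}")
```

    The spatial term shrinks differences between neighbouring voxels rather than the coefficients themselves, which is the basic intuition behind the spatially regularized estimators the abstract refers to.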

    A Study Of The Mathematics Of Deep Learning

    "Deep Learning"/"Deep Neural Nets" is a technological marvel that is now increasingly deployed at the cutting edge of artificial intelligence tasks. This ongoing revolution can be said to have been ignited by the iconic 2012 paper from the University of Toronto, "ImageNet Classification with Deep Convolutional Neural Networks" by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton, which showed that deep nets can classify images into meaningful categories with almost human-like accuracy. As of 2020 this approach continues to produce unprecedented performance for an ever-widening variety of novel purposes, ranging from playing chess to self-driving cars to experimental astrophysics and high-energy physics. But this newfound, astonishing success of deep neural nets has hinged on an enormous amount of heuristics and has turned out to be extremely challenging to explain with mathematical rigor. In this thesis we take several steps towards building strong theoretical foundations for these new paradigms of deep learning. Our proofs can be broadly grouped into three categories.

    1. Understanding neural function spaces. We show new circuit complexity theorems for deep neural functions over real and Boolean inputs, and prove classification theorems about these function spaces which in turn lead to exact algorithms for empirical risk minimization for depth-2 ReLU nets. We also motivate a measure of complexity of neural functions and leverage techniques from polytope geometry to constructively establish the existence of high-complexity neural functions.

    2. Understanding deep-learning algorithms. We give fast iterative stochastic algorithms which can learn near-optimal approximations of the true parameters of a ReLU gate in the realizable setting. (Improved versions of this result are available in our papers https://arxiv.org/abs/2005.01699 and https://arxiv.org/abs/2005.04211, which are not included in the thesis.) We also establish the first-ever (a) mathematical control on the behaviour of noisy gradient descent on a ReLU gate, and (b) proofs of convergence of stochastic and deterministic versions of the widely used adaptive-gradient deep-learning algorithms RMSProp and ADAM. This study also includes a first-of-its-kind detailed empirical study of the hyperparameter values and neural-net architectures for which these modern algorithms have a significant advantage over classical acceleration-based methods.

    3. Understanding the risk of (stochastic) neural nets. We push forward the emergent technology of PAC-Bayesian bounds for the risk of stochastic neural nets to get bounds which are not only empirically smaller than those of contemporary theories but also grow more slowly with the width and depth of the net in experimental tests. These results critically depend on our novel theorems proving noise resilience of nets. This work also includes an experimental investigation of the geometric properties of the path in weight space traced out by the net during training, which uncovers certain seemingly uniform and surprising geometric properties of this process that could potentially be leveraged into better bounds in future work.
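    The "learning a ReLU gate in the realizable setting" problem mentioned in the abstract can be illustrated with a short sketch. This is our own toy example, not the thesis's algorithm: it uses a plain GLM-tron-style stochastic update (Kakade et al.'s scheme for monotone activations), and every parameter choice below is an assumption. Noiseless labels come from a ground-truth gate $x \mapsto \max(0, w^* \cdot x)$, and the learner updates towards it from fresh Gaussian samples.

```python
# Toy illustration (NOT the thesis's algorithm): a GLM-tron-style
# stochastic update recovering the parameters of a single ReLU gate
# from noiseless (realizable) data.
import numpy as np

rng = np.random.default_rng(1)
d = 5
w_star = rng.normal(size=d)          # ground-truth gate parameters

def relu(z):
    return np.maximum(0.0, z)

w = np.zeros(d)                      # learner's initial parameters
lr = 0.02                            # assumed step size
for _ in range(20000):
    x = rng.normal(size=d)           # fresh Gaussian sample each step
    y = relu(w_star @ x)             # noiseless label from the true gate
    # GLM-tron update: no derivative of the activation, just the residual.
    w -= lr * (relu(w @ x) - y) * x

print(f"parameter error: {np.linalg.norm(w - w_star):.3f}")
```

    At the true parameters the residual is zero on every sample, so the iterates settle near $w^*$; the thesis's results concern provable rates and noise tolerance for algorithms of broadly this flavour.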

    Mathematics & Statistics 2017 APR Self-Study & Documents

    UNM Mathematics & Statistics APR self-study report, review team report, response report, and initial action plan for Spring 2017, fulfilling requirements of the Higher Learning Commission.