
    Polynomial-time Tensor Decompositions with Sum-of-Squares

    We give new algorithms based on the sum-of-squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to sum-of-squares relaxations. To enable this analysis we augment sum-of-squares relaxations with spectral analogs of maximum entropy constraints. Comment: to appear in FOCS 201
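The sum-of-squares relaxations themselves are too heavy for a short snippet, but the problem being solved can be sketched in the much easier undercomplete, orthogonal setting, where plain tensor power iteration already recovers a component. Everything below (dimensions, iteration count, the power-iteration method itself) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 5

# Orthonormal components a_1..a_k (columns of A): the easy, orthogonal case
A = np.linalg.qr(rng.standard_normal((n, k)))[0]

# Build the symmetric 3-tensor T = sum_i a_i (x) a_i (x) a_i
T = np.einsum('ri,si,ti->rst', A, A, A)

# Tensor power iteration: x <- T(I, x, x), normalised each step.
# With orthonormal components the overlap with the leading component
# squares at every step, so convergence is very fast.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(50):
    x = np.einsum('rst,s,t->r', T, x, x)
    x /= np.linalg.norm(x)

overlaps = A.T @ x
print(np.max(overlaps))  # close to 1: one component has been recovered
```

The overcomplete regime (k > n) treated in the abstract is exactly where this simple iteration breaks down and the sum-of-squares machinery is needed.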

    Measuring Hubble's Constant in our Inhomogeneous Universe

    Recent observations of Cepheids in the Virgo cluster have bolstered the evidence that supports a Hubble constant in the 70-90 km/s/Mpc range. This evidence, by and large, probes the expansion of the Universe within 100 Mpc. We investigate the possibility that the expansion rate within this region is systematically higher than the true expansion rate due to the presence of a local, large underdense region or void. We begin by calculating the expected deviations between the locally measured Hubble constant and the true Hubble constant for a variety of models. We also discuss the expected correlations between these deviations and the mass fluctuation in the sample volume. We find that the fluctuations are small for standard cold dark matter and mixed dark matter models but can be substantial in a number of interesting and viable nonstandard scenarios. However, deviations in the Hubble flow for a region of radius 200 Mpc are small for virtually all reasonable models. Therefore, methods based on supernovae or the Sunyaev-Zel'dovich effect, which can probe 200 Mpc scales, will be essential in determining the true Hubble constant. We discuss, in detail, the fluctuations induced in the cosmic background radiation by voids at the last scattering surface. In addition, we discuss the dipole and quadrupole fluctuations one would expect if the void enclosing us is aspherical or if we lie off-center. Comment: 20 pages (58K), 8 Postscript figures (111K compressed); Submitted to MNRAS. Postscript source available at http://astro.queensu.ca/~dursi/preprints
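The size of the effect can be estimated with the standard linear-perturbation-theory relation delta_H/H ≈ -(f/3)·delta, with growth rate f ≈ Omega_m^0.6. This is a textbook approximation, not the specific void models computed in the paper, and the density contrast below is an illustrative assumption:

```python
def local_hubble_bias(delta, omega_m=0.3):
    """Linear-theory fractional bias of the locally measured Hubble rate
    inside a region of density contrast delta:  dH/H ~ -(f/3) * delta,
    with growth rate f approximated by omega_m**0.6."""
    f = omega_m ** 0.6
    return -f * delta / 3.0

# A 30% underdense local void (delta = -0.3) in an Omega_m = 1 universe
# would inflate the locally measured H0 by about 10%.
print(local_hubble_bias(-0.3, omega_m=1.0))
```

This back-of-the-envelope number is consistent with the abstract's point: a substantial bias needs both a deep void and a high growth rate, which is why only some nonstandard scenarios produce large deviations.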

    Efficient Computation of the Kauffman Bracket

    This paper bounds the computational cost of computing the Kauffman bracket of a link in terms of the crossing number of that link. Specifically, it is shown that the image of a tangle with g boundary points and n crossings in the Kauffman bracket skein module is a linear combination of O(2^g) basis elements, with each coefficient a polynomial with at most n nonzero terms, each with integer coefficients, and that the link can be built one crossing at a time as a sequence of tangles whose maximum number of boundary points is bounded by C√n for some constant C. From this it follows that the computation of the Kauffman bracket of the link takes time and memory a polynomial in n times 2^(C√n).
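As a rough sanity check on why the headline bound matters, the ratio of poly(n)·2^(C√n) to a naive 2^n state-sum count can be tabulated; the constant C and polynomial degree below are placeholder assumptions, not the paper's values:

```python
import math

def brute_force_states(n):
    """Naive skein expansion resolves each of n crossings: 2^n states."""
    return 2.0 ** n

def tangle_bound(n, C=2.0, degree=3):
    """Shape of the paper's bound: poly(n) * 2^(C*sqrt(n)).
    C and degree are illustrative placeholders."""
    return (n ** degree) * 2.0 ** (C * math.sqrt(n))

# Subexponential beats exponential once n is moderately large
for n in (16, 100, 400):
    print(n, tangle_bound(n) / brute_force_states(n))
```

For small n the polynomial factor can make the bound look worse than brute force, but the ratio collapses rapidly as n grows, which is the point of building the link through tangles with only O(√n) boundary points.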

    Intelligent Identification for Rock-Mineral Microscopic Images Using Ensemble Machine Learning Algorithms

    It is significant to identify rock-mineral microscopic images in geological engineering. The task of microscopic mineral image identification, which is often conducted in the lab, is tedious and time-consuming. Deep learning and convolutional neural networks (CNNs) provide a way to analyze mineral microscopic images efficiently and automatically. In this research, a transfer learning model for mineral microscopic images is established based on the Inception-v3 architecture. Features of four mineral image classes, K-feldspar (Kf), perthite (Pe), plagioclase (Pl), and quartz (Qz or Q), are extracted using Inception-v3. Based on these features, machine learning methods, logistic regression (LR), support vector machine (SVM), random forest (RF), k-nearest neighbors (KNN), multilayer perceptron (MLP), and Gaussian naive Bayes (GNB), are used to build identification models. The results are evaluated using 10-fold cross-validation. LR, SVM, and MLP perform best among the single models, each with accuracy of about 90.0%, showing that they are well suited to this high-dimensional feature analysis. These three models are therefore selected as the base models for model stacking, with LR as the meta-classifier in the final prediction. The stacking model achieves 90.9% accuracy, higher than any single model, showing that model stacking effectively improves performance.
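The stacking scheme described above can be sketched with scikit-learn. Synthetic features stand in for the Inception-v3 embeddings of the four mineral classes; the dataset sizes, feature dimensions, and hyperparameters are illustrative assumptions, not the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for Inception-v3 features of 4 classes (Kf, Pe, Pl, Qz)
X, y = make_classification(n_samples=400, n_features=64, n_informative=24,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The three strongest single models from the abstract as base learners
base = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))),
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("mlp", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(64,),
                                        max_iter=500, random_state=0))),
]

# Base-model predictions feed an LR meta-classifier, as in the abstract
# (the paper evaluates with 10-fold CV; a simple hold-out is used here)
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=2000),
                           cv=3)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

Cross-validated out-of-fold predictions (the `cv` argument) are what keep the meta-classifier from simply memorizing base-model overfitting, which is the usual reason stacking edges out its best single member.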

    Darboux and binary Darboux transformations for discrete integrable systems I. Discrete potential KdV equation

    The Hirota-Miwa equation can be written in `nonlinear' form in two ways: the discrete KP equation and, by using a compatible continuous variable, the discrete potential KP equation. For both systems, we consider the Darboux and binary Darboux transformations, expressed in terms of the continuous variable, and obtain exact solutions in Wronskian and Grammian form. We discuss reductions of both systems to the discrete KdV and discrete potential KdV equations, respectively, and exploit this connection to find the Darboux and binary Darboux transformations and exact solutions of these equations.
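Darboux dressing starts from a known seed solution, and for the discrete potential KdV equation a linear function of the lattice variables serves as one. The check below uses the equation in its common H1 (ABS) form; the particular seed and lattice parameters are illustrative assumptions, not the paper's construction:

```python
# Lattice (discrete) potential KdV in H1 form:
#   (u_{n,m} - u_{n+1,m+1}) * (u_{n+1,m} - u_{n,m+1}) = p^2 - q^2
# Verify numerically that the linear seed u = q*n + p*m satisfies it:
#   (u - u_{++}) = -(q + p),  (u_{+0} - u_{0+}) = q - p,
#   product = p^2 - q^2.
p, q = 1.7, 0.6

def u(n, m):
    return q * n + p * m

max_res = max(
    abs((u(n, m) - u(n + 1, m + 1)) * (u(n + 1, m) - u(n, m + 1))
        - (p ** 2 - q ** 2))
    for n in range(10) for m in range(10)
)
print(max_res)  # zero up to rounding
```

A Darboux transformation would then dress this trivial background into soliton solutions, with iterated transformations giving the Wronskian-type formulas mentioned in the abstract.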

    Many Lenses with One Focus: Making Philosophy Learning Meaningful through Collaborative Design

    Utilizing the Understanding by Design (UbD) framework, a lead philosophy instructor and an instructional designer collaborated with seven other faculty members to create Great Ideas in Philosophy for online asynchronous delivery. We presented a broad array of topics in philosophy and provided substantial practice in “doing” philosophy, aiming to create a welcoming space for a diverse student body, to help students see philosophy as a diverse field, and to provide an engaging and meaningful learning experience for students. Student feedback and final project presentations demonstrated significant learning growth in students taking this newly designed Great Ideas in Philosophy course. This collaborative development method can be applied to many undergraduate- and graduate-level survey courses.

    Sequence- and structural-selective nucleic acid binding revealed by the melting of mixtures

    A simple method for the detection of sequence- and structural-selective ligand binding to nucleic acids is described. The method is based on the commonly used thermal denaturation method, in which ligand binding is registered as an elevation in the nucleic acid melting temperature (Tm). The method can be extended to yield a new, higher-throughput assay by the simple expedient of melting designed mixtures of polynucleotides (or oligonucleotides) with different sequences or structures of interest. Upon addition of ligand to such mixtures at low molar ratios, the Tm is shifted only for the nucleic acid containing the preferred sequence or structure. Proof of principle of the assay is provided using, first, a mixture of polynucleotides with different sequences and, second, a mixture containing DNA, RNA and two types of DNA:RNA hybrid structures. Netropsin, ethidium, daunorubicin and actinomycin, ligands with known sequence preferences, were used to illustrate the method. The applicability of the approach to oligonucleotide systems is illustrated by the use of simple ternary and binary mixtures of defined-sequence deoxyoligonucleotides challenged by the bisanthracycline WP631. The simple mixtures described here provide proof of principle of the assay and pave the way for the development of more sophisticated mixtures for rapidly screening the selectivity of new nucleic acid binding compounds.
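The readout logic of the mixture assay can be sketched with a toy simulation: two species with distinct Tm values melt as a combined curve, and a selective ligand shifts only one transition. Tm is read off as a peak in the first derivative of the melting curve. All transition widths, Tm values, and the 8-degree ligand shift are illustrative assumptions, not the paper's data:

```python
import numpy as np

T = np.linspace(40.0, 100.0, 1201)  # temperature axis, deg C

def melt(tm, width=2.0):
    """Two-state sigmoidal melting transition centred at tm."""
    return 1.0 / (1.0 + np.exp(-(T - tm) / width))

def tm_peaks(curve):
    """Temperatures of local maxima of the derivative dA/dT (the Tm values)."""
    d = np.gradient(curve, T)
    return [T[i] for i in range(1, len(d) - 1)
            if d[i] > d[i - 1] and d[i] >= d[i + 1] and d[i] > 0.01]

# Mixture of two species melting at 60 C and 80 C, with and without a
# ligand that selectively stabilises (shifts) only the 60 C species
free = melt(60.0) + melt(80.0)
bound = melt(60.0 + 8.0) + melt(80.0)

print(tm_peaks(free), tm_peaks(bound))
```

Only the transition belonging to the preferred target moves, so a single melting experiment on the mixture identifies which structure the ligand binds, which is the assay's throughput advantage.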