
    Identifying Product Order with Restricted Boltzmann Machines

    Unsupervised machine learning via a restricted Boltzmann machine is a useful tool for distinguishing an ordered phase from a disordered phase. Here we study its application to the two-dimensional Ashkin-Teller model, which features a partially ordered product phase. We train the neural network with spin-configuration data generated by Monte Carlo simulations and show that distinct features of the product phase can be learned from non-ergodic samples resulting from symmetry breaking. Careful analysis of the weight matrices inspires us to define a nontrivial, machine-learning-motivated quantity of the product form, which resembles the conventional product order parameter. Comment: 9 pages, 11 figures
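
    As a rough illustration of the workflow this abstract describes, the sketch below trains a restricted Boltzmann machine on toy binary spin configurations and inspects its weight matrix. It uses scikit-learn's BernoulliRBM on synthetic "ordered" and "disordered" samples standing in for Monte Carlo data of the Ashkin-Teller model; the lattice size, hidden-unit count, and training settings are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
L = 8                       # linear lattice size, so each sample has L*L binary spins
n_samples = 500

# Toy stand-ins for Monte Carlo samples: "ordered" configurations are mostly
# aligned (symmetry-broken), "disordered" ones are random.
ordered = (rng.random((n_samples, L * L)) < 0.95).astype(float)
disordered = (rng.random((n_samples, L * L)) < 0.5).astype(float)
X = np.vstack([ordered, disordered])

# Unsupervised training: the RBM never sees phase labels.
rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(X)

# The learned weight matrix couples hidden units to spins; structure in these rows
# is the kind of feature that motivates a machine-learning-based order parameter.
print(rbm.components_.shape)   # (4, L*L)
print(rbm.transform(X[:5]))    # hidden-unit activations for a few samples
```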

    Learning DNF Expressions from Fourier Spectrum

    Since its introduction by Valiant in 1984, PAC learning of DNF expressions remains one of the central problems in learning theory. We consider this problem in the setting where the underlying distribution is uniform, or more generally, a product distribution. Kalai, Samorodnitsky and Teng (2009) showed that in this setting a DNF expression can be efficiently approximated from its "heavy" low-degree Fourier coefficients alone. This is in contrast to previous approaches, where boosting was used and thus Fourier coefficients of the target function modified by various distributions were needed. This property is crucial for learning DNF expressions over smoothed product distributions, a learning model introduced by Kalai et al. (2009) and inspired by the seminal smoothed-analysis model of Spielman and Teng (2001). We introduce a new approach to learning (or approximating) polynomial threshold functions, based on creating a function with range $[-1,1]$ that approximately agrees with the unknown function on low-degree Fourier coefficients. We then describe conditions under which this is sufficient for learning polynomial threshold functions. Our approach yields a new, simple algorithm for approximating any polynomial-size DNF expression from its "heavy" low-degree Fourier coefficients alone. Our algorithm greatly simplifies the proof of learnability of DNF expressions over smoothed product distributions. We also describe an application of our algorithm to learning monotone DNF expressions over product distributions. Building on the work of Servedio (2001), we give an algorithm that runs in time $\mathrm{poly}((s \cdot \log(s/\epsilon))^{\log(s/\epsilon)}, n)$, where $s$ is the size of the target DNF expression and $\epsilon$ is the accuracy. This improves on the $\mathrm{poly}((s \cdot \log(ns/\epsilon))^{\log(s/\epsilon) \cdot \log(1/\epsilon)}, n)$ bound of Servedio (2001). Comment: Appears in Conference on Learning Theory (COLT) 2012
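
    The hedged sketch below illustrates the basic ingredient this abstract refers to: estimating the "heavy" low-degree Fourier coefficients of a small DNF under the uniform distribution by sampling, and clipping the resulting low-degree polynomial into $[-1,1]$ to obtain a bounded approximator. The example DNF, the degree bound, and the heaviness threshold are illustrative choices; this is not the paper's algorithm or its analysis.

```python
import itertools
import numpy as np

n = 6
rng = np.random.default_rng(1)

def dnf(x):
    # x in {-1, +1}^n; a term is satisfied when all of its literals equal +1.
    t1 = (x[0] == 1) and (x[1] == 1)
    t2 = (x[2] == 1) and (x[3] == 1) and (x[4] == 1)
    return 1.0 if (t1 or t2) else -1.0

# Sample uniform inputs and evaluate the target function.
X = rng.choice([-1.0, 1.0], size=(20000, n))
y = np.array([dnf(x) for x in X])

# Estimate Fourier coefficients f_hat(S) = E[f(x) * prod_{i in S} x_i] for |S| <= d,
# keeping only the "heavy" ones.
d, theta = 2, 0.05
coeffs = {}
for k in range(d + 1):
    for S in itertools.combinations(range(n), k):
        chi = np.prod(X[:, list(S)], axis=1) if S else np.ones(len(X))
        est = float(np.mean(y * chi))
        if abs(est) >= theta:
            coeffs[S] = est

def approx(x):
    # Low-degree polynomial built from the heavy coefficients, clipped into [-1, 1].
    val = sum(c * np.prod(x[list(S)]) for S, c in coeffs.items())
    return float(np.clip(val, -1.0, 1.0))

# Use the sign of the bounded approximator as a hypothesis and check agreement.
preds = np.array([1.0 if approx(x) >= 0 else -1.0 for x in X])
print("empirical agreement:", np.mean(preds == y))
```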

    Sustainable and traditional product innovation without scale and experience, but only for KIBS!

    This study analyzes the ideal strategic trajectory for sustainable and traditional product innovation. Using a sample of 74 Costa Rican high-performance businesses for 2016, we employ fuzzy-set qualitative comparative analysis to evaluate how the development of sustainable and traditional product innovation strategies is conditioned by the business's learning capabilities and entrepreneurial orientation in knowledge-intensive business services (KIBS) and non-knowledge-intensive businesses. The results indicate two ideal strategic configurations of product innovation. The first configuration to reach maximum product innovation requires KIBS firms that have both an entrepreneurial and a learning orientation, while the second is specific to non-KIBS firms with greater firm size and age along with entrepreneurial and learning orientation. KIBS firms are found to leverage the knowledge-based and customer orientations that characterize their business model to compensate for the shortage of important organizational characteristics (which we link to the liabilities of smallness and newness) required to achieve optimal sustainable and traditional product innovation.
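
    For readers unfamiliar with the method, the short sketch below shows the core fuzzy-set QCA computation behind such configurational claims: membership in a configuration is the fuzzy AND (minimum) of condition memberships, and consistency and coverage follow Ragin's standard formulas. The membership scores are invented for illustration and are not the study's Costa Rican data.

```python
import numpy as np

# Fuzzy membership scores per firm (rows): entrepreneurial orientation,
# learning orientation, and the outcome (product innovation). Illustrative values.
eo      = np.array([0.9, 0.7, 0.2, 0.8, 0.4])
lo      = np.array([0.8, 0.6, 0.3, 0.9, 0.5])
outcome = np.array([0.85, 0.7, 0.25, 0.9, 0.4])

# Configuration "EO AND learning orientation" via the fuzzy AND (minimum).
config = np.minimum(eo, lo)

# Consistency: how much the configuration is a subset of the outcome;
# coverage: how much of the outcome the configuration accounts for.
consistency = np.sum(np.minimum(config, outcome)) / np.sum(config)
coverage = np.sum(np.minimum(config, outcome)) / np.sum(outcome)
print(f"consistency={consistency:.2f}, coverage={coverage:.2f}")
```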

    THE RELATION BETWEEN THE INTENSITY OF THE USE OF COMPUTER LABORATORY FACILITY AND LEARNING MOTIVATION AND LEARNING RESULT OF INFORMATION AND COMMUNICATION TECHNOLOGY (TIK) OF GRADE XI COMPUTER AND NETWORK ENGINEERING IN SMK NEGERI 2 DEPOK SLEMAN ACADEMIC YEAR 2012/2013

    This study aimed to discover (1) the relation between the intensity of the use of the computer laboratory facility and the TIK learning result of grade XI computer and network engineering students of SMKN 2 Depok Sleman in academic year 2012/2013, (2) the relation between learning motivation and the TIK learning result of the same students, and (3) the relation between the intensity of the use of the computer laboratory facility and learning motivation taken together and the TIK learning result. This was a descriptive-correlational, ex-post-facto study with a quantitative approach. The population consisted of 62 grade XI computer and network engineering students of SMKN 2 Depok Sleman in academic year 2012/2013. Data on the Intensity of the Use of the Computer Laboratory Facility and Learning Motivation variables were collected with closed questionnaires on a Likert scale, while the TIK Learning Result variable was documented from TIK scores in report cards from the first semester to the second semester. Instrument validity was tested with item analysis using the product-moment correlation formula, and instrument reliability was calculated with Cronbach's Alpha formula. Hypotheses 1 and 2 were tested with product-moment correlation, while hypothesis 3 used multiple regression analysis with two predictors. The results showed that (1) there was a positive relation between the Intensity of the Use of the Computer Laboratory Facility ($X_1$) and the TIK Learning Result ($Y$), with $r_{x_1 y} = 0.515$ and $r^2_{x_1 y} = 0.265$ as well as SE 14.6% and SR 44.2%; (2) there was a positive relation between Learning Motivation ($X_2$) and the TIK Learning Result ($Y$), with $r_{x_2 y} = 0.532$ and $r^2_{x_2 y} = 0.283$ as well as SE 18.3% and SR 55.8%; and (3) there was a positive relation between the Intensity of the Use of the Computer Laboratory Facility ($X_1$) and Learning Motivation ($X_2$) taken together and the TIK Learning Result ($Y$), shown by a multiple regression coefficient $R_{y(1,2)} = 0.573$ (a moderate correlation) and a coefficient of determination $R^2 = 0.329$, meaning that 32.9% of the variation in the TIK Learning Result ($Y$) could be explained by the Intensity of the Use of the Computer Laboratory Facility ($X_1$) and Learning Motivation ($X_2$) variables. Keywords: TIK Learning Result, Intensity of the Use of Computer Laboratory Facility, Learning Motivation
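
    The statistics named above are standard; as a rough, self-contained sketch (with synthetic numbers, not the thesis data), the snippet below computes Pearson product-moment correlations, a two-predictor multiple regression $R^2$, and Cronbach's Alpha for a small Likert-scale questionnaire.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 62                                           # number of students in the study
x1 = rng.normal(50, 10, n)                       # intensity of lab-facility use (synthetic)
x2 = rng.normal(60, 12, n)                       # learning motivation (synthetic)
y = 0.3 * x1 + 0.4 * x2 + rng.normal(0, 8, n)    # TIK learning result (synthetic)

# Product-moment (Pearson) correlations, as in hypotheses 1 and 2.
r_x1y = np.corrcoef(x1, y)[0, 1]
r_x2y = np.corrcoef(x2, y)[0, 1]

# Two-predictor multiple regression (hypothesis 3): R^2 from a least-squares fit.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Cronbach's Alpha for a k-item Likert questionnaire (5 simulated items here).
items = rng.integers(1, 6, size=(n, 5)).astype(float)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print(f"r_x1y={r_x1y:.3f}, r_x2y={r_x2y:.3f}, R^2={r2:.3f}, alpha={alpha:.3f}")
```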

    Moment-Matching Polynomials

    We give a new framework for proving the existence of low-degree polynomial approximators for Boolean functions with respect to broad classes of non-product distributions. Our proofs use techniques related to the classical moment problem and deviate significantly from known Fourier-based methods, which require the underlying distribution to have some product structure. Our main application is the first polynomial-time algorithm for agnostically learning any function of a constant number of halfspaces with respect to any log-concave distribution (for any constant accuracy parameter). This result was not known even for the case of learning the intersection of two halfspaces without noise. Additionally, we show that in the "smoothed-analysis" setting, the above results hold with respect to distributions that have sub-exponential tails, a property satisfied by many natural and well-studied distributions in machine learning. Given that our algorithms can be implemented using Support Vector Machines (SVMs) with a polynomial kernel, these results give a rigorous theoretical explanation as to why many kernel methods work so well in practice.
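
    Since the abstract notes that the algorithms can be implemented with SVMs using a polynomial kernel, the sketch below shows that instantiation on synthetic data: labels given by an intersection of two halfspaces under a Gaussian (hence log-concave) distribution, with a little label noise for the agnostic setting. The kernel degree, noise rate, and data sizes are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n, d = 4000, 10
X = rng.normal(size=(n, d))                     # Gaussian marginal: log-concave

# Target: intersection of two random halfspaces, plus 5% agnostic label noise.
w1, w2 = rng.normal(size=d), rng.normal(size=d)
y = ((X @ w1 > 0) & (X @ w2 > 0)).astype(int)
flip = rng.random(n) < 0.05
y = np.where(flip, 1 - y, y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SVM with a low-degree polynomial kernel, the practical instantiation mentioned above.
clf = SVC(kernel="poly", degree=4, coef0=1.0, C=1.0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```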