
    Statistical Learning of Arbitrary Computable Classifiers

    Statistical learning theory chiefly studies restricted hypothesis classes, particularly those with finite Vapnik-Chervonenkis (VC) dimension. The fundamental quantity of interest is the sample complexity: the number of samples required to learn to a specified level of accuracy. Here we consider learning over the set of all computable labeling functions. Since the VC dimension is infinite and a priori (uniform) bounds on the number of samples are impossible, we let the learning algorithm decide when it has seen sufficient samples to have learned. We first show that learning in this setting is indeed possible, and we develop a learning algorithm. We then show, however, that bounding the sample complexity independently of the distribution is impossible. Notably, this impossibility is entirely due to the requirement that the learning algorithm be computable, and not due to the statistical nature of the problem.
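
    As a rough illustration of the flavor of such a learner (a sketch, not the paper's actual algorithm), the Python snippet below enumerates candidate hypotheses and keeps requesting samples until its current guess survives a self-chosen number of consecutive examples; the hypothesis-enumeration interface and the confidence_rounds stopping rule are assumptions made for this sketch. A genuine enumeration of all computable classifiers is not evaluable in general, since programs may fail to halt.

    ```python
    def learn_by_enumeration(hypotheses, sample_stream, confidence_rounds=100):
        """Toy learner over an enumerable hypothesis class (a stand-in for
        the computable classifiers). The learner itself decides when it has
        seen enough data: it stops once its current guess survives
        confidence_rounds consecutive samples; no uniform sample bound."""
        enum = iter(hypotheses)
        seen = []          # all labeled samples observed so far
        candidates = []    # hypotheses enumerated so far
        current, streak = None, 0
        for x, y in sample_stream:
            seen.append((x, y))
            candidates.append(next(enum))  # reveal one more hypothesis per round
            if current is not None and current(x) == y:
                streak += 1
            else:
                # current guess falsified (or absent): fall back to the first
                # enumerated hypothesis consistent with all samples seen so far
                current = next((h for h in candidates
                                if all(h(u) == v for u, v in seen)), None)
                streak = 0
            if current is not None and streak >= confidence_rounds:
                return current
        return current
    ```

    The number of samples this learner consumes before stopping depends on how early the target appears in the enumeration and on the sampling distribution, consistent with the abstract's point that no distribution-independent bound is available.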

    Statistical learnability of nuclear masses

    More than 80 years after the seminal work of Weizsäcker and the liquid drop model of the atomic nucleus, the deviations of mass models from experiment ($\sim$ MeV) remain orders of magnitude larger than experimental errors ($\lesssim$ keV). Predicting the mass of atomic nuclei with precision is extremely challenging, owing to the non-trivial many-body interplay of protons and neutrons in nuclei and to the complex nature of the nuclear strong force. The statistical theory of learning is used to provide bounds on the prediction errors of models trained with a finite data set. These bounds are validated with neural network calculations and compared with state-of-the-art mass models. It is argued that nuclear structure models investigating ground state properties explore a system at the limit of the knowable, as defined by the statistical theory of learning.
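
    As a self-contained illustration of this kind of experiment (not the paper's code or data), the sketch below trains a small neural network on binding energies generated from the textbook semi-empirical (liquid drop) mass formula and reports a held-out error; the coefficients, network size, and train/test split are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def liquid_drop_binding(Z, N, aV=15.75, aS=17.8, aC=0.711, aA=23.7):
        """Semi-empirical binding energy in MeV (textbook coefficients;
        pairing term omitted for simplicity)."""
        A = Z + N
        return (aV * A - aS * A ** (2 / 3)
                - aC * Z * (Z - 1) / A ** (1 / 3)
                - aA * (N - Z) ** 2 / A)

    rng = np.random.default_rng(0)
    Z = rng.integers(20, 100, size=2000).astype(float)
    N = rng.integers(20, 150, size=2000).astype(float)
    X = np.column_stack([Z, N])
    y = liquid_drop_binding(Z, N)

    # hold out 25% of the "nuclei" to estimate the prediction error
    train, test = slice(0, 1500), slice(1500, None)
    net = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
    ).fit(X[train], y[train])
    rmse = np.sqrt(np.mean((net.predict(X[test]) - y[test]) ** 2))
    print(f"held-out RMSE: {rmse:.2f} MeV")
    ```

    Replacing the synthetic targets with measured masses, and comparing the held-out error against learning-theoretic bounds, is the kind of validation the abstract describes.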

    Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution

    Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks performance guarantees. This paper uncovers a striking phenomenon in nonconvex optimization: even in the absence of explicit regularization, gradient descent enforces proper regularization implicitly under various statistical models. In fact, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting of points incoherent with the sampling mechanism. This "implicit regularization" feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results in substantial computational savings. Focusing on three fundamental statistical estimation problems, i.e. phase retrieval, low-rank matrix completion, and blind deconvolution, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization. In particular, by marrying statistical modeling with generic optimization theory, we develop a general recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal error control, measured entrywise and by the spectral norm, which might be of independent interest.
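
    To make the "vanilla gradient descent" setting concrete, here is a minimal NumPy sketch for real-valued phase retrieval with Gaussian measurements, using a standard spectral initialization, a constant step size, and no explicit regularization; the dimensions, step size, and iteration count are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 100, 1000                      # signal dimension, sample size
    x_star = rng.standard_normal(n)       # ground-truth signal
    A = rng.standard_normal((m, n))       # Gaussian sampling vectors a_i
    y = (A @ x_star) ** 2                 # phaseless measurements y_i = (a_i^T x)^2

    # spectral initialization: top eigenvector of (1/m) sum_i y_i a_i a_i^T,
    # rescaled to the norm estimate sqrt(mean(y)) ~ ||x_star||
    Y = (A * y[:, None]).T @ A / m
    _, V = np.linalg.eigh(Y)
    x = V[:, -1] * np.sqrt(y.mean())

    eta = 0.2 / y.mean()                  # Wirtinger-flow-style step, ~ mu/||x||^2
    for _ in range(500):
        Ax = A @ x
        # gradient of f(x) = (1/4m) sum_i ((a_i^T x)^2 - y_i)^2
        grad = (A * ((Ax ** 2 - y) * Ax)[:, None]).mean(axis=0)
        x = x - eta * grad

    # error up to the inherent global sign ambiguity
    dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
    print(f"relative error: {dist / np.linalg.norm(x_star):.2e}")
    ```

    With these illustrative sizes the iterates typically converge linearly despite the constant step size and absent regularization, which is the behavior the paper's leave-one-out analysis explains.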

    The Impact of Real Options on Willingness to Pay for Investments in Road Safety

    Public investments are dynamic in nature, and decision making must account for uncertainty, irreversibility, and the potential for future learning. In this paper we adapt the theory of investment under uncertainty to a public referendum setting and perform the first empirical test showing that estimates of the value of a statistical life (VSL) from stated preference surveys are highly dependent on the inclusion of the option value. Our results indicate an option value of major economic magnitude, which implies that previously reported VSL estimates are exaggerated.
    Keywords: Value of a Statistical Life; Real Options; Contingent Valuation; Road Safety
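
    For intuition, a minimal two-period real-options calculation (a textbook toy model with made-up numbers, not the paper's empirical design) shows why the option to delay an irreversible investment has positive value:

    ```latex
    % Irreversible cost I today; payoff V_H with probability p or V_L with
    % probability 1-p, revealed next period; discount rate r.
    \[
    \mathrm{NPV}_{\mathrm{now}} = -I + \frac{p\,V_H + (1-p)\,V_L}{1+r},
    \qquad
    \mathrm{NPV}_{\mathrm{wait}} = \frac{p\,\max\{V_H - I,\,0\} + (1-p)\,\max\{V_L - I,\,0\}}{1+r},
    \]
    \[
    \text{option value} = \max\{\mathrm{NPV}_{\mathrm{wait}},\,\mathrm{NPV}_{\mathrm{now}}\} - \mathrm{NPV}_{\mathrm{now}} \ge 0 .
    \]
    ```

    With, say, I = 100, V_H = 180, V_L = 60, p = 1/2, and r = 0, investing now is worth 20 while waiting is worth 40, so the option to delay is worth 20; a stated-preference question that respondents implicitly treat as deferrable can therefore embed this option value and overstate the underlying VSL.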

    Accelerating The Use Of Weblogs As An Alternative Method To Deliver Case-Based Learning

    Weblog technology is an alternative medium for delivering the case-based method of learning business concepts. The social nature of this technology can potentially promote active learning and enhance the analytical ability of students. The present research investigates the primary factors contributing to the adoption of Weblog technology by students to learn business cases. A theoretical framework is proposed to address this issue based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Statistical evidence shows that three major factors contribute to users' intention to adopt Weblogs: (a) performance expectancy, (b) effort expectancy, and (c) social influence. It is also found that behavioral intention is a significant antecedent of actual use of Weblogs to learn business cases. Implications of the results for educators, as well as possible future research paths, are also discussed.
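
    As a hedged sketch of the hypothesized UTAUT paths (the paper's analysis is presumably survey-based structural modeling; the data-generating coefficients below are synthetic assumptions), the two reported relations can be checked with simple regressions:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-ins for the survey constructs: PE = performance
    # expectancy, EE = effort expectancy, SI = social influence,
    # BI = behavioral intention, USE = actual use of Weblogs.
    rng = np.random.default_rng(42)
    n = 300  # simulated respondents
    PE, EE, SI = (rng.normal(size=n) for _ in range(3))
    BI = 0.5 * PE + 0.3 * EE + 0.2 * SI + rng.normal(scale=0.5, size=n)
    USE = 0.6 * BI + rng.normal(scale=0.5, size=n)
    df = pd.DataFrame({"PE": PE, "EE": EE, "SI": SI, "BI": BI, "USE": USE})

    # (1) Do the three factors predict intention to adopt Weblogs?
    intention = smf.ols("BI ~ PE + EE + SI", data=df).fit()
    # (2) Is intention a significant antecedent of actual use?
    usage = smf.ols("USE ~ BI", data=df).fit()
    print(intention.params, intention.pvalues, sep="\n")
    print(usage.params, usage.pvalues, sep="\n")
    ```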