
    SGML-based publishing

    (no abstract provided)

    Nominal Logic Programming

    Nominal logic is an extension of first-order logic which provides a simple foundation for formalizing and reasoning about abstract syntax modulo consistent renaming of bound names (that is, alpha-equivalence). This article investigates logic programming based on nominal logic. We describe some typical nominal logic programs, and develop the model-theoretic, proof-theoretic, and operational semantics of such programs. Besides being of interest for ensuring the correct behavior of implementations, these results provide a rigorous foundation for techniques for analysis and reasoning about nominal logic programs, as we illustrate via examples.
    Comment: 46 pages; 19 page appendix; 13 figures. Revised journal submission as of July 23, 200
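    The swapping-based account of alpha-equivalence that nominal logic axiomatizes can be illustrated with a short sketch (not taken from the article; the term representation and function names are illustrative):

    ```python
    # Alpha-equivalence of lambda terms via name swapping, the core idea
    # nominal logic builds on.  Terms: ('var', a), ('app', t, u), ('lam', a, t).

    def swap(a, b, t):
        """Apply the transposition (a b) to every name in t, binders included."""
        kind = t[0]
        if kind == 'var':
            c = t[1]
            return ('var', b if c == a else a if c == b else c)
        if kind == 'app':
            return ('app', swap(a, b, t[1]), swap(a, b, t[2]))
        binder = b if t[1] == a else a if t[1] == b else t[1]
        return ('lam', binder, swap(a, b, t[2]))

    def free_names(t):
        """Free names of a term (the 'support' minus bound names)."""
        if t[0] == 'var':
            return {t[1]}
        if t[0] == 'app':
            return free_names(t[1]) | free_names(t[2])
        return free_names(t[2]) - {t[1]}

    def aeq(t, u):
        """t =_alpha u, using the nominal rule:
        lam a. t' = lam b. u'  iff  a = b and t' = u', or
        a is fresh for u' (a not free in u') and t' = (a b)·u'."""
        if t[0] != u[0]:
            return False
        if t[0] == 'var':
            return t[1] == u[1]
        if t[0] == 'app':
            return aeq(t[1], u[1]) and aeq(t[2], u[2])
        a, tb = t[1], t[2]
        b, ub = u[1], u[2]
        if a == b:
            return aeq(tb, ub)
        return a not in free_names(ub) and aeq(tb, swap(a, b, ub))

    # Identity functions with different bound names are alpha-equivalent.
    same = aeq(('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y')))
    ```

    The swapping operation, unlike capture-avoiding substitution, is a simple structural recursion, which is what makes the nominal formulation attractive for logic programming.
    
    
    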

    Fitting theories of nuclear binding energies

    In developing theories of nuclear binding energy such as density-functional theory, the effort required to make a fit can be daunting due to the large number of parameters that may be in the theory and the large number of nuclei in the mass table. For theories based on the Skyrme interaction, the effort can be reduced considerably by using the singular value decomposition to reduce the size of the parameter space. We find that the sensitive parameters define a space of dimension four or so, and within this space a linear refit is adequate for a number of Skyrme parameter sets from the literature. We do not find marked differences in the quality of the fit between the SLy4, Bky4, and SkP parameter sets. The r.m.s. residual error in even-even nuclei is about 1.5 MeV, half the value of the liquid drop model. We also discuss an alternative norm for evaluating mass fits, the Chebyshev norm. It focuses attention on the cases with the largest discrepancies between theory and experiment. We show how it works with the liquid drop model and make some applications to models based on Skyrme energy functionals. The Chebyshev norm seems to be more sensitive to new experimental data than the root-mean-square norm. The method also has the advantage that candidate improvements to the theories can be assessed with computations on smaller sets of nuclei.
    Comment: 17 pages and 4 figures--version incorporates referee's comment
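    The SVD-based refit described above can be sketched as follows. The sensitivity matrix, its dimensions, and the residual data here are synthetic stand-ins, not the paper's; only the procedure (truncate the SVD, refit linearly in the retained subspace, compare r.m.s. and Chebyshev norms) follows the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sensitivity matrix: J[i, j] = d(residual of nucleus i)
    # / d(parameter j).  Column scaling mimics a steep singular spectrum,
    # i.e. only a few parameter combinations are well constrained.
    n_nuclei, n_params = 500, 12
    J = rng.normal(size=(n_nuclei, n_params)) * np.logspace(0, -4, n_params)
    residuals = rng.normal(scale=1.5, size=n_nuclei)  # stand-in, in MeV

    # SVD of the sensitivity matrix; the singular-value spectrum shows how
    # many parameter directions the data actually constrain.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)

    # Keep only the k dominant directions (the abstract finds k ~ 4) and
    # solve the linear refit restricted to that subspace:
    #   delta_p = V_k diag(1/s_k) U_k^T r
    k = 4
    delta_p = Vt[:k].T @ ((U[:, :k].T @ residuals) / s[:k])

    new_res = residuals - J @ delta_p
    rms = np.sqrt(np.mean(new_res**2))   # root-mean-square norm
    cheb = np.max(np.abs(new_res))       # Chebyshev (max-residual) norm
    ```

    The truncated solution is the least-squares minimizer within the span of the retained right singular vectors, so the refit can only lower the residual norm while varying far fewer effective parameters.
    
    
    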

    On Generalization Bounds for Deep Compound Gaussian Neural Networks

    Algorithm unfolding or unrolling is the technique of constructing a deep neural network (DNN) from an iterative algorithm. Unrolled DNNs often provide better interpretability and superior empirical performance over standard DNNs in signal estimation tasks. An important theoretical question, which has only recently received attention, is the development of generalization error bounds for unrolled DNNs. These bounds deliver theoretical and practical insights into the performance of a DNN on empirical datasets that are distinct from, but sampled from, the probability density generating the DNN training data. In this paper, we develop novel generalization error bounds for a class of unrolled DNNs that are informed by a compound Gaussian prior. These compound Gaussian networks have been shown to outperform comparative standard and unfolded deep neural networks in compressive sensing and tomographic imaging problems. The generalization error bound is formulated by bounding the Rademacher complexity of the class of compound Gaussian network estimates with Dudley's integral. Under realistic conditions, we show that, at worst, the generalization error scales as $\mathcal{O}(n\sqrt{\ln(n)})$ in the signal dimension $n$ and as $\mathcal{O}((\text{Network Size})^{3/2})$ in the network size.
    Comment: 14 pages, 1 figure

    A Compound Gaussian Network for Solving Linear Inverse Problems

    For solving linear inverse problems, particularly of the type that appear in tomographic imaging and compressive sensing, this paper develops two new approaches. The first approach is an iterative algorithm that minimizes a regularized least squares objective function where the regularization is based on a compound Gaussian prior distribution. The compound Gaussian prior subsumes many of the commonly used priors in image reconstruction, including those of sparsity-based approaches. The developed iterative algorithm gives rise to the paper's second new approach, which is a deep neural network that corresponds to an "unrolling" or "unfolding" of the iterative algorithm. Unrolled deep neural networks have interpretable layers and outperform standard deep learning methods. This paper includes a detailed computational theory that provides insight into the construction and performance of both algorithms. The conclusion is that both algorithms outperform other state-of-the-art approaches to tomographic image formation and compressive sensing, especially in the difficult regime of limited training data.
    Comment: 13 pages, 7 figures, 5 tables; references updated
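    A minimal sketch of the unrolling idea, using a generic l1-regularized least-squares iteration (ISTA) in place of the paper's compound Gaussian prior; the network, step sizes, and problem sizes below are illustrative, not the paper's CG-Net:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of the l1 norm; a stand-in for the compound
        # Gaussian proximal step that the paper derives.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def unrolled_ista(A, y, n_layers=200, lam=0.1):
        """Each 'layer' is one proximal-gradient iteration on
        0.5*||A x - y||^2 + lam*||x||_1.  In a learned unrolled network,
        the step size and lam would be trainable per-layer parameters."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_layers):
            grad = A.T @ (A @ x - y)                    # data-fidelity gradient
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy compressive-sensing instance: recover a sparse x from y = A x.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 100)) / np.sqrt(40)
    x_true = np.zeros(100)
    x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]
    y = A @ x_true
    x_hat = unrolled_ista(A, y)
    ```

    Fixing the iteration count and treating each iteration's parameters as learnable weights is exactly the "unrolling" step that turns the optimizer into a trainable network with interpretable layers.
    
    
    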

    Tropical range extension for the temperate, endemic South-Eastern Australian Nudibranch Goniobranchus splendidus (Angas, 1864)

    In contrast to many tropical animals expanding southwards on the Australian coast concomitant with climate change, here we report a temperate endemic newly found in the tropics. Chromodorid nudibranchs are bright, colourful animals that rarely go unnoticed by divers and underwater photographers. The discovery of a new population with divergent colouration is therefore significant. DNA sequencing confirms that despite departures from the known phenotypic variation, the specimen represents northern Goniobranchus splendidus and not an unknown close relative. Goniobranchus tinctorius represents the sister taxon to G. splendidus. With regard to secondary defences, the oxygenated terpenes found previously in this specimen are partially unique but also overlap with other G. splendidus from southern Queensland (QLD) and New South Wales (NSW). The tropical specimen from Mackay contains extracapsular yolk like other G. splendidus. This previously unknown tropical population may contribute selectively advantageous genes to cold-water species threatened by climate change. Competitive exclusion may explain why G. splendidus does not strongly overlap with its widespread sister taxon.

    Super-resolution, Extremal Functions and the Condition Number of Vandermonde Matrices

    Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency ($m$) and the separation ($\Delta$). If $m > 1/\Delta + 1$, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when $m < (1-\epsilon)/\Delta$, no estimator can distinguish between a particular pair of $\Delta$-separated signals even if the magnitude of the noise is exponentially small. Our results involve making novel connections between {\em extremal functions} and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may have other applications in designing faster algorithms for other basic tasks in signal processing.
    Comment: 19 pages
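    The phase transition in the Vandermonde condition number can be probed numerically. The number of sources, separation, and sample counts below are arbitrary illustrative choices, not values from the paper:

    ```python
    import numpy as np

    def vandermonde(freqs, m):
        """m x k Vandermonde matrix with nodes on the unit circle:
        entry (j, l) = exp(2*pi*i * j * freqs[l])."""
        rows = np.arange(m)[:, None]
        return np.exp(2j * np.pi * rows * freqs[None, :])

    # k frequencies on the torus [0, 1) with minimum separation Delta.
    k, Delta = 8, 0.02
    freqs = (np.arange(k) * Delta) % 1.0

    # Above the threshold m > 1/Delta + 1 the matrix is well conditioned;
    # below roughly 1/Delta the condition number blows up.
    m_hi = int(1 / Delta) + k     # comfortably above 1/Delta + 1
    m_lo = int(0.5 / Delta)       # well below 1/Delta
    kappa_hi = np.linalg.cond(vandermonde(freqs, m_hi))
    kappa_lo = np.linalg.cond(vandermonde(freqs, m_lo))
    ```

    The well-conditioned regime is what makes noise-tolerant recovery (e.g. via the matrix pencil method) possible; in the ill-conditioned regime, nearby signals become numerically indistinguishable.
    
    
    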