
    Voyager imaging science investigations

    The Voyager imaging science experiment objectives at Saturn include exploratory reconnaissance of Saturn, its satellites, and its rings. The imaging cameras are described, along with an abbreviated discussion of specific objectives.

    The Mariner Jupiter/Saturn visual imaging system

    Attempts were made to determine the spatial distribution of materials and the brightness temperature of Saturn's rings using observations made by the visual imaging system of the Mariner Jupiter/Saturn project.

    The D and E rings of Saturn

    CCD observations of Saturn ring D, discovered by Guerin in 1969, confirm the existence of this inner ring and indicate that its surface brightness ranges from 0.03 (inner edge) to 0.05 (outer edge) relative to the maximum surface brightness of ring B. If ring D is composed of spherical, diffusely reflecting particles with average surface reflectivity equal to that of the particles in ring B, the average normal optical thickness of ring D is 0.02. Reanalysis of a photograph taken by Feibelman during the 1966 ring plane passage suggests a normal optical thickness for ring E between 10 to the minus 6 power and 10 to the minus 7 power, depending upon the average reflectivity of the particles. No new observations of this outer ring will be possible until the earth passes through the Saturn ring plane in 1979-80.

    Optical transmittance of fused silica at elevated temperatures during high energy electron bombardment

    Optical transmittance of fused silica at elevated temperatures during high energy electron bombardment.

    Drawing Boundaries

    In “On Drawing Lines on a Map” (1995), I suggested that the different ways we have of drawing lines on maps open up a new perspective on ontology, resting on a distinction between two sorts of boundaries: fiat and bona fide. “Fiat” means, roughly: human-demarcation-induced. “Bona fide” means, again roughly: a boundary constituted by some real physical discontinuity. I presented a general typology of boundaries based on this opposition and showed how it generates a corresponding typology of the different sorts of objects which boundaries determine or demarcate. In this paper, I describe how the theory of fiat boundaries has evolved since 1995, how it has been applied in areas such as property law and political geography, and how it is being used in contemporary work in formal and applied ontology, especially within the framework of Basic Formal Ontology.

    Viterbi Training for PCFGs: Hardness Results and Competitiveness of Uniform Initialization

    We consider the search for a maximum likelihood assignment of hidden derivations and grammar weights for a probabilistic context-free grammar, the problem approximately solved by “Viterbi training.” We show that solving and even approximating Viterbi training for PCFGs is NP-hard. We motivate the use of uniform-at-random initialization for Viterbi EM as an optimal initializer in the absence of further information about the correct model parameters, providing an approximate bound on the log-likelihood.
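    The uniform-at-random initializer mentioned in the abstract can be sketched in a few lines: draw each rule weight uniformly at random, then renormalize within each left-hand-side nonterminal so the weights form proper conditional distributions. A minimal illustration with a hypothetical toy grammar (not the paper's code):

```python
import random

# Toy CNF-style grammar: rules grouped by left-hand-side nonterminal.
# (Grammar and rule names are hypothetical.)
RULES = {
    "S":  [("NP", "VP")],
    "NP": [("DT", "NN"), ("NN",)],
    "VP": [("VB", "NP"), ("VB",)],
}

def uniform_at_random_init(rules, seed=0):
    """Draw each rule weight uniformly at random, then renormalize so
    the rules sharing a left-hand side form a proper distribution."""
    rng = random.Random(seed)
    weights = {}
    for lhs, rhss in rules.items():
        raw = [rng.random() for _ in rhss]
        total = sum(raw)
        for rhs, w in zip(rhss, raw):
            weights[(lhs, rhs)] = w / total
    return weights

theta = uniform_at_random_init(RULES)
```

    Such an initializer would typically seed the first Viterbi E-step, after which rule weights are re-estimated from the counts of the best derivations.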

    Empirical Risk Minimization for Probabilistic Grammars: Sample Complexity and Hardness of Learning

    Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk. Learning from data is central to contemporary computational linguistics. It is common in such learning to estimate a model in a parametric family using the maximum likelihood principle. This principle applies in the supervised case (i.e., using annotated data).
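    In the supervised setting, the empirical risk under log-loss is simply the average negative log-probability the grammar assigns to the observed derivations. A minimal sketch, with a hypothetical toy grammar and made-up rule probabilities:

```python
import math

# Hypothetical rule probabilities of a tiny probabilistic grammar;
# each distribution, conditioned on the left-hand side, sums to 1.
P = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("DT", "NN")): 0.4,
    ("NP", ("NN",)): 0.6,
    ("VP", ("VB", "NP")): 0.7,
    ("VP", ("VB",)): 0.3,
}

def log_loss(derivation):
    """Negative log-probability of one derivation (a list of rules)."""
    return -sum(math.log(P[rule]) for rule in derivation)

def empirical_risk(sample):
    """Average log-loss over a sample of derivations: the quantity
    minimized by supervised empirical risk minimization."""
    return sum(log_loss(d) for d in sample) / len(sample)

sample = [
    [("S", ("NP", "VP")), ("NP", ("NN",)), ("VP", ("VB",))],
    [("S", ("NP", "VP")), ("NP", ("DT", "NN")), ("VP", ("VB", "NP")),
     ("NP", ("NN",))],
]
risk = empirical_risk(sample)
```

    The unsupervised case replaces each known derivation with a sum over all derivations of the observed string, which is what makes minimization hard.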

    Discrete Logarithms in Generalized Jacobians

    Déchène has proposed generalized Jacobians as a source of groups for public-key cryptosystems based on the hardness of the Discrete Logarithm Problem (DLP). Her specific proposal gives rise to a group isomorphic to the semidirect product of an elliptic curve and a multiplicative group of a finite field. We explain why her proposal has no advantages over simply taking the direct product of groups. We then argue that generalized Jacobians offer poorer security and efficiency than standard Jacobians.
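    The reduction behind the direct-product comparison is easy to demonstrate at toy scale: a discrete log of (g1, g2)^x in G1 × G2 splits into one independent DLP per factor, recombined by the Chinese Remainder Theorem, so the product is never harder than its components. A brute-force sketch with hypothetical toy parameters (real groups would be far larger):

```python
def brute_dlog(g, h, p):
    """Brute-force discrete log in (Z/pZ)*: smallest x with g^x = h (mod p)."""
    x, cur = 0, 1
    while cur != h % p:
        cur = cur * g % p
        x += 1
        if x > p:
            raise ValueError("no solution")
    return x

def order(g, p):
    """Multiplicative order of g modulo p."""
    x, cur = 1, g % p
    while cur != 1:
        cur = cur * g % p
        x += 1
    return x

def product_dlog(g1, p1, g2, p2, h1, h2):
    """Solve (g1, g2)^x = (h1, h2) in the direct product by solving
    each factor separately and recombining (CRT by naive search)."""
    x1, n1 = brute_dlog(g1, h1, p1), order(g1, p1)
    x2 = brute_dlog(g2, h2, p2)
    x = x1
    while x % order(g2, p2) != x2:
        x += n1
    return x

# Toy instance: g1 = 3 in (Z/17Z)*, g2 = 2 in (Z/19Z)*, secret x = 10.
secret = 10
h1, h2 = pow(3, secret, 17), pow(2, secret, 19)
recovered = product_dlog(3, 17, 2, 19, h1, h2)
```

    The same componentwise attack is why the direct product buys no extra security over its hardest factor.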

    Empirical Risk Minimization with Approximations of Probabilistic Grammars

    Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of the parameters of a fixed probabilistic grammar using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting.

    Joint Morphological and Syntactic Disambiguation

    In morphologically rich languages, should morphological and syntactic disambiguation be treated sequentially or as a single problem? We describe several efficient, probabilistically interpretable ways to apply joint inference to morphological and syntactic disambiguation using lattice parsing. Joint inference is shown to compare favorably to pipeline parsing methods across a variety of component models. State-of-the-art performance on Hebrew Treebank parsing is demonstrated using the new method. The benefits of joint inference are modest with the current component models, but appear to increase as the components themselves improve.
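    The contrast between pipeline and joint decoding can be sketched on a two-word lattice: the morphological model alone prefers one analysis, but adding a syntactic score flips the decision. All scores and the bigram tag model below are hypothetical, and real lattice parsers use dynamic programming rather than path enumeration:

```python
import itertools

# Toy lattice: each position holds competing morphological analyses
# with (hypothetical) log-probability scores.
lattice = [
    {("the", "DT"): -0.1},
    {("duck", "NN"): -0.6, ("duck", "VB"): -0.4},  # morphology prefers VB
]

def syntax_score(tags):
    # Hypothetical bigram tag model: a determiner usually precedes a noun.
    return -0.2 if tags == ("DT", "NN") else -1.5

def joint_decode(lattice):
    """Score every path through the lattice with morphology + syntax,
    instead of committing to the best morphology first (a pipeline)."""
    best, best_score = None, float("-inf")
    for path in itertools.product(*(pos.items() for pos in lattice)):
        analyses = tuple(a for a, _ in path)
        morph = sum(score for _, score in path)
        tags = tuple(tag for _, tag in analyses)
        total = morph + syntax_score(tags)
        if total > best_score:
            best, best_score = analyses, total
    return best

best = joint_decode(lattice)
# Joint decoding recovers "duck"/NN although morphology alone prefers VB.
```

    A pipeline would commit to "duck"/VB before the parser ever sees the sentence; joint inference lets the stronger syntactic evidence override it.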