
    Computer Science and Metaphysics: A Cross-Fertilization

    Computational philosophy is the use of mechanized computational techniques to unearth philosophical insights that are either difficult or impossible to find using traditional philosophical methods. Computational metaphysics is computational philosophy with a focus on metaphysics. In this paper, we (a) develop results in modal metaphysics whose discovery was computer assisted, and (b) conclude that these results work not only to the obvious benefit of philosophy but also, less obviously, to the benefit of computer science, since the new computational techniques that led to these results may be more broadly applicable within computer science. The paper includes a description of our background methodology and how it evolved, and a discussion of our new results.
    Comment: 39 pages, 3 figures

    Limits to Non-Malleability

    There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question: when can we rule out the existence of a non-malleable code for a tampering class $\mathcal{F}$? First, we start with some classes where positive results are well known, and show that when these classes are extended in a natural way, non-malleable codes are no longer possible. Specifically, we show that no non-malleable codes exist for any of the following tampering classes:
    - functions that change $d/2$ symbols, where $d$ is the distance of the code;
    - functions where each input symbol affects only a single output symbol;
    - functions where each of the $n$ output bits is a function of $n - \log n$ input bits.
    Furthermore, we rule out constructions of non-malleable codes for certain classes $\mathcal{F}$ via reductions to the assumption that a distributional problem is hard for $\mathcal{F}$, provided the reductions make black-box use of the tampering functions in the proof. In particular, this yields concrete obstacles for the construction of efficient codes for $\mathsf{NC}$, even assuming average-case variants of $\mathsf{P} \neq \mathsf{NC}$.
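
    To make the second of these classes concrete, here is a minimal illustrative sketch (our own, with hypothetical names; not from the paper) of a tampering function in which each input symbol affects only a single output symbol:

        # Illustrative sketch: a tampering function where each input symbol
        # affects only a single output symbol. It is described by a map
        # pos[i] = the one output position that input symbol i influences,
        # together with a per-position symbol transformation.

        def make_local_tamper(pos, per_symbol):
            def tamper(codeword):
                out = [None] * len(codeword)
                for i, sym in enumerate(codeword):
                    out[pos[i]] = per_symbol[i](sym)
                return out
            return tamper

        # Example over bits: swap the first two symbols and flip every bit.
        n = 4
        pos = [1, 0] + list(range(2, n))
        flip = lambda b: 1 - b
        tamper = make_local_tamper(pos, [flip] * n)
        print(tamper([0, 1, 1, 0]))  # -> [0, 1, 0, 1]

    This sketch covers only the special case where each output also depends on a single input; the class in the abstract allows an output symbol to combine several inputs, as long as each input influences at most one output.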

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Ongoing improvements to these off-the-shelf provers boost the reasoning performance in LogiKEy without any further effort on the designer's side. Case studies, in which the LogiKEy framework and methodology have been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation.
    Comment: 50 pages, 10 figures
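
    As a toy illustration of the semantical-embedding idea (our own sketch; not LogiKEy's actual HOL encoding), a Kripke-style deontic operator can be shallowly embedded by treating formulas as predicates on worlds:

        # Toy shallow embedding: formulas are predicates on worlds, and the
        # deontic operator O ("it is obligatory that") quantifies over the
        # ideal alternatives of a world, given by the relation R.

        worlds = range(4)
        R = {0: [1, 2], 1: [1], 2: [2], 3: [3]}  # w -> ideal alternatives of w

        def Neg(p):        return lambda w: not p(w)
        def And(p, q):     return lambda w: p(w) and q(w)
        def Implies(p, q): return lambda w: (not p(w)) or q(w)
        def O(p):          return lambda w: all(p(v) for v in R[w])

        paid = lambda w: w in (1, 2)  # sample proposition: "taxes are paid"
        valid = lambda p: all(p(w) for w in worlds)

        print(O(paid)(0))      # True: taxes are paid in all ideal alternatives of 0
        print(valid(O(paid)))  # False: world 3's only ideal alternative violates it

    In LogiKEy the analogous move is made inside HOL itself, so that off-the-shelf HOL provers and model finders can reason about the embedded deontic logics directly.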

    Photo-astrometric distances, extinctions, and astrophysical parameters for Gaia DR2 stars brighter than G = 18

    Combining the precise parallaxes and optical photometry delivered by Gaia's second data release (Gaia DR2) with the photometric catalogues of Pan-STARRS1, 2MASS, and AllWISE, we derive Bayesian stellar parameters, distances, and extinctions for 265 million stars brighter than G = 18. Because of the wide wavelength range used, our results substantially improve the accuracy and precision of previous extinction and effective-temperature estimates. After cleaning our results for both unreliable input and output data, we retain 137 million stars, for which we achieve a median precision of 5% in distance, 0.20 mag in V-band extinction, and 245 K in effective temperature for G < 14, degrading towards fainter magnitudes (12%, 0.20 mag, and 245 K at G = 16; 16%, 0.23 mag, and 260 K at G = 17, respectively). We find very good agreement with the asteroseismic surface gravities and distances of 7000 stars in the Kepler, K2-C3, and K2-C6 fields, with stellar parameters from the APOGEE survey, and with distances to star clusters. Our results are available through the ADQL query interface of the Gaia mirror at the Leibniz-Institut für Astrophysik Potsdam (gaia.aip.de) and as binary tables at data.aip.de. As a first application, we provide distance- and extinction-corrected colour-magnitude diagrams, extinction maps as a function of distance, and extensive density maps, demonstrating the potential of our value-added dataset for mapping the three-dimensional structure of our Galaxy. In particular, we see a clear manifestation of the Galactic bar in the stellar density distributions, an observation that can almost be considered a direct imaging of the Galactic bar.
    Comment: 25 pages, 23 figures + appendix; accepted for publication in A&A. Data (doi:10.17876/gaia/dr.2/51) are available through ADQL queries at gaia.aip.de
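
    As a hedged illustration of how such an ADQL query might look from Python (the TAP endpoint URL, table name, and column names below are assumptions for illustration only; consult gaia.aip.de for the actual schema):

        # Sketch: query the value-added catalogue via TAP/ADQL using astroquery.
        from astroquery.utils.tap.core import TapPlus

        tap = TapPlus(url="https://gaia.aip.de/tap")  # assumed TAP endpoint
        job = tap.launch_job("""
            SELECT TOP 10 source_id, dist50, av50, teff50  -- hypothetical columns
            FROM gdr2_contrib.starhorse                    -- hypothetical table
            WHERE teff50 BETWEEN 4500 AND 5000
        """)
        print(job.get_results())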

    Approximating Cumulative Pebbling Cost Is Unique Games Hard

    The cumulative pebbling complexity of a directed acyclic graph $G$ is defined as $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, where the minimum is taken over all legal (parallel) black pebblings $P$ of $G$ and $|P_i|$ denotes the number of pebbles on the graph during round $i$. Intuitively, $\mathsf{cc}(G)$ captures the amortized Space-Time complexity of pebbling $m$ copies of $G$ in parallel. The cumulative pebbling complexity of a graph $G$ is of particular interest in the field of cryptography, as $\mathsf{cc}(G)$ is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) $f_{G,H}$ [AS15] defined using a constant-indegree directed acyclic graph (DAG) $G$ and a random oracle $H(\cdot)$. A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find $x$ such that $f_{G,H}(x) = h$. Thus, to analyze the (in)security of a candidate iMHF $f_{G,H}$, it is crucial to estimate the value $\mathsf{cc}(G)$, but currently upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is $\mathsf{NP}$-hard to compute $\mathsf{cc}(G)$, but their techniques do not even rule out an efficient $(1+\varepsilon)$-approximation algorithm for any constant $\varepsilon > 0$. We show that for any constant $c > 0$, it is Unique Games hard to approximate $\mathsf{cc}(G)$ to within a factor of $c$. (See the paper for the full abstract.)
    Comment: 28 pages; updated figures and corrected typos
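
    To make the definition concrete, the following sketch (our own illustration, not from the paper) computes the cumulative cost $\sum_i |P_i|$ of a given parallel black pebbling and checks the legality rule that a node may be newly pebbled in round $i$ only if all of its parents carried pebbles in round $i-1$:

        # Sketch: cumulative cost of a parallel black pebbling of a DAG.
        # parents[v] lists the parents of node v; pebbling is the list of
        # pebble sets P_1, ..., P_t (P_0 is implicitly empty).

        def cumulative_cost(parents, pebbling, target):
            prev = set()
            for P in pebbling:
                for v in P - prev:  # newly placed pebbles
                    if not set(parents[v]) <= prev:
                        raise ValueError(f"illegal move: {v} pebbled without its parents")
                prev = P
            if not any(target in P for P in pebbling):
                raise ValueError("the pebbling never pebbles the target node")
            return sum(len(P) for P in pebbling)

        # Example: the path DAG 0 -> 1 -> 2, pebbled in three rounds.
        parents = {0: [], 1: [0], 2: [1]}
        pebbling = [{0}, {0, 1}, {1, 2}]
        print(cumulative_cost(parents, pebbling, target=2))  # 1 + 2 + 2 = 5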

    A Verified and Compositional Translation of LTL to Deterministic Rabin Automata

    We present a formalisation of the unified translation approach from linear temporal logic (LTL) to omega-automata from [Javier Esparza et al., 2018]. This approach decomposes LTL formulas into "simple" languages and allows a clear separation of concerns: first, we formalise the purely logical result yielding this decomposition; second, we develop a generic, executable, and expressive automata library providing the operations on automata needed to re-combine the "simple" languages; third, we instantiate this generic theory to obtain a construction for deterministic Rabin automata (DRA). From this particular instantiation we extract an executable tool translating LTL to DRAs. To the best of our knowledge, this is the first verified translation from LTL to DRAs that is proven to be double-exponential in the worst case, which asymptotically matches the known lower bound.
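
    To illustrate the kind of re-combination operation such an automata library provides (our own toy sketch, not the paper's verified library), here is the product of two deterministic transition systems over a shared alphabet:

        # Toy sketch: product of two deterministic transition systems,
        # the basic operation used to combine automata for sub-languages.
        # delta maps (state, letter) -> state; both automata are assumed
        # total over the same alphabet.

        def product(delta1, q1, delta2, q2):
            alphabet = {a for (_, a) in delta1}
            delta, init = {}, (q1, q2)
            frontier, seen = [init], {init}
            while frontier:
                s, t = frontier.pop()
                for a in alphabet:
                    nxt = (delta1[(s, a)], delta2[(t, a)])
                    delta[((s, t), a)] = nxt
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return delta, init

        # Example: mod-2 and mod-3 counters over the alphabet {"a"}.
        d1 = {(0, "a"): 1, (1, "a"): 0}
        d2 = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}
        delta, init = product(d1, 0, d2, 0)
        print(len({q for (q, _) in delta}))  # 6 reachable product states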

    An Efficient Normalisation Procedure for Linear Temporal Logic and Very Weak Alternating Automata

    In the mid 1980s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\bigwedge_{i=1}^n \mathbf{G}\mathbf{F}\varphi_i \vee \mathbf{F}\mathbf{G}\psi_i$, where $\varphi_i$ and $\psi_i$ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present a direct and purely syntactic normalisation procedure for LTL, yielding a normal form comparable to the one by Chang, Manna, and Pnueli, that incurs only a single-exponential blow-up. As an application, we derive a simple algorithm to translate LTL into deterministic Rabin automata. The algorithm normalises the formula, translates it into a special very weak alternating automaton, and applies a simple determinisation procedure that is valid only for these special automata.
    Comment: this is the extended version of the referenced conference paper and contains an appendix with additional material
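
    For concreteness, a degenerate instance of this normal form (our own illustrative example, not taken from the paper): the formula $\mathbf{G}\mathbf{F} a \wedge \mathbf{F}\mathbf{G} b$ is already normalised, since each conjunct matches $\mathbf{G}\mathbf{F}\varphi_i \vee \mathbf{F}\mathbf{G}\psi_i$ once the other disjunct is chosen as the false formula $\mathit{ff}$:

        % Illustrative instance (ours) of the normal form with n = 2,
        % using the equivalences FG ff = ff and GF ff = ff:
        \[
          \mathbf{G}\mathbf{F}\, a \wedge \mathbf{F}\mathbf{G}\, b
          \;\equiv\;
          \bigl(\mathbf{G}\mathbf{F}\, a \vee \mathbf{F}\mathbf{G}\,\mathit{ff}\bigr)
          \wedge
          \bigl(\mathbf{G}\mathbf{F}\,\mathit{ff} \vee \mathbf{F}\mathbf{G}\, b\bigr)
        \]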