2,740 research outputs found

    No Need for Dark Matter in Galaxies?

    Full text link
    Unhappily, there has been a maelstrom of problems for dark matter theories over the last few years and many serious difficulties still have no resolution in sight. This article reviews the evidence for dark matter in galaxies. The haloes built up by hierarchical merging in dark matter cosmogonies are cusped and dominated by dark matter at the center. Evidence from the microlensing optical depth towards Baade's Window and from dynamical modelling of the Galactic bar already suggests that the Galactic halo is not cusped. Similarly, evidence from the stability of unbarred disk galaxies, as well as the survival of fast bars in barred galaxies, suggests that this result holds good more generally. Judged on the data from galactic scales alone, the case for dark matter is weak and non-standard theories of gravity provide a better description. Of course, non-standard theories of gravity have their own problems, but not on galactic scales.
    Comment: 8 pages, invited review for "IDM 2000: Third International Workshop on the Identification of Dark Matter", ed. N. Spooner (World Scientific)
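    For context, the "cusped" haloes that hierarchical-merging simulations produce are conventionally described by the NFW profile. A minimal sketch of that density law (the scale radius r_s and characteristic density \rho_0 are the standard NFW parameters, not symbols taken from this article):

        \[
          \rho_{\mathrm{NFW}}(r) \;=\; \frac{\rho_0}{(r/r_s)\,\left(1 + r/r_s\right)^{2}},
          \qquad \rho \;\propto\; r^{-1} \ \text{as } r \to 0,
        \]

    so the density diverges towards the center (the cusp), in contrast to the cored, roughly constant-density inner haloes that the microlensing and bar-dynamics evidence cited above favours.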

    Networks and trust: systems for understanding and supporting internet security

    Get PDF
    This dissertation takes a systems-level view of the multitude of existing trust management systems to make sense of when, where and how (or, in some cases, if) each is best utilized. Trust is a belief by one person that by transacting with another person (or organization) within a specific context, a positive outcome will result. Trust serves as a heuristic that enables us to simplify the dozens of decisions we make each day about whom we will transact with. In today's hyperconnected world, in which for many people the bulk of their daily transactions related to business, entertainment, news, and even critical services like healthcare take place online, we tend to rely even more on heuristics like trust to help us simplify complex decisions. Thus, trust plays a critical role in online transactions. For this reason, over the past several decades researchers have developed a plethora of trust metrics and trust management systems for use in online systems. These systems have been most frequently applied to improve recommender systems and reputation systems. They have been designed for and applied to varied online systems including peer-to-peer (P2P) filesharing networks, e-commerce platforms, online social networks, messaging and communication networks, sensor networks, distributed computing networks, and others. However, comparatively little research has examined the effects on individuals, organizations or society of the presence or absence of trust in online sociotechnical systems. Using these existing trust metrics and trust management systems, we design a set of experiments to benchmark the performance of these existing systems, which rely heavily on network analysis methods. Drawing on the experiments' results, we propose a heuristic decision-making framework for selecting a trust management system for use in online systems. In this dissertation we also investigate several related but distinct aspects of trust in online sociotechnical systems. Using network/graph analysis methods, we examine how trust (or lack of trust) affects the performance of online networks in terms of security and quality of service. We explore the structure and behavior of online networks including Twitter, GitHub, and Reddit through the lens of trust. We find that higher levels of trust within a network are associated with more spread of misinformation (a form of cybersecurity threat, according to the US CISA) on Twitter. We also find that higher levels of trust in open source developer networks on GitHub are associated with more frequent incidences of cybersecurity vulnerabilities. Using our experimental and empirical findings previously described, we apply the Systems Engineering Process to design and prototype a trust management tool for use on Reddit, which we dub Coni the Trust Moderating Bot. Coni is, to the best of our knowledge, the first trust management tool designed specifically for use on the Reddit platform. Through our work with Coni, we develop and present a blueprint for constructing a Reddit trust tool which not only measures trust levels, but can use these trust levels to take actions on Reddit to improve the quality of submissions within the community (a subreddit).
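    The trust metrics surveyed above lean heavily on network analysis. As a purely illustrative sketch (not one of the dissertation's benchmarked systems), a classic path-based metric infers trust between users with no direct relationship as the best product of direct-trust weights along a path; the names and weights below are hypothetical:

        import math
        import networkx as nx

        # Hypothetical toy network: directed edges carry a trust weight in (0, 1].
        G = nx.DiGraph()
        G.add_weighted_edges_from([
            ("alice", "bob", 0.9), ("bob", "carol", 0.8),
            ("alice", "dave", 0.4), ("dave", "carol", 0.9),
        ])

        def inferred_trust(graph, source, target):
            """Best product of trust weights over any path from source to target.
            Maximising a product equals minimising the sum of -log(weight),
            so Dijkstra can be reused on transformed edge costs."""
            cost = lambda u, v, d: -math.log(d["weight"])
            try:
                length = nx.dijkstra_path_length(graph, source, target, weight=cost)
            except nx.NetworkXNoPath:
                return 0.0  # no chain of trust at all
            return math.exp(-length)

        print(inferred_trust(G, "alice", "carol"))  # 0.72, via bob (0.9 * 0.8)

    Multiplicative decay along a path is only one common design choice; deployed systems typically also weigh path length, path redundancy, and transaction context.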

    Generating non-conspiratorial executions

    Get PDF
    Avoiding conspiratorial executions is useful for debugging, model checking or refinement, and helps implement several well-known problems in faulty environments; furthermore, avoiding non-equivalence-robust executions prevents conflicting observations from occurring in a distributed setting. Our results prove that scheduling pairs of states and transitions in a strongly fair manner suffices to prevent conspiratorial executions; we then establish a formal connection between conspiracies and equivalence robustness; finally, we present a transformation scheme to implement our results and show how to build them into a well-known distributed scheduler. Previous results were applicable to a subset of systems only, just attempted to characterise potential conspiracies, or were tightly bound up with a particular interaction model.
    Comisión Interministerial de Ciencia y Tecnología TIC2003-02737-C0
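    As an illustrative sketch only (the paper's actual transformation scheme is not reproduced here), strong fairness over (state, transition) pairs can be approximated by a toy scheduler that counts how long each pair has been enabled without firing and always fires a longest-waiting enabled pair; all names below are hypothetical:

        import random

        def strongly_fair_run(state, transitions, steps=50, seed=0):
            """Toy scheduler. `transitions` maps a name to a (guard, effect)
            pair of functions over hashable states. Among the transitions
            enabled in the current state, fire one whose (state, transition)
            pair has waited longest, so no pair can stay enabled forever
            without eventually being taken."""
            rng = random.Random(seed)
            waiting = {}  # (state, name) -> times enabled but not fired
            trace = []
            for _ in range(steps):
                enabled = [n for n, (guard, _) in transitions.items() if guard(state)]
                if not enabled:
                    break
                for n in enabled:
                    waiting[(state, n)] = waiting.get((state, n), 0) + 1
                oldest = max(waiting[(state, n)] for n in enabled)
                name = rng.choice([n for n in enabled if waiting[(state, n)] == oldest])
                waiting[(state, name)] = 0
                trace.append((state, name))
                state = transitions[name][1](state)
            return trace

        # Example: two competing transitions over an integer-counter state.
        trace = strongly_fair_run(0, {
            "inc":   (lambda s: s < 3, lambda s: s + 1),
            "reset": (lambda s: s > 0, lambda s: 0),
        })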

    Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness

    Get PDF
    We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size <= s(n). We show:
    Learning Speedups: If C[poly(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{omega(1)}, then for every k >= 1 and epsilon > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^epsilon}). There is epsilon > 0 such that C[2^{n^{epsilon}}] can be learned in time 2^n/n^{omega(1)} if and only if C[poly(n)] can be learned in time 2^{(log(n))^{O(1)}}.
    Equivalences between Learning Models: We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression.
    A Dichotomy between Learnability and Pseudorandomness: In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)].
    Lower Bounds from Nontrivial Learning: If for each k >= 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{omega(1)}, then for each k >= 1, BPE is not contained in (depth-d)-C[n^k]. If for some epsilon > 0 there are P-natural proofs useful against C[2^{n^{epsilon}}], then ZPEXP is not contained in C[poly(n)].
    Karp-Lipton Theorems for Probabilistic Classes: If there is a k > 0 such that BPE is contained in i.o.Circuit[n^k], then BPEXP is contained in i.o.EXP/O(log(n)). If ZPEXP is contained in i.o.Circuit[2^{n/3}], then ZPEXP is contained in i.o.ESUBEXP.
    Hardness Results for MCSP: All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP is in TC^0 then NC^1 = TC^0.
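    In display form, the first learning speedup restates as (no content beyond the abstract's own claim):

        \[
          \mathrm{C}[\mathrm{poly}(n)] \ \text{weakly learnable in time } 2^{n}/n^{\omega(1)}
          \;\Longrightarrow\;
          \forall k \ge 1,\ \forall \varepsilon > 0:\
          \mathrm{C}[n^{k}] \ \text{learnable to high accuracy in time } O\!\big(2^{n^{\varepsilon}}\big).
        \]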

    Fairness in systems based on multiparty interactions

    Get PDF
    In the context of the Multiparty Interaction Model, fairness is used to ensure that an interaction that is enabled sufficiently often in a concurrent program will eventually be selected for execution. Unfortunately, this notion does not take conspiracies into account, i.e. situations in which an interaction never becomes enabled because of an unfortunate interleaving of independent actions (a toy example is sketched below); furthermore, eventual execution is usually too weak for practical purposes since this concept can only be used in the context of infinite executions. In this article, we present a new fairness notion, k-conspiracy-free fairness, that improves on others because it takes finite executions into account, alleviates conspiracies that are not inherent to a program, and its parameter k may be set a priori to control how well it addresses the above-mentioned problems.
    Ministerio de Ciencia y Tecnología TIC-2000-1106-C02-01
    Ministerio de Ciencia y Tecnología FIT-150100-2001-78
    Ministerio de Ciencia y Tecnología TAMANSI PCB-02-00
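    The sketch below illustrates what a conspiracy looks like; it is a hypothetical toy scenario, not an example from the article. Three processes a, b and c share binary interactions X = {a, b}, Y = {b, c} and Z = {c, a}; an unfortunate interleaving keeps firing X and Y so that a and c are never ready at the same instant, and Z is never enabled even though each of its participants is ready infinitely often:

        # Ready flags for the three processes; b's local work is always fast.
        ready = {"a": True, "b": True, "c": False}
        fired, z_ever_enabled = [], False
        for _ in range(12):
            z_ever_enabled |= ready["a"] and ready["c"]   # is Z = {c, a} enabled?
            if ready["a"] and ready["b"]:                 # fire X = {a, b}
                fired.append("X")
                ready.update(a=False, b=False, c=True)
            elif ready["b"] and ready["c"]:               # fire Y = {b, c}
                fired.append("Y")
                ready.update(a=True, b=False, c=False)
            ready["b"] = True
        print(fired)              # ['X', 'Y', 'X', 'Y', ...]
        print(z_ever_enabled)     # False: Z is conspired against

    Note that fairness over interactions alone cannot rule this out, since Z is never even enabled; this is exactly the gap that conspiracy-aware notions such as k-conspiracy-free fairness target.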