No Need for Dark Matter in Galaxies?
Unhappily, there has been a maelstrom of problems for dark matter theories
over the last few years and many serious difficulties still have no resolution
in sight. This article reviews the evidence for dark matter in galaxies. The
haloes built up by hierarchical merging in dark matter cosmogonies are cusped
and dominated by dark matter at the center. Evidence from the microlensing
optical depth towards Baade's Window and from dynamical modelling of the
Galactic bar already suggests that the Galactic halo is not cusped. Similarly,
evidence from the stability of unbarred disk galaxies, as well as the survival
of fast bars in barred galaxies, suggests that this result holds good more
generally. Judged on the data from galactic scales alone, the case for dark
matter is weak and non-standard theories of gravity provide a better
description. Of course, non-standard theories of gravity have their own
problems, but not on galactic scales.
Comment: 8 pages, invited review for "IDM 2000: Third International Workshop on the Identification of Dark Matter", ed. N. Spooner (World Scientific)
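For reference, the cusped haloes in question are, in cold dark matter simulations, typically fit by the Navarro-Frenk-White profile (not named in the abstract; stated here for orientation):

```latex
\rho_{\mathrm{NFW}}(r) \;=\; \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2}
```

which diverges as \(\rho \propto r^{-1}\) for \(r \ll r_s\); this central cusp is precisely what the microlensing and bar-survival evidence cited above argues against in the inner Milky Way.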
Networks and trust: systems for understanding and supporting internet security
Includes bibliographical references. 2022 Fall.
This dissertation takes a systems-level view of the multitude of existing trust management systems to make sense of when, where and how (or, in some cases, if) each is best utilized. Trust is a belief by one person that by transacting with another person (or organization) within a specific context, a positive outcome will result. Trust serves as a heuristic that enables us to simplify the dozens of decisions we make each day about whom we will transact with. In today's hyperconnected world, in which for many people the bulk of their daily transactions related to business, entertainment, news, and even critical services like healthcare take place online, we tend to rely even more on heuristics like trust to help us simplify complex decisions. Thus, trust plays a critical role in online transactions. For this reason, over the past several decades researchers have developed a plethora of trust metrics and trust management systems for use in online systems. These systems have been most frequently applied to improve recommender systems and reputation systems. They have been designed for and applied to varied online systems including peer-to-peer (P2P) filesharing networks, e-commerce platforms, online social networks, messaging and communication networks, sensor networks, distributed computing networks, and others. However, comparatively little research has examined the effects on individuals, organizations or society of the presence or absence of trust in online sociotechnical systems. Using these existing trust metrics and trust management systems, we design a set of experiments to benchmark their performance, relying heavily on network analysis methods. Drawing on the experiments' results, we propose a heuristic decision-making framework for selecting a trust management system for use in online systems.
In this dissertation we also investigate several related but distinct aspects of trust in online sociotechnical systems. Using network/graph analysis methods, we examine how trust (or lack of trust) affects the performance of online networks in terms of security and quality of service. We explore the structure and behavior of online networks including Twitter, GitHub, and Reddit through the lens of trust. We find that higher levels of trust within a network are associated with more spread of misinformation (a form of cybersecurity threat, according to the US CISA) on Twitter. We also find that higher levels of trust in open source developer networks on GitHub are associated with more frequent incidences of cybersecurity vulnerabilities. Using the experimental and empirical findings described above, we apply the Systems Engineering Process to design and prototype a trust management tool for use on Reddit, which we dub Coni the Trust Moderating Bot. Coni is, to the best of our knowledge, the first trust management tool designed specifically for use on the Reddit platform. Through our work with Coni, we develop and present a blueprint for constructing a Reddit trust tool which not only measures trust levels, but can use these trust levels to take actions on Reddit to improve the quality of submissions within the community (a subreddit).
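Many of the trust metrics benchmarked in work of this kind descend from EigenTrust-style propagation, where each peer's global trust score is the fixed point of iterating local trust ratings over the network. The sketch below is illustrative only, under the assumption of a simple row-normalized trust matrix; it is not the dissertation's actual implementation:

```python
def eigentrust(local_trust, iterations=50):
    """Compute global trust scores from a matrix of local ratings.

    local_trust[i][j] is the nonnegative trust peer i places in peer j.
    Each row is normalized to a probability distribution, then the global
    score vector t is iterated as t <- C^T t until (approximate) fixpoint.
    """
    n = len(local_trust)
    # Row-normalize; a peer who rates nobody trusts everyone uniformly.
    norm = []
    for row in local_trust:
        s = sum(row)
        norm.append([v / s if s else 1.0 / n for v in row])
    # Start from the uniform distribution and propagate trust.
    t = [1.0 / n] * n
    for _ in range(iterations):
        t = [sum(norm[i][j] * t[i] for i in range(n)) for j in range(n)]
    return t
```

Because each row sums to one, the total trust mass is conserved at every step, so the result is a probability distribution over peers.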
Generating non-conspiratorial executions
Avoiding conspiratorial executions is useful for debugging, model checking or refinement, and helps implement several well-known problems in faulty environments; furthermore, avoiding non-equivalence-robust executions prevents conflicting observations in a distributed setting from occurring. Our results prove that scheduling pairs of states and transitions in a strongly fair manner suffices to prevent conspiratorial executions; we then establish a formal connection between conspiracies and equivalence robustness; finally, we present a transformation scheme to implement our results and show how to build them into a well-known distributed scheduler. Previous results were applicable to a subset of systems only, merely attempted to characterise potential conspiracies, or were tightly bound up with a particular interaction model.
Comisión Interministerial de Ciencia y Tecnología TIC2003-02737-C0
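The core claim above, that strongly fair scheduling prevents conspiracies, can be illustrated with a toy "aging" scheduler (a hypothetical sketch with assumed names, not the paper's transformation scheme): among the currently enabled transitions, always fire the one that has been enabled longest without firing, so anything enabled infinitely often is eventually taken.

```python
def strongly_fair_schedule(transitions, enabled, state, steps):
    """Run `steps` scheduling rounds over a transition system.

    transitions: dict name -> function(state) -> new state
    enabled:     predicate(name, state) -> bool
    Each round, every enabled transition ages by one; the oldest enabled
    transition fires and its age resets, a simple strong-fairness discipline.
    """
    waiting = {t: 0 for t in transitions}
    trace = []
    for _ in range(steps):
        ready = [t for t in transitions if enabled(t, state)]
        if not ready:
            break  # deadlock: nothing to schedule
        for t in ready:
            waiting[t] += 1  # age only while enabled
        choice = max(ready, key=lambda t: waiting[t])
        waiting[choice] = 0
        state = transitions[choice](state)
        trace.append(choice)
    return trace, state
```

With two always-enabled transitions this discipline forces strict alternation, so neither can starve the other, which is the behaviour a conspiracy would otherwise exploit.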
Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness
We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size <= s(n). We show:
Learning Speedups: If C[s(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{omega(1)}, then for every k >= 1 and epsilon > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^epsilon}). There is epsilon > 0 such that C[2^{n^{epsilon}}] can be learned in time 2^n/n^{omega(1)} if and only if C[poly(n)] can be learned in time 2^{(log(n))^{O(1)}}.
Equivalences between Learning Models: We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression.
A Dichotomy between Learnability and Pseudorandomness: In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)].
Lower Bounds from Nontrivial Learning: If for each k >= 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{omega(1)}, then for each k >= 1, BPE is not contained in (depth-d)-C[n^k]. If for some epsilon > 0 there are P-natural proofs useful against C[2^{n^{epsilon}}], then ZPEXP is not contained in C[poly(n)].
Karp-Lipton Theorems for Probabilistic Classes: If there is a k > 0 such that BPE is contained in i.o.Circuit[n^k], then BPEXP is contained in i.o.EXP/O(log(n)). If ZPEXP is contained in i.o.Circuit[2^{n/3}], then ZPEXP is contained in i.o.ESUBEXP.
Hardness Results for MCSP: All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP is in TC^0, then NC^1 = TC^0.
Fairness in systems based on multiparty interactions
In the context of the Multiparty Interaction Model, fairness is used to ensure that an interaction that is enabled sufficiently often in a concurrent program will eventually be selected for execution. Unfortunately, this notion does not take conspiracies into account, i.e. situations in which an interaction never becomes enabled because of an unfortunate interleaving of independent actions; furthermore, eventual execution is usually too weak for practical purposes since this concept can only be used in the context of infinite executions. In this article, we present a new fairness notion, k-conspiracy-free fairness, that improves on others because it takes finite executions into account, alleviates conspiracies that are not inherent to a program, and k may be set a priori to control its goodness in addressing the above-mentioned problems.
Ministerio de Ciencia y Tecnología TIC-2000-1106-C02-01; Ministerio de Ciencia y Tecnología FIT-150100-2001-78; Ministerio de Ciencia y Tecnología TAMANSI PCB-02-00
Conspiracy in the Time of Corona: Automatic detection of Emerging Covid-19 Conspiracy Theories in Social Media and the News
Rumors and conspiracy theories thrive in environments of low confidence and low trust. Consequently, it is not surprising that ones related to the Covid-19 pandemic are proliferating given the lack of scientific consensus on the virus's spread and containment, or on the long-term social and economic ramifications of the pandemic. Among the stories currently circulating are ones suggesting that the 5G telecommunication network activates the virus, that the pandemic is a hoax perpetrated by a global cabal, that the virus is a bio-weapon released deliberately by the Chinese, or that Bill Gates is using it as cover to launch a broad vaccination program to facilitate a global surveillance regime. While some may be quick to dismiss these stories as having little impact on real-world behavior, recent events, including the destruction of cell phone towers, racially fueled attacks against Asian Americans, demonstrations espousing resistance to public health orders, and wide-scale defiance of scientifically sound public mandates such as those to wear masks and practice social distancing, countermand such conclusions. Inspired by narrative theory, we crawl social media sites and news reports and, through the application of automated machine-learning methods, discover the underlying narrative frameworks supporting the generation of rumors and conspiracy theories. We show how the various narrative frameworks fueling these stories rely on the alignment of otherwise disparate domains of knowledge, and consider how they attach to the broader reporting on the pandemic. These alignments and attachments, which can be monitored in near real-time, may be useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Understanding the dynamics of storytelling on social media and the narrative frameworks that provide the generative basis for these stories may also be helpful for devising methods to disrupt their spread.
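As a concrete, heavily simplified illustration of the kind of structure such pipelines extract, one building block of a narrative framework is an actor co-occurrence network: entities mentioned in the same sentence are linked, and recurring links reveal the alignments the abstract describes. The sketch below uses assumed names and naive substring matching, not the authors' actual relation-extraction code:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences, actors):
    """Count how often each pair of known actors appears in one sentence.

    Returns a Counter keyed by sorted (actor, actor) pairs; edge weight is
    the number of sentences mentioning both actors.
    """
    edges = Counter()
    for sent in sentences:
        # Naive case-insensitive substring match against a known actor list.
        present = sorted(a for a in actors if a.lower() in sent.lower())
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges
```

In a real pipeline the actor list would itself be learned and the matching replaced by entity and relation extraction, but even this toy network shows how otherwise disparate actors (telecom infrastructure, public figures, the virus) become aligned in one story graph.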