Blueprint for fault-tolerant quantum computation with Rydberg atoms
We present a blueprint for building a fault-tolerant universal quantum computer with Rydberg atoms. Our scheme, which is based on the surface code, uses individually addressable, optically trapped atoms as qubits and exploits electromagnetically induced transparency to perform the multiqubit gates required for error correction and computation. We discuss the advantages and challenges of using Rydberg atoms to build such a quantum computer, and we perform error correction simulations to obtain an error threshold for our scheme. Our findings suggest that Rydberg atoms are a promising candidate for quantum computation, but gate fidelities need to improve before fault-tolerant universal quantum computation can be achieved.
Monitoring Model Deterioration with Explainable Uncertainty Estimation via Non-parametric Bootstrap
Monitoring machine learning models once they are deployed is challenging. It
is even more challenging to decide when to retrain models in real-world
scenarios where labeled data is beyond reach and monitoring performance
metrics becomes infeasible. In this work, we use non-parametric bootstrapped
uncertainty estimates and SHAP values to provide explainable uncertainty
estimation as a technique that aims to monitor the deterioration of machine
learning models in deployment environments, as well as determine the source of
model deterioration when target labels are not available. Classical methods are
purely aimed at detecting distribution shift, which can lead to false positives
in the sense that the model has not deteriorated despite a shift in the data
distribution. To estimate model uncertainty we construct prediction intervals
using a novel bootstrap method, which improves upon the work of Kumar &
Srivastava (2012). We show that both our model deterioration detection system
as well as our uncertainty estimation method achieve better performance than
the current state-of-the-art. Finally, we use explainable AI techniques to gain
an understanding of the drivers of model deterioration. We release an open
source Python package, doubt, which implements our proposed methods, as well as
the code used to reproduce our experiments.
Comment: 7+6 pages. Accepted at AAAI'23 Safe and Robust AI track.
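The core idea of non-parametric bootstrapped prediction intervals can be illustrated with a minimal percentile-bootstrap sketch: resample the training pairs with replacement, refit the model on each resample, and take empirical quantiles of the resulting predictions. This is a generic illustration, not the paper's improved bootstrap method or the API of the doubt package; `fit_line` and `bootstrap_interval` are hypothetical names, and a simple closed-form least-squares line stands in for an arbitrary model.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bootstrap_interval(xs, ys, x0, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval for the prediction at x0:
    resample (x, y) pairs with replacement, refit, collect predictions."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [xs[i] for i in idx]
        # a degenerate resample (all x identical) cannot be fit; skip it
        if len(set(bx)) < 2:
            continue
        by = [ys[i] for i in idx]
        a, b = fit_line(bx, by)
        preds.append(a + b * x0)
    preds.sort()
    lo = preds[int((alpha / 2) * len(preds))]
    hi = preds[int((1 - alpha / 2) * len(preds)) - 1]
    return lo, hi
```

A wide interval at a test point signals high model uncertainty there, which is the quantity such monitoring tracks instead of (unavailable) labeled performance metrics.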
MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset
Misinformation is becoming increasingly prevalent on social media and in news
articles. It has become so widespread that we require algorithmic assistance
utilising machine learning to detect such content. Training these machine
learning models requires datasets of sufficient scale, diversity and quality.
However, datasets in the field of automatic misinformation detection are
predominantly monolingual, include a limited number of modalities and are not
of sufficient scale and quality. Addressing this, we develop a data collection
and linking system (MuMiN-trawl), to build a public misinformation graph
dataset (MuMiN), containing rich social media data (tweets, replies, users,
images, articles, hashtags) spanning 21 million tweets belonging to 26 thousand
Twitter threads, each of which has been semantically linked to 13 thousand
fact-checked claims across dozens of topics, events and domains, in 41
different languages, spanning more than a decade. The dataset is made available
as a heterogeneous graph via a Python package (mumin). We provide baseline
results for two node classification tasks related to the veracity of a claim
involving social media, and demonstrate that these are challenging tasks, with
the highest macro-average F1-score being 62.55% and 61.45% for the two tasks,
respectively. The MuMiN ecosystem is available at
https://mumin-dataset.github.io/, including the data, documentation, tutorials
and leaderboards.
Comment: 9+3 pages.
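The reported macro-average F1-scores weight every class equally, regardless of class frequency: F1 is computed per class and the unweighted mean is taken. A minimal sketch of the metric (`macro_f1` is a hypothetical helper for illustration, not part of the mumin package):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1, then the unweighted mean."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Because rare classes count as much as common ones, macro-F1 is a stricter yardstick than accuracy on imbalanced veracity labels, which makes the 62.55% and 61.45% baselines meaningful evidence that the tasks are challenging.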